1. 3

    One of the best talks I have seen recently. Thank you! What this inspires in me is:

    1. Take seriously what people believe, but explain when and what makes it true, and when and what makes it false. Your citrus analogy is almost as powerful as Dijkstra’s sterile analogy [EWD1054]. Taking this highly critical stance then allows you to separate deeply ingrained ideas (spread by “internet advice gurus”) into clear and meaningful concepts.

    2. We need more, and clearer, material for understanding principles. Your presentation did not provide much detail on the state of the art (rightfully so), but neither did it give the audience any concrete pointers on how to proceed with the paths you laid out.

    However, I fail to deliver on the latter point myself. The pointers I have are very technical in nature and do not provide a good overview of principles per se. My own education consisted of collecting many different sources, which is a regrettable state of affairs for the science of computing.

    1. 3

      Agreed on both points. Collecting such material from various sources and explaining principles is much of the work I do in my blog and my business.

    1. 1

      a+ expert troll

      1. 1

        Pseudo-mathematical proof. Stark, unnuanced conclusion.

      1. 9

        I haven’t deeply researched the history of project methodologies, but I’d be unsurprised if the belief that “all we did was waterfall until Agile came along” turned out to be false. Pretty much every OOP book I’ve read from roughly 1980-1995 talks about how it works with an “Incremental model”, where you deliver short-term wins continuously and constantly reevaluate customer requirements.

        1. 9

          You might find this thread helpful. It turns out iterative development was the norm, waterfall was an illustration of what wasn’t done, and one guy used that to con folks for money on top of the DOD’s top-down management. Then the world changed for the worse. That’s the story so far.

          1. 4

            Nothing I’ve read from programming books since the 1970s suggests that Waterfall was “the way” that software was developed. I’ve also not researched this deeply, but I don’t think Waterfall was ever taken seriously. Everything I read talks about the importance of testing intertwined with implementation, in some form, when developing software. Furthermore, there was always talk of iteration to get APIs to be better. The key difference from the early days is that those cycles were longer, which gives the appearance of Waterfall in action.

            1. 3

              I think our understanding of classical engineering is partially shaped by the agile industrial complex.

              Nowadays if you attend a Scrum training somewhere, you get a trainer who typically never worked in a classical engineering project. Yet they have learned the story that classical engineering was waterfall, that no project was ever successful, and that every developer suffered. So they perpetuate this story. Since they don’t know the material, they just randomly talk about waterfall/classical methods/V-Model as if they were all the same. This portrayal of classical methods has shaped our mental model so much that it has also shaped how classical engineering is implemented.

              I have an older colleague who always dies a little every time a 30-something agile consultant starts lecturing people on the perils of classical engineering and the gospel of agility. “If the V-Model was waterfall, it would be the ‘\’-Model”.

              I also found Bertrand Meyer’s article on this interesting: http://se.ethz.ch/~meyer/publications/methodology/agile_software.pdf. Especially his defense of requirements documents, IIRC.

              Maybe this is a general thing to remark about the adoption of ideas here. It is always a good idea to read and study the primary and early sources of ideas. Quite often, ideas deteriorate while they are adopted. Most material on user stories is a very shallow remainder of the ideas that Cohn wrote down in his book in the 2000s. The same goes for the original “waterfall paper”, the agile manifesto, even the Scrum guide (ever realized it does not mention “agile” once, nor “story”?). I’d actually be curious about historic, alternative frameworks/processes that were created at the same time but were not widely adopted and have silently vanished. I think a lot of wisdom could be found there.

              1. 2

                Do requirements documents need to be defended?

                If you have no requirements document, how do you know when you’ve fulfilled your contractual obligations?

                1. 1

                  You probably don’t have contractual obligations. You’re probably building internal software and the requirements could change for those on some sprint-like basis. And that’s okay.

                  Essentially you just have a tight iteration loop and all is well.

                  1. 1

                    Well, many agile consultants will tell you they are the devil and not agile. (You are not agile if you set requirements in stone at some point.) When in fact, user stories (advocated as the alternative) are just a different flavour of requirements engineering.

                    Personally I think writing a large requirements document upfront might be a bit risky if you already fear that your project is based on risky assumptions. On the other hand in the Scrum-hamster wheel, it is sometimes hard for the team and the PO to think for a bit and come up with a better solution than the lowest hanging fruit.

                    I have had bad experiences with requirements documents when customers commissioned software development, because often they were too detailed and outlined problems the user didn’t have. There I would have favoured a more agile approach.

                    In SaaS contexts, I would have loved to have a well-maintained requirements document at times, especially when parts of the system were rewritten or replaced.

              2. 2

                From the picture of the process of Agile in the article, I see: Design, Develop, Test, Deploy. But where is verification? Poof, gone.

                Typically I say that there are two kinds of validation (“does the software possess certain qualities?”): testing and verification. The former is established empirically (user tests, examples, etc.), the latter mathematically (through proof and reasoning).

                Testing essentially establishes that the software has the intended functionality; after successful testing, there should not be any confusion over what the software does, i.e. no misunderstanding about whether the software provides certain functionality.

                Verification essentially explores unknown consequences, not foreseen by testing. Only by rigorous argumentation can one explore all the possibilities in which problems may lie hidden. After successful verification, there should not be any junk in the software, i.e. no unexpected behaviors.

                Both phases provide essential information for clear and concise documentation.

                Agile vs. waterfall discussions seem to attack a strawman, while not providing any useful information on how to complete the verification phase.

              1. 8

                Reuse works, but when it does it remains invisible and non-obvious. In the big: do you always write an OS, a compiler, and I/O routines such as graphics or networking from scratch? No? These pieces of software are very well suited to reuse. In the small: do you always write a custom data structure, plus manipulation routines such as parsing, sorting, and printing, for each different kind of object? No? Tiny objects that are sufficiently encapsulated can be treated generically, thus enabling reuse.

                1. 4

                  I have seen many argue that certain programming languages are discovered. I disagree. For the lambda calculus, Turing machines, and recursive functions, the argument goes that they are equivalent by mere chance, and thus are independent discoveries of the same “thing” that already exists “out there”.

                  This is false. Church, Gödel, and Turing all worked, at some point, even at the same location. Consider that they could have exchanged ideas either directly or indirectly (through colleagues).

                  Programmers of the functional camp also easily fall into the trap of thinking that algebraic data types, type classes, and other features such as unification are “out there” to be “discovered”. The fact that these languages can be understood in a minimal calculus only highlights a fundamental fact:

                  We are good at transmitting information about programming. Two people can learn and interpret information from two different sources, and still derive the “same” structure.

                  We could equally rejoice that “programming education” is so effective, since so many people are drawn to see the same “light”.

                  1. 8

                    Maybe this is the case, maybe not. There are some very interesting connections. I apologize in advance for category theory.

                    First, the category Set, whose objects are sets and arrows are functions, is also a topos. In fact it is the simplest topos, in a sense; Set can be built by considering all of the different ways to examine a point as a topological space.

                    Every topos is Cartesian closed, which means that it has actions which resemble currying and application. Indeed, every topos has an internal logic, and there is a famous isomorphism, the Curry-Howard correspondence, which connects this logic to an internal language. Every Cartesian closed category has an internal language that looks like typed lambda calculus, and thus, so does every topos. This connection is sometimes called the Curry-Howard-Lambek correspondence.
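
                    As a small Haskell illustration (a sketch; these functions mirror the Prelude’s curry and uncurry): the Cartesian-closed structure is exactly what makes currying an isomorphism, which under Curry-Howard reads as the logical equivalence between (A ∧ B) → C and A → (B → C).

                    curryIso :: ((a, b) -> c) -> (a -> b -> c)
                    curryIso f x y = f (x, y)

                    uncurryIso :: (a -> b -> c) -> ((a, b) -> c)
                    uncurryIso f (x, y) = f x y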

                    Now, having laid out the foundations, you can consider this listing of examples of the correspondence.

                    To address your points more directly: It is true that Church, Gödel, and Turing were all on very similar paths. Gödel and Turing, along with Nash, also had hinted at describing the basics of modern complexity theory, talking about what would become P and NP. Support for the Church-Turing thesis comes, to me, from Turing-complete systems being shown to be Turing-equivalent (so far), and from metalogical results like those of not just Gödel and Tarski, but also more modern progress in constructive mathematics, where it is not false that all real numbers are computable and all computable functions are continuous, just like in Turing’s model.

                    Algebraic data types, as the name implies, arise from algebras on types, like in the typedefs mini-language. They are not limited to “functional” programmers; Zephyr ASDL is used by (my language) Monte, Python, Oil, and probably others.
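
                    A minimal Haskell sketch of the “algebra on types” reading (the type names here are mine, for illustration): Either and pairs play the roles of sum and product, and the unit type plays the role of 1.

                    -- Maybe a reads as the algebraic expression 1 + a:
                    data MaybeLike a = None | Some a

                    -- A list is the least solution of  l = 1 + a * l:
                    data ListLike a = Nil | Cons a (ListLike a)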

                    Some type classes naturally arise, in the sense that we can imagine topoi where class-like features would be granted automatically to all computations, without changing the underlying logic. This is possible because logic is portable between topoi. In particular, the Haskell typeclass Ord corresponds to every type having a free (partial) order, and Eq to equality; we could give both for free by working in a topos like Pos, the category of partially-ordered sets; logic in this topos is the same as in Set, but richer in the values.

                    Unification arises in a beautiful way, to me. In Rel, the category whose objects are sets like Set, but whose arrows are relations, the directions of arrows do not matter much. If we bend all of the arrows to one side, then the unifications bulge out on the other, and computation in Rel can consist entirely of first bending the arrows (choosing which values are inputs and which are outputs) and then reducing the diagram piece-by-piece and checking each unification as it bulges out.

                    What I find entirely fascinating is that, according to the categorical logic, there should be at least one more family of logics besides the Cartesian-closed (“functional”) and monoidal-closed (“relational”, “logic”) languages, corresponding to weaker closure, and there is: linear logic! Linear logic is being explored right now in PLD and PLT circles, and its relevance was predicted by the mathematical framework.

                    1. 2

                      Your description of unification via Rel is fascinating. What should I read to understand it better?

                      1. 5

                        There is a paper, Knowledge Representation in Bicategories of Relations, which has diagrams that depict relations in a visceral way. I usually imagine a pure logical language, too, like µKanren, which can be viewed as an internal language for the categorical diagrams.

                        I apologize for not having a page with images which show exactly what I described. I started work once on a writeup of how µKanren or other pure logical languages relate to relational categories, via the relatively obvious functor which sends diagrams to µKanren programs, but didn’t think that there would be much interest.
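
                        For what it’s worth, the unification core of a µKanren-style language is tiny. A hedged Haskell sketch (Term, Subst, and the helpers are my names; like µKanren, it omits the occurs check):

                        import qualified Data.Map as M

                        data Term = Var String | Atom String | Pair Term Term
                          deriving (Eq, Show)

                        type Subst = M.Map String Term

                        -- Follow a variable through the substitution until it stops changing.
                        walk :: Subst -> Term -> Term
                        walk s (Var v) = maybe (Var v) (walk s) (M.lookup v s)
                        walk _ t       = t

                        -- Try to extend the substitution so that both terms become equal.
                        unify :: Term -> Term -> Subst -> Maybe Subst
                        unify a b s = go (walk s a) (walk s b)
                          where
                            go u v | u == v              = Just s
                            go (Var v) t                 = Just (M.insert v t s)
                            go t (Var v)                 = Just (M.insert v t s)
                            go (Pair a1 a2) (Pair b1 b2) = unify a1 b1 s >>= unify a2 b2
                            go _ _                       = Nothing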

                        1. 1

                          Awesome, thanks.

                    2. 6

                      Even without collaboration, the combination of consistency and minimalism that acts as the fitness function for mathematical discovery is a pretty intense constraint, so it should be unsurprising that lots of the things we invent under those constraints are equivalent.

                      If you’ve got an urn with ten thousand green balls, one hundred light-blue balls, and two dark-blue balls, and we’ve got a rule that we only pick blue balls, we shouldn’t be surprised that a lot of the ones we pick are light blue – because we’ve made the green balls irrelevant to the selection. In the same way, if from the problem space of mathematical ideas we only choose consistent and minimal sets of ideas, and most forms of consistent minimality have certain attributes in common, we shouldn’t be surprised that we keep choosing ideas with those attributes – the majority of potential mathematical ideas are irrelevant to our selection purposes, and the rest are biased in particular ways.

                      The languages people claim are ‘discovered’ are merely very internally consistent. Consistency is valuable in and of itself, because it makes it easy to predict new behaviors from knowledge of a priori rules. But consistency is enough to vastly increase the likelihood of overlap with other consistent systems (like… all of mathematics).

                      1. 3

                        This is basically my position on the topic.

                        I can’t justify Platonism; it doesn’t seem sensible to me. But I do think that if we solve similar problems while filtering for simplicity, then we’re likely to solve them in similar ways.

                      2. 3

                        Other than the lambda calculus, Turing machines, and recursive functions being equivalent, what is also very important is that since then no non-equivalent alternative has been found. You can’t blame that on everyone having exchanged ideas: all kinds of mathematical inventions/discoveries were made by people whose coworkers disliked and disagreed with their conclusions as long as possible. The invention/discovery of complex numbers, quaternions, Gödel’s incompleteness theorems, cardinal numbers via Cantor’s diagonalization argument… the list is endless. It seems quite unlikely a non-equivalent alternative will still be found, independent of whether you consider it a discovery or an invention.

                      1. 2

                        The first link in the article points to a source that misrepresents facts. The link title is “Today, Google and Facebook control over 75% of all internet traffic.” The article pointed to confuses Internet traffic, Web traffic, and a statistic of referrals as measured by a set of online news publishers. That news article, in turn, points to a random blog on the Internet that is dubious in terms of its many claims, since there are no clear arguments separating fact from hypothesis.

                        1. 3

                          Further, the ones controlling the traffic are ISPs. They can literally disconnect people from the Internet. Google and Facebook are services that people voluntarily use, and which have alternatives.

                        1. 2

                          Really beautiful website. Very accessible and good use of multimedia. However, it is still only a document. The web is more than documents or non-fixed canvas sizes: it is also how entities are linked, and it allows for (partial) modification over time.

                          1. 1

                            I use Ipe, and I can recommend it! It runs on Linux and Windows, and allows export to EPS, which is easily included in LaTeX.

                            URL: http://ipe.otfried.org

                            1. 0

                              Fashion software

                              1. 4

                                I also applied this tactic at one of my first jobs. At my desk, a sheet of A3 paper lay beneath my keyboard, and I rested my arms on the thick paper. Every time someone asked me to do something, or something popped into my mind, I wrote it on the paper.

                                Eventually, there were so many tasks on the paper that I applied some backpressure. I discussed the relevant tasks, to decide what was still necessary, and crossed out all other tasks. Tasks that were completed were marked by a checkmark.

                                After a few months, I got a little archive of A3 papers. This really helped me get a sense of progress.

                                1. 2

                                  I have investigated this sentence: “The first author’s analysis of political metaphors in the Java libraries (Blackwell 2006) has already drawn attention to the fact that social and political concerns may be central to future research in the psychology of programming.”

                                  There is some confusion in the reference, but (2006a), “Metaphors we program by: Space, action and society in Java”, was a very interesting read. It did not conclude anything on politics, but did a literal corpus analysis of Javadoc texts. Many surprising concepts were revealed, but suggestions or convincing explanations were lacking. Examples:

                                  • Economy: “deliver”, (“return”, “import” were disqualified due to keywords of Java)
                                  • Law: “contract”, “forbidden”, “obliged”, “constraint”
                                  • Biological: “head”, “tail”
                                  • et cetera
                                  1. 4

                                    I really appreciate that the misuse of “enumeration” is pointed out. See previous thread:

                                    https://lobste.rs/s/ach1fc/openssh_username_enumeration#c_4gcwqw

                                    1. 1

                                      The idea of enumeration, in my mind, seems to mean: enumerate all possibilities and filter out elements that are not valid. This results in an enumeration of all valid elements.

                                      Some might interpret it to mean: enumerate only valid elements. This is equivalent. Consider the extreme case: if there are no valid usernames (i.e. fake SSH servers that disallow any username), then the method still enumerates all possibilities but never finds any true username. Vacuously, this enumerates all valid usernames.
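
                                      In code the two readings coincide. A tiny Haskell sketch (isValid is a hypothetical oracle standing in for the server’s response):

                                      -- Hypothetical oracle: would the server accept this username?
                                      isValid :: String -> Bool
                                      isValid u = u `elem` ["alice", "root"]

                                      -- Enumerate all candidates and keep the valid ones. With no valid
                                      -- usernames, this vacuously enumerates all valid usernames: [].
                                      enumerateValid :: [String] -> [String]
                                      enumerateValid = filter isValid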

                                    1. 33

                                      I don’t really think that you should be allowed to ask the users to sign a new EULA for security patches. You fucked up. People are being damaged by your fuck up, and you should not use that as leverage to make the users do what you want so they can stop your fuck up from damaging them further.

                                      Patches only count if they come with the same EULA as the original hardware/software/product.

                                      1. 9

                                        Sure - you’re welcome to refuse the EULA and take your processor back to the retailer, claiming it is faulty. When they refuse, file a claim in court.

                                        Freedom!

                                        1. 6

                                          This suggestion reminds me of the historical floating point division bug. See https://en.m.wikipedia.org/wiki/Pentium_FDIV_bug

                                          There was a debate about the mishandling by Intel. There was also debate over the “real-world impact”; estimates were all over the chart.

                                          Here, it seems that the impact is SO big that almost any user of the chip can demonstrate significant performance loss. This might become even bigger than the FDIV bug.

                                          1. 4

                                            They are being sued by over 30 groups (search for “Litigation related to Security Vulnerabilities”). It already is.

                                            As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief. The securities class action plaintiffs, who purport to represent classes of acquirers of Intel stock between July 27, 2017 and January 4, 2018, generally allege that Intel and certain officers violated securities laws by making statements about Intel’s products and internal controls that were revealed to be false or misleading by the disclosure of the security vulnerabilities […]

                                            As for replacing defective processors, I’d be shocked. They can handwave enough away with their microcode updates because the source is not publicly auditable.

                                            1. 1

                                              The defense could try to get the people who are discovering these vulnerabilities in on the process to review the fixes. They’d probably have to do it under some kind of NDA which itself might be negotiable given a court is involved. Otherwise, someone who is not actively doing CPU breaks but did before can look at it. If it’s crap, they can say so citing independent evidence of why. If it’s not, they can say that, too. Best case is they even have an exploit for it to go with their claim.

                                        2. 4

                                          I don’t really think that you should be allowed to ask the users to sign a new EULA for security patches.

                                          A variation of this argument goes that security issues should be backported or patched without also including new features. It is not a new or resolved issue.

                                          Patches only count if they come with the same EULA as the original hardware/software/product.

                                          What is different here is that this microcode update also requires operating system patches and possibly firmware updates. Further, not everyone considers the performance trade-off worth it: there is a class of users for whom this is not a security issue. Aggravating matters, there are OEMs that must be involved in order to patch, or explicitly fail to patch, this issue. Intel had to coordinate all of this under embargo.

                                          1. 2

                                            This reminds me of HP issuing a “security” update for printers that actually caused the printer to reject any third-party ink. Disgusting.

                                            1. 2

                                              I had not considered the case where manufacturers and end-users have different and divergent security needs.

                                              1. 2

                                                It’s worth thinking about more broadly, since it’s the second-largest driver of insecurity, demand being the first.

                                                The easiest example is mobile phones. The revenue stream comes almost entirely from sales of new phones. So, they want to put their value proposition and efforts into the newest phones. They also want to keep costs as low as they can legally get away with. Securing older phones, even patching them, is an extra expense or just an activity that doesn’t drive new phone sales. It might even slow them. So, they stop doing security updates on phones fairly quickly as extra incentive for people to buy new phones, which helps CEOs hit their sales goals.

                                                The earliest form I know of was software companies intentionally making broken software when they could have spent a little more to make it better. Although I thought CTOs were being suckers, Roger Schell (co-founder of INFOSEC) found out otherwise when meeting a diverse array of them under the Black Forest Group. When he evangelized high-assurance systems, the CTOs told him they believed they’d never be able to buy them from the private sector even though they were interested in them. They elaborated that they believed computer manufacturers and software suppliers were intentionally keeping quality low to force them to buy support and future product releases. Put or leave bugs in on purpose now, get paid again later to take them out, and force new features in for lock-in.

                                                They hit the nail on the head. The biggest examples are IBM, Microsoft, and Oracle. Companies are keeping defects in products in every unregulated sub-field of IT to this day. It should be the default assumption, with the default mitigation being open APIs and data formats so one can switch vendors upon encountering a malicious one.

                                                EDIT: Come to think of it, the hosting industry does the same stuff. The sites, VPSes, and dedicated servers cost money to operate in a highly competitive space. Assuming they aren’t loss-leaders, I bet profitability on the $5-10 VMs might get down to nickels or quarters rather than dollars. There have been products on the market touting strong security, like LynxSecure with Linux VMs. The last time I saw the price of separation kernels with networking and filesystems, it was maybe $50,000. Some supplier might take that per organization per year just to get more business. They all heavily promote the stuff. Yet almost all hosts use KVM or Xen. Aside from features, I bet the fact that they’re free, with commoditized support and training, factors into that a lot. Every dollar in initial profit you make on your VMs or servers can further feed into the business’s growth or workers’ pay. Most hosts won’t pay even a few grand for a VMM with open solutions available, much less $50,000. They’ll also trade features against security, like management advantages and an ecosystem of popular solutions. I’m not saying any of these are bad choices given how the demand side works: just that the business model incentivizes against the security-focused solutions that currently exist.

                                          2. 1

                                            I think you have to be presented with the EULA before purchase for it to be valid anyway.

                                          1. 8

                                            I wrote a small paper for class on this, actually! https://bernsteinbear.com/dat-paper/

                                            1. 3

                                                Good technical overview. Identifying WordPress (25% of the web) as a potential area for growth is an interesting suggestion.

                                              1. 1

                                                Thanks! Glad you thought so. I figured it would be a good starting point since many WordPress blogs use minimal dynamic content that cannot be expressed as a static website.

                                            1. 3

                                                Very interesting paper, especially the appendices on computing/recognizing by extending derivatives of regular languages to derivatives of context-free languages. Afterwards, a comment was made regarding intersection types: context-free languages are not closed under intersection, but it is possible to add an intersection type to the type system, and all their theorems remain true. This increases the expressivity of the grammar beyond that of context-free languages.
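
                                                For readers who want to play with the idea, the regular-language half is small. A minimal Haskell sketch of Brzozowski derivatives (the paper’s contribution is extending this to context-free grammars, which needs more machinery):

                                                data Re = Empty | Eps | Chr Char | Alt Re Re | Seq Re Re | Star Re

                                                -- Does the language of r contain the empty word?
                                                nullable :: Re -> Bool
                                                nullable Empty     = False
                                                nullable Eps       = True
                                                nullable (Chr _)   = False
                                                nullable (Alt a b) = nullable a || nullable b
                                                nullable (Seq a b) = nullable a && nullable b
                                                nullable (Star _)  = True

                                                -- deriv c r recognizes { w | c:w is in the language of r }.
                                                deriv :: Char -> Re -> Re
                                                deriv _ Empty     = Empty
                                                deriv _ Eps       = Empty
                                                deriv c (Chr d)   = if c == d then Eps else Empty
                                                deriv c (Alt a b) = Alt (deriv c a) (deriv c b)
                                                deriv c (Seq a b)
                                                  | nullable a    = Alt (Seq (deriv c a) b) (deriv c b)
                                                  | otherwise     = Seq (deriv c a) b
                                                deriv c (Star a)  = Seq (deriv c a) (Star a)

                                                matches :: Re -> String -> Bool
                                                matches r = nullable . foldl (flip deriv) r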

                                              1. 2

                                                Gödel’s Theorem (aka Gödel’s First Theorem) says that any sufficiently powerful formal system is either incomplete or inconsistent — in essence, either it can’t prove everything that’s true, or it can prove things that aren’t true.

                                                This is a conceptual mistake. For starters, the notion of “everything that’s true” isn’t well-defined. Are we talking about truth in a theory, or truth in some specific model?

                                                As furia-krucha said in a comment, Gödel’s completeness theorem says that a statement in a first-order theory is provable (i.e. a theorem) iff it holds in every model of the theory. For example, if a statement in the language of groups is neither provable nor refutable, there exist groups where the statement holds, as well as groups where the statement doesn’t hold.

                                                Gödel’s first incompleteness theorem then says that every effectively axiomatized first-order theory (i.e., whose theorems can be enumerated) contains statements that are neither provable nor refutable. It says nothing about the “actual truth” of said statements. They are true in some models, false in others. Heck, they remain neither provable nor refutable in some models!

                                                1. 1

                                                  Gödel’s first incompleteness theorem then says that every effectively axiomatized first-order theory (i.e., whose theorems can be enumerated) contains statements that are neither provable nor refutable. It says nothing about the “actual truth” of said statements. They are true in some models, false in others. Heck, they remain neither provable nor refutable in some models!

                                                  I am considering this: a theory is defined to be the set of statements that are provable by the axioms and deduction rules of said theory. So “a first-order theory contains statements that are neither provable nor refutable,” makes no sense to me.

                                                  What do you mean by “actual truth”? A theory is merely a language; within it we say things, which we are supposed to believe to be true. If a theory is unsound, then you can say something in the theory that is not “actually true”, and conversely if a theory is sound then every expression that can be proved must be “actually true”. So, a theory says at least something about “actual truth” of statements, namely that if we believe a theory to be sound then we believe every provable statement to be true.

                                                  Sometimes you have reasons to believe a theory to be unsound, namely if you can find one counter-example of a statement that is provable, but which you know to be false.

                                                    “Heck, they remain neither provable nor refutable in some models!” I don’t understand you here; how can you prove something in a model, if a model is just a way of expressing what is true, viz. a semantics? Models don’t necessarily have an associated proof theory; they are simply constructions which we believe characterize certain real-world elements. Only for sound and complete axiomatizations is it the case that all models correspond to statements in the (sound and complete) proof theory. But Gödel here argues that no such complete proof theory exists for sufficiently powerful axiomatizations.

                                                  Coming back to the original quote: “in essence, either it can’t prove everything that’s true, or it can prove things that aren’t true.”

                                                    I believe that this is correct. I read “it can’t prove everything that’s true” as simply meaning “there exists something that is true which it cannot prove”, as a counter-example to the universal quantification, and to me that is incompleteness.

                                                    As his theorem is a basic diagonalization theorem, I expect this to be of equal value to: “a Turing machine either can’t prove that it halts, or it has incorrect proofs,” which is basically the halting theorem: it is undecidable by a Turing machine whether some other Turing machine halts on a given input.

                                                  Do you agree or is this just nonsense?

                                                  1. 2

                                                    I am considering this: a theory is defined to be the set of statements that are provable by the axioms and deduction rules of said theory. So “a first-order theory contains statements that are neither provable nor refutable,” makes no sense to me.

                                                    Errr, sorry. What I mean is: You have a theory T defined using a language L. If T is both consistent and sufficiently powerful, then L must contain statements that are neither provable nor refutable in T.

                                                    What do you mean by “actual truth”? A theory is merely a language; within it we say things, which we are supposed to believe to be true.

                                                    I’m not supposed to “believe” anything. I’m supposed to use it to prove things. If I’m given an inconsistent theory, I can prove contradictory statements, and that’s all there is to it.

                                                    If a theory is unsound, then you can say something in the theory that is not “actually true”, and conversely if a theory is sound then every expression that can be proved must be “actually true”.

                                                      I disagree with the notion of “actual truth” to begin with, at least as something mathematics should be concerned with. Provability offers a compelling, very concrete alternative, as it can be computably verified, if we are careful about how we define our theories. Truth is best left to philosophers.

                                                    Models don’t necessarily have an associated proof theory; they are simply constructions which we believe characterize certain real-world elements.

                                                    Whether there exist models of mathematical theories “in the real world” doesn’t really concern me. For me, a model must itself be a mathematical object, for example, a group is a model of the theory of a group, and a category of group objects is a model of the theory of groups (as entities that interact with one another).

                                                      I believe that this is correct. I read “it can’t prove everything that’s true” as simply meaning “there exists something that is true which it cannot prove”, as a counter-example to the universal quantification, and to me that is incompleteness.

                                                    Why do you assert that unprovable statements in a theory are true, if there exist models of the theory in which they are false?

                                                    1. 2

                                                      I’m not supposed to “believe” anything. I’m supposed to use it to prove things. If I’m given an inconsistent theory, I can prove contradictory statements, and that’s all there is to it.

                                                      I agree with you, in principle. However, I also consider that it is in general undecidable whether a given first-order theory is consistent. Therefore, if you give me a random theory, I have no other choice than to believe.

                                                        Provability offers a compelling, very concrete alternative, as it can be computably verified, if we are careful about how we define our theories. Truth is best left to philosophers.

                                                        Very good point; some logicians also prefer provability over truth. But we simply shift our focus here, from “what is true in the real world” to “what can be computed in the real world”, both of which are still left to philosophers: simply stated, computability still rests on the assumption of the Church-Turing thesis.

                                                        What you say about verifiability is also quite subtle: the “carefulness” in the definition of theories probably means that we need a decidable theory. Hence, theories that are not “sufficiently powerful”.

                                                        Sorry for my vague meaning, but with “real-world elements” I do, indeed, also mean mathematical constructions. However, it also includes the objects for which mathematics does not yet have adequate characterizations.

                                                      Why do you assert that unprovable statements in a theory are true, if there exist models of the theory in which they are false?

                                                      Not sure. What is a model of a theory? To me that sounds like a proof object. For example, groups are a model of group theory, and groups themselves are proof terms of said theory. If your definition of “truth” is exactly the same as your theory, then, indeed, you are right: in terms of programming it means “all its bugs are its features.”

                                                        But to me that feels not right: a theory is true until you encounter a sentence which it proves but with which you do not agree. Similarly, Brouwer felt that “the law of excluded middle” is somehow not “actually true,” and started developing Constructive Logic. There are similar reasons for the existence of Linear Logic, which basically amounts to not believing that you can apply a certain term (think: the proof of some lemma) indefinitely often.

                                                      I think another approach of viewing Gödel’s incompleteness theorem is as follows:

                                                      We know that certain languages, say Peano arithmetic, can “realize” certain theories. That means, within the axiomatic system of Peano arithmetic, we can encode a proof theory that expresses certain truths. This “embedding” is also known from constructive first-order logic, where we can perform double negation translation of any classical sentence and find a constructive sentence that is true if and only if the classical sentence is true.

                                                        Now consider a system that is self-embeddable, that is: we know that every statement can be encoded at the object level of the language, and we can prove that this is a correct realization. Now we analyse a statement along the lines of the Liar paradox: “I am not provable”. If the statement is true and the system is complete, it means that there exists a proof. We now encode the proof at the object level, i.e. by realizing the proof within the system itself; call this the number G. Then we construct a new statement: “G is not provable”. Clearly this is not true, since as a counter-example we can point to the proof G itself. However, “I am not provable” is true, but “G is not provable” is false: contradiction, hence the system cannot be complete.

                                                        And this last paragraph is the “basic diagonalization” which I mentioned last time: feeding the system into itself. It is the same trick used by Cantor to show that the Reals are larger than the Rationals. If you think Gödel’s Incompleteness Theorem is false, then you must also believe that the Reals are the same size as the Rationals!
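
                                                        For reference, the construction above is usually packaged as the diagonal lemma. A compact sketch in standard notation, eliding the Σ1-completeness and ω-consistency details:

                                                        \text{Diagonal lemma: for every formula } \varphi(x) \text{ there is a sentence } G \text{ with } T \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner).
                                                        \text{Taking } \varphi(x) := \neg\mathrm{Prov}(x) \text{ gives } T \vdash G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner).
                                                        \text{If } T \vdash G, \text{ then } T \vdash \mathrm{Prov}(\ulcorner G \urcorner) \text{ and } T \vdash \neg\mathrm{Prov}(\ulcorner G \urcorner), \text{ contradicting consistency.}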

                                                      1. 1

                                                        Therefore, if you give me a random theory, I have no other choice than to believe.

                                                        You don’t need to “believe” in a theory to use it.

                                                          But to me that feels not right: a theory is true until you encounter a sentence which it proves but with which you do not agree.

                                                        It isn’t clear to me what you mean by a theory being “true”. For me, all that a theory can aspire to be is “useful”, and that ultimately depends on your purposes. Programming has uses for inconsistent theories (such as most type systems, when viewed as logics), and I wouldn’t be surprised if so did physics.

                                                        We know that certain languages, say Peano arithmetic, can “realize” certain theories. That means, within the axiomatic system of Peano arithmetic, we can encode a proof theory that expresses certain truths.

                                                        I’m aware. In fact, in my original comment, I was going to say something along the lines of “Arithmetic is very powerful. If mathematical induction is available to you, you can define more general inductive structures, such as the syntax and (proof-theoretic) semantics of first-order logic…”, but then I forgot how I was going to relate that to the original topic, so I dropped it.

                                                          If you think Gödel’s Incompleteness Theorem is false, then you must also believe that the Reals are the same size as the Rationals!

                                                        Of course, I don’t think Gödel’s incompleteness theorem is false. I just disagree that you can extract any conclusions about “truth” from it, because “truth” is extra-mathematical.

                                                      2. 1

                                                        Why do you assert that unprovable statements in a theory are true, if there exist models of the theory in which they are false?

                                                          I see now what you mean by “conceptual mistake”. By soundness and completeness, a statement is provable if and only if the statement is true in every model. If a statement is unprovable, there exists some model in which the statement is false. So it cannot be the case that an unprovable statement is true in every model.

                                                        I think both the author of the original post and I have mixed up the two notions of completeness here:

                                                        • incomplete: there are statements that are neither provable nor refutable.
                                                        • completeness: if a statement is true in every model, there exists a proof of the same statement.
                                                    2. 1

                                                      Just reading this topic from a while back. Thank you for this discussion, really wonderful to read back!

                                                    1. 8

                                                      Thanks for this amazing project!

                                                      If I may, I’d like to gently complain about this sentence:

                                                      Provably correct code is code that you can totally guarantee does what you say it does.

                                                      Thinking about total guarantees in this way confused me about proving code for a long time. Whenever I saw a “proven” piece of software, I could immediately think of several possible ways for it to nevertheless be wrong, so it was unclear how the “proof” was qualitatively different from other verification strategies.

                                                      Maybe a more accurate description could be:

                                                      Provably correct code is code that has been rigorously shown to implement a thorough specification using proof techniques from mathematical and logical praxis.

                                                      1. 5

                                                          Yeah, I’m against the use of the word guarantee or anything absolute like that. It hurt the formal methods field in the past when they were overpromising. Maybe something like: “Provably correct code is mathematically proven to implement its intended behavior. If the specification is correct, code built this way usually has few to no errors in practice.”

                                                        Something along those lines. Maybe even with a link to CompCert’s testing results or something like that to illustrate the point.

                                                        1. 4

                                                            Correctness is a relation between the input and output of a program. We say a program is correct if, for every possible input, an execution of the program eventually terminates with the expected output.

                                                          The correctness relation is called functional if for every input only a single output is related. That is, each input uniquely determines an output.

                                                          Not all correctness relations have a correct program. An input-output relation for which no correct program exists is called undecidable. On the other hand, some correctness relations have multiple correct programs.

                                                          Typically, a programming language itself is already a specification language that allows one to establish an input-output relation in a rigorous manner. Sometimes one wishes to give a functional/relational specification before implementing a program, typically written down using mathematical notation.

                                                          Recent tools allow us to express functional/relational specifications on the computer too:

                                                            1. “Abstract specification:” specify the input-output relation in a more expressive logic. This logic might not have a direct computational or algorithmic interpretation, and it typically has many undecidable properties.
                                                            2. “Concrete implementation:” the specification in a target programming language. We also need a termination proof of the program.
                                                            3. “Refinement:” showing that each input-output pair of the concrete implementation is also in the abstract specification (soundness), and each input-output pair of the abstract specification is also among the input-output pairs of the concrete implementation (completeness). A toy illustration follows below.

                                                          This is, of course, a simplification. Typically, the system one wishes to formalize is not functional, and thus requires more effort in abstract specification and refinement.

                                                          The words “rigorous” and “thorough” are not needed.
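
                                                            A toy Haskell illustration of the three pieces (all names are mine; sorting plays the abstract relation, insertion sort the concrete program):

                                                            import Data.List (permutations)

                                                            -- 1. Abstract specification: the output is an ordered permutation of the input.
                                                            specSort :: [Int] -> [Int] -> Bool
                                                            specSort xs ys = ys `elem` permutations xs
                                                                          && and (zipWith (<=) ys (drop 1 ys))

                                                            -- 2. Concrete implementation: insertion sort, which clearly terminates.
                                                            isort :: [Int] -> [Int]
                                                            isort = foldr ins []
                                                              where ins x []                 = [x]
                                                                    ins x (y:ys) | x <= y    = x : y : ys
                                                                                 | otherwise = y : ins x ys

                                                            -- 3. Refinement, checked per input: each input-output pair of isort is
                                                            --    in the specification. A proof would cover all inputs at once.
                                                            refines :: [Int] -> Bool
                                                            refines xs = specSort xs (isort xs)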

                                                          1. 3

                                                              There’s a difference between establishing all that in math and it actually being true for the real code. It requires your specification to be correct, along with any specs/logics it depends on, the proofs, the checker, usually a parser, usually some I/O, the hardware, and so on. In this case, some of them needed automated solvers to be correct, which are pretty complicated. I’m not sure that all of them have witness certificates, or whatever they’re called, yet. To be fair to proof engineers, many of them mention all of the above in the section of their papers describing assumptions and/or the TCB.

                                                            So, mbrock’s point that calling it a guarantee is too much seems to stand. I mean, even the seL4 people didn’t call it a total guarantee. They said the code should follow the spec assuming (long list here) isn’t wrong or broken.

                                                            1. 3

                                                                I agree with your points. A proof is worth as much as you want it to be. See the DeepSpec project, which tries to obtain proofs for a pragmatic computer: hardware specifications, OS specifications, prover specifications, et cetera. However, even this does not guarantee anything absolute.

                                                                My point is still that “rigorously shown” is vague: when is a method that shows correctness rigorous, and when is it not? “Thorough specification” is vague: when is a specification thorough, and when not? Both are matters of trust, it seems to me.

                                                              “But there is a difference between showing that in math and it really being true for code.” Well, if one mathematically deduces a property of some code, and it turns out to be really false, one should start to doubt the foundations or mechanism by which the deduction took place. In principle, any deduction should be true for the real code.

                                                              Although in some logics, not all true properties can be deduced (incompleteness), it does not mean that we should accept logics in which false properties can be deduced (unsoundness).

                                                              Or did I misunderstand what you mean by “the real code”?

                                                              1. 1

                                                                  “My point is still that ‘rigorously shown’ is vague”

                                                                  Ok. That’s a decent point. It is kind of vague. It does have a subjective meaning that a lot of people already understand. They think that a “rigorous” analysis, testing, or whatever phase means piles of effort were put in versus a non-rigorous approach. They expect to get significantly more results or fewer problems. The fact that people rarely use that word for software QA might make that perception sink in even more.

                                                                I think I mainly keep using it for the psychological impact on people that might not understand the formal terminology well. I could tell them it relates to the specification. Yet, they might think UML when I say that. So, personally, I use a mix of precise terms from the field and terms that seem to convey understanding to those outside of it. I have no proof this term is better: by all means experiment to see if it works in your conversations or find a better way to say it. Talking rigorous vs typical QA has just worked for me. From there, I might say low, medium, or high rigor, confidence, and so on. Although the field is called assurance, I’m thinking of ditching that word, too, since confidence clicks in more people’s heads. I’ve been seeing some people talking about “high-confidence” software or techniques that “increase confidence that software works as intended.” Robustness and resilience are the other two terms many are using.

                                                        1. 2

                                                            The message posted about how the sign-up page contains a bug was the reason I also tried to sign up. After three attempts, each warping me back to the homepage, I gave up. How ironic.

                                                          1. 1

                                                            When the author mentions OO:

                                                            There’s a very specific poor design at the core of the whole, well, not concept in the abstract, but ideology as practiced.

                                                              I can’t resist refusing the notion of an OO ideology. What does “ideology” even mean here? Let me guess:

                                                              If one tries to understand the theory behind OO, and fails, one applies some of the ideas in an apparently chaotic way that resembles it.

                                                              But this, in my opinion, happens with most things one tries to learn. No blame to the student, for he is only interested in learning more!

                                                            Let me close with a poem:

                                                            Object-orientation without subclasses is like functions without arguments.

                                                            1. 4

                                                              Inheritance is not important for OO programming at all. It’s about substitutability, and if done the way it was originally intended, message passing.

                                                              1. 2

                                                                  Inheritance and subclassing are different. In inheritance, one imposes a constraint declaratively (say, “these objects must be a subclass of those other objects”), and that can be done, often without any verification whatsoever. Subclassing, in contrast, just exists, even if one does not explicitly implement it. Consider:

                                                                  The expected object was Y; now we substitute some object X. This substitution only makes sense when X is compatible with Y.

                                                                There exists a compatibility relation between Xs and Ys, let’s call it the “compatible” relation.

                                                                If X is compatible with Y, and Y is compatible with Z, then X is compatible with Z. Every X is compatible with itself. There exists the most compatible object (say, nil, that can always be substituted). There exists the least compatible object (say, the empty object, which is compatible only with itself).

                                                                  Just from substitution, we arrive at subclassing: our “compatible” relation now takes the place of subclassing. This does not have to be checked by a compiler; it could equally persist only in the programmers’ minds.

                                                                Consider how subclassing works differently for: languages with inheritance declarations, algebraically typed languages (cf. coercions), un(i)typed languages.

                                                                One can also argue that the notion of “compatibility” only makes sense whenever one works with errors. Again, errors can be kept implicit or explicitly declared as exceptions. Without errors, every object could be substituted for another, and everything is compatible!

                                                                Why do I call them classes of objects and not just sets of objects? Technical, mathematical terminology: objects are non-well-founded, non-hereditary, graph-like constructions, while most people assume set theory to be well-founded, hereditary, tree-like.
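
                                                                  A hedged Haskell sketch of this “compatibility is a preorder” idea, modelling an object by the set of messages it answers (the names are mine):

                                                                  import qualified Data.Set as S

                                                                  type Shape = S.Set String   -- messages an object responds to

                                                                  -- x is compatible with y when x answers everything y answers (width
                                                                  -- subtyping). Subset inclusion is reflexive and transitive, so
                                                                  -- "compatible" is a preorder, as described above.
                                                                  compatible :: Shape -> Shape -> Bool
                                                                  compatible x y = y `S.isSubsetOf` x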

                                                            1. 1

                                                                Suggesting that Haskell has no null is doubtful. A diverging computation, e.g. null = 1 + null, is typable at, say, Integer, but admits no value. So, there you go: another bottom value that fits in every type.
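
                                                                Concretely (a sketch; renamed to avoid clashing with the Prelude’s null):

                                                                null' :: Integer
                                                                null' = 1 + null'   -- typechecks, but any attempt to evaluate it diverges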

                                                              1. 6

                                                                The nice thing about bottom, compared to null, is that it’s not observable. In other words, we can’t do something like:

                                                                myValue = if isBottom myOtherValue
                                                                             then thingA
                                                                             else thingB
                                                                

                                                                In this sense, bottom acts more like a thrown exception (control flow) than a null (data value); although it looks like a null since laziness combines data flow with control flow.

                                                                  In a sense, this more faithfully represents an “unknown” value, since attempting to figure out which value it is also ends up “unknown”. In contrast, null values are distinct from other values. For example, if I don’t know the value of a particular int, the answer to “is it 42?” should be “I don’t know”, not “no”. Three-valued logic attempts to act like this, but is opt-in.
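
                                                                  Haskell’s Maybe gives an opt-in version of exactly this. A small sketch:

                                                                  -- "Is it 42?" with an explicit unknown:
                                                                  isFortyTwo :: Maybe Int -> Maybe Bool
                                                                  isFortyTwo = fmap (== 42)

                                                                  -- isFortyTwo Nothing  == Nothing    ("I don't know")
                                                                  -- isFortyTwo (Just 5) == Just False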

                                                                1. 1

                                                                  Please could you explain how “laziness combines data flow with control flow”?

                                                                  1. 2

                                                                    Just the idea that evaluation proceeds based on the definition of data, rather than separate control mechanisms, e.g. we can write ifThenElse as a normal function:

                                                                    ifThenElse True  x y = x
                                                                    ifThenElse False x y = y
                                                                    
                                                                    1. 2

                                                                      Oh, I see. Thanks. I was thinking that the usual C-style control mechanisms, e.g. if and for, still kind of entangle control flow and data flow, albeit with special syntax rather than just functions. I wonder if it is possible to disentangle this? What would that look like?