1. 4

    I’ll admit that I continue to be surprised when someone who learned text-based programming claims that visual programming is bad, and a whole audience of text-based programmers nod their head sagely “yes, yes, of course!”

    Let’s take the author’s points:

    Textual programming languages obfuscate what is essentially a simple process

    It’s unlikely that they think this. Instead, they think “seeing the relationships between different tasks is easier and more approachable if visualized instead of in text”. And that’s both true and hugely helpful in situations where you’re working with connective code rather than constructive code. And that’s why so many of the visual programming languages in use today are domain-specific, handling ETL, data flow (including visual and audio processing), automation, etc.

    Abstraction and decoupling play a small and peripheral part in programming

    I mean, they do. Hugely important parts, but a rare task in the scheme of things. Most programming in the real world is detail work, not abstract work.

    The tools that have been developed to support programming are unimportant

    Again, few, if any, think this. Instead, they think that lack of tooling shouldn’t stand in the way of innovation.

    1. 2

      I mean, they do. Hugely important parts, but a rare task in the scheme of things. Most programming in the real world is detail work, not abstract work.

      Abstraction, not abstract work. The point is that most programming work is not about shifting blocks around, but rather about changing the boundaries of the blocks, changing what is “detail” versus “abstract” in this context.

    1. 3

      Wrap on integer overflow is a good idea, because it will get rid of one source of undefined behavior in C.

      Undefined behavior is evil. One evil it causes is that it makes code optimization-unstable. That is, something can work in a debug build but not in a release build, which is very undesirable. The article does not address this point at all.

      1. 1

        The article does not address this point at all.

        To remove all undefined behaviour in C would severely impact the performance of C programs.

        The post does suggest that trap-on-overflow is a superior alternative to wrap-on-overflow, and of course you could apply it to both debug and release builds if this optimisation instability concerns you. Even if you apply it just to your debug build, you’ll at least avoid the possibility that something works in the debug build but not in the release.

        1. 2

          Note that Rust wraps on overflow and as far as I can tell this does not impact performance of Rust programs.

          1. 1

            This is essentially the same as one of the arguments I addressed in the post. In certain types of programs, particularly in lower-level languages (like C and Rust) where the code is written directly by a human programmer, there probably are not going to be many cases where the optimisations enabled by having overflow be undefined will kick in. However, if the program makes heavy use of templates or generated code, or is produced by transpiling a higher-level language with a naive transpiler, then it could (I’ll concede this is theoretical in that I can’t really give a concrete example). The mechanism by which the optimisation works is well understood, and it isn’t too difficult to produce an artificial example where the optimisation grants a significant speed-up in the generated code.
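
            To illustrate, here is a contrived sketch of my own (not from the post): on 64-bit targets, undefined signed overflow lets the compiler widen an int loop counter, avoiding a per-iteration sign-extension in the address arithmetic.

                /* Contrived sketch: because signed overflow is undefined, the
                   compiler may assume `i += 2` never wraps past INT_MAX, promote
                   `i` to a 64-bit induction variable, and drop the sign-extension
                   otherwise needed for `a + i` on each iteration. With wrapping
                   semantics (-fwrapv) it must allow for `i` going negative. */
                long sum_even(const int *a, int n) {
                    long s = 0;
                    for (int i = 0; i < n; i += 2)
                        s += a[i];
                    return s;
                }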

            Also, in the case of Rust programs, you can’t really reliably assess the impact of wrapping on overflow unless there is an option to make overflow undefined behaviour. Is there such an option in Rust?

            1. 2

              No, there is no such option, and there never will be. Rust abhors undefined behaviors. The performance impact assessment I had in mind was a comparison with C++ code.

              On the other hand, rustc is built on LLVM so it is rather trivial to implement: rustc calls LLVMBuildAdd in exactly one place. One can replace it with LLVMBuildNSWAdd (NSW stands for No Signed Wrap).
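
              For the curious, the LLVM-C calls in question look roughly like this (a sketch of the API usage, not rustc’s actual source):

                  #include <llvm-c/Core.h>

                  /* LLVMBuildAdd emits a plain wrapping add; LLVMBuildNSWAdd emits
                     an add carrying the "nsw" (No Signed Wrap) flag, which makes
                     signed overflow undefined and thus visible to the optimiser. */
                  LLVMValueRef emit_add(LLVMBuilderRef b, LLVMValueRef lhs,
                                        LLVMValueRef rhs) {
                      return LLVMBuildAdd(b, lhs, rhs, "sum");
                   /* return LLVMBuildNSWAdd(b, lhs, rhs, "sum");  the one-line swap */
                  }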

          2. 0

            To remove all undefined behaviour in C would severely impact the performance of C programs.

            This cannot be entirely true. As a reductio ad absurdum, it would be possible in principle to laboriously define all the things that compilers currently do with undefined behaviour and make that the new definition of the behaviour, and there would then be zero performance impact.

            C compiler writers might argue that removing all undefined behaviour without compromising performance would be prohibitively expensive, but I’m not entirely convinced; there are carefully-optimized microbenchmarks on which the naive way of defining currently-undefined behaviour produces a noticeable performance degradation, but I don’t think it’s been shown that that generalises to realistic programs or to a compiler that was willing to put a bit more effort in.

            1. 2

              This cannot be entirely true. As a reductio ad absurdum, it would be possible in principle to laboriously define all the things that compilers currently do with undefined behaviour and make that the new definition of the behaviour, and there would then be zero performance impact.

              Clearly that would be absurd, and it’s certainly not what I meant by “remove all undefined behaviour”. Your “possible in principle” suggestion is practically speaking completely impossible, and what I said was true if you don’t take such a ridiculously liberal interpretation of it. Let’s not play word games here.

              C compiler writers might argue that removing all undefined behaviour without compromising performance would be prohibitively expensive

              They might, but that’s not what I argued in the post.

              I don’t think it’s been shown that that generalises to realistic programs or to a compiler that was willing to put a bit more effort in.

              There’s no strong evidence that it does, nor that it wouldn’t ever do so.

              1. 1

                Well then, what did you mean? You said “To remove all undefined behaviour in C would severely impact the performance of C programs.” I don’t think that’s been demonstrated. I’m not trying to play word games, I’m trying to understand what you meant.

                1. 1

                  I meant “remove all undefined behaviour” in the sense and context of this discussion, in particular in relation to what @sanxiyn says above:

                  Undefined behavior is evil. One evil it causes is that it makes code optimization-unstable. That is, something can work in a debug build but not in a release build, which is very undesirable

                  To avoid that problem, you need to define specific behaviours for the cases that currently have undefined behaviour (not ranges of possible behaviour that could change depending on optimisation level). To do so would significantly affect performance, as I said.

                  1. 1

                    I could believe that removing all sources of differing behaviour between debug and release builds would significantly affect performance (though even then, I’d want to see the claim demonstrated). But even defining undefined behaviour to have the current range of behaviour would be a significant improvement, as it would “stop the bleeding”: one of the insidious aspects of undefined behaviour is that the range of possible impacts keeps expanding with new compiler versions.

                    1. 2

                      I could believe that removing all sources of differing behaviour between debug and release builds would significantly affect performance

                      It’s not just about removing the sources of differing behaviour - but doing so with sensibly-defined semantics.

                      though even then, I’d want to see the claim demonstrated

                      A demonstration can only show the result of applying one set of chosen semantics to some particular finite set of programs. What I can do is point out that C has pointer arithmetic and this is one source of undefined behaviour; what happens if I take a pointer to some variable and add some arbitrary amount, then store a value through it? What if doing so happens to overwrite part of the machine code that makes up the program? Do you really suppose it is possible to practically define what the behaviour should be in this case, such that the observable behaviour will always be the same when the program is compiled with slightly different optimisation options - which might result in the problematic store being to a different part of code? To fully constrain the behaviour, you’d need pointer bounds checking or similar - and that would certainly have a performance cost.
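
                      A minimal hypothetical sketch of the scenario just described:

                          int x;

                          void smash(long offset) {
                              int *p = &x + offset;   /* arbitrary pointer arithmetic: undefined */
                              *p = 42;                /* what this clobbers depends on code layout,
                                                         which shifts with optimisation options */
                          }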

                      But even defining undefined behaviour to have the current range of behaviour would be a significant improvement, as it would “stop the bleeding”

                      As I’ve tried to point out with the example above, the current range of undefined behaviour is already unconstrained. But for some particular cases of undefined behaviour, I agree that it would be better to have more restricted semantics. For integer overflow, in particular, I think it could reasonably be specified that the result becomes unstable (eg. behaves inconsistently in comparisons), but that the behaviour is otherwise defined. Note that even this would impede some potential optimisations. (And note that I still advocate trap-on-overflow as the preferred implementation.)

              2. 2

                I suspect that one issue is that compilers may manifest different runtime behaviour for undefined behaviour, depending on what specific code the compiler decided to generate for a particular source sequence. In theory you could document this with sufficient effort, but the documentation would not necessarily be useful; it would wind up saying ‘the generated code may do any of the following depending on factors beyond your useful control’.

                (A canonical case of ‘your results depend on how the compiler generates code’, although I don’t know if it depends on undefined behaviour, is x86 floating point calculations, where your calculations may be performed with extra precision depending on whether the compiler kept everything in 80-bit FPU registers, spilled some to memory (clipping to 64 bits or less), or used SSE (which is always 64-bit max).)

                1. 1

                  It’s not only possible: I’ve seen formal semantics that define various undefined behaviors just as you describe. People writing C compilers could definitely do it if they wanted to.

            1. 2

              For the trivial case where wrapping behaviour does allow simply detecting overflow after it occurs, it is also straightforward to determine whether overflow would occur, before it actually does so. The example above can be rewritten as follows:

              If it’s really so trivial then why not have the compiler do that rewrite? Given that the “wrong” version is valid language syntax, programmers will write it and compilers will have to compile it; no amount of encouraging programmers to rewrite gets us away from having to decide what the compiler should do when fed the “wrong” code.
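
              For concreteness, the two idioms at issue look something like this (my own sketch; the post’s actual example isn’t quoted above):

                  #include <limits.h>

                  /* Post-overflow check: relies on wrapping, which is undefined for
                     signed int, so the compiler may fold the test away entirely. */
                  int add_checked_wrong(int a, int b) {
                      int r = a + b;
                      if (b > 0 && r < a) return -1;   /* "cannot happen" under UB rules */
                      return r;
                  }

                  /* Pre-overflow check: tests before overflowing; always defined. */
                  int add_checked_right(int a, int b) {
                      if (b > 0 ? a > INT_MAX - b : a < INT_MIN - b) return -1;
                      return a + b;
                  }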

              An obvious mitigation for the problem of programmers expecting this particular behaviour is for the compiler to issue a warning when it optimises based on the alternative undefined-behaviour-is-assumed-not-to-occur semantics.

              Building for maximum performance and warning about a correctness violation seems like the wrong priority. Why not build the code to behave in the way that you’re sure matches the programmer’s intent and warn about the missed optimisation opportunity?

              Also, even without overflow check elimination, it is not necessarily correct to assume that wrapping integers has minimal direct cost even on machines which use 2’s complement representation. The MIPS architecture, for example, can perform arithmetic operations only in registers, which are a fixed size (32 bits). A “short int” is generally 16 bits and a “char” is 8 bits; if assigned to a register, the underlying width of a variable with one of these types will expand, and forcing it to wrap according to the limit of the declared type would require at least one additional operation and possibly the use of an additional register (to contain an appropriate bitmask). I have to admit that it’s been a while since I’ve had exposure to any MIPS code and so I’m a little fuzzy on the precise cost involved, but I’m certain it is non-zero, and other RISC architectures may well have similar issues.
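
              Roughly where that cost appears, per C’s integer promotion rules (a sketch, assuming a target without sub-word arithmetic):

                  /* The addition happens at int width in a full-size register;
                     guaranteeing that the result wraps at 16 bits forces an extra
                     truncation / sign-extension before it can be stored or reused. */
                  short add16(short a, short b) {
                      return (short)(a + b);   /* the narrowing is the extra work */
                  }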

              It would be good to permit trapping. It would be good to permit whatever the native MIPS behaviour is. But it’s obviously absurd to permit optimizing out the programmer’s overflow check.

              How about: “An expression in which signed integer overflow occurs shall evaluate to an implementation-defined value and may also cause the program to receive a signal”? In conjunction with the rules in 5.1.2.3.5 this still permits the compiler to trap, still permits the compiler to use the machine behaviour (twos-complement wrapping, some other form of wrapping, saturating or what-have-you), and still permits the compiler to reorder arithmetic operations (provided they don’t cross a sequence point), but rules out craziness like the elimination of overflow checks.

              1. 3

                If it’s really so trivial then why not have the compiler do that rewrite?

                Because the compiler doesn’t know what was intended. And I sure as heck don’t want the compiler re-writing any of my code to what it “thinks” I intended. And automatically “fixing” it in some cases but not others (less trivial) will likely lead to confusion.

                Why not build the code to behave in the way that you’re sure matches the programmer’s intent and warn about the missed optimisation opportunity?

                The compiler can’t be sure what the programmer intended.

                You can’t be sure that the code will match the programmer’s intent. Maybe that “overflow check” really is redundant. Maybe the code is generated. Maybe the “overflow check” is only obviously redundant after several other optimisation passes have taken effect.

                Giving a warning lets the programmer decide: Is this really ok or did I make a mistake? “Fixing” it for them pessimises without any way to undo that pessimisation, except in the very trivial cases, which, again, might be in generated source code.

                It would be good to permit trapping. It would be good to permit whatever the native MIPS behaviour is. But it’s obviously absurd to permit optimizing out the programmer’s overflow check.

                Both trapping and the MIPS behaviour (where a value in an integer of some type can be stored in a register wider than that type, which can incidentally also be done on just about any architecture) would make leaving the overflow check in absurd - because it wouldn’t achieve what it was intended to achieve anyway (even if the compiler could determine the intent).

                1. 2

                  In Ada, you can tell the compiler what behavior you want for integer overflow (or checks to catch it). Are there compiler hints in C for that or other undefined behavior that make the result predictable?

                  1. 2

                    -fwrapv
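
                    A quick sketch of its effect (hypothetical snippet; -fwrapv is the GCC/Clang flag that defines signed overflow as wrapping, and -ftrapv exists for trapping):

                        int precedes_overflow(int a, int b) {
                            return a + b < a;   /* gcc -O2:         may fold to (b < 0),
                                                   gcc -O2 -fwrapv: wrapping test kept */
                        }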

                    1. 1

                      Thanks! I’ll look it up.

                  2. 2

                    Because the compiler doesn’t know what was intended. And I sure as heck don’t want the compiler re-writing any of my code to what it “thinks” I intended. And automatically “fixing” it in some cases but not others (less trivial) will likely lead to confusion.

                    It would be hard to do worse than deleting overflow checks that the programmer wrote into the code, but only sometimes, which is the current behaviour. Undefined behaviour practically guarantees all the things that you’ve just said you don’t want. (Indeed, the compiler is already permitted to behave in the way I’ve suggested it should - it’s just also permitted to do other, less helpful and more confusing, things).

                    (I do agree that it’s not actually trivial for the compiler to fix these cases - my point was if it’s not so trivial for the compiler, it’s not so trivial for the programmer either. So “the programmer should just rewrite their code so the problem doesn’t happen” isn’t a good answer.)

                    You can’t be sure that the code will match the programmer’s intent. Maybe that “overflow check” really is redundant. Maybe the code is generated. Maybe the “overflow check” is only obviously redundant after several other optimisation passes have taken effect.

                    Indeed you can’t be sure, which is why a responsible compiler should fail-safe rather than fail-dangerous. A missed optimization opportunity is much better than a security bug.

                    Both trapping and the MIPS behaviour (where a value in an integer of some type can be stored in a register wider than that type, which can incidentally also be done on just about any architecture) would make leaving the overflow check in absurd - because it wouldn’t achieve what it was intended to achieve anyway (even if the compiler could determine the intent).

                    Trapping achieves what the programmer usually intends - the programmer usually wants the function to abort/error when overflow occurs. Certainly it avoids the worst possible outcome, the one thing we can be certain that the programmer didn’t intend - blindly continuing to execute after overflow.

                    On second thoughts you’re right about MIPS. The programmer almost certainly never intended for an integer to be operated on at a certain width and then truncated at some mysterious later point in the execution of their program. That is so rarely an intended behaviour that I don’t think any responsible compiler should implement it without being very explicitly instructed to.

                    1. 3

                      Undefined behaviour practically guarantees all the things that you’ve just said you don’t want.

                      No, it doesn’t. In making this claim, you are implying that code which invokes undefined behaviour still has defined semantics. Certainly, the compiler won’t guess what was intended and then try to use undefined behaviour to try and implement that; it assumes that what it was told was intended (i.e. what was expressed by the code, according to the semantics of the language) is what was intended. That’s a reasonable assumption to make given the nature and purpose of a compiler.

                      It would be hard to do worse than deleting overflow checks that the programmer wrote into the code, but only sometimes, which is the current behaviour.

                      I agree that, if the compiler could divine that some check (which it would otherwise optimise away) is meant to be a wrapping overflow check for some particular preceding operation, it would be nice if the compiler could issue a strong warning. I do not believe it should ever silently “fix” the problem by applying wrapping semantics to the relevant preceding operations, because that is going to lead to C programmers increasingly believing that overflow does have wrapping behaviour, and I don’t think that is a good idea even if wrapping were the ideal overflow behaviour.

                      One question is, would assuming that most/all code that fits that general pattern is an indication that wrapping behaviour was required by the previous operations, and compiling accordingly (with a warning), have a negative impact on optimisations and performance? I think the answer is “probably not for most existing programs”, but I also think it could potentially have a significant impact. This could be extended generally to a question about whether wrapping semantics generally could impact optimisation (which I think has the same answer).

                      Another question is, is wrapping behaviour generally useful, other than for the purpose of these overflow checks? I’ve addressed that in the post; I think the answer is clearly no.

                      However, is wrapping better behaviour (disregarding performance) than undefined behaviour? From a safety perspective it may be slightly better, since the effect of overflow bugs is more constrained, but the bugs can still happen and can still be exploited. This is anecdotal, but most of the overflow-related security holes that I’ve seen, other than the small number due to compiler-removed post-overflow checks, have been directly caused by the wrapping and not by any other associated undefined behaviour.

                      So:

                      • I’m not convinced that wrapping semantics will never significantly impact performance in real programs
                      • I don’t see wrapping as a useful behaviour, other than for implementing erroneous post-overflow checks, which can and should anyway be re-written as correct pre-overflow checks
                      • I’m not convinced that wrapping on overflow is much better than UB on overflow, in practice (yes, in theory UB can do very nasty things. But for this particular case of UB, practically speaking, the effects are usually constrained). But let’s assume that we want to avoid arbitrary UB. In that case:
                      • Trap-on-overflow is a superior alternative because it’s safer, except in cases where performance is critical (and in those cases, wrapping might not be the right choice either).

                      Getting back to what you said, the only way to not delete any overflow checks is to enforce wrapping semantics everywhere, and I disagree with that due to the points above.

                      Indeed you can’t be sure, which is why a responsible compiler should fail-safe rather than fail-dangerous. A missed optimization opportunity is much better than a security bug.

                      Agreed - that’s why trapping and not wrapping is the right behaviour.

                      Thanks for bringing up some reasonable discussion. I don’t think there’s a totally objective right/wrong answer at this point - but I hope you see some validity in the points I’ve raised above.

                      (some small edits made after posting to improve readability).

                1. 4

                  Something that doesn’t seem to be documented, but which I found in a GitHub issue: you can use nim c --cc:vcc to compile using Visual Studio on Windows. It only works in a “VS Tools prompt”, but it saves installing MinGW if you fancy trying out Nim quickly.

                  For me, the most powerful aspect of Nim is that it targets other compilers. So for example you can just include a .c file from a .nim file and it’ll get linked in. You can then import the methods from that C file into your Nim code and call them. The same works both ways, too: exposing Nim methods to C code is trivial.

                  1. 4

                    So for example you can just include a .c file from a .nim file and it’ll get linked in. You can then import the methods from that C file into your Nim code and call them. The same works both ways, too: exposing Nim methods to C code is trivial.

                    How does that interact with garbage collection?

                    1. 1

                      I’ve not experimented long enough to know yet.

                  1. 8

                    Is it possible there’s no information to recover? Like if I tried hard enough, I think I could rip a waffle, but I’m not sure the result would be sensible.

                    1. 16

                      I’m going to go ahead and postulate that if you tried to rip a waffle three hundred times, at some point during that process the bit stream would start to diverge wildly as mould grows on it, pieces get flung away by rotation and eventually it rots.

                      1. 8

                        The fact that he was able to get 13 copies (even out of 300) that have the same CRC suggests that there’s something real there. That wouldn’t happen by chance.

                        1. 1

                          And thinking about it no, you couldn’t rip a waffle. For a CD drive to read anything at all it has to see pit/land transitions every 3 spaces (at least) at something approximating the correct rate. A random physical object would just have no signal at all and you’d know there was no signal, like trying to receive digital radio when there’s no carrier wave.

                        1. 1

                          Seems worth saying that this mainly consists of implementing iteratees in rust (conduit is open about being an iteratee library, but as a non-Haskeller iteratees is the name I know for the pattern).

                          1. 9

                            It’s interesting because the author is not thoughtlessly in favour of GitHub, but I think that his rebuttals are incomplete and ultimately his point is incorrect.

                            Code changes are proposed by making another Github-hosted project (a “fork”), modifying a remote branch, and using the GUI to open a pull request from your branch to the original.
                            

                            That is a bit of a simplification, and completely ignores the fact that GitHub has an API. So do GitLab and most other similar offerings. You can work with GitHub, use all of its features, without ever having to open a browser. Ok, maybe once, to create an OAuth token.

                            Whether using the web UI or the API, one is still performing the quoted steps (which notably never mention the browser).

                            A certain level of discussion is useful, but once it splits up into longer sub-threads, it becomes way too easy to lose sight of the whole picture.

                            That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.

                            Another nice effect is that other people can carry the patch to the finish line if the original author stops caring or being involved.
                            

                            On GitHub, if the original proposer goes MIA, anyone can take the pull request, update it, and push it forward. Just like on a mailing list. The difference is that this’ll start a new pull request, which is not unreasonable: a lot of time can pass between the original request, and someone else taking up the mantle. In that case, it can be a good idea to start a new thread, instead of resurrecting an ancient one.

                            What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?

                            While web apps deliver a centrally-controlled user interface, native applications allow each person to customize their own experience.
                            

                            GitHub has an API. There are plenty of IDE integrations. You can customize your user-experience just as much as with an email-driven workflow. You are not limited to the GitHub UI.

                            This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.

                            Granted, it is not an RFC, and you are at the mercy of GitHub to continue providing it. But then, you are often at the mercy of your email provider too.

                            There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time. The number of tools which exist for handling offline mail archives is huge; the number of tools for dealing with offline GitHub project archives is … small. Indeed, until today I’d have expected it to be almost zero.

                            Github can legally delete projects or users with or without cause.
                            

                            Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.

                            But of course my own maildir on my own machine will remain.

                            I use & like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.

                            1. 9

                              We’ve spent about half a century refining the email interface: it’s pretty good.

                                We’ve spent about half a century refining the email interface. Very good clients exist… but most people still use GMail regardless.

                              1. 6

                                That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.

                                I have never seen an email client that presented threaded discussions well. Even if such a client exists, mailing-list discussions are always a mess of incomplete quoting. And how could they not be, when the whole mailing list model is: denormalise and flatten all your structured data into a stream of 7-bit ASCII text, send a copy to every subscriber, and then hope that they’re able to successfully guess what the original structured data was.

                                You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process. The rest of the world shouldn’t be held back because some people use poor tools indeed - that means not insisting that all code discussion has to happen via flat streams of 7-bit ASCII just because some people’s tools can’t handle anything more structured.

                                I agree with there being value in multipolar standards and decentralization. Between a structured but centralised API and an unstructured one with a broader ecosystem, well, there are arguments for both sides. But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).

                                1. 1

                                  You could maybe make a case for using an NNTP newsgroup for project discussion

                                  While I love NNTP, the data format is identical to email, so if you think a newsgroup can have nice threads, then so could a mailing list. They’re just different network distribution protocols for the same data format.

                                  accommodate PRs in a structured form

                                  Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.

                                  1. 1

                                    Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.

                                    Sure, but I’d expect the people who complain about github would also complain about the use of MIME email.

                                  2. 1

                                    You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process.

                                    Not really — Gnus has offered a newsgroup-reader interface to email for decades, and Gmane has offered actual NNTP newsgroups for mailing lists for 16 years.

                                    But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).

                                    I’m not insisting on email! It’s decent but not great. What I would insist on, were I insisting on anything, is real decentralisation: issues should be inside the repo itself, and PRs should be in some sort of pararepo structure, so that nothing more than a file server (whether HTTP or otherwise) is required.

                                  3. 4

                                    …the new PR will lose the history of the old PR.

                                    Why not just link to it?

                                    This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.

                                    That strikes me as disingenuous as well. Email is older. Of course it has more clients, with varying degrees of maturity & ease of use. That has no bearing on whether the GitHub API or an email-based workflow is a better solution. Your point is taken; the GitHub API is not yet “Just Add Water!”-tier. But the clients and maturity will come in time, as they do with all well-used interfaces.

                                    Github can legally delete projects or users with or without cause.

                                    Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.

                                    But of course my own maildir on my own machine will remain.

                                    Meanwhile, the local copy of my git repo will remain.

                                    I think we’re quite aways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.

                                    I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.

                                    1. 2

                                      Meanwhile, the local copy of my git repo will remain.

                                      Which contains no issues, no discussion, no PRs — just the code.

                                      I’d like to see a standard for including all that inside or around a repo, somehow (PRs can’t really live in a repo, but maybe they can live in some sort of meta- or para-repo).

                                      I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.

                                      Why on earth would I use someone else’s definition? I’m arguing for my position, not someone else’s. And I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.

                                      1. 1

                                        Meanwhile, the local copy of my git repo will remain.

                                        Which contains no issues, no discussion, no PRs — just the code.

                                        Doesn’t that strike you as a shortcoming of Git, rather than GitHub? I think this may be what you are getting at.

                                        Why on earth would I use someone else’s definition?

                                        Because there are other software developers, too.

                                        I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.

                                        I never said anything about reliance. That being said, I think the availability of a good, idiomatic web interface is a must nowadays where ease-of-use is concerned. If you don’t agree with that, then you can’t possibly understand why GitHub is so popular.

                                    2. 3

                                      (author here)

                                      Whether using the web UI or the API, one is still performing the quoted steps

                                      Indeed, but the difference between using the UI and the API, is that the latter is much easier to build tooling around. For example, to start contributing to a random GitHub repo, I need to do the following steps:

                                      • Tell my Emacs to clone & fork it. This is as simple as invoking a shortcut, and typing or pasting the upstream repo name. The integration in the background will do the necessary forking if needed. Or I can opt not to fork, in which case it will do it automatically later.
                                      • Do the changes I want to do.
                                      • Tell Emacs to open a pull request. It will commit my changes (and prompt for a commit message), create the branch, and open a PR with the same commit message. I can use a different shortcut to have more control over what my IDE does, name the branch, or create multiple commits, etc.

                                      It is a heavily customised workflow, something that suits me. Yet, it still uses GitHub under the hood, and I’m not limited to what the web UI has to offer. The API can be built upon, it can be enriched, or customised to fit one’s desires and habits. What I need to do to get the same steps done differs drastically. Yes, my tooling does the same stuff under the hood - but that’s the point, it hides those details from me!

                                      (which notably never mention the browser).

                                      Near the end of the article I replied to:

                                      “Tools can work together, rather than having a GUI locked in the browser.”

                                      From this, I concluded that the article was written with the GitHub web UI in mind. Because the API composes very well with other tools, and you are not locked into a browser.

                                      That’s typically the result of a poor email client.

                                      I used Gnus in the past, it’s a great client. But my issue with long threads and lots of branches is not that displaying them is an issue - it isn’t. Modern clients can do an amazing job making sense of them. My problem is the cognitive load of having to keep at least some of it in mind. Tools can help with that, but I can only scale so far. There are people smarter than I who can deal with these threads, I prefer not to.

                                      What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?

                                      The new PR can still reference the old PR, which is not unlike having an In-Reply-To header that points to a message not in one’s archive. It’s possible to build tooling on top of this that would go and fetch the original PR for context.

                                      Mind you, I can imagine a few ways the GitHub workflow could be improved, that would make this kind of thing easier, and less likely to lose history. I’d still rather have an API than e-mail, though.

                                      This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.

                                      Refining? You mean that most MUAs look just like they did thirty years ago? There were many quality-of-life improvements, sure. Lots of work to make them play better with other tools (this is mostly true for tty clients and Emacs MUAs, as far as I saw). But one of the most widespread MUAs (Gmail) is absolutely terrible when it comes to working with code and mailing lists. Same goes for Outlook. The email interface story is quite sad :/

                                      There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time.

                                      Yeah, there are more options to back up your mail. It has been around longer too, so that’s to be expected. Email is also a larger market. But there are a reasonable number of tools to help backing up one’s GitHub too. And one always makes backups anyway, just in case, right?

                                      So yeah, there is a difference. But both are doable right now, with tools that already exist, and as such, I don’t see the need for such a fuss about it.

                                      I use & like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.

                                      I don’t think GitHub is anywhere near perfect, especially not when we consider that it is proprietary software. It being centralised does have advantages however (discoverability, not needing to register/subscribe/whatever to N+1 places, and so on).

                                    1. 7

                                      Freedom is meaningless if you don’t believe your deepest enemies should have it. Supporting free speech, or the right to a fair trial, or basic income, means supporting it even for despicable people. So it is with software freedom. It’s fine to believe that only people who agree with you should be able to run your code, just as it’s fine to believe that only people who pay you should be able to run your code, but at that point you’re firmly in the proprietary camp.

                                      1. 6

                                        None of these tactics remove or prevent vulnerabilities, and would therefore be rejected by a “defense’s job is to make sure there are no vulnerabilities for the attackers to find” approach. However, these are all incredibly valuable activities for security teams, and lower the expected value of trying to attack a system.

                                        I’m not convinced. “Perfect” is an overstatement for the sake of simplicity, but effective security measures need to be exponentially more costly to bypass than they are to implement, because attackers have much greater resources than defenders. IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers. Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

                                        1. 2

                                          You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system? In this case, and as your threats get more advanced, I agree with the article. At higher levels of threat it becomes a problem of economics.

                                          1. 2

                                            You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system?

                                            I think just the opposite actually. The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals. Whereas following the truism would lead you to make changes that would protect against all attackers.

                                            1. 4

                                              The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals.

                                              That’s not what I understand from this article. The attitude proposed by the article should, IMO, lead you to think of the threat model of the system you’re trying to protect.

                                              If it’s your friend’s blog, you (probably) shouldn’t have to consider state actors. If it’s a stock exchange, you should. If you’re Facebook or Amazon, not the same as lobsters or your sister’s bike repair shop. If you’re a politically exposed individual, exploiting your home automation raspberry pi might be worth more than exploiting the same system belonging to someone who is not a public figure at all.

                                              Besides that, I disagree that all examples are too costly to be worth it. Hashing passwords is always worth it, or at least I can’t think of a case where it wouldn’t be.
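
                                              For instance, hashing a password properly amounts to one call to hash and one to verify. A minimal sketch, assuming libsodium:

                                                  #include <sodium.h>
                                                  #include <string.h>

                                                  /* Hash a password for storage, then verify a login attempt.
                                                     Returns 0 when the password matches the stored hash. */
                                                  int store_and_check(const char *pw) {
                                                      char hash[crypto_pwhash_STRBYTES];

                                                      if (sodium_init() < 0)
                                                          return -1;
                                                      if (crypto_pwhash_str(hash, pw, strlen(pw),
                                                                            crypto_pwhash_OPSLIMIT_INTERACTIVE,
                                                                            crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
                                                          return -1;   /* out of memory */
                                                      return crypto_pwhash_str_verify(hash, pw, strlen(pw));
                                                  }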

                                              To summarize with an analogy, I don’t take the same care of my bag when my laptop (or other valuables) is in it as when it only contains my water bottle, and Edward Snowden should care more about the software he uses than I need to care about mine.

                                              Overall I really like the way of thinking presented by the author!

                                              1. 2

                                                Whereas following the truism would lead you to make changes that would protect against all attackers.

                                                Or mess with your sense of priority such that all vulnerabilities are equally important so “let’s just go for the easier mitigations”, rather than evaluating based on the cost of the attack itself.

                                                1. 1

                                                  If you’re thinking about “mitigations” you’re already in the wrong mentality, the one the truism exists to protect you against.

                                                  1. 1

                                                    It’s important to acknowledge that it’s somewhat counterintuitive to think about the actual parties attempting to crack your defenses. It requires more mental work, in a world where people assume they can get all the info they need just by reading their codebase & judging it on its own merits. It requires methodical, needs-based analysis.

                                                    The present mentality is not a pernicious truism; it’s an attractive fallacy.

                                            2. 2

                                              IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers.

                                              How do you figure it’s too costly? If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks. Additionally, there are services out there that scan dependency vulnerabilities if you give them a Gemfile, or access to your repo.

                                              Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.

                                              Perfect it all you want. The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up) If anything, what’s costly is keeping your employees from taking shortcuts, and staying alert to missing access cards, rogue network devices in the office, badge surfing, and assets left lying around.

                                              1. 1

                                                If anything, all these things are getting much easier as they become primitives of deployment environments and frameworks.

                                                I’d frame that as: deployment environments are increasingly set up so that everyone pays the costs.

                                                The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up)

                                                So fix that, with a real security measure like hardware tokens. Thinking in terms of costs to attackers doesn’t make that any easier; indeed it would make you more likely to ignore this kind of attack on the grounds that fake domains are costly.

                                            1. 6

                                              I’m glad they’re going with the GTK ecosystem and contributing to it, I’d love to see more people building GTK apps.

                                              (I keep a list of GTK apps, btw. Most new apps seem to be built by elementary OS users, with Vala+Granite as the elementary “SDK” instructs them to do.)

                                              1. 3

                                                While I prefer Gnome, it seems foolish to me that Purism is inventing a mobile stack essentially from whole cloth instead of trying to adopt Plasma Active, which, while extremely rough, has a head start by virtue of actually existing.

                                                1. 3

                                                  While there’s nothing wrong with GTK as such, Qt has always looked nicer, had a nicer API, better designer tool, better Python bindings, better documentation… and for all the issues with C++, it’s a first-class programming language with a tool/library ecosystem that Vala can never hope to match. Given that the licensing issues have been resolved years or decades ago, I do think the best thing for the community would be to get behind Qt.

                                                  1. 7

                                                    I never liked Qt. It has some advantages (native look on Windows, various options for running without any window system), but it feels very much like a “jack of all trades” thing, and it feels like a Heavy Framework.

                                                    The C++ thing makes it hard to use from other languages. While GObject has been designed for auto-generating language bindings. You can write GTK apps in Haskell, Rust, Nim, D…

                                                    Documentation… honestly, no UI toolkit has great docs.

                                                    Also, GTK 4 is bringing GPU rendering to the existing regular desktop widget system, instead of creating a new separate thing (QML/QtQuick) that’s rarely going to be used, especially on desktop apps :P

                                                    1. 2

                                                      it feels very much like a “jack of all trades” thing, and it feels like a Heavy Framework.

                                                      Given the limited state of package management in the C/C++ world I think monolithic frameworks are an advantage. Slackware gave up on packaging Gnome because the build dependencies were too complex to keep track of, whereas Qt’s qtcore/qtnet/qtui/… model is straightforward enough to make up for using bigger libraries than strictly necessary, IMO.

                                                      The C++ thing makes it hard to use from other languages. While GObject has been designed for auto-generating language bindings. You can write GTK apps in Haskell, Rust, Nim, D…

                                                      Not convinced that that’s worked out in practice. Certainly when working in Python, the Qt bindings were much nicer to use than the GTK ones. “It’s harder to bind to C++ than C” is true as far as it goes, but binding to GObject doesn’t really mean binding to C, it means binding to either a pile of preprocessor macros or to Vala, both of which seem harder than binding vanilla C++.

                                                      Documentation… honestly, no UI toolkit has great docs.

                                                      True as far as it goes, but there are definitely some with better docs than others.

                                                      Also, GTK 4 is bringing GPU rendering to the existing regular desktop widget system, instead of creating a new separate thing (QML/QtQuick) that’s rarely going to be used, especially on desktop apps

                                                      We’ll see how that works out for them. Back when I worked on a game in ~2010 I found the Qt approach to this very easy and common-sense: set the flag to enable OpenGL on the part for which it made sense (my game scene), don’t try to use it on the vanilla widgets (because why would I want to?)

                                                1. 4

                                                  I use an Ebuyer Extra Value keyboard that cost me the princely sum of 1.89. It’s more comfortable than any other keyboard I’ve tried.

                                                  No programming but I do use Dvorak layout (just set in software - I used to physically move keycaps around but it turns out a lot of people who think they can touch-type a Qwerty layout can’t actually touch-type a Qwerty layout).

                                                  1. 15

                                                    From the outside, the controversy over PEP 572 looks like out of control bike shedding. I’m honestly surprised anybody on either side felt strongly enough that it would get to this point. AFAICT, it’s just syntactic sugar to streamline a few common idioms and avoid a common typo.

                                                    1. 12

                                                      Syntax is at the heart of Python’s value proposition, and assignment syntax is pretty core. In most languages I’d agree with you, but this is a language that prides itself on being “executable pseudocode” and is very commonly recommended as a teaching/first language for just that reason.

                                                      1. 5

                                                        I realize that, but I just don’t see this new syntax as a big deal at all. It’s not mandatory. Existing code still works exactly like it always has. And the new extension isn’t difficult to understand or explain. There’s value in keeping the language small and simple, but that ship sailed a long time ago with Python.

                                                        But, at the same time, it’s just syntactic sugar and really isn’t necessary at all.

                                                        I honestly don’t care either way, I’m just surprised it blew up so much.

                                                    1. 3

                                                      Brutalism as an architectural style is disgusting and oppressive as shit (intentionally). I spent quite a bit of time in a brutalist building, and I felt like shit. Like, how did intentional hostility ever become a trend?

                                                      1. 10

                                                        While the term certainly originates from concrete, the author is not trying to advocate making websites out of concrete (figuratively). I think the main point can be seen in the paragraph mentioning Truth to Materials. That is, don’t try to hide what the structure is made out of - and in the case of a website it is a hypertext document.

                                                        This website could be seen in that light. It is very minimally styled and operates exactly how the elements of the interface should (be expected to). The points of interaction are very clear.

                                                        The styling doesn’t even have to be minimal, but there is certainly a minimalism implied.

                                                        1. 9

                                                          I respect your opinion, but I personally really enjoy brutalist architecture. I like the minimalism and utilitarian simplicity of the concrete exteriors, and I like how the style emphasizes the structure of the buildings.

                                                          1. 2

                                                            I think if you added a splash of color it would make the environment much more enjoyable while still embracing the pragmatism and the seriousness.

                                                          2. 5

                                                            It isn’t intentionally oppressive or hostile. It represents pragmatism, modernity, and moral seriousness. It doesn’t take a large logical jump, however, to see how pragmatism, modernity, and moral seriousness could feel oppressive. In the same way, to the architects who designed Brutalism, the indulgent designs of the 1930s and 1940s might have felt like a spit in the face when people were struggling to make ends meet. Neither was trying to hurt anyone, yet here we are.

                                                            1. 3

                                                              I consider the 1930s designs (as seen in shows such as Poirot) rather elegant. But I also see the pragmatism prompted by the war shortages.

                                                              I am not a great fan of giant concrete structures that make no accommodation for natural lighting, but I also dislike the “glass monstrosities” that came after Brutalism.

                                                              I find myself respecting the exteriors of some of the brick buildings of the 19th century and possibly the early 20th. Western University in London, Canada has many buildings in that style.

                                                              Some of the updates done to the Renaissance Center in Detroit have mitigated some of the problems with Brutalism, ironically with a lot of glass.

                                                              1. 2

                                                                This might be true of Brutalism specifically, but (at least some) modern (“Modern”, “Post-modern”, etc.) architecture is deliberately hostile.

                                                              2. 3

                                                                I found this article on that very topic pretty interesting.

                                                                1. 2

                                                                  In my home town, the public library and civic center (pool, gymnasium) are brutalist. They were really quite lovely. The library especially was extremely cozy on the inside, with big open spaces with tables and little nooks with comfortable chairs.

                                                                  1. 1

                                                                    My pet theory is that brutalism is a style that looks good in black-and-white photographs at the expense of looking good in real life. So it was successful in a period when architects were judged mainly on black-and-white photographs of their buildings.

                                                                  1. 3

                                                                    This part was never hard to get for me: I understand how monads are defined, how they are used, and that they let me write a pretty sequence of Maybe (or, in this case, attempt) operations.

                                                                    But what I don’t get is:

                                                                    1. what all the singular monads practically have in common, in the case of Haskell for example: IO, State, Maybe, etc.
                                                                    2. why monads are necessary (if they even are) for pure functional programming languages.
                                                                    3. in what sense is this related to Monads from philosophy (simplest substance without parts)?

                                                                    I haven’t yet had the experience of seeing a problem and saying “Why, I could express this as a Monad”. And if people go on writing Monad articles, which I am sure they will, I would very much appreciate it if someone could touch on these issues.

                                                                    1. 4

                                                                      Apparently monads were considered a kind of breakthrough for dealing with some ugly things (IO, exceptions, etc.) in a purely functional programming language. For understanding why they were introduced into Haskell, I would highly recommend this paper; I found the historical perspective especially interesting: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/mark.pdf

                                                                      1. 2

                                                                        That’s a great paper! He makes it a lot more intuitive than most explanations.

                                                                      2. 1
                                                                        1. what all the singular monads practically have in common, in the case of Haskell for example: IO, State, Maybe, etc.

                                                                        They have a common signature that follows the same laws. This means you can write code that works generically for any monad, and a lot of standard helper functions are already written for you, e.g. traverse or cataM. For a concrete example, in a previous codebase I had a bunch of per-client reports that had different requirements - one needed to accumulate some additional statistics over the report, and another needed a callback to their web API. So I wrote the general report logic generically in terms of any monad, and then for the client that needed the extra statistics I used Writer and for the client that needed the callbacks I used Future so that I could use a nonblocking HTTP library.
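
                                                                        To make that concrete, here is a minimal Python sketch (hypothetical Maybe/Writer classes, not a real library) of logic written once against the shared signature and then run with two different monads:

                                                                        class Maybe:
                                                                            def __init__(self, value, ok=True):
                                                                                self.value, self.ok = value, ok
                                                                            @classmethod
                                                                            def unit(cls, v):
                                                                                return cls(v)
                                                                            def bind(self, f):
                                                                                return f(self.value) if self.ok else self

                                                                        class Writer:
                                                                            def __init__(self, value, log=()):
                                                                                self.value, self.log = value, tuple(log)
                                                                            @classmethod
                                                                            def unit(cls, v):
                                                                                return cls(v)
                                                                            def bind(self, f):
                                                                                out = f(self.value)
                                                                                return Writer(out.value, self.log + out.log)

                                                                        def total(monad, step):
                                                                            # generic: only unit/bind are used, so any monad providing them will do
                                                                            acc = monad.unit(0)
                                                                            for i in (1, 2, 3):
                                                                                acc = acc.bind(lambda t, i=i: step(t, i))
                                                                            return acc

                                                                        print(total(Maybe, lambda t, i: Maybe(t + i)).value)             # 6
                                                                        w = total(Writer, lambda t, i: Writer(t + i, ["added %d" % i]))
                                                                        print(w.value, w.log)  # 6 ('added 1', 'added 2', 'added 3')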

                                                                        2. why monads are necessary (if they even are) for pure functional programming languages.

                                                                        They’re never absolutely necessary, but it turns out that a lot of the time where you’d be tempted to use impure code to solve a problem, a monad lets you use the same code style but have your functions remain pure. E.g. rather than a global variable you can use Reader or State. Rather than a global logging facility you can use Writer. Rather than exceptions you can use Either. Rather than a thread-local transaction handle you can use a transaction monad. etc.
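
                                                                        As a minimal Python sketch of the exceptions swap (hypothetical Ok/Err classes, not a real library): failure becomes a value that flows through bind rather than a control-flow jump, so the functions stay pure.

                                                                        class Ok:
                                                                            def __init__(self, value): self.value = value
                                                                            def bind(self, f): return f(self.value)       # keep going on success

                                                                        class Err:
                                                                            def __init__(self, error): self.error = error
                                                                            def bind(self, f): return self                # short-circuit on failure

                                                                        def parse_int(s):
                                                                            return Ok(int(s)) if s.lstrip("-").isdigit() else Err("not a number: %r" % s)

                                                                        def reciprocal(n):
                                                                            return Err("division by zero") if n == 0 else Ok(1.0 / n)

                                                                        print(parse_int("25").bind(reciprocal).value)     # 0.04
                                                                        print(parse_int("abc").bind(reciprocal).error)    # not a number: 'abc' (reciprocal never runs)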

                                                                        3. In what sense is this related to Monads from philosophy (simplest substance without parts)

                                                                        Not at all, it’s an accident of terminology.

                                                                        I haven’t yet ever had the experience of seeing a problem and saying “Why, I could express this as a Monad”.

                                                                        Once you’re used to them you start seeing them everywhere.

                                                                        1. 1

                                                                          They have a common signature that follows the same laws.

                                                                          Ostensibly they follow the same laws. But sometimes people let them break the laws at the edge cases. For example, State is not a monad.

                                                                          1. 1

                                                                            State is not a monad.

                                                                            State is a monad under the usual definition of equivalence used when reasoning about Haskell (in which bottom is considered equivalent to everything). Under a more natural definition of equivalence, seq is not a function and State is still a monad (defined only on the legitimate/categorical fragment of Haskell). To pretend that the weirdnesses of seq and lazy evaluation have anything to do with State specifically is grossly misleading.

                                                                      1. 2

                                                                        I found this very readable. I learned last week from another lobsters post that I had been using monads in Rust without knowing (Option and and_then) and this post helped to reinforce what I had learned.

                                                                        I think the author downplays exceptions a little too much, though. In a language like Java or Python, you could have different exception types for each step (say ScanError, ParseError, …) all subclassing a common base exception (say InterpError), then do (e.g. Python):

                                                                        def interpret(program):
                                                                            try:
                                                                                tokens = scan(program)
                                                                                ast = parse(tokens)
                                                                                checked_ast = typecheck(ast)
                                                                                result = eval(checked_ast)
                                                                            except InterpError as e:
                                                                                print(e.to_string())
                                                                        

                                                                        It may not be as concise as:

                                                                        fun compile program =
                                                                          (Success program) >>= scan >>= parse >>= typecheck >>= eval
                                                                        

                                                                        but it isn’t painful to look at or understand either.

                                                                        (I’ll probably be flamed for this!)

                                                                        1. 1

                                                                          but it isn’t painful to look at or understand either.

                                                                          The problem is you can’t safely refactor it. You can’t move your calls around even when they look like they have nothing to do with each other, because changing the order they’re called in changes the control flow. And within the try: block you no longer have any way to tell which functions might error and which functions can’t, which makes it much harder to reason about possible paths through the function.
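
                                                                          A runnable toy (hypothetical names, not from the article) of that reordering hazard:

                                                                          class InterpError(Exception): pass
                                                                          class ConfigError(InterpError): pass
                                                                          class ScanError(InterpError): pass

                                                                          def load_config(): raise ConfigError("missing setting")
                                                                          def scan(src):     raise ScanError("bad token")

                                                                          def run(src):
                                                                              try:
                                                                                  cfg = load_config()   # looks unrelated to scan(src), but when both can
                                                                                  tokens = scan(src)    # fail, statement order decides which error the
                                                                              except InterpError as e:  # user sees, and nothing marks either as raising
                                                                                  print(type(e).__name__)

                                                                          run("x")  # prints ConfigError; swap the two lines and it prints ScanError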

                                                                        1. 4

                                                                          My experience is that this is just utterly wrong - I’m not even sure how to start to respond. Of course the best way to express a program fragment is a programming language. Of course the best way to think about a program is with a programming language. There is no distinction between programming and mathematics - of course you want to think mathematically about what you’re constructing, but the best languages for that are programming languages. Why would you want two subtly different descriptions of your program that need to be kept in sync when you could have one description of your program? Some programming languages distract from writing a good expression of your construction by making you specify irrelevant execution details, but the appropriate response is to avoid those languages. A good mathematical description of your algorithm is a program that implements your algorithm, given a decent compiler - and thankfully we’re good enough at those these days.

                                                                          1. 9

                                                                              Lobsters’ own Hillel expressed it really well just a few days ago:

                                                                            So many software development practices - TDD, type-first, design-by-contract, etc - are all reflections of the same core idea:

                                                                            1. Plan ahead.
                                                                            2. Sanity-check your plan

                                                                              It’s reasonable to want that “plan ahead” stuff to be incorporated in the program (design by contract, TDD), but using an external plan can have a large chunk of the benefit.

                                                                            1. 6

                                                                              Maybe, but that’s not an argument for using a non-integrated, harder-to-check plan if you have the option of building the “plan” right into the program.

                                                                              1. 4

                                                                                Because there’s design tradeoffs in specification. Integration is a pretty big benefit but also a pretty big cost, often reducing your expressiveness (what you can say) and your legibility (what properties you can query). As a couple of examples, you can’t use integrable specifications to assert a global property spanning two independent programs. You also can’t distinguish between what are possible states of the system and what are valid states, or what behavioral properties must be satisfied.

                                                                            2. 7

                                                                              Strong disagree here; you get a lot more expressive power when using a specification language. Let me pose a challenge: given a MapReduce algorithm with N workers and 1 reducer, how do you specify the property “if at least one worker doesn’t crash or stall out, eventually the reducer obtains the correct answer”? In TLA+ it’d look something like this:

                                                                              (\E w \in Workers: WF_vars(Work(w))) /\ WF_vars(Reducer) 
                                                                                => <>[](reducer.result = ActualResult)
                                                                              

                                                                              Why would you want two subtly different descriptions of your program that need to be kept in sync when you could have one description of your program?

                                                                              I’ve written 100 line TLA+ specs that captured the behavior of 2000+ lines of Ruby. Keeping them in sync is not that hard.

                                                                              1. 9

                                                                                I’ve written 100 line TLA+ specs that captured the behavior of 2000+ lines of Ruby. Keeping them in sync is not that hard.

                                                                                  Keeping code in sync with comments that literally live alongside it is even more “not that hard”, and yet fails to happen on an incredibly regular basis.

                                                                                In my experience, in any given system where two programmer artifacts have to be kept in sync manually, they will inevitably fall out of sync, and the resulting conflicting information, and confusion or mistaken assumption of which one is correct, will result in bugs and other programmer errors impacting users. The solution is usually to either generate one artifact from the other, or try to restructure one artifact such that it obviates the need for the other.

                                                                                1. 4

                                                                                    Keeping code in sync with comments that literally live alongside it is even more “not that hard”, and yet fails to happen on an incredibly regular basis.

                                                                                    The difference is that if your code falls out of sync with your comments, your comments are wrong. But if your code falls out of sync with your formal spec, your code probably has a subtle bug. So there’s a lot more institutional pressure to update your spec when you update the code, just to make sure it still satisfies all of your properties.

                                                                                  The solution is usually to either generate one artifact from the other, or try to restructure one artifact such that it obviates the need for the other.

                                                                                  This has been a cultural problem with formal methods for a long time: people don’t value specifications that aren’t directly integrated into code. This has held the field back, because actually getting direct integration is really damn hard. It’s only in the past 15ish years that we’ve accepted that it’s alright to write specs that can’t generate code, and that’s why Alloy and TLA+ are becoming more popular now.

                                                                                  1. 6

                                                                                    The difference is that if your code falls out of sync with your comments, your comments are wrong. But if your code falls out of sync with your formal spec, your code probably has a subtle bug

                                                                                      What justifies that assumption? Some junior will inevitably, in response to some executive running in with their hair on fire over some “emergency”, alter the behaviour of the code to “get it done quick” and defer updating the spec until a “later” that may or may not ever arrive. Coming along later and altering the code to meet the spec then re-introduces the emergency situation.

                                                                                    The fundamental problem here is that you’ve created two sources of truth about what the application should be doing, and you cannot a priori conclude that one or the other is always the correct one.

                                                                                    1. 6

                                                                                      And what happens when that “emergency” fix loses your client data, or breaks your data structure, or ruins your consistency model, or violates your customer requirements, or melts your xbox, or drops completed jobs?

                                                                                      Yes, it’s true that sometimes the spec needs to be changed to match changing circumstances. It’s also seen again and again that specs catch serious bugs and that diverging from them can be seriously dangerous.

                                                                                      1. 2

                                                                                        And what happens when that “emergency” fix loses your client data, or breaks your data structure, or ruins your consistency model, or violates your customer requirements, or melts your xbox, or drops completed jobs?

                                                                                        Nobody’s arguing that the spec is useless, just that the reality is that it does introduce risks that require care and attention and which cannot be handwaved away with “keeping them in sync is not that hard” because sync issues will bite organizations in the ass.

                                                                                    2. 2

                                                                                      It’s only in the past 15ish years that we’ve accepted that it’s alright to write specs that can’t generate code, and that’s why Alloy and TLA+ are becoming more popular now.

                                                                                        It would be very helpful if you could at least generate test cases from those specs, though. But then, that’s why I work on a model-based testing tool ;)

                                                                                      1. 2

                                                                                        Which one?

                                                                                        1. 3

                                                                                            Proprietary of my employer (Axini). Based on symbolic transition systems, a Promela- and LOTOS-inspired modeling language, and the ioco conformance relation. Related open source tools are TorX/JTorX/TorXakis. Our long-term goal is model checking, but we believe model-based testing is a good (necessary?) intermediate step for convincing the industry of the added value, since it provides a way for formal modeling to directly help them test their software more thoroughly.

                                                                                          1. 2

                                                                                            Really neat stuff. Thanks. I’ll try to keep Axini in mind if people ask about companies to check out.

                                                                                            1. 2

                                                                                              Thanks, I’ve also been regularly forwarding articles and comments by you to colleagues :)

                                                                                              1. 1

                                                                                                  Cool! Glad they might be helping y’all out. :)

                                                                                    3. 2

                                                                                        This was a problem in high-assurance systems. All write-ups indicated it takes discipline. That’s no surprise, given that discipline is what building good systems takes regardless of method. Many used tools like FrameMaker to keep it all together. That said, about every case study I can remember found errors via the formal specs. Whether it was easy or not, they all thought the specs were beneficial for software quality. It was formal proof that varied considerably in cost and utility.

                                                                                      In Cleanroom, they use semi-formal specs meant for human eyes that are embedded right into the code as comments. There was tooling from commercial suppliers to make its process easier. Eiffel’s Design-by-Contract kept theirs in the code as well with EiffelStudio layering benefits on top of that like test generation. Same with SPARK. The coder that doesn’t change specs with code or vice versa at that point is likely just being lazy.
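
                                                                                        For illustration, a rough Python approximation of the embedded-spec idea (a hypothetical example; Eiffel enforces contracts with far richer machinery than a bare assert):

                                                                                        def isqrt_floor(n):
                                                                                            # precondition: the spec lives right next to the code it constrains
                                                                                            assert isinstance(n, int) and n >= 0, "requires: n is a non-negative integer"
                                                                                            r = int(n ** 0.5)
                                                                                            while r * r > n:               # correct float rounding downward...
                                                                                                r -= 1
                                                                                            while (r + 1) * (r + 1) <= n:  # ...and upward
                                                                                                r += 1
                                                                                            # postcondition: checked on every call, so code and spec cannot silently diverge
                                                                                            assert r * r <= n < (r + 1) * (r + 1), "ensures: r = floor(sqrt(n))"
                                                                                            return r

                                                                                        print(isqrt_floor(24))  # 4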

                                                                                  2. 3

                                                                                    Why would you want two subtly different descriptions of your program that need to be kept in sync when you could have one description of your program?

                                                                                      For example, to increase the number and variety of reviewers and thus reduce bugs.

                                                                                    A good mathematical description of your algorithm is a program that implements your algorithm

                                                                                      Are you thinking of a specific compiler? I agree that programmers think mathematically even when they use programming languages to express their reasoning, but I still feel some “impedance” in every language I use.

                                                                                    1. 3

                                                                                        For example, to increase the number and variety of reviewers and thus reduce bugs.

                                                                                      That seems very unlikely though. Even obscure programming languages are better-known than TLA+. More generally I can’t imagine getting valuable input on this kind of subject from anyone who wasn’t capable of understanding a programming language. I find the likes of Cucumber to be worse than useless, and the theoretical rationale for those is stronger since test cases seem further away from the essence of the program than program analysis is.

                                                                                        Are you thinking of a specific compiler? I agree that programmers think mathematically even when they use programming languages to express their reasoning, but I still feel some “impedance” in every language I use.

                                                                                      I mostly work in Scala so I guess that influences my thoughts. There are certainly improvements to the language that I can imagine, but not enough to be worth using something other than the official language when communicating with other people.

                                                                                  1. 4

                                                                                      Not to spoil the article, but the ending quote in bold really sort of scares me. I’ve spent most of my life hearing stories about how machines will take human jobs. The reality of that has played out as much less scary (so far) than they’d have had us believe 20 or 30 years ago. It had never really occurred to me, though, that in another 20 or 30 years my job as a programmer might be obsoleted as well. It’s like The Matrix: funny ha ha, but for real.

                                                                                    Thankfully I hope to be retired 30 years from now :P

                                                                                    1. 4

                                                                                      I think this is the natural progression of things, isn’t it? Programming isn’t immune to the effects of automation - just the opposite, in fact. It’s like boiling a frog - things are automated so often and so incrementally that programmers no longer notice when jobs that would have taken 10x longer a few years ago are basically instantaneous today.

                                                                                      1. 11

                                                                                        Programming will be the last thing to be automated, because it is itself automation - once you have automated programming you just have to run your automated programmer and then you’ve automated everything.

                                                                                        1. 2

                                                                                          …No. The only thing that will save programming from being automated NEXT is… wait, I see what you did there. “Your keys are always found in the last place you look.” :)

                                                                                          On a serious note, regarding future job prospects, I think programming will not be the last available job. Some job that isn’t an attractive candidate for automation will be the last available job. Programming, with all its expense, is a prime target.

                                                                                          1. 4

                                                                                              Once you can automate programming you can automate everything else at a cost approaching zero, so it’s moot.

                                                                                            1. 1

                                                                                                Can you? I would imagine lots of jobs rely on intrinsically tacit, “local” intuition, and not merely on knowledge and cognitive function, which, it seems to me, is all that “solving programming” automatically gets you.

                                                                                              1. 1

                                                                                                  Programming often relies on intrinsically tacit, local intuition. I mean, think of the last time you received feedback from the customer about how they felt the software should work.

                                                                                                1. 1

                                                                                                    Good point, I didn’t think about that end of the situation.

                                                                                        2. 2

                                                                                            Hopefully this allows them (and me) to do their (and my) jobs more efficiently and focus on other, more important things. Of course, other stuff will eventually fall into obsolescence, but don’t we have graveyard keepers working on decrepit technologies for sizeable amounts of money? COBOL experts, where art thou?

                                                                                          1. 2

                                                                                              All very true. I think the reality just sort of startled me.

                                                                                          2. 5

                                                                                            This is why it’s important to move past capitalism ASAP: it’s more and more immoral to couple the ability to get a job with the ability to stay alive and retain dignity. Once all labor is automated, there shouldn’t be any jobs (coerced or obligatory labor), and we should all be rejoicing.

                                                                                            1. 0

                                                                                            Will there still be a free market? Or will what we consume be planned by the machines? At which point, without the ability to decide what I want - or the illusion thereof - my job as a human is done too …

                                                                                              1. 5

                                                                                                woe to those who think their job as humans is to consume

                                                                                                1. 1

                                                                                                  I eat, therefore I am.

                                                                                                2. 1
                                                                                                  1. We all make the world;
                                                                                                  2. define “free market”.
                                                                                                  1. 0

                                                                                                  There is a medium of exchange (please not barter) and a market for goods and services. I have goods/services to offer and goods/services I need. I have markets to go to in order to sell and buy these. The market is not controlled by a commissariat that determines how much toothpaste I get and what color tube it comes in, because, for reasons most people cannot fathom, I like to choose.

                                                                                                    1. 1

                                                                                                    you can choose what color tube your toothpaste comes in?

                                                                                                      1. 0

                                                                                                        In capitalist America, toothpaste color chooses you!

                                                                                                      2. 0

                                                                                                      What is available in these markets? What is not? How are their dynamics damped, to avoid bubbles and crashes? How are negative externalities, like advertising or air pollution, accounted for? You throw around the “free” as though its interpretation were obvious, when the devil is in the details, and the details are everything.

                                                                                                        1. 0

                                                                                                          This is strawman nonsense, and nowhere do I imply central planning. What you’re really saying is, “I want freedom of choice for consumption and production,” which doesn’t require capitalism, though you’re strongly implying you think it does.

                                                                                                          1. 0

                                                                                                            You need to elaborate your scheme then. Every time I’ve heard someone say “I hate capitalism and I have an alternative for it” what they really have is state capitalism (AKA communism in practice as opposed to the silly theory of communism written down somewhere).

                                                                                                            1. 0

                                                                                                              The universal means of production (automated labor), universally distributed.

                                                                                                              1. 0

                                                                                                                Who decides resource allocation?

                                                                                                                1. 0

                                                                                                                  Who decides it now?

                                                                                                                  1. 0

                                                                                                                    The market

                                                                                                                    1. 0

                                                                                                                      How’s that workin’ out.

                                                                                                                      1. 0

                                                                                                                        Better than anything else people have tried.

                                                                                                                        1. 0

                                                                                                                          Citation needed.

                                                                                                                          1. 0

                                                                                                                            Also, punch cards were better than anything that came before, and then we had better ideas that were enabled by advancing technology. It’s time we did the same for meeting basic human needs.

                                                                                                                            1. -1

                                                                                                                              You haven’t actually said what the replacement is for free markets and capitalism.

                                                                                                                              1. 0

                                                                                                                                Start with democratic socialism. End with technological post-scarcity.

                                                                                                                                1. 0

                                                                                                                                  All countries with governments are socialist, not all are democratic, and not all have free markets. So that doesn’t add anything new.

                                                                                                                                Post-scarcity is another way of saying we have no plan for how to deal with resource contention, which is the hard problem.

                                                                                                      3. -1

                                                                                                        it’s more and more immoral to couple the ability to get a job with the ability to stay alive and retain dignity.

                                                                                                        What dignity is possible once you’re livestock to be taken care of?

                                                                                                    The truth of the matter is there’s an ongoing demographic implosion. If they wait it out a while, there won’t be many people left to need the universal income or whatever it is you’re arguing for.

                                                                                                        1. 3

                                                                                                          You’re assuming that dignity and purpose are only possible under conditions of coerced labor. Your premise is false.

                                                                                                          I’m not arguing for UBI. I’m arguing for democratic access to the means of universal production (robotic labor, molecular nanotech, etc.), removing the need for things like “income”.

                                                                                                    1. 11

                                                                                                      I also like the converse: If you’re writing a program, use a programming language!

                                                                                                      1. 6

                                                                                                        The contrapositive is even better. If you are not using a programming language then you are not writing a program.

                                                                                                        1. 1

                                                                                                          I think you flipped an “ought” to an “is” when you took the contrapositive.

                                                                                                          1. 1

                                                                                                            Is my language incorrect?

                                                                                                          Okay, will this then be clearer?

                                                                                                            If not using prog. Lang. => not writing program

                                                                                                            1. 2

                                                                                                              The original statements say how things ought to be; your statements are about how the world is. Depending on where the “ought” sits, the proper contrapositive would be either “if you aren’t using a programming language, it isn’t true that you should be writing a program” or “if it’s not the case that you should be using a programming language, you aren’t writing a program.”

                                                                                                          2. 0

                                                                                                            Not true for what most people think of as “programming languages” (or, if you prefer, many programming languages are not obviously so).

                                                                                                        1. 1

                                                                                                          There seems to be some unspoken assumption here that SQL is simpler/easier/cheaper than ML/AI. That’s not my experience: everything you can do with modern tools is possible in SQL, sure, but the modern tools make it much easier and eliminate a lot of the pitfalls.

                                                                                                          1. 5

                                                                                                            I don’t follow: what kind of modern ML/AI tools are you thinking of (as simple/easy/cheap as SQL)?

                                                                                                            1. 1

                                                                                                              Mainly Spark. Hardware performance/cost is probably worse unless you’re on a really big dataset, but for me that was more than made up for in programmer cost: I found it so much easier to answer questions based on our data when I could use a shell in a normal programming language, with access to our data as normal values, and just call e.g. .aggregateByKey and pass code to do what I wanted.
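
                                                                                                              For a flavor of it, a minimal PySpark sketch (assuming an existing SparkContext sc; toy data rather than the real dataset):

                                                                                                              # per-key average via aggregateByKey: a zero value, an
                                                                                                              # in-partition fold, and a cross-partition merge
                                                                                                              pairs = sc.parallelize([("a", 1), ("a", 3), ("b", 5)])
                                                                                                              sums = pairs.aggregateByKey(
                                                                                                                  (0, 0),                                   # (sum, count) accumulator
                                                                                                                  lambda acc, v: (acc[0] + v, acc[1] + 1),  # fold one value into a partition's accumulator
                                                                                                                  lambda x, y: (x[0] + y[0], x[1] + y[1]),  # merge accumulators across partitions
                                                                                                              )
                                                                                                              print(sums.mapValues(lambda acc: acc[0] / acc[1]).collect())  # e.g. [('a', 2.0), ('b', 5.0)]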

                                                                                                              1. 5

                                                                                                                Calling aggregateByKey isn’t using ML, it’s using straightforward querying of a dataset. Spark isn’t an ML solution so much as it’s a non-relational data store that’s queried differently than SQL-based data stores.

                                                                                                                You can build ML solutions on top of Spark, much the same as you can on top of SQL.

                                                                                                                The big difference between the two is instead the declarative vs. imperative nature of querying. With SQL, you describe what you want, whereas with Spark (and a number of other big data and nosql stores), you describe how to get it. The latter is more familiar to many imperative/OO programmers, but the former is generally more approachable to non-programmers, and tends to deal with changes to data more smoothly.

                                                                                                                In fact, the declarative approach is useful enough that Spark SQL exists and is widely used.
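
                                                                                                                For contrast, the same kind of aggregation stated declaratively through Spark SQL (assuming a SparkSession named spark):

                                                                                                                df = spark.createDataFrame([("a", 1), ("a", 3), ("b", 5)], ["k", "v"])
                                                                                                                df.createOrReplaceTempView("pairs")
                                                                                                                # describe what you want; the planner decides how to get it
                                                                                                                spark.sql("SELECT k, AVG(v) AS avg_v FROM pairs GROUP BY k").show()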

                                                                                                                1. 1

                                                                                                                  So maybe: “you don’t necessarily need ML/AI; consider just using SQL if you already know it”?

                                                                                                                  1. 1

                                                                                                                    I still don’t follow: as far as I know, Spark is not a ML/AI tool.