Threads for JohnCarter

  1. 3

    Similar tools that were already open-source include mermaid, graphviz, and plantUML.

    1. 9

      we have comparisons with all of them at https://text-to-diagram.com

      1. 3

        Sort of misses the best features of GraphViz though… its layout options and tools

        eg. edge concentrators, ratio compress, neato, sccmap, tred, gvpr, unflatten

        But yes, its cluster mechanism is a bit clunky.

        Whilst GraphViz doesn’t do sequence diagrams, something with a very similar syntax does, namely mscgen.

        1. 3

          Most of the time I reach for graphviz, I already know what shape I want the output to be. Then I struggle to make graphviz do it.

          1. 1

            …it has, ahhh, opinions. ;-)

        2. 3

          Oh, I have looked for something like this for years. Thanks!

      1. 4

        Not convinced… and I say that as someone who has designed a little language in the past.

        • There is always general purpose stuff you want to do as well… so the language gets big.
        • The documentation / tutorials / examples / debuggers / tools / compiler warnings / … compared to a big language like Ruby are non-existent.
        • If I were to do it again, I’d use a general purpose language like Ruby and provide enough classes and modules to represent the domain, with ordinary Ruby to glue it together.
        1. 6

          Ah… my hobby horse, let me ride it….

          State machines are the dual of threads. Every state machine (or conversely, thread) can be refactored into a thread (or conversely, state machine).

          The question then is: which is the better representation, and when?

          If you write your state machine as a transition matrix (ie. a matrix where the row labels are states, the column labels are events, and each cell is the next state to transition to if you receive that event whilst in this state)…

          Look at the transition matrix… is it sparse? ie. are most of the cells “can’t happen”? If so, use threads; otherwise use a state machine.
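          As a concrete illustration of that heuristic (all the state and event names here are made up for the example), a transition matrix in C might look like this:

          ```c
          #include <stdio.h>

          enum state { IDLE, RUNNING, DONE, NSTATES };
          enum event { START, FINISH, RESET, NEVENTS };

          #define CANT_HAPPEN (-1)

          /* Rows are states, columns are events; each cell holds the next
           * state, or CANT_HAPPEN if that event is impossible in that state. */
          static const int transition[NSTATES][NEVENTS] = {
              /*            START        FINISH       RESET       */
              /* IDLE    */ { RUNNING,     CANT_HAPPEN, CANT_HAPPEN },
              /* RUNNING */ { CANT_HAPPEN, DONE,        IDLE        },
              /* DONE    */ { CANT_HAPPEN, CANT_HAPPEN, IDLE        },
          };

          int main(void)
          {
              int cant = 0;
              for (int s = 0; s < NSTATES; s++)
                  for (int e = 0; e < NEVENTS; e++)
                      if (transition[s][e] == CANT_HAPPEN)
                          cant++;
              printf("%d of %d cells are can't-happen\n", cant, NSTATES * NEVENTS);
              return 0;
          }
          ```

          A matrix this sparse suggests, by the rule of thumb above, that the threaded/coroutine representation would be cleaner.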

          Now the other argument, which applies strongly in the embedded space, is that threads have a stack per thread, which is typically one of your largest usages of RAM.

          So if you have hundreds of state machines… you run out of RAM just on stacks alone in the threaded representation… unless… you use protothreads or, in C++20, stackless coroutines.

          Conclusion:

          Most state machines I have seen in embedded systems are sparse (higher layers or design makes most transitions “can’t happen”.)

          Thus implementing them as C++20 stackless coroutines or Contiki-style C protothreads is cleaner, much less boilerplate and much more readable.
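          The protothread mechanism can be sketched in a few lines of C — this is a toy illustration using the classic switch-on-line-number trick, not the real Contiki API:

          ```c
          #include <stdio.h>

          /* Toy protothread in the spirit of Contiki (not the real API): the
           * "local continuation" is just a saved line number, and the switch
           * resumes the function at the last wait point on re-entry. */
          struct pt { int lc; };

          #define PT_BEGIN(pt)            switch ((pt)->lc) { case 0:
          #define PT_WAIT_UNTIL(pt, cond) do { (pt)->lc = __LINE__; case __LINE__: \
                                               if (!(cond)) return 0; } while (0)
          #define PT_END(pt)              } (pt)->lc = 0; return 1

          static int got_event;

          /* Reads as straight-line "threaded" code, but compiles down to a
           * state machine with no stack of its own. */
          static int demo(struct pt *pt)
          {
              PT_BEGIN(pt);
              PT_WAIT_UNTIL(pt, got_event);
              printf("first event\n");
              got_event = 0;
              PT_WAIT_UNTIL(pt, got_event);
              printf("second event\n");
              PT_END(pt);
          }

          int main(void)
          {
              struct pt pt = { 0 };
              demo(&pt);          /* runs to the first wait and returns */
              got_event = 1;
              demo(&pt);          /* resumes past the first wait */
              got_event = 1;
              demo(&pt);          /* resumes past the second wait and finishes */
              return 0;
          }
          ```

          Each PT_WAIT_UNTIL is a wait-for-event point; the function unwinds to the caller while blocked, which is exactly the state machine behaviour described above, written as straight-line code.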

          1. 5

            Thus implementing them as C++20 stackless coroutines or Contiki-style C protothreads is cleaner,

            FWIW some related solutions

            • FoundationDB had a language on top of C++ that let them write in the straight-line / threaded / coroutine style, which was then compiled to state machines in C++. For the low-level portions of the database, threads were considered too expensive.
              • Unfortunately they got swallowed by Apple and I haven’t heard much from them since, but it’s a very interesting system.
              • “Testing Distributed Systems w/ Deterministic Simulation” by Will Wilson (2014) - https://www.youtube.com/watch?v=4fFDFbi3toc
            • Facebook also had some kind of state machine compiler in PHP to do concurrency, which may have led to Hack. I don’t remember the details, but I think there is a YouTube talk out there that explains it.

            I spent last week looking through some docs of the P language, which had a good Strange Loop 2022 talk. It was first used ~2014 in the Windows USB 3 stack, and is now being used by AWS (e.g. for S3 strong consistency). The author of the language worked at both places.

            https://github.com/p-org/P

            Formal Modeling of Distributed Systems (2022) - https://www.youtube.com/watch?v=5YjsSDDWFDY&t=1s

            The big question I have from that talk is why he says they “regressed” from generating C code in the Windows case to having a separate model/spec and implementation in the AWS case. He says that it’s because of performance, but I don’t understand why you can generate fast USB device drivers but not fast concurrent protocols in servers (running on server hardware you control, etc.)

            edit: Also I remember a core claim of P is that if you write in their state machine language, it’s easier for the model checker to find bugs (explore the state space) than if you write in a C- or Java-like language. So that could be a benefit to the state machine style.

            They also have a notion of “deferred events” which can collapse the state space explosion. There is some “bounded delay” model I believe – I will have to understand that a bit more.


            I also mentioned at some point on lobste.rs that Node.js used to have this huge hand-written state machine in C for their HTTP parser, and then they switched to generated code.

            Look at the transition matrix… is it sparse? ie. Most of the cells are “can’t happen”, then use threads, otherwise state machine.

            Hm I would like to see this unpacked more. Examples? I think there is probably some truth there, but not sure it’s that simple

            1. 2

              I found the node.js reference

              llhttp - new HTTP 1.1 parser for Node.js by Fedor Indutny | JSConf EU 2019

              and my comment from a year ago: https://lobste.rs/s/rzhxyk/plain_text_protocols#c_gnp4fm

              Quote about maintaining state machines:

              Let’s face it, http_parser is practically unmaintainable. Even introduction of a single new method results in a significant code churn.

              So they switched to “straight line” source code that generates state machines instead, similar in spirit to FoundationDB.

              https://llparse.org/

              (I guess this is pretty different than P, which is more about writing in a direct state machine style, but you have a model checker that runs in a big parallel cloud of CPUs to explore the state space.)

              1. 1

                I notice in practice a lot of “state machines” in the wild are fairly informal.

                A strict model-checkable one has a strict rule that the only determinants of which state you transition to are which state you’re in and which event occurred.

                In practice, sadly, a lot of real world code decays from that ideal.

              2. 2

                You might consider jotting that down as a blog-post. I’ve made a similar conclusion recently, but it took me a long time to realize that! This doesn’t seem to be as well understood as it deserves.

                1. 2

                  I would say “function calls”, not threads. Threads are about concurrency, which has nothing to do with state machines – though on embedded the line blurs somewhat. Then you have coroutines, which are more or less a way of making function calls operate like their own little state machines.

                  1. 1

                    The curious “thing” with coroutines (and threads) is that when a function blocks… the CPU can go off and do unspecified “other things”, and then “on some event”, something can happen to “unblock” it and continue the function from the exact point it was blocked at.

                    With a state machine, each function does a little, typically never blocking, and returns, unwinding to the highest level… to do those unspecified “other things”, and it’s the business of the state machine implementation to work out where to carry on when that “on some event” occurs.

                    What goes unspecified is “If I have more than one state machine interacting… what’s happening?” Is each state machine living in its own thread? On a thread from a thread pool? All on one thread?

                    Hence when I make the statement “I can refactor any state machine into a thread and vice versa”, I say it in the general sense assuming the general picture.

                  2. 1

                    Could you expand on why threads and statemachines are dual?

                    1. 1

                      Give me any program written with a one, I can refactor it to be written as the other.

                      It’s not an easy task, and the code may become less clean in the transformation, hence the need to choose wisely.

                      Alas, threads (and events for that matter) are not defined by the language (especially C), but involve hard to reason about thinks like i/o, timing, hardware level context switches and schedulers.

                      The core idea is with a state machine some often undefined top level is waiting for events, and then telling the state machine “this event happened”.

                      With the threaded model, every chunk of state code is going to have a blocking “wait for event” point and then code to decide what to do with that event.

                      If after every wait for event you have a massive switch statement… yup, the state machine representation is much cleaner.

                      If for example, only one event is possible at that point…. well, then a state machine makes your code much worse.

                      What we mean by “wait for event” is somewhat undefined, but it will usually involve an i/o or thread synchronization primitive at that point.

                      With a stackless coroutine or protothread, it unwinds back to that high level “wait for event” point, same as with a state machine. So transforming a state machine to a protothread or stackless coroutine is easier than to an OS thread.

                      Edit: Was going to fix the typo “hard to reason about thinks”… but it is an accurate Freudian Slip so leaving it as is.

                      1. 1

                        Give me any program written with a one, I can refactor it to be written as the other.

                        I don’t really know what this means in this context. A state machine is a model of computation, a thread is a model of parallel execution. Generally, each thread runs a Turing-complete execution environment (e.g. a C abstract machine), which means that a single thread is strictly a more powerful computational abstraction than a state machine. A set of threads can be implemented by a context switch after each small step in your sequential semantics and so is an equivalent model of computation (though one that may be a better match for efficient expression on multicore hardware).

                        With the threaded model, every chunk of state code is going to have a blocking “wait for event” point and then code to decide what to do with that event.

                        If your threads have stacks, then even if each thread runs a state machine and not a Turing-complete language, your program is as expressive as a push-down automaton, which is more expressive than a state machine, but not Turing complete.

                        With a stackless coroutine or protothread, it unwinds back to that high level “wait for event” point same as with a state machine

                        Okay, now we’re getting somewhere. This is the key part: stackless coroutines are an efficient and clean way of expressing state machines in a Turing-complete language. That’s true, but it’s very different from the computational theoretic argument that you started with.

                        1. 1

                          stackless coroutines are an efficient and clean way of expressing state machines in a Turing-complete language.

                          …if the transition matrix is sparse. If the matrix is dense, it’s a less clean way. It’s a real world design trade off.

                  1. 18

                    This client is so good internally. I’m happy to see it opened up so I can use it after I quit :D

                    1. 2

                      Any idea how it compares to Mercurial Evolution? Evolution allows some very useful workflows.

                      1. 2

                        The Sapling front-end is derived from Mercurial and it implements commit evolution directly. You can use sl restack instead of hg evolve, for example. I suspect that distributed evolution isn’t supported in Sapling, since obsmarkers, etc. scale poorly.

                        1. 0

                          Certainly warring evolutions is an excellent way of screwing up your life… It’s in my “Don’t Do That” list…

                          However, it happens, and all assistance in cleaning up or preventing the result is needed.

                        2. 1

                          Sorry, I’ve not used Mercurial Evolution.

                      1. 3

                        My takeaway is that, more than I realized, the word “type” seems to be a synonym for “magic” in a lot of heads.

                        My expected takeaway was something about sheaf theory, but okay, it seems that kind of article will eventually arrive, and this one is more about telling us that fact.

                        1. 2

                          Valim’s post seems pretty straightforward and sensible to me.

                          I think you have some context in your mind that is not in mine… because your response makes no sense to me, probably because I’m missing that context.

                          Could you elaborate?

                          1. 1

                            The post is fine, just one “about” higher than I anticipated (so about the plan to make something “about” the title).

                            My guess when I saw set-theoretic types for a language where every function runs in its own thread was that it would be about sheaves (~set-valued functors), but sorry, I can’t really elaborate because I never had a chance to go that far (although I would like to). My academic life ended broke on the street, and now I am an ignored madman hoping against hope that people can be persuaded to change their priorities without a bloody catastrophe.

                            Here’s some justification for my guess (i.e. the context you may be missing): https://www.researchgate.net/publication/222422736_Sheaves_Objects_and_Distributed_Systems

                            1. 1

                              I don’t think every function does run in its own thread… (since it’s a pure language I guess in principle it could).

                              Academia is a strange distorted universe.

                              people can be persuaded to change their priorities without a bloody catastrophe.

                              Oh dear, some mighty catastrophes are clearly incoming… the recent record on “people changing their priorities” is not good.

                              Sigh. I always argue with myself over a pragmatic multiparadigm close-to-metal language like D, or something mathematically correct like Haskell… but I feel if I wanted to go that way I’d want Agda, or comfy old Ruby, or the cleaner Elixir…

                              But I think Valim and Matz are on the right track, much about types can be deduced from the code itself.

                              1. 1

                                I think all languages have something to offer which is why I slowly got dragged into thinking about the tooling side and interop.

                                W.r.t. what we want from the runtime then this guy gets it: https://youtu.be/8Ab3ArE8W3s

                                1. 2

                                  I got around to watching that…

                                  Hah! The guy is pretty much the same era as me, I (mentally) put my hand up for all his “do you know what / have you used …” questions…

                                  And yes, it frustrates the hell out of me that we have gone backwards on so many fronts.

                                  Alas, I believe the ludicrous situation we’re in is a symptom of how the economy is designed.

                                  eg. I used an excellent Smalltalk system in the 1990’s that had a lot of what he is talking about… it (and a bunch of other excellent products) was bought out… and left to wither and die.

                                  Parable of the Software Industry: The software industry is like when a miner, through incredible luck, found diamonds lying loose on the ground. He picked them up with just his hands and became the richest miner in the area, hiring hundreds of men to dig for diamonds with just their bare hands. So all the other miners threw away their picks and shovels and used their bare hands… Because that’s what the most successful people were doing.

                                  1. 1

                                    I think the place to do static analysis is the version control system… and as soon as you do risk assessments then you have a premise for “block rewards” (or, you know, whatever; I’m saying you can have finality with X% risk)…

                          2. 2

                            I would enjoy more detail, but I think they are really in a preliminary phase, I guess.

                            From set theoretic types I think they mean intersection and union types.

                            I have a hypothesis about what an effect system would look like.

                            I think the dynamic calls (apply/3) can be a challenge, because the type of the message must match a type of the actor, so we would need typed PIDs. Since there are process registries, they would need to carry some information about type.

                            1. 1

                              “static types” is kind of short for predictable, well-behaved static analysis that works on the whole language (or most of it, when you bolt it on after the fact). I don’t really think it’s “magic” :). I think the talk does a good job of listing reasons the Elixir team wants them, namely tooling (like TypeScript), documentation, to allow designing type-first for those who like it, and as proofs in the restricted language afforded by whatever type system they eventually pick.

                              I’m not following why you jump to sheaves from “set theoretic types” though. Clearly the simplest interpretation of that is that types are seen as sets of Elixir values, and there would be intersection and union operators on types to handle the fact that Elixir allows defining multiple overloads of a function, and restrictions on a given clause. I don’t know much about sheaves, but they seem to be a category-theoretic abstraction of set-like things, which doesn’t bring much to the table when talking about a concrete use of sets.

                              1. 1

                                It wasn’t just because they said “set theory”; it was because it is Elixir and they talked about set theory. But anyway, sure, sorry for being rude.

                            1. 1

                              Seems a good time to heartily recommend http://exercism.org

                              It supports 61 language learning tracks, exercises, and mentoring, and is free! (Hint: pay it back by mentoring what you know.)

                              1. 4

                                How to get me to be quite, umm, unhappy, with you….

                                Pass an error code up and up and up the call graph and…. cast it to void.

                                  I have solved more unsolvable bugs than I can count by the simple algorithm of searching for all return codes cast to void, logging the code if not success, printing a backtrace (yes, you can do that in C/C++) and aborting.

                                  Weehee! Bugs come crawling out of the woodwork and you go whack whack whack, and they go away for ever.
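                                  A sketch of what that algorithm replaces the void casts with — a hypothetical MUST_SUCCEED macro (the name is illustrative) that logs the code, dumps a backtrace via glibc’s backtrace_symbols_fd, and aborts on failure:

                                  ```c
                                  #include <execinfo.h>   /* glibc-specific backtrace support */
                                  #include <stdio.h>
                                  #include <stdlib.h>

                                  /* Instead of casting a return code to void, funnel it through a
                                   * check that logs, prints a backtrace, and aborts on failure. */
                                  #define MUST_SUCCEED(call)                                    \
                                      do {                                                      \
                                          int rc_ = (call);                                     \
                                          if (rc_ != 0) {                                       \
                                              fprintf(stderr, "%s failed with %d at %s:%d\n",   \
                                                      #call, rc_, __FILE__, __LINE__);          \
                                              void *frames_[32];                                \
                                              int n_ = backtrace(frames_, 32);                  \
                                              backtrace_symbols_fd(frames_, n_, 2);             \
                                              abort();                                          \
                                          }                                                     \
                                      } while (0)

                                  static int do_work(int fail) { return fail ? -5 : 0; }

                                  int main(void)
                                  {
                                      MUST_SUCCEED(do_work(0));   /* success: execution continues */
                                      printf("ok\n");
                                      /* MUST_SUCCEED(do_work(1)); would log -5, print a backtrace
                                       * to stderr, and abort — the bug can no longer hide. */
                                      return 0;
                                  }
                                  ```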

                                1. 5

                                    I have solved more unsolvable bugs than I can count by the simple algorithm of searching for all return codes cast to void, logging the code if not success, printing a backtrace (yes, you can do that in C/C++) and aborting.

                                  You are going to love this. I’ve seen code like this in at least one library that is widely available in Linux distributions etc. I’m not going to name and shame, because I still need to open an issue.

                                  void add_handler(HandlerFunc mfh) {
                                    struct entry *e = malloc(sizeof(struct entry));
                                    if (e == NULL) {
                                      return;
                                    }
                                    e->func = mfh;
                                    e->next = handlers;
                                    handlers = e;
                                  }
                                  

                                  So we don’t actually know if our handler ever got registered.

                                  That’s the basic outline, anyhow. It’s yet another take on the ostrich method. I’d call the ostrich method the most widely used in C, just because nobody I know of actually checks the return value of printf.

                                  Error handling in C is annoying, and a lot of programmers – including myself – are optimists.
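                                    For contrast, a sketch of the same registration function reshaped so the caller can at least tell whether the handler was registered (the surrounding types are assumed from the snippet above):

                                    ```c
                                    #include <stdio.h>
                                    #include <stdlib.h>

                                    typedef void (*HandlerFunc)(void);

                                    struct entry {
                                        HandlerFunc func;
                                        struct entry *next;
                                    };

                                    static struct entry *handlers;

                                    /* Returns 0 on success, -1 on allocation failure, so the
                                     * caller can decide whether to retry, log, or abort. */
                                    int add_handler(HandlerFunc mfh)
                                    {
                                        struct entry *e = malloc(sizeof *e);
                                        if (e == NULL)
                                            return -1;
                                        e->func = mfh;
                                        e->next = handlers;
                                        handlers = e;
                                        return 0;
                                    }

                                    static void on_event(void) { }

                                    int main(void)
                                    {
                                        if (add_handler(on_event) != 0) {
                                            fprintf(stderr, "add_handler failed\n");
                                            return 1;
                                        }
                                        printf("registered\n");
                                        return 0;
                                    }
                                    ```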

                                  1. 3

                                    Sigh. I do not love. No sir, not at all, at all.

                                    I hate code that is written to “might do something”.

                                    It might work, it might not work, but nobody will know it didn’t unless a user is paying close attention and complains.

                                    If the user asked the program to do something, do it or tell him why you couldn’t, don’t silently do nothing.

                                    Sure there are “best effort” use cases, but they are few and far between and should be made clear from the function name.

                                      If you call a function “do_x” (or “add_handler”) I expect it to do x or tell me why not, not “might do x, feeling cute, might not do”.

                                    Grrumble grrowl mutter mutter mutter.

                                    1. 2

                                      aaaaaaaaa

                                      at least do exit(4206969); // lmao or something instead of just returning!

                                      1. 1

                                        While exit() does take an integer value, the C standard explicitly states three values you can give to it—0 (which indicates success to the operating system), EXIT_SUCCESS (which does the same thing, but nothing states it has to be 0) and EXIT_FAILURE. POSIX limits the value to 8 bits. MS-DOS (and probably Windows) has the same limit.
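                                        For what it’s worth, the truncation is easy to demonstrate — a POSIX parent reading the status via WEXITSTATUS would see only the low byte of that 4206969:

                                        ```c
                                        #include <stdio.h>

                                        /* Only the low 8 bits of an exit status reach a POSIX parent,
                                         * so exit(4206969) would not be observed as 4206969. */
                                        int main(void)
                                        {
                                            printf("%d\n", 4206969 & 0xff);
                                            return 0;
                                        }
                                        ```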

                                        1. 1

                                          Given the context, I genuinely do not know how to respond to this. XD Thanks for the info though.

                                  1. 5

                                      It would also be interesting to compare code density changes for the same target using -Ofast vs -Os. I know e.g. that at least on x86_64 dense code is occasionally at odds with fast code (though not as much as it once was), and I wonder how much that applies across other instruction sets.

                                    1. 1

                                        Although I’d hope that if you care about code density, you’d be on -Os…

                                      1. 1

                                        I’m actually more curious by how much density affects performance.

                                    1. 1

                                      Any comparison of stack frame sizes…? Sparc V8 has notoriously large ones.

                                      1. 6

                                        Note to newbies…

                                        If you’re writing C/C++ (or similar language) and NOT using valgrind… stop whatever you’re doing now, and wire valgrind into your unit tests.

                                        NOW.

                                        1. 10

                                          Honestly, the sentiment isn’t bad, but AddressSanitizer gives you most of the benefit with a fraction of the runtime cost. I prefer to run tests with ASAN enabled and rarely use valgrind these days.

                                          1. 7

                                            I prefer ASan for unit tests in CI (it’s sufficiently fast that I can run the entire test suite with it) but when I do get an ASan failure the output from Valgrind is usually a lot more informative for debugging. There’s also an important category of bug that Valgrind will catch but ASan won’t: ones where you have some subtle undefined behaviour in your program that the compiler exploits at higher optimisation levels. ASan annotations will often block these optimisations, whereas valgrind can run on the release build. The same applies to anything compiled with my nemesis, gcc 9, where valgrind will point to this week’s exciting miscompilation (I can’t wait for the Ubuntu LTS release that included GCC 9 to be EOL; it’s the buggiest compiler I’ve had to support as a target for at least 15 years).

                                            I always cringe a bit when I see people using cachegrind though. Cachegrind does not measure your cache state, it runs your memory trace through a model of a cache that was fairly accurate in the mid ‘90s and now behaves nothing like a modern cache. It’s fine for coarse-grained ‘oh, you’re doing a load of pointer chasing here, that might be slow’ things but it’s unlikely to give a good performance model for a modern CPU.

                                            1. 1

                                              Cachegrind does not measure your cache state, it runs your memory trace through a model of a cache that was fairly accurate in the mid ‘90s and now behaves nothing like a modern cache.

                                              The article also mentions this fact, and I found it interesting. My interest in the tool is mostly rooted in curiosity & personal research rather than any serious work, but I just recently perused cachegrind’s docs — while they’re very explicit about the metrics being the result of a simulation (and a simplified one at that), I failed to perceive it as being very inaccurate or obsolete. Good to know.

                                              Are there any alternatives worth exploring?

                                              1. 2

                                                Are there any alternatives worth exploring?

                                                Not good ones. Modern CPUs have quite a rich set of cache performance counters but they are not very fine grained and the thing you really want is to know the top 10 instruction pointer values that trigger cache misses. The only way of getting that is with off-core debugging (Arm’s CoreSight is nice for this, Intel has a similar thing, but they’re very proprietary).

                                            2. 1

                                              There is a large but not full overlap between the two. There are curious corner cases that asan catches that valgrind doesn’t, and vice versa.

                                              So after fiddling around and testing for a while, I ended up leaving valgrind in our main build as it seemed to catch a larger set of our bugs, but periodically I run asan/ubsan.

                                              I also enabled -fsanitize=bounds-strict -fsanitize-undefined-trap-on-error, which catches most of the cases valgrind doesn’t.

                                            3. 3

                                              A similar note for the advanced – if you have any systems with custom memory managers, teach valgrind how to understand them! It’s not hard. There is a reasonable guide here: https://developers.redhat.com/articles/2022/03/23/use-valgrind-memcheck-custom-memory-manager , although I did things a bit differently myself. I should write about that.

                                            1. 10

                                              When a disease has many treatments, it has no cure.

                                              I think the origin of that saying was referring to depression, but it really applies wonderfully to serialization!

                                              1. 1

                                                😂

                                              1. 1

                                                Contemplating http://govhack.org

                                                This is still a very much half baked idea at the moment…..

                                                I’m wondering about setting up a team to create Not A Newspaper.

                                                It would be pretty much everything a newspaper is not.

                                                • If it’s not still important a month later, we won’t report it.
                                                • If we report it, it’s placed in context (this is where the open government data comes in) and from the reader’s perspective (that data reads differently if you’re renting vs. own a couple of houses, or if you’re a twenty-something vs. 60+).
                                                • If we report stuff, we also enumerate the actions you can take with your vote, your pocket book, your feet, your behaviour to improve the world for all.

                                                I believe we do not have a problem with disinformation.

                                                The problem is a vacuum of useful information. The news media is filled with useless out of context hyped up fluff and undigested press releases.

                                                1. 7

                                                  Does anyone have an example of discoverability of urls working?

                                                  I’ve implemented a (JSON over HTTP) API which tried to be HATEOAS, in that a client was expected to discover URLs from a root, which it could then manipulate to achieve certain goals.

                                                  I think we had two developer teams using the API (one local, one remote) and the remote one just hardcoded the URL fragments they discovered so they didn’t need to start a session and walk down to discover the correct URLs. The idea was we could change implementation and API versions and clients would handle it, but obviously this broke it.

                                                  In hindsight, latency is king and I don’t blame the remote devs for doing that, I’m just curious if anyone ever got this to work (and how)? I guess returning fully randomised URLs in the discovery phase is one way….

                                                  1. 7

                                                    yes, but probably not one you are thinking about: the web

                                                    1. 3

                                                      The Web is intended to be an Internet-scale distributed hypermedia system, which means considerably more than just geographical dispersion. The Internet is about interconnecting information networks across organizational boundaries. Suppliers of information services must be able to cope with the demands of anarchic scalability and the independent deployment of software components. – intro to Fielding’s thesis

                                                      This is the idea of Anarchic Scalability.

                                                      If the client and server belong to different organizations, the server cannot force the client to upgrade, nor can the client force the server not to.

                                                      You cannot force a client to use your service, you can’t stop a thousand from deciding to use your service… and there is always another web site one click away…

                                                      So how are you going to design a system that allows the server to upgrade and change… without “flag days” arranging for all clients to upgrade at the same time as the server?

                                                      Conversely, if you are all part of the same organization… isn’t there something simpler than REST you can do? The downside: the bosses of your team and their team are usually so far from the technical side that they can’t understand the problem.

                                                    1. 13

                                                      Whenever I see Facebook, LinkedIn, or Google (these days), etc. talking about “the customer” I’m reminded of this scene from Mars Attacks:

                                                      https://www.youtube.com/watch?v=_vUrAMxmO_A

                                                      1. 5

                                                        Or the classic Twilight Zone episode “To Serve Man”

                                                        1. 1

                                                          Distressingly accurate. Remember: If you don’t have to pay for something, then you are not their customer, you are their product.

                                                          1. 13

                                                            Also remember: Plenty of companies are still happy to charge you for the privilege of being their product. The promise of cable TV was that there wouldn’t be advertising, since the subscriber was the customer.

                                                            1. 7

                                                              If you do have to pay for something, then you are not their (only) customer, AND you are also a premium product that has money.

                                                              1. 4

                                                                What about the Linux kernel? I have (unfortunately) never done anything to further progress in kernel development.

                                                                1. 2

                                                                  Good question. I think the answer is that the goal of the Linux kernel is not to make a profit.

                                                          1. 8

                                                            I’m glad the author is happy, and it’s true Firefox’s privacy focus is very nice, but I think it bears mentioning that a stunning and ever growing number of websites I encounter just don’t work quite right with Firefox :(

                                                            I think some web devs have decided that Chrome has won and don’t bother testing on anything else.

                                                            1. 17

                                                              Since I uBlock most shite, a lot of websites work better on firefox.

                                                              Not as the authors intended…. but that is a different metric…

                                                              1. 5

                                                                I admire your intestinal fortitude sir :) I can barely use the modern web when it works exactly as authors intended :)

                                                                1. 3

                                                                  Every JavaScript, CSS, UX and Graphics Designer hates my intestines. I make heavy use of, and strongly recommend https://addons.mozilla.org/en-US/firefox/addon/tranquility-1/

                                                                  Your super cluttered, javascript frameworks to the max with megabytes of dynamic css pride and joy…. Click. Gone. Sigh. Tranquility.

                                                                  1. 1

                                                                    I think that given the advanced state of decay the web is now in, most of the time, the web is better when you don’t allow it to function the way the authors intended.

                                                                    1. 0

                                                                      I’ll leave expert assessments on the state of the web to more qualified folks like you, but for the unwashed like myself, what’s the ideal you’re shooting for? Just plain mark-up with no executable code at all perhaps? Or maybe some kind of super restrictive capability model thing?

                                                                      I hear people express opinions like this all the time, and sure what we have is a giant ungainly mess in some regards, but it mostly works for most people’s use cases.

                                                                      I’m just curious how you’d rebuild the web world in your vision of crystalline perfection :)

                                                                2. 8

                                                                  Just wait until Europe finishes implementing its new law that requires Apple to allow alternative browser engines on iOS. I still don’t understand how anyone ever decided that was the biggest anti-competitive issue in the browser market, but here we are, and probably very soon after Chrome’s rendering engine gets onto iOS the browser market will simply cease to exist. Likely will be helped along a bit by Google doing a final “please view our site in Chrome” push across all their properties, and refusing to renew Mozilla’s funding, but hey, at least we will have stuck it to Apple.

                                                                  1. 1

                                                                    I share exactly this concern… the only thing I see that might prevent it is chrome’s battery drain relative to that platform’s default browser.

                                                                  2. 3

                                                                    Keeping an unconfigured instance of Chromium/Chrome around for these situations is a fine strategy. I use Firefox as my primary browser and keep Chromium installed for when websites require it, but that only happens every couple of months for me. Google doesn’t get my login info or browsing history; Firefox gets the vast majority of my web traffic.

                                                                    1. 2

                                                                      Definitely true. I usually just don’t use those websites, but sometimes that option is undesirable enough that I do start up Chrome. I do try to make sure to kill it as soon as possible afterwards…

                                                                    1. 2

                                                                      I firmly believe Formal Method advocates should aim for a more pragmatic approach and “play nice” with compilers and optimizers and Design by Contract and Unit Testing.

                                                                      Let’s face it, writing a full postcondition is just plain a lot harder than writing the code itself.

                                                                      However, writing a perhaps partial postcondition for a few compelling concrete test cases is usually pretty easy.

                                                                      Instead of aiming to “prove” the programmer right or wrong, they should aim to assist the programmer as much as they can.

                                                                      If formal methods acted like warnings on steroids (but with no false positives) and a boost for optimizers and a power up boost for internal precondition checks….

                                                                      I can see a lot more adoption.

                                                                      1. 3

                                                                        Let’s face it, writing a full postcondition is just plain a lot harder than writing the code itself.

                                                                        That’s not at all obviously true! We have many examples where it is false, and it’s probably false in general.

                                                                        An example of this is problems in NP: problems that can be verified in polynomial time but whose solutions cannot be generated in polynomial time (assuming P != NP; but in any case, even if P = NP, the verification is still likely to be far easier).

                                                                        Problems to do with concurrency are another example. There are models that are very simple to specify, but they result in thousands of tricky edge cases that need to be handled perfectly and that humans just don’t come up with on their own.

                                                                        The real problem is that current tooling makes specifying models far too difficult. And even worse, it makes knowing whether you’ve specified the right model quite tricky. This is probably a combination of having immature tools and having chosen the wrong formalism for expressing these models. A lot of logics look great at first but their semantics, particularly when it comes to reasoning across time, turn out to be really tricky. We have some glimpses of how this can work with languages like Haskell or Isabelle in domains that can be modeled well, and it’s beautiful. But there’s a long road from here to a point where we can express these postconditions in a more natural way and still reason about them automatically.

                                                                        1. 6

                                                                          You’re kinda illustrating @JohnCarter’s point here.

                                                                          I’m not working over in CSAIL, but here in startupcanistan even having basic tooling around post-conditions in design-by-contract for things like “this function will never go negative when calculating a sum of credits”, “this function will always return a smaller value towards zero than its inputs”, “this function will return a URL that doesn’t have a query string with such-and-such values in it” would still be super helpful.

                                                                          We’re trying to make a cup of tea, not boil the ocean. Until formal methods folks realize this, their entire field is of limited utility to practical, real-world software engineering outside of a handful of cases (say, writing a database engine).

                                                                          1. 2

                                                                            I’m going to recommend this great post about combining property based testing with contracts. If you can add the assertions for your pre and post conditions, through something like an assert statement in your language or some library support, you can use property based testing to at least generate a lot of inputs. And that’s more likely to find assertion failures than hand-writing test cases.

                                                                            That’s one of the lightest-weight approaches for adding formal methods concepts to a real word project.
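The combination can be sketched with nothing but the standard library: encode the pre- and postconditions as asserts, then throw randomly generated inputs at them. (The `clamp` function here is an invented example, not from the linked post; a real property-based library like QuickCheck or Hypothesis adds input shrinking and smarter generation.)

```python
import random

def clamp(x, lo, hi):
    """Clamp x into [lo, hi]; the asserts encode the contract."""
    assert lo <= hi, "precondition: lo <= hi"
    result = max(lo, min(x, hi))
    assert lo <= result <= hi, "postcondition: result within bounds"
    return result

# Property-based check: hammer the contract with random inputs instead
# of a handful of hand-picked test cases.
rng = random.Random(0)
for _ in range(1000):
    lo = rng.randint(-100, 100)
    hi = rng.randint(lo, 200)   # generated inputs respect the precondition
    x = rng.randint(-1000, 1000)
    clamp(x, lo, hi)            # any contract violation raises here
```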

                                                                            1. 1

                                                                              Yes, actually from pre/post-conditions you can get 3 things:

                                                                              • Support for generating random tests, similar to QuickCheck
                                                                              • Verification at compile-time, like Dafny
                                                                              • Fail-fast behavior at runtime, like Eiffel

                                                                              The third item is really pragmatic and underappreciated. At runtime, a failed precondition should prevent a function/method from running. A failed postcondition should prevent the function from returning or writing its side effects. A huge class of software errors would disappear if software was written like this.

                                                                              Probably, you can also get a fourth one, better static analyses (without going into full verification) and a fifth one (more efficient code, as discussed elsewhere in this thread).

                                                                              It is unsurprising design-by-contract is so thorough as it is actually refinement types under disguise.
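The third item can be sketched in a few lines of Python. The `contract` decorator below is a hypothetical illustration (not a real library): a failed precondition stops the call from running, and a failed postcondition stops the result from being returned, Eiffel-style.

```python
import functools

def contract(pre=None, post=None):
    """Fail-fast design-by-contract: check pre before the call,
    post (given the result and the original args) before returning."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} failed"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} failed"
            return result
        return checked
    return wrap

@contract(pre=lambda credits: all(c >= 0 for c in credits),
          post=lambda total, credits: total >= 0)
def sum_credits(credits):
    return sum(credits)
```

With this in place, `sum_credits([-1, 2])` never runs the body at all: the precondition raises first, which is the fail-fast behavior being described.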

                                                                              1. 2

                                                                                It is unsurprising design-by-contract is so thorough as it is actually refinement types under disguise.

                                                                                And refinement types are Hoare logic in disguise :)

                                                                                1. 1

                                                                                  True! :)

                                                                          2. 2

                                                                            An example of this is are problems in NP. Problems that can be verified in polynomial time but whose solutions cannot be generated in polynomial time

                                                                            Assuming P!=NP says nothing about the relative difficulty of writing the solver vs verifier, though. Or are you speaking metaphorically?

                                                                            1. 1

                                                                              Practically, most solvers for such problems include the verifier as part of the solver. So the solver tends to be (much) more complex. But yes, that’s just a rough example.

                                                                            2. 2

                                                                              We have many examples where it is false, and it’s probably false in general.

                                                                              Really? Can I see those examples? I’d be really curious to see some for something like https://github.com/seL4/seL4/blob/master/src/string.c

                                                                              But I couldn’t see them from a quick dig around the repo.

                                                                            3. 3

                                                                              I totally agree. My personal interest is in how to bring formal methods to work, in whatever package is the most useful. There’s a term called “lightweight formal methods” which is similar in philosophy to what you’re talking about. Basically, take the ideas from formal methods, like invariants / properties / formal logic, but use them in a way that doesn’t require full math proof.

                                                                              I think Amazon’s approach here is really promising. They took the theorems that would be proven in a verification effort, like refinement of a model, and instead used property-based testing to check for it. So the property isn’t proven, but they have some amount of confidence in it. They also found 16 bugs this way, before shipping. And isn’t that the end goal?

                                                                              So yea. I think coming up with the right package of a lightweight, practical way to bring formal methods thinking to work is something that there’s demand for.

                                                                              1. 2

                                                                                This is an interesting point. Do we need mathematical proof? Or can we use scientific proof? Mathematical proof’s epistemology for this domain is fairly straightforward (classical logic for constructive proofs over finite problems). Scientific proof is epistemologically far more fraught…but often a lot easier to produce since it is subject to further refutation.

                                                                                1. 2

                                                                                  Yes definitely - tests are really a scientific / empirical activity. They are about observing what really happens, not just what the logical model says should happen. I would say the main difference between testing in software and actual scientific experiments though is that we don’t know how to talk about the statistical significance of individual test cases. At least I don’t. I’ve heard that the Cleanroom approach has some notion of statistics, but I’m honestly not familiar with it in detail.

                                                                                  As far as which one is appropriate for software development. It could be contextual, but I pretty much solely focus on “business software,” or “regular” software, and for that I would say math proof is almost never necessary. It could be desired in certain cases, but even then I would only prove things about a model or a broad algorithm in general. Proofs at the implementation level are just extremely time consuming, even with things like proof assistants.

                                                                                  So as a broad strategy, I think we (as an industry) have to get better at elevating testing to a point where we statistically know how effective a test suite is. And no, branch coverage is not what I’m talking about, since input data is really the coverage that we should really care about.

                                                                                  1. 1

                                                                                    I would say the main difference between testing in software and actual scientific experiments though is that we don’t know how to talk about the statistical significance of individual test cases.

                                                                                    Statistical significance isn’t really the issue in most software tests because we set them up to be deterministic. Hypothesis testing is a means of working with randomness in observations. The issue is the selection of observations and test cases. This is true in biology or in software testing. Hypothesis testing is a tool for doing an individual observation, but the aggregate set of observations to run (the research program, more or less) doesn’t depend on that. I’ve written some stuff about this.

                                                                                    I would say math proof is almost never necessary.

                                                                                    I disagree. We depend on proofs heavily on a regular basis. Type systems are the most common example. But setting up a system where you control the flow of information to make certain states impossible is another. These aren’t comprehensive proofs, and they are often implicit, but they are properties we prove mathematically.

                                                                                    branch coverage is not what I’m talking about, since input data is really the coverage that we should really care about.

                                                                                    Mutation testing is the most defensible approach to coverage that I’ve seen: if you change the program, does the test suite start throwing errors. I also like property based testing because it gets humans out of the business of selecting test data individually.
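Mutation testing is easy to illustrate in miniature (the functions here are invented for the sketch): flip an operator in the code under test and see whether the suite notices, i.e. "kills the mutant".

```python
# Code under test.
def add(x, y):
    return x + y

# A deliberately weak test suite: a single, unlucky test case.
def suite(fn):
    return fn(2, 2) == 4

# A stronger suite with one more case.
def better_suite(fn):
    return fn(2, 2) == 4 and fn(1, 3) == 4

mutant = lambda x, y: x * y   # the mutation: + becomes *

assert suite(add)             # the original passes the weak suite
assert suite(mutant)          # ...but so does the mutant (2 * 2 == 4):
                              # it survives, exposing a coverage hole
assert not better_suite(mutant)   # the stronger suite kills it (1 * 3 != 4)
```

A surviving mutant tells you something branch coverage cannot: the lines were executed, but no assertion actually depended on their behavior.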

                                                                                    1. 1

                                                                                      I also like property based testing because it gets humans out of the business of selecting test data individually.

                                                                                      Unfortunately in practice I’ve found you have to spend a lot of time getting the data generator just right to cover an interesting enough set of inputs.

                                                                                      1. 1

                                                                                        You have probably used it far more in-depth than I have. I was mostly using it to generate fairly straightforward things like lists of a type.

                                                                                      2. 1

                                                                                        Re math proofs and necessity - I was talking more about a full proof of the functional correctness of an entire application. I think that is overkill in most cases. Not proofs in general, especially about smaller algorithms.

                                                                              1. 4

                                                                                Rusty’s API Design Manifesto http://sweng.the-davies.net/Home/rustys-api-design-manifesto

                                                                                The idea

                                                                                Application Programming Interface (API) design is hard. But it’s even harder to change once you get it wrong. So what you should do is to spend the effort to get it right the first time around.

                                                                                In the Linux Kernel community Rusty Russell came up with an API rating scheme to help us determine if our API is sensible, or not. It’s a rating from -10 to 10, where 10 is perfect and -10 is hell. Unfortunately there are too many examples at the wrong end of the scale.

                                                                                Rusty’s original descriptions https://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html

                                                                                What the API levels are
                                                                                
                                                                                    10. It's impossible to get wrong.
                                                                                    9. The compiler/linker won't let you get it wrong.
                                                                                    8. The compiler will warn if you get it wrong.
                                                                                    7. The obvious use is (probably) the correct one.
                                                                                    6. The name tells you how to use it.
                                                                                    5. Do it right or it will always break at runtime.
                                                                                    4. Follow common convention and you'll get it right.
                                                                                    3. Read the documentation and you'll get it right.
                                                                                    2. Read the implementation and you'll get it right.
                                                                                    1. Read the correct mailing list thread and you'll get it right.
                                                                                
                                                                                    -1. Read the mailing list thread and you'll get it wrong.
                                                                                    -2. Read the implementation and you'll get it wrong.
                                                                                    -3. Read the documentation and you'll get it wrong.
                                                                                    -4. Follow common convention and you'll get it wrong.
                                                                                    -5. Do it right and it will sometimes break at runtime.
                                                                                    -6. The name tells you how not to use it.
                                                                                    -7. The obvious use is wrong.
                                                                                    -8. The compiler will warn if you get it right.
                                                                                    -9. The compiler/linker won't let you get it right.
                                                                                    -10. It's impossible to get right.
                                                                                
                                                                                1. 2

                                                                                  People are always so so confused about what asserts mean, and hence we miss so many opportunities for safety AND performance.

                                                                                  Wouldn’t it be nice if a compiler could warn us if there was any path on which a precondition check could fail?

                                                                                  Wouldn’t it be nice if the precondition checks informed the optimizer about characteristics and relationships of the variables involved?

                                                                                  A while back I said on this forum it would be interesting to have a doubly-linked-list-based language… and it took me quite some time to spot the obvious flaw: you can’t have (cheap) immutable data structures.

                                                                                  But yes, a digraph as a first class citizen would be very useful. It really is a pretty fundamental data structure.
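The immutability point can be made concrete with a minimal cons cell (an illustrative sketch, not any particular language's list type): two immutable list versions can cheaply share a tail, which back-pointers would forbid.

```python
class Cons:
    """An immutable singly linked list node."""
    __slots__ = ("head", "tail")
    def __init__(self, head, tail=None):
        self.head, self.tail = head, tail

shared = Cons(2, Cons(3))   # the tail [2, 3]
a = Cons(1, shared)         # [1, 2, 3]
b = Cons(0, shared)         # [0, 2, 3] -- reuses `shared` untouched

# With doubly linked nodes, `shared` would need a back-pointer to its
# predecessor; a and b disagree about what that predecessor is, so the
# tail could no longer be shared without copying it for each version.
```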

                                                                                  I have lost my fear of metaprogramming. Metaprograms are just another kind of program, and like any other program their intermediate results need to be readable, inspectable, debuggable and testable.

                                                                                  And yes, graphics on a ye olde 6502 based UK101 was much easier than anything today.

                                                                                  1. 3

                                                                                    My fav questions:

                                                                                    What good does your company’s existence do for humanity (or the community, depending on the type of company)? Conversely, what bad outcome for humanity are you willing to accept for the financial success of the company?

                                                                                    1. 1

                                                                                      In the past I weeded out quite a bit of that by making it clear to the recruiter I refused to do any military/defense only projects.

                                                                                    1. 8

                                                                                      Lately I’ve been at startups. In addition to the other comments offering excellent advice for any software-engineering interview, here are some startup-specific questions:

                                                                                      • “Are you profitable? If not, how long and stable is the runway?”
                                                                                      • “Who would I report to? What sorts of responsibilities are being delegated to me?”
                                                                                      • “Would I be working on the flagship product or the greenfield product? Or is this role not limited to one product?”
                                                                                      • First, find out what the potential employer does and their target market. Then ask about issues that customers perceive in their products. Finally, pick your favorite issue, and ask, “Hypothetically, how would you feel if I told you that I only want to come onboard in order to fix my favorite issue?”
                                                                                      1. 2

                                                                                        Decades ago I interviewed at two startup companies… Afterwards I phoned my wife and said, “They’re nuts here. One company only has “marketing visionaries” and the other only engineers, no salesmen.”

                                                                                        I didn’t take either job. The marketing-only bunch chewed through an amazing amount of venture capital and then sank without trace.

                                                                                        The engineer only place chugged along for about a decade and then sold out to a larger but fairly like minded bunch and are still going.