Threads for mzl

    1. 3

      As usual with constraint programming systems, there are a lot of example programs available on Håkan Kjellerstrand’s site.

    2. 1

      I’m skeptical that it’s all that capable of “general-purpose applications”; so far it seems more like an imperative language and a constraint solver bolted together. But that’s what I was looking for when I discovered this, so I’ll see if it solves my problems.

      There’s a web IDE here: http://retina.inf.ufsc.br/picat.html

      1. 2

        For your current use-case, do you need it to be in a logic-programming like style, or is it more that you need constraint programming solver capabilities available in a general purpose language?

        If it is the latter, I personally would consider using a well-known library like OR-Tools or Gecode instead from a “normal” programming language. OR-Tools has a Python-version that is fairly easy to get started with.

        Another alternative is to use CPMpy, which is a generic modelling layer in Python that can be used with different solver back-ends like OR-Tools or SAT-solvers.

        1. 1

          Thanks for the recs! I’ve specced out the same problem in Minizinc and Alloy, and the logic programming stuff ends up being quite useful for me. I’m using it to say things like “schedule stargazing for a day where it’s a clear sky and I’m out of the city unless I’m also doing two other activities that day”.

    3. 1

      This feels like the type of problem that ChatGPT would excel at. For example, any LLM could produce Z3 code for “There are three Toms named …, …, and … Each Tom has one of the Queens named …, …, and …”. That would be more than half the code of the article.

      While SAT solvers are interesting, there is the same self-perpetuating priesthood around their discussion that leads to “category theory” discussions for Rust.

      1. 12

        Weeeeell, every blog post about Z3 I’ve seen (and I’ve also written two) is like “no seriously, look how easy this is, you really don’t have to be a genius or anything to use an SMT solver”, but then people lump it in with formal methods generally as being “hard”. I guess that’s good for my hourly rate as a consultant in this area, but I am serious when I say it takes very little investment to add an immensely powerful tool to your toolkit here. It is far, far easier to learn than TLA+ or Coq/Lean. Very intuitive, because people have been learning how to think in “solve for X” terms since fifth grade or so.

        1. 4

          I did a few tiny side projects using Z3 and MiniZinc. They indeed are powerful tools, I agree with you on that.

          What I find difficult is not learning the tool itself. What is difficult is the modelling part, i.e. how do I translate a problem description to a set of equations and constraints? What helper variables should I introduce?

          I’ve recently been modeling a typical engine-building game: the kind of game in which you spend resources to improve your factories so you get more resources per time unit, so you get more resources, etc. I’m comparing MiniZinc (with different solvers) against using a path-finding algorithm. The goal is to find a “good” order of actions in the game, so I can maximize the levels of my factories in the shortest amount of time.

          It’s not straightforward how to answer the following questions:

          1. how do I model time?
          2. how do I model an action like “wait some time” without hardcoding the wait time? Do I need to model such an action at all? (In my case it turned out that I don’t but it required a whole evening to reach this conclusion.)
          3. how do I model that my available resources change over time (because factories constantly produce resources)?
          4. how do I model that I can invest my resources so I get more resources in the future?
          5. what models lead to reasonable performance?

          To an outsider like me, it feels like those in the know are familiar with dozens of modelling… tricks? Tacit knowledge, if you will.

          My question is: I want to make this investment you’re talking about. Are there any resources which focus more on the modelling part, and less on the tooling? What can I do to improve my modelling skills? Is it something that only comes with experience, or can I learn from successes and failures of others?

          1. 3

            For MiniZinc, I would recommend the Coursera courses (https://www.coursera.org/learn/basic-modeling and follow-ups) for learning more about modelling.

            For your engine-building game, given your explanation, I would probably make a step-indexed model where each step is an action. So in MiniZinc notation, something like the following might be used:

            array[Steps] of var Action: actions;
            array[Steps] of var Time: action_duration;
            array[Steps] of var Time: time;
            
            constraint forall (s in Steps where s+1 in Steps) (
                time[s+1] = time[s] + action_duration[s]
            );
            

            Now, unfortunately encoding planning problems into a static constraint programming or SMT problem can be less than ideal if there are many steps that should be performed. In such cases, a dedicated planning solver might be better.

            1. 1

              For MiniZinc, I would recommend the Coursera courses (https://www.coursera.org/learn/basic-modeling and follow-ups) for learning more about modelling.

              Thanks for the recommendation, it looks promising! If you have any supplemental materials to suggest, feel free to throw links or titles at me :)

              I would probably make a step-indexed model where each step is an action.

              I have to say that your snippet is basically what I ended up with after a few iterations! I especially like your where s+1 in Steps condition. Much cleaner than what I had.

              Now, unfortunately encoding planning problems into a static constraint programming or SMT problem can be less than ideal if there are many steps that should be performed. In such cases, a dedicated planning solver might be better.

              In my case I expect ~100 actions. What do you mean by dedicated planning solver? Would it be a different backend for MiniZinc or are you talking about a different category of software?

              1. 1

                In my case I expect ~100 actions. What do you mean by dedicated planning solver? Would it be a different backend for MiniZinc or are you talking about a different category of software?

                I’ve done some planning at that size, but I did not try to find an optimal solution then. For a slightly smaller but fun planning problem, I solved Advent of Code 2022 day 16 as a planning problem with MiniZinc.

                Not sure which backend for MiniZinc you’ve used, but I would strongly recommend installing OR-Tools; the binary package includes files for adding it to the MiniZinc IDE (point the IDE to the ortools.msc file). With OR-Tools, it is also important to enable free search and to set the number of workers to the number of cores you have available, as it works much better that way.

                As for planning solvers, that is another kind of solver, not something used with MiniZinc. I’ve never really used one, actually, and I’m not sure how well it would work. Typically, PDDL is used as the language to describe the problem, which is then solved with a planner that can take it as input.

          2. 2

            You’re correct that modeling is where the difficulty is. I don’t know any good singular resource on modeling, maybe because it’s highly domain-specific; I just finished reading an entire book on how to model a single work queue on a single processor (never mind a network of them) for example. The entirety of physics is just training you to come up with good models and working through their implications. Your particular exploration/exploitation problem is one that has interested me before (when playing games like Power Grid) so I’d be interested in what you come up with to model it. In the worst case you can always use Monte Carlo.
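
            To make the flat Monte Carlo fallback concrete, here is a sketch in Python; the toy game model (starting income, upgrade prices, the coin-flip buy policy) is entirely made up for illustration and not taken from any particular game:

```python
import random

def simulate(steps, upgrade_cost=10, rng=None):
    """Play one random episode of a made-up engine-building toy game.

    Each step either buys an upgrade (when affordable, on a coin flip)
    or waits and collects income. Returns the factory level reached.
    """
    rng = rng or random.Random()
    income, resources, level = 1, 0, 0
    for _ in range(steps):
        price = upgrade_cost * (level + 1)
        if resources >= price and rng.random() < 0.5:
            resources -= price   # invest: lose resources now...
            level += 1
            income += 1          # ...to earn more per step later
        else:
            resources += income  # wait and collect
    return level

def monte_carlo_best(trials=1000, steps=50, seed=42):
    """Flat Monte Carlo: many random playouts, keep the best outcome."""
    rng = random.Random(seed)
    return max(simulate(steps, rng=rng) for _ in range(trials))
```

            Flat Monte Carlo like this gives a cheap baseline to compare a constraint model against, even when it is far from optimal.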

            1. 2

              I just finished reading an entire book on how to model a single work queue on a single processor

              I have to share the title now :)

              Your particular exploration/exploitation problem is one that has interested me before (when playing games like Power Grid) so I’d be interested in what you come up with to model it. In the worst case you can always use Monte Carlo.

              I’m not interested in the answer per se. I treat it more like an opportunity to compare different approaches, where using constraint programming is one of them. I might add Monte Carlo to the list, thanks.

              1. 2

                I treat it more like an opportunity to compare different approaches, where using constraint programming is one of them.

                Sometimes it is interesting to combine approaches. I’ve previously worked on a program for playing the game Patchwork, where I used constraint programming for representing the game board and flat Monte Carlo search for evaluation. I wrote a small paper and an associated blog post about how to use CP for the state representation.

              2. 2

                The book is called “Performance Modeling and Design of Computer Systems: Queueing Theory in Action”. Horrible title IMO. I don’t regret reading it but don’t really recommend it either. Probably the main things I got from it were a better understanding of Poisson/exponential/Pareto distributions and classifications of infinite-state CTMCs. Other than that it seemed stuck in the WW2 era where you analyzed systems by setting up a model with a bunch of differential equations solved by hand.

    4. 3

      Automated complex refactors of multi-million line programs are not possible in Rust today.

      With all due respect, a multi-million line code base is a problem in its own right and should be separated into more manageable pieces. IDE support is not your main problem at this point :-)

      1. 5

        Trunk-based development in a monorepo is a fairly popular approach in some large organizations, where doing refactorings on a multi-million line code base would be reasonable. It all depends on context.

        Added to that, the support for automated refactorings in modern Java IDEs like IntelliJ is outstandingly good. Part of this is that the language is simple enough to support this, part of it is that the market is huge. While I really think that Rust refactorings in the CLion/IntelliJ Rust plugin are coming along very nicely, they are nowhere near (and will probably never be) what is possible in Java.

        1. 2

          Trunk-based development in a monorepo is a fairly popular approach

          I know, I’m working in one. This is why I’m saying it’s a problem in itself way beyond the IDE abilities :-)

          1. 1

            It all depends on how it’s done. As long as most development can be done in just a single or a few subprojects, then I think it is fine.

            The real benefit is being able to do wide-ranging refactorings as a single change when needed. It’s such a huge win compared to coordinating many releases of many interconnected projects; that is such a hassle that things risk stagnation.

            Of course, if you always have to work in the full multi-million-line code base with no subdivisions, then I agree, that is a bad idea.

    5. 1

      Isn’t this the Rust that everyone has been waiting for? Isn’t this our 1 dot Oh Yeah! It is on!!

      Please correct me, but I believe this is the case.

      1. 2

        Sort of:

        With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we’ll tell you more about in the future.

        1. 3

          The stabilization report for the MVP of async/.await indicated a goal of 1.38, which comes out in ~3 months. Really looking forward to playing with that.

          Lots of future improvements on the horizon also. Personally I am really excited for async/await when combined with Generic Associated Types (type-members in traits can be generic, not just concrete as today), since that is a key requirement to be able to have async trait methods. Work on GAT is ongoing but not at all finished.

          1. 2

            That means, BTW, that the feature needs to be mostly done in 1.5 months, as that’s when the beta is being cut.

            1. 3

              Yes. Here is the issue I mentioned above: https://github.com/rust-lang/rust/issues/62149

              From the issue:

              This leaves documentation and testing as the major blockers on stabilizing this feature.

              and

              As of today we have 6 weeks until the beta is cut, so let’s say we have 4 weeks (until August 1) to get these things done to be confident we won’t slip 1.38.

    6. 2

      Dynamic programming is such an interesting concept, it took a good while (and lots of examples coded) to get my head around it.

      The interesting thing is that I have not used it outside of preparation for interviews. I am not sure if it is still the case, but Google and Facebook would often ask problems that needed dynamic programming approaches to solve (e.g. the classic knapsack problem or one of its variations).

      In the same way, I have noticed that the only engineers who can knock out dynamic programming solutions on demand are either the ones who did competitive programming back in the day (where this is a basic technique) or those who’ve gone through thorough interview preparation.

      Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

      1. 7

        I’ve used it in the real world. Had a problem a decade or so ago that required computing the difference between tree-structured data, and a dynamic programming method similar to substring edit-distance was ideal. No libraries implemented it for me, and it was a pretty straightforward implementation via a dynamic programming approach. The use case was in the context of static program analysis where the trees I was working with were ASTs. I wrote part of it up a couple years ago for fun in F# (http://syntacticsalt.com/blog/2015-12-24-comparing-trees-functionally.html).
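
        The string edit-distance recurrence that a tree diff like that generalizes can be sketched in a handful of lines; this is the classic Levenshtein version, not the tree variant from the linked post:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via bottom-up dynamic programming.

    d[i][j] = minimum number of edits to turn a[:i] into b[:j].
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match/substitution
    return d[m][n]
```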

        1. 1

          That’s a cool use case and the post was a good read. Thanks for sharing!

      2. 5

        Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

        Dynamic programming algorithms are at the core of many, many bioinformatics and computational biology solutions (DNA sequencing, the human genome, and etc). One of the earliest was Needleman-Wunsch (circa 1970). Local alignments, global alignments, different cost structures all took off from the same basic trick.
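
        As a minimal illustration of that basic trick, here is a score-only Needleman-Wunsch sketch in Python; the scoring parameters (match +1, mismatch -1, gap -1) are illustrative defaults rather than a standard substitution matrix:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch DP recurrence.

    s[i][j] = best score for aligning a[:i] against b[:j].
    """
    m, n = len(a), len(b)
    s = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        s[i][0] = s[i - 1][0] + gap   # a[:i] aligned against gaps
    for j in range(1, n + 1):
        s[0][j] = s[0][j - 1] + gap   # gaps aligned against b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = s[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                      else mismatch)
            s[i][j] = max(diag,               # align a[i-1] with b[j-1]
                          s[i - 1][j] + gap,  # gap in b
                          s[i][j - 1] + gap)  # gap in a
    return s[m][n]
```

        Local alignment (Smith-Waterman) and affine gap costs are small variations on the same table.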

      3. 4

        I’ve gotten to use dynamic programming at my $DAYJOB, since it was the only reasonable way of solving a problem.

        In school, in competitive programming, and in interview situations, the problems are typically quite well-defined, and the data structure of choice is typically an n-dimensional matrix of integers. For an actual messy real-world problem, issues such as data structure choice, complicated solution types, choice of forward/backward/recursive organization of computation, memory limiting for extreme cases, and making operations parallel crop up. The core took half a day, but making it “industrial strength” took many months with many people involved.

      4. 3

        I think it’s likely that all the common uses of such techniques come as part of standard libraries in most languages, so when you need them, you have prepackaged versions. C/C++ code is the most likely place where you end up rolling your own, but with the STL, since you can run these algorithms on your own objects, it is even less needed.

      5. 2

        One recent use I remember was finding a (bit-) optimal parse for a LZ style compression algorithm. Basically it finds the shortest path from each position to the end (source for anyone interested).
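
        That shortest-path-to-the-end view can be written as a backward dynamic program; the bit costs and the shape of the matches table below are made-up stand-ins for a real LZ encoder’s statistics:

```python
def optimal_parse(n, matches, literal_bits=9, match_bits=17):
    """Bit-optimal LZ parse as a shortest-path DP.

    cost[i] is the cheapest encoding of input[i:]. `matches` maps a
    position to the list of usable match lengths starting there; the
    bit costs are illustrative constants, not a real coder's output.
    """
    INF = float("inf")
    cost = [INF] * (n + 1)
    cost[n] = 0                  # nothing left to encode
    for i in range(n - 1, -1, -1):
        cost[i] = literal_bits + cost[i + 1]      # emit one literal
        for length in matches.get(i, []):
            if i + length <= n:                   # emit a match instead
                cost[i] = min(cost[i], match_bits + cost[i + length])
    return cost[0]
```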

      6. 2

        In university, I had a graph computation problem that had a recursive solution, but you could derive another graph from it that was easier to solve, also recursively. The paper only had pseudo-code, and the proof of the time complexity just assumed you could use another algorithm in the middle that had a certain complexity, calculated using the recursive method.

        In practice, though, using the recursive way of doing things absolutely sucked and switching it to the dynamic approach changed it from calculating results for subgraphs to, instead, combining results from already calculated subgraphs.

        The funny thing was how much more straightforward it seemed for this problem than for all the motivating examples before it.

        I wish I remembered anything about the problem but it’s all gone with time (this was years ago). It wasn’t particularly complex but it was very satisfying to ‘flip’ it to bottom up and see the effect.

    7. 3

      The History of Programming Languages conference paper on the early history of Smalltalk by Alan Kay has some interesting background as well. They had a compiler listing for Simula that didn’t work, and the documentation was not the best. Their process of learning the language was to read the compiler:

      another graduate student and I unrolled the program listing 80 feet down the hall and crawled over it yelling discoveries to each other. The weirdest part was the storage allocator, which did not obey a stack discipline as was usual for Algol. A few days later, that provided the clue. What Simula was allocating were structures very much like the instances of Sketchpad. There were descriptions that acted like masters and they could create instances, each of which was an independent entity.

      From http://worrydream.com/EarlyHistoryOfSmalltalk/. For reference, the history of the Simula languages is described here: http://cs-exhibitions.uni-klu.ac.at/fileadmin/template/documents/text/The_development_of_the_simula_languages.pdf

    8. 2

      To make a type that can never be created, simply create an empty enum. Use this where you want to prevent compilation of specific codepaths.

      What does that mean, prevent compilation of specific code paths? When are we writing code paths that we don’t want to compile?

      1. 2

        For example, some API might be expecting a Result<V,E> type, but your specific operation can never fail. If you then use the empty enum mentioned as the E-type, the compiler will know that there will never be an error and can remove all the error handling around your operation.

        There are plans to add a specific type to Rust, denoted !, called the never type, that has this specific meaning. Unfortunately the never type is problematic, so it has been delayed quite some time.

        1. 2

          On stable you can use ! for functions that never return, which is helpful in embedded where main should never return, or to make match arms typecheck when one of them exits the process, like a usage statement.

    9. 3

      When I was a doctoral student financed from public sources, I was very happy that all the code we produced was released under a liberal open source license (MIT). It really felt like the right thing to do.

    10. 5

      I honestly tried several times to get into it but I don’t know, it just didn’t click yet.

      I find golang much nicer to work with if I need to do what rust promises to be best at.

      I won’t give up on it yet but that’s just my feeling today.

      1. 12

        I’ll likely take some heat for this but my mental model has been:

        • Go is the Python of 2019
        • Rust is the C++ of 2019

        Go has found its niche in small-to-medium web services and CLI tools where “server-side scripting languages” were the previous favorite. Rust has found its niche in large (or performance-sensitive) interactive applications where using GC is untenable. These aren’t strict boundaries, of course, but that’s my impression of where things have ended up.

        1. 9

          I agree with the mental model, although I usually think of Go as the Java for (current year).

          1. 5

            The tooling is light years behind though

            1. 3

              “go fmt” offers a standard way to format code, which removes noise from diffs and makes code other people have written more readable.

              “go build” compiles code faster than javac.

              The editor support is excellent.

              In which way is the Java tooling better than Go, especially for development or deployment?

              1. 8

                How is the debugger these days?

                When I was doing go a few years ago, the answer was “it doesn’t work”, whereas java had time-travel debugging.

                1. 1

                  Delve is a pretty great debugger. VSCode, Atom etc all have good debugging support for Go, through the use of Delve. Delve does not have time-travel, but it works.

                  Packaging Java applications for Arch Linux is often a nightmare (with ant downloading dependencies in the build process), while with Go, packaging does not feel like an afterthought (but it does require setting the right environment variables, especially when using the module system that was introduced in Go 1.11).

                  Go has some flaws, for example it’s a horrible language to write something like the equivalent of a 3D Vector class in Java or C++, due to the lack of operator overloading and multiple dispatch.

                  If there are two things I would point out as big advantages of Go compared to other languages, they are the tooling (go fmt, godoc, go vet, go build -race (the built-in race detector), go test, etc.) and the fast compilation times.

                  In my opinion, the tooling of Go is not “light years behind” Java, but ahead (with the exception of time-travel when debugging).

              2. 2

                My three favourite features when developing Java:

                • The refactoring support and IDE experience for plain Java is outstanding. Method extraction (with duplicate detection), code inlining, and rearranging classes (extract classes, move methods, extract/collapse hierarchies) makes it very easy to re-structure code.
                • Java Flight Recorder is an outstandingly good tool for insight into performance and behaviour, at virtually no overhead.
                • Being able to change code live, drop frames, and restart the system in the debugger is a life-saver when debugging hard-to-reach issues. That the process being debugged can be essentially anywhere is a wonderful bonus.

                Sure, it would be nice if there was a single Java style, but pick one and use it, and IDEs generally reformat well. Also, the compile times can be somewhat long, but for plain Java they are usually OK.

                Note that I have never had to work in a Spring/Hibernate/… project with lots of XML-configurations, dependency injections, and annotation processing. The experience then might be very different.

              3. 1

                Just the other day I connected the debugger in my IDE to a process running in a datacenter across the ocean and I could step through everything, interactively explore variables etc. etc. There is nothing like it for golang.

        2. 5

          “I’ll likely take some heat for this but my mental model has been:”

          All kinds of people say that. Especially on HN. So, not likely. :)

          “These aren’t strict boundaries, of course, but that’s my impression of where things have ended up.”

          Yup. I would like to see more exploration of the middle ground in Rust. As in, the people who couldn’t get past the borrow checker just try to use reference counting or something. They get the other benefits of Rust with the performance characteristics of a low-latency GC. They still get borrow-checker benefits from other people’s code that does borrow-check. They can even study it to learn how it’s organized. Things click gradually over time while they still reap some benefits of the language.

          This might not only be for new folks. Others who know Rust might occasionally do this for non-performance-sensitive code that’s not borrow-checking for some reason. They just skip it because the performance-critical part is an imported library that does borrow-check. They can decide to fight with the borrow checker later if it’s not worth their time within their constraints. Most people say they get used to avoiding the problems, though, so I don’t know if this scenario would play out in regular use of the language.

          1. 6

            I agree; for 99% of people, the level of strictness in Rust is the wrong default. We need an ownership system where you can get by with far fewer errors for non-performance-sensitive code.

            The approach I am pursuing in my current language is to essentially default to “compile time reference counting”, i.e. it does implement a borrow checker, but where Rust would error out, it inserts a refcount increase. This is able to check pretty much all code which previously used runtime reference counting (but with 10x or so less runtime overhead), so it doesn’t need any lifetime annotations to work.

            Then, you can optionally annotate types or variables as “unique”, which will then selectively get you something more like Rust, with errors you have to work around. Doing this ensures that a) you don’t need space for a refcount in those objects, and b) you will not get unwanted refcount increase ops in your hot loop.

            1. 2

              Ha ha, just by reading this comment I was thinking ‘this guy sounds a bit like Wouter van Oortmerssen’, funny that it turns out to be true :-) Welcome to lobste.rs!

              Interesting comment, I assume you are exploring this idea in the Lobster programming language? I would love to hear more about it.

              1. 2

                Wow, I’m that predictable eh? :P

                Yup this is in Lobster (how appropriate on this site :)

                I am actually implementing this as we speak. I’ve finished the analysis phase and am now working on the runtime part. The core algorithm is pretty simple; the hard part is getting all the details right (each language feature, and each builtin function, has to correctly declare to its children and its parent whether it is borrowing or owning the values involved, and then keep those promises at runtime). But I’m getting there, and should have something to show before too long. I should definitely do a write-up on the algorithm when I finish.

                If it all works, the value should be that you can get most of the benefit of Rust while programmers mostly don’t need to understand the details.

                Meanwhile, happy to answer any more specific questions :)

        3. 4

          I don’t understand, and have never understood, the comparisons between Go and Python. Ditto for former Pythonistas who are now Gophers. There is no equivalent of itertools in Go, and there can’t be due to the lack of generics. Are all these ex-Python programmers writing for loops for everything? If so, why???

        4. 1

          Not Go but Julia is more likely the Python of 2019.

    11. 1

      Doing it in Rust this year again: https://github.com/zayenz/advent-of-code-2018

      The project is set up so that each problem is its own small CLI program that reads input on standard in and writes the output to standard out. When I set up the repository, I wrote a made-up solution as an example to start from, and the problem I chose happened to be exactly the first real problem today :-)

    12. 1

      Does anyone have any link to some example-code that uses DataFixerUpper? I think it would be interesting to see how the concepts look in Java when used.

    13. 2

      I’ve recently started writing some technical stuff on a blog. Will probably continue in the areas of algorithms, AI (combinatorial optimization, constraint programming, game playing AI), and programming languages (Rust related currently)

      Link: https://zayenz.se/blog/

    14. 3

      Kingdomino is a really great game. It has an amazing depth for the short play time (15 min for experienced players). It is easy to learn, so it is fun for casual players as well.

      1. 2

        I agree, and writing AI agents for it is also a lot of fun!

    15. 10

      Using match statements in Rust makes encoding decision tables very clean, which is something I’ve come to like a lot when writing “complex” logic. Encoding the FizzBuzz example in Rust looks like the following code.

      let output = match (n % 3, n % 5) {
                       (0, 0) => "FizzBuzz".to_owned(),
                       (0, _) => "Fizz".to_owned(),
                       (_, 0) => "Buzz".to_owned(),
                       (_, _) => n.to_string(),
                   };
      
      1. 5

        I use this pattern a lot in Haskell with case.

        In languages without match/case/cond/etc. we can use a data structure to enumerate all the possibilities, e.g. a Python dictionary:

        def go(x, y):
          return {(0, 0): "foo",
                  (0, 1): "bar",
                  (1, 0): "baz",
                  (1, 1): "quux"}[(x, y)]
        

        If we don’t want to enumerate them all (e.g. like the _ wildcards in your fizzbuzz example) then we can calculate booleans for the keys, and lookup whichever is True:

        def go(n):
          n3 = n % 3 == 0
          n5 = n % 5 == 0
          return {(    n3 and     n5): "fizzbuzz",
                  (    n3 and not n5): "fizz",
                  (not n3 and     n5): "buzz",
                  (not n3 and not n5): str(n)}[True]
        

        I’m not sure if this counts as “simple” or “too clever”, but I’ve found it useful on occasion. The tricky part is that we cannot allow more than one of the booleans to be True at once, since they’ll collide; in this case we need to add and not ... conditions to the fizz and buzz entries to prevent them colliding with the fizzbuzz entry.

    16. 2

      I wrote a quite small (47 LOC) Haskell solution for this a couple of years ago. It turns out solving Sudokus is quite simple. Blogpost/Code. The basis is just going over the whole grid, pruning possibilities per field, and filling in the ones with just one possible solution. Rinse and repeat until you’re done. A very fun exercise for learning a new language as well.

      1. 3

        Um, resolving single candidates is not enough to solve most Sudoku puzzles.

        1. 2

          Yeah generally on “hard” boards you’ll have to randomly pick a value for a couple, then do a DFS on that choice and subsequent choices.
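
          That guess-and-DFS strategy is classic backtracking. A minimal Python sketch (values are tried in a fixed 1-9 order rather than randomly, for determinism):

```python
def legal(grid, r, c, v):
    """Check the row, column, and 3x3 box constraints for placing v."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    """Backtracking Sudoku search: find an empty cell, try each legal
    value, recurse (DFS), undo on failure.

    grid is a 9x9 list of lists with 0 marking empty cells; it is
    solved in place, and True is returned on success.
    """
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if legal(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next value
                return False            # dead end: backtrack
    return True                         # no empty cells left: solved
```

          Single-candidate pruning, as in the parent comment, just shrinks the search tree this DFS has to explore.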

          1. 2

            Depends on the level of inference one is willing to implement.

            Full inference for each row/column/square individually, plus something called shaving/singleton arc consistency/several other names, which means doing the hypothesis test for each square/value pair “If this square were assigned this value, would that lead to an inconsistency?”, is empirically enough to solve all 9x9 Sudokus without search. For details, see Sudoku as a Constraint Problem by Helmut Simonis.

          2. 1

            That just sounds hugely frustrating. I usually just solve them by elimination, but I don’t really do super hard ones.

    17. 3

      Nice write-up of a basic Sudoku solver.

      For more interesting instances to try (since 9x9 Sudoku is trivial to solve for a computer), see the list of examples included starting here: https://github.com/Gecode/gecode/blob/master/examples/sudoku.cpp#L672

      A 25x25 Sudoku is surprisingly hard to solve actually.

    18. 5

      Before reading the article, I’ll quickly note that almost everything in hardware development/synthesis is NP-hard. That hasn’t stopped them from cranking out useful, profitable stuff. Their “compilers” just run a lot slower, with the result being more time between iterations of specific components and/or more money spent on compute clusters to reduce that time. For some domains, it could theoretically be a deal-breaker to use NP-hard algorithms on anything where iteration speed matters. I suspect in many cases one could use a quick-and-dirty solution during app development, with the NP-hard version dropped in and tested after it’s synthesized. Even that part might be semi-automated, where a command or annotation tells whatever does the NP-hard part where to drop the result. Doing this for certified vs regular vs superoptimized compilation was part of my Brute-Force Assurance concept.

      (reads article)

      One other thing about worst-case performance people often neglect: there’s nothing stopping you from using two algorithms. It seems obvious, but programmers seem to obsess over all-eggs-in-one-basket solutions. I learned about this trick with QuickSort vs HeapSort. QuickSort is faster on average but has terrible worst-case performance; HeapSort is slower on average but has no bad worst case. The proposed solution was to run QuickSort, observe the time it takes, notice if it’s dragging out way past the average, cancel it if it is, and switch to HeapSort. The worst case then becomes the QuickSort time budget plus HeapSort’s cost, rather than QuickSort’s quadratic blowup.
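      Sketched in Python, with an assumed wall-clock budget (introsort applies the same idea, but triggers the switch on recursion depth instead of elapsed time):

      ```python
      import heapq
      import time

      class _Timeout(Exception):
          pass

      def _quicksort(xs, lo, hi, deadline):
          """Hoare-partition quicksort that bails out past the deadline."""
          while lo < hi:
              if time.monotonic() > deadline:
                  raise _Timeout
              pivot = xs[(lo + hi) // 2]
              i, j = lo, hi
              while i <= j:
                  while xs[i] < pivot:
                      i += 1
                  while xs[j] > pivot:
                      j -= 1
                  if i <= j:
                      xs[i], xs[j] = xs[j], xs[i]
                      i += 1
                      j -= 1
              _quicksort(xs, lo, j, deadline)
              lo = i  # iterate instead of recursing on the right half

      def _heapsort(xs):
          """Fallback with guaranteed O(n log n), via a binary heap."""
          heapq.heapify(xs)
          out = [heapq.heappop(xs) for _ in range(len(xs))]
          xs[:] = out

      def hybrid_sort(xs, budget=1.0):
          """Sort xs in place; give quicksort `budget` seconds, then
          abandon it and redo the work with heapsort."""
          try:
              _quicksort(xs, 0, len(xs) - 1, time.monotonic() + budget)
          except _Timeout:
              _heapsort(xs)
      ```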

      You can make this idea more general by defining a way to monitor for worst-case or just unacceptable performance on any NP-hard algorithm, switching upon failure to something suboptimal that at least works. Alternatively, go “human-in-the-loop” and let a programmer/admin decide how to proceed once failure is detected. I’m not sure you can always detect failures with simple, efficient monitors. You can with many, though.

      On the timing stuff, this is one reason I love watchdog timers in embedded chips. All CPUs should have them, both for the kernel and for individual apps. That way, a check for something like QuickSort vs HeapSort doesn’t waste any CPU budget: dedicated, low-resource hardware is already watching. Hardware bounds checking can do something similar to detect size blowup, on top of benefits like stopping buffer overflows or out-of-bounds pointer access.

      1. 2

        There is a lot of interesting research in portfolio algorithms, especially in areas related to combinatorial problems such as SAT solving and constraint programming.

        Solvers are either run in parallel or some heuristic is used to switch between them. Sometimes solvers are used independently, and sometimes information from one solver can be sent to another solver. Sometimes algorithms and heuristics are used to analyse the problem instance to make an informed choice of the right solver to use.
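        The simplest of these strategies, running the solvers in parallel and taking whichever finishes first, can be sketched like this (the solver functions here are stand-ins; no real solver API is assumed):

        ```python
        import concurrent.futures as cf

        def run_portfolio(solvers, instance):
            """solvers: callables taking the instance and returning a
            solution. Returns (solver_name, result) of the first to
            finish; the losers are cancelled on a best-effort basis."""
            with cf.ThreadPoolExecutor(max_workers=len(solvers)) as pool:
                futures = {pool.submit(s, instance): s.__name__
                           for s in solvers}
                done, not_done = cf.wait(futures,
                                         return_when=cf.FIRST_COMPLETED)
                for f in not_done:
                    f.cancel()  # no effect on already-running solvers
                winner = done.pop()
                return futures[winner], winner.result()
        ```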

        1. 1

          I’ll go ahead with this submission since you brought that up. :)

    19. 12

      Ha, nice to see that here. By the way, here’s the full playlist for RustFest, which I have run over the last 4 days (there was only 1 talk day):

      https://www.youtube.com/watch?v=23lRkdDXqY0&list=PL85XCvVPmGQgdqz9kz6qH3SI_hp7Zb4s1

      For those interested: next RustFest is in September/October.

      1. 3

        For those interested: next RustFest is in September/October.

        Has the location been decided yet?

        1. 4

          Rome. We’re currently searching for venues, expect a date announcement in June (or later, depending on how well the venue search goes).

        2. 2

          It was announced to be Rome at RustFest Paris, not sure if there has been some official announcement on the internet yet.

        3. 1

          Thanks all. I’ll keep an eye out for the dates and see if I can schedule a little trip from AU to Italy later in the year.

    20. 18

      I believe that the “single entry, single exit” coding style has not been helpful for a very long time, because other factors in how we design programs have changed.

      Unless it happens that you still draw flowcharts, anyway. The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming, where he makes it clear that the reason for requiring that is so that the flowchart description of a subroutine has a single entry point and a single exit point, so its effect on the whole program state can be understood by understanding that subroutine’s single collection of preconditions and postconditions. Page 19 of the PDF linked above:

      These flowcharts share the property that they have a single entry at the top and a single exit at the bottom: as indicated by the dotted block they can again be interpreted (by disregarding what is inside the dotted lines) as a single action in a sequential computation. To be a little bit more precise: we are dealing with a great number of possible computations, primarily decomposed into the same time-succession of subactions, and it is only on closer inspection – i.e. by looking inside the dotted block – that it is revealed that over the collection of possible computations such a subaction may take one of an enumerated set of distinguished forms.

      These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

      In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

      1. 7

        An interesting variant history is described in https://softwareengineering.stackexchange.com/a/118793/4025

        The description there is that instead of “single exit” meaning where the function exits from, it actually means the function has only a single point it returns to, namely the place where it was called from. This makes a lot more sense, and is clearly good practice. I’ve heard this description from other places too, but unfortunately I don’t have any better references.

        1. 3

          Yes, very interesting. It makes sense that “single exit” means “don’t jump to surprising places when you’re done” rather than “don’t leave the subroutine from arbitrary places in its flow”. From the Structured Programming perspective, both support the main goal: you can treat the subroutine as a box that behaves in a single, well-defined way, and a program as a sequence of such boxes.

      2. 4

        The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming

        Also Tom Duff’s “Reading Code From Top to Bottom” says this:

        During the Structured Programming Era, programmers were often taught a style very much like the old version. The language they were trained in was normally Pascal, which only allowed a single return from a procedure, more or less mandating that the return value appear in a variable. Furthermore, teachers, influenced by Bohm and Jacopini (Flow Diagrams, Turing Machines and Languages with only two formation rules, Comm. ACM 9#5, 1966, pp 366-371), often advocated reifying control structure into Boolean variables as a way of assuring that programs had reducible flowgraphs.

      3. 1

        These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

        Maybe I’m missing something, but it seems to me that Dijkstra’s methodology supports analyzing one program component at a time, treating all others as black boxes, so long as:

        • Program components are hierarchically organized, with lower-level components never calling into higher-level ones.
        • Relevant properties of lower-level components (not necessarily the whole analysis) are available when analyzing higher-level components.

        In particular, although Dijkstra’s predicate transformer semantics only supports sequencing, selection and repetition, it can be used to analyze programs with non-recursive subroutines. However, it cannot handle first-class subroutines, because such subroutines are always potentially recursive.

        In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

        Dijkstra’s ultimate goal was to prove things about programs, which few of us do. So it is not clear to me that we are “getting the benefits he wanted”. That being said, having a single entry point and a single exit point merely happens to be a requirement for using the mathematical tools he preferred. In particular, there is nothing intrinsically wrong with loops that have two or more exit points, but they are awkward to express using ALGOL-style while loops.