1. 5

    Before reading the article, I’ll quickly note that almost everything in hardware development/synthesis is NP-hard. That hasn’t stopped hardware developers from cranking out useful, profitable stuff. Their “compilers” just run a lot slower, with the result being more time between iterations of specific components and/or more money spent on compute clusters to reduce that time. For some domains, using NP-hard algorithms on anything where iteration speed matters could theoretically be a deal-breaker. I suspect in many cases one could use a quick-and-dirty solution during app development, with the NP-hard version dropped in and tested after it’s synthesized. Even that part might be semi-automated, where a command or annotation tells whatever does the NP-hard part where to drop the result. Doing this for certified vs regular vs superoptimized compilation was part of my Brute-Force Assurance concept.

    (reads article)

    One other thing about worst-case performance that people often neglect: there’s nothing stopping you from using two algorithms. It seems obvious, but programmers seem to obsess over all-eggs-in-one-basket solutions. I learned about this trick with QuickSort vs HeapSort. QuickSort is faster on average but has terrible worst-case performance; HeapSort is slower on average but has no bad worst case. The proposed solution was to run QuickSort, observe the time it takes, notice if it’s dragging out way past the average, cancel it if it is, and switch to HeapSort. The worst case then becomes roughly the time budget you gave QuickSort plus one HeapSort run, instead of QuickSort’s quadratic blowup.
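
    Something like this sketch is what I mean, in Python (the naive quicksort, the 0.5-second budget, and the time-based cutover are all illustrative; real hybrids like introsort switch on recursion depth instead of wall-clock time):

    ```python
    import heapq
    import time

    class TimeBudgetExceeded(Exception):
        pass

    def quicksort(xs, deadline):
        # Naive quicksort that checks the clock on every call and bails out
        # once it has run past its deadline.
        if len(xs) <= 1:
            return xs
        if time.monotonic() > deadline:
            raise TimeBudgetExceeded()
        pivot = xs[len(xs) // 2]
        less = [x for x in xs if x < pivot]
        equal = [x for x in xs if x == pivot]
        greater = [x for x in xs if x > pivot]
        return quicksort(less, deadline) + equal + quicksort(greater, deadline)

    def heapsort(xs):
        # Heapsort: guaranteed O(n log n), just slower on average.
        heap = list(xs)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    def hybrid_sort(xs, budget_seconds=0.5):
        # Try the usually-fast algorithm; if the monitor trips, switch to the
        # one with the bounded worst case.
        try:
            return quicksort(xs, time.monotonic() + budget_seconds)
        except (TimeBudgetExceeded, RecursionError):
            return heapsort(xs)
    ```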

    You can make this idea more general by defining a way to monitor for worst-case or just unacceptable performance on any NP-hard algorithm, switching on failure to something suboptimal that at least works. Alternatively, go “human-in-the-loop” and let a programmer/admin decide how to proceed once failure is detected. I’m not sure you can always detect failures with simple, efficient monitors. You can with many, though.
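
    As a rough sketch of that monitor-and-fall-back pattern on an NP-hard problem, here’s a hypothetical knapsack example in Python: an exact exponential search runs in its own process under a time budget, and a greedy approximation takes over if the budget is blown (the items, the capacity, and the 2-second budget are all made up):

    ```python
    import multiprocessing as mp
    from itertools import combinations

    def exact_knapsack(items, capacity, out):
        # Exact exponential-time search: best-value subset within capacity.
        best = (0, ())
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                weight = sum(w for w, _ in combo)
                value = sum(v for _, v in combo)
                if weight <= capacity and value > best[0]:
                    best = (value, combo)
        out.put(best)

    def greedy_knapsack(items, capacity):
        # "At least it works" fallback: greedy by value density, no optimality guarantee.
        total_w, total_v, chosen = 0, 0, []
        for w, v in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
            if total_w + w <= capacity:
                chosen.append((w, v))
                total_w, total_v = total_w + w, total_v + v
        return total_v, tuple(chosen)

    def solve_with_budget(items, capacity, budget_seconds=2.0):
        # External monitor: run the exact solver in its own process and kill it
        # (or hand off to a human) if it blows its time budget.
        out = mp.Queue()
        worker = mp.Process(target=exact_knapsack, args=(items, capacity, out))
        worker.start()
        worker.join(budget_seconds)
        if worker.is_alive():
            worker.terminate()
            worker.join()
            return greedy_knapsack(items, capacity)
        return out.get()

    if __name__ == "__main__":  # guard needed on platforms that spawn processes
        items = [(12, 4), (2, 2), (1, 1), (4, 10), (1, 2)]  # (weight, value) pairs
        print(solve_with_budget(items, capacity=15))
    ```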

    On the timing stuff, this is one reason I love watchdog timers in embedded chips. All CPUs should have them, both for the kernel and for individual apps. That way, a check for something like QuickSort vs HeapSort doesn’t waste any CPU budget: dedicated, low-resource hardware is already watching. Hardware bounds checking can do something similar to detect size blowup, on top of benefits like stopping buffer overflows or out-of-bounds pointer accesses.
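
    As a software-visible analogue, here’s a minimal sketch of petting Linux’s /dev/watchdog from Python (it assumes a kernel watchdog driver is loaded; the point is that the monitoring itself is done by dedicated hardware, so the main loop pays almost nothing for the check):

    ```python
    def run_with_watchdog(work_one_step, device="/dev/watchdog"):
        # Pet the kernel/hardware watchdog between bounded chunks of work.
        # If this process wedges and stops writing, the watchdog resets the
        # system on its own; no CPU budget is spent on the check itself.
        with open(device, "wb", buffering=0) as wd:
            try:
                while True:
                    work_one_step()
                    wd.write(b"\0")   # "still alive"
            finally:
                wd.write(b"V")        # magic close: disarm before a clean exit
    ```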

    1. 2

      There is a lot of interesting research in portfolio algorithms, especially in areas related to combinatorial problems such as SAT solving and constraint programming.

      Solvers are either run in parallel or some heuristic is used to switch between them. Sometimes solvers are used independently, and sometimes information from one solver can be sent to another solver. Sometimes algorithms and heuristics are used to analyse the problem instance to make an informed choice of the right solver to use.
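
      A bare-bones portfolio runner might look something like this Python sketch (hypothetical solver callables racing on one instance, first answer wins; the clause sharing and instance-analysis heuristics mentioned above are omitted):

      ```python
      import multiprocessing as mp

      def run_portfolio(solvers, instance, timeout=None):
          # Each solver is a callable taking (instance, result_queue). They all
          # race on the same instance; the first answer wins and the rest are
          # terminated.
          results = mp.Queue()
          procs = [mp.Process(target=s, args=(instance, results)) for s in solvers]
          for p in procs:
              p.start()
          try:
              return results.get(timeout=timeout)  # queue.Empty if nobody finishes in time
          finally:
              for p in procs:
                  p.terminate()
                  p.join()
      ```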

      1. 1

        I’ll go ahead with this submission since you brought that up. :)

    1. 12

      Ha, nice to see that here. By the way, here’s the full playlist for RustFest, which I have run over the last 4 days (there was only 1 talk day):

      https://www.youtube.com/watch?v=23lRkdDXqY0&list=PL85XCvVPmGQgdqz9kz6qH3SI_hp7Zb4s1

      For those interested: next RustFest is in September/October.

      1. 3

        For those interested: next RustFest is in September/October.

        Has the location been decided yet?

        1. 4

          Rome. We’re currently searching for venues, expect a date announcement in June (or later, depending on how well the venue search goes).

          1. 2

            It was announced to be Rome at RustFest Paris, not sure if there has been some official announcement on the internet yet.

            1. 3

              We can’t get much more official: https://twitter.com/RustFest/status/1000403458212671488

            2. 1

              Thanks all. I’ll keep an eye out for the dates and see if I can schedule a little trip from AU to Italy later in the year.

          1. 18

            I believe that the “single entry, single exit” coding style has not been helpful for a very long time, because other factors in how we design programs have changed.

            Unless it happens that you still draw flowcharts, anyway. The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming, where he makes it clear that the reason for requiring that is so that the flowchart description of a subroutine has a single entry point and a single exit point, so its effect on the whole program state can be understood by understanding that subroutine’s single collection of preconditions and postconditions. Page 19 of the PDF linked above:

            These flowcharts share the property that they have a single entry at the top and a single exit at the bottom: as indicated by the dotted block they can again be interpreted (by disregarding what is inside the dotted lines) as a single action in a sequential computation. To be a little bit more precise: we are dealing with a great number of possible computations, primarily decomposed into the same time-succession of subactions and it is only on closer inspection–i.e. by looking inside the dotted block–that it is revealed that over the collection of possible computations such a subaction may take one of an enumerated set of distinguished forms.

            These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

            In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

            1. 7

              An interesting variant history is described in https://softwareengineering.stackexchange.com/a/118793/4025

              The description there is that instead of “single exit” talking about where the function exits from, it actually talks about a function only having a single point it returns to, namely the place where it was called from. This makes a lot more sense, and is clearly a good practice. I’ve heard this description from other places too, but unfortunately I don’t have any better references.

              1. 3

                Yes, very interesting. It makes sense that “single exit” means “don’t jump to surprising places when you’re done” rather than “don’t leave the subroutine from arbitrary places in its flow”. From the Structured Programming perspective, both support the main goal: you can treat the subroutine as a box that behaves in a single, well-defined way, and a program as a sequence of boxes that each behave in a single, well-defined way.

              2. 3

                The first exhortation for “single entry, single exit” I can find is Dijkstra’s notes on structured programming

                Also Tom Duff’s “Reading Code From Top to Bottom” says this:

                During the Structured Programming Era, programmers were often taught a style very much like the old version. The language they were trained in was normally Pascal, which only allowed a single return from a procedure, more or less mandating that the return value appear in a variable. Furthermore, teachers, influenced by Bohm and Jacopini (Flow Diagrams, Turing Machines and Languages with only two formation rules, Comm. ACM 9#5, 1966, pp 366-371), often advocated reifying control structure into Boolean variables as a way of assuring that programs had reducible flowgraphs.
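
                A hypothetical side-by-side of the two styles (in Python rather than Pascal, purely to keep it short):

                ```python
                def find_index_single_exit(items, target):
                    # Pascal-era style: single return, control reified into variables.
                    found = False
                    index = -1
                    i = 0
                    while i < len(items) and not found:
                        if items[i] == target:
                            found = True
                            index = i
                        i += 1
                    return index

                def find_index_early_return(items, target):
                    # Multiple exits, but each return still hands control straight
                    # back to the caller, so the function stays one well-defined "box".
                    for i, item in enumerate(items):
                        if item == target:
                            return i
                    return -1
                ```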

                1. 1

                  These days we do not try to understand the behaviour of a whole program by composing the behaviour of each expression. We break our program down into independent modules, objects and functions so that we only need to understand the “inside” of the one we’re working on and the “outside” of the rest of them (the “dotted lines” from Dijkstra’s discussion), and we have type systems, tests and contracts to support understanding and automated verification of the preconditions and postconditions of the parts of our code.

                  Maybe I’m missing something, but it seems to me that Dijkstra’s methodology supports analyzing one program component at a time, treating all others as black boxes, so long as:

                  • Program components are hierarchically organized, with lower-level components never calling into higher-level ones.
                  • Relevant properties of lower-level components (not necessarily the whole analysis) are available when analyzing higher-level components.

                  In particular, although Dijkstra’s predicate transformer semantics only supports sequencing, selection and repetition, it can be used to analyze programs with non-recursive subroutines. However, it cannot handle first-class subroutines, because such subroutines are always potentially recursive.
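
                  For reference, the standard weakest-precondition rules for sequencing and selection (textbook forms, not quoted from the notes linked above; repetition is defined by unfolding the selection rule):

                  ```latex
                  % Weakest-precondition rules for the two simpler constructs
                  wp(S_1;\, S_2,\; Q) = wp(S_1,\; wp(S_2,\; Q))                            % sequencing
                  wp(\mathbf{if}\ b\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\; Q)
                      = (b \Rightarrow wp(S_1, Q)) \land (\lnot b \Rightarrow wp(S_2, Q))  % selection
                  ```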

                  In other words, we’re getting the benefits Dijkstra wanted through other routes, and not getting them from having a single entry point and a single exit point.

                  Dijkstra’s ultimate goal was to prove things about programs, which few of us do. So it is not clear to me that we are “getting the benefits he wanted”. That being said, having a single entry point and a single exit point merely happens to be a requirement for using the mathematical tools he preferred. In particular, there is nothing intrinsically wrong with loops that have two or more exit points, but they are awkward to express using ALGOL-style while loops.

                1. 2

                  that’s pretty exciting! i was just this weekend looking at the oz/mozart home page, wondering if the language were dead or not (i wanted to use it for a small project, just to play with it, but concluded that it did indeed look unmaintained)

                  1. 3

                    I use Mozart/Oz in projects and help keep the bitrot away from the 1.3.x and 1.4.x versions. The Mozart 2 version runs but is lacking features - distributed objects and constraints being the most notable. I feel like there needs to be a “1.5” interim release, maintained and with binaries, to show activity.

                    The project tends to suffer from being used as a university/research project. People do work on master’s theses to contribute a feature to it, then they disappear and that feature doesn’t get maintained, extended, made production-ready, etc.

                    That said, it’s still a great system to use.

                    1. 2

                      i should have known that if anyone was using it you would be :) do you have a more recent version of 1.4.x than the one in the repo, or have you been pushing all your changes?

                      1. 2

                        I’ve pushed all my changes so it should build on Linux at least. I recommend using the 1.3.x branch as 1.4.x has some annoying bugs: distributed search is broken and distribution statistics don’t work, due to a switch to a C++-based library for object distribution whose bugs never got ironed out. It only matters if you plan to use those features though. I’ve backported some of the actual bug fixes from 1.4.x to 1.3.x.

                      2. 1

                        What do you use it for?

                        1. 3

                          Originally I wrote a Bitcoin mining pool using Mozart/Oz, before transitioning it to ATS. Current usage is for deployment and management of a couple of servers. It uses the distribution features, and constraint solving, to work out what to install, uninstall, etc. based on changes to configuration files. It has a user interface using the Roads web framework. It’s a toy system I built to explore ideas as an Ansible alternative. I’ve done various incarnations in Prolog and Mozart/Oz.

                          What might interest you is some old articles on using Mozart/Oz for proving things. See “A Program Verification System Based on Oz”, “Compiling Formal Specifications to Oz Programs” and “Deriving Acceptance Tests from Goal Requirements” in Multiparadigm programming in Mozart/Oz.

                          1. 1

                            I saved it for when I have Springer access. I am interested in it as a multiparadigm language as well. Did you find the constraint solving to be as good as an industrial solver integrated with a good 3GL? Or is it still better for performance or usability to just go with dedicated tools? I know a lot of people liked that Prolog could do some forms of parsing or solving, but the language made it harder to use some better methods. I figured something similar could happen with Mozart/Oz trying to do too many paradigms at once.

                            1. 4

                              The constraint solver in Mozart/Oz has many interesting features, but in the end it is, IMHO, just too old to be competitive with a modern solver.

                              For constraint solving, I would probably use Gecode, or_tools, or Choco depending on the particular use-case one has and the technical requirements. If money is not an issue, IBM CP Optimizer seems to be very very good.

                              To explore a particular problem, I typically write the model in MiniZinc since it is a reasonably high-level modelling language that allows me to switch between solving backends. In particular, I like that I can try out a problem with both a normal solver (such as Gecode) and a lazy clause generation solver such as Chuffed.

                              Of course, the particular problem might be better suited for SMT solving (using Z3), MIP solvers (CPLEX or Gurobi), or perhaps a custom algorithm.
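
                              To make that concrete, here is a tiny (and entirely made-up) scheduling model using or_tools’ CP-SAT solver from Python:

                              ```python
                              # pip install ortools
                              from ortools.sat.python import cp_model

                              model = cp_model.CpModel()

                              # Three meetings, each assigned one of four time slots (0-3).
                              slots = {name: model.NewIntVar(0, 3, name) for name in ("standup", "review", "retro")}

                              model.AddAllDifferent(list(slots.values()))  # no two meetings share a slot
                              model.Add(slots["retro"] > slots["review"])  # retro happens after the review
                              model.Minimize(slots["standup"])             # hold the standup as early as possible

                              solver = cp_model.CpSolver()
                              status = solver.Solve(model)
                              if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
                                  print({name: solver.Value(var) for name, var in slots.items()})
                              ```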

                              Another thing to consider is correctness. Optimization systems such as constraint solvers are complex pieces of software, with lots of possibilities for bugs. Off-by-one errors in particular are very common. If I were to depend on an optimization system, I would prefer one that is maintained, has been around for a while, and that has a reasonably large set of automated tests.

                              1. 1

                                Thanks for the great summary!

                                On a related note, if you’re into CHR, I just submitted a paper that shows how they compile it to C. MiniZinc and CHR were the most interesting languages I found while looking for basic info on constraint handling.

                              2. 2

                                I haven’t used an industrial solver in anger - other than using Z3’s integration with ATS. But the Mozart solver seems to work well, and has features to distribute amongst multiple machines for larger problems. It has tools to visualize and explore the solution space and the ability to customize and add features to the solver in Oz. It’s a pretty old system though, and I know that Mozart 2 intends to replace the Oz solver with Gecode to get some newer features.

                                The “too many paradigms” issue is real, in that it can be hard to decide how to approach a problem. Do you use OO, functional, relational, etc.? So many choices that it can be paralysing.

                      1. 6

                        I also run my own DNS server, but I prefer to maintain just the master. I pay ~$15/yr to outsource the slaves to a third-party company that specializes in such things, and I don’t have to worry as much if my VPS provider decides to go down for a few hours, etc. I get a more reliable DNS system, and I still get to maintain control, graph statistics, etc., to my heart’s content.
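
                        For anyone wondering what the master side of that looks like, a minimal BIND sketch (the zone name, file path, and the 192.0.2.53 secondary address are placeholders for whatever your slave provider gives you):

                        ```
                        zone "example.com" {
                            type master;
                            file "/etc/bind/zones/db.example.com";
                            allow-transfer { 192.0.2.53; };  // let the outsourced secondary AXFR the zone
                            also-notify { 192.0.2.53; };     // push NOTIFYs so it picks up changes quickly
                        };
                        ```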

                        Glad to see the discipline of self-hosting isn’t completely going the way of the dodo in this day and age!

                        1. 2

                          Any recommendation for a good third-party company for such outsourcing?

                          I also run my own DNS. The main reason is that I run my own mail using https://mailinabox.email/, which has been a reasonably simple and pain-free experience. Paying someone to get better stability could be interesting.

                          1. 3

                            I have added nameservers from BuddyNS as secondaries for my DNS. For the moment I’m just using their free plan, since I’ve delegated to only one nameserver out of the 3 which are serving my zones, and the query count is low enough to keep me on the free plan.

                            1. 1

                              I loved BuddyNS, but I went over their query limit, and the only payment they accept is PayPal, which I have boycotted ever since they stole $900 from me… I wish they would take other forms of payment.

                            2. 3

                              I asked for some recommendations online. My biggest requirements were a ‘slave only’ offering, DNSSEC/IPv6 support, and ‘not Dyn’ (I just can’t give Oracle money these days). With all that in mind, I ended up choosing dnsmadesimple.com (edit: looks like they’re $30/yr, not $15 as above. Mea culpa.) It was seriously easy to get everything set up (less than 20 minutes!) and now I don’t have to worry about what happens when my master goes down.

                              1. 1

                                Do you mean dnsmadeeasy.com or do you mean dnsimple.com?

                                dnsmadesimple.com doesn’t exist

                                1. 2

                                  My deepest apologies, this is what I get for Internetting when I’m about four cups of coffee short.

                                  dnsmadeeasy.com is the correct one.

                              2. 3

                                Hello everyone! This is my first post. :)

                                I’m Vitalie from LuaDNS. We don’t offer slaves right now (only AXFR transfers), but if you don’t mind fiddling with git, you can add your Bind files to a git repository and push them to us via GitHub/Bitbucket/YourRepo. You can keep using your DNS servers for redundancy as slaves.

                                You get backups via git and free Anycast DNS for 3 zones. :)

                                Shameless Plug

                              3. 1

                                Interesting - that’s not a bad idea.

                                If I were a corp I wouldn’t want this method, but for the single user, the investment has been well worth the pay-off - even if I decide to go with a vendor in future, I’ll understand what I’m paying for.