1. 2

      The non-syncing of spec code with implementation code really feels like the big barrier to making this usable in general.

      One idea I had to tackle this issue in a language like Python would be to allow for executable doc-strings within the code that could let you write specs inline, and have those be parsed out (but by default it would use the actual in-code implementation)

      That way you could write simplifying specs for certain parts of the code (say, the result of input will be any string instead of waiting on stdin when checking), while still avoiding duplication because most code is straightforward

      Though to be honest this might be very hard to get right. I feel like it’s a bit like the ORM/Type System issue, where type systems are usually rigid and don’t give much “type-check-time” flexibility, but ORMs are usually defined dynamically (relative to the type system)

      1. 6

        This is why I ended up spending less time with TLA after learning it. However, learning it was an incredibly useful exercise that has dramatically informed the way I build systems. It made me start to ask why I can’t write TLA style invariants and check executions of concurrent and distributed algorithms I build in general purpose languages.

        I realized I actually can get similar results on real code if I build systems carefully: schedule multithreaded interleavings at cross-thread communication points, simulate distributed clusters with buggy networks in a single process at accelerated speed a la discrete event simulation, things that use files are communicating with future instances of themselves and you can record logs of file operations and arbitrarily truncate them and ensure invariants hold after restart.

        My main project right now is trying to turn the above ideas into nice libraries that let people run their code in more realistic ways before opening pull requests, and to integrate those tools into the construction process of the sled database.

        1. 3

          An idle thought which can go on my list of side projects to start “one day” (probably right after the bus accident): probably symbolic execution can be used to demonstrate, if not enforce, the synchronisation of TLA+-type models with code. A symbolic executor can show the different cases a program will execute based on its input and the outputs that result; those can be compared with the cases discovered by the model-checking tool.

          Hooray, I’m not the first person to have that idea! You can combine formal methods with symbolic execution and meet in the middle.

          1. 2

            One idea I had to tackle this issue in a language like Python would be to allow for executable doc-strings within the code that could let you write specs inline, and have those be parsed out (but by default it would use the actual in-code implementation)

            While this example was pretty close to the code implementation, TLA+ (and most specification languages) are too flexible to allow easy embedding. Here the processes were actual threads, but they could just as easily be servers, or human agents, or an abstracted day/night cycle. In one spec I wrote, one process represented two separate interacting systems that, at that level of detail, were assumed to have perfect communication.

          1. 2

            Human brains are really bad at reasoning about the correctness of concurrent systems. Even when we use files in simple ways we are usually skirting several race conditions that just happen not to pop up while running the code on a single laptop for a few days. After starting to write my networked code in ways that let me do randomized partition testing in-process, I’ve been so humbled by the dramatic number of bugs that pop out when you just start to impose possible-yet-unanticipated orders of execution on concurrent systems.

            I’ve started to expand the approach to file interfaces that record writes since the last fsync and can be “crashed” at different points during tests, and to concurrent algorithms that can be deterministically replayed under any interleaving of cross-thread communication points. This has been fairly straightforward on systems that run on top of libpthread, but I’ve been struggling to apply these techniques to existing Go codebases, where goroutine scheduling is not controllable from Go programs themselves. External instrumentation of the process at runtime via ptrace gets gnarly really quickly.

            I wish that, as a language that encourages the use of concurrency more than most others, Go also embraced understanding its concurrent behavior more. Does anyone know of better options than using rr’s chaos mode on a Go system after forcing all goroutines to LockOSThread?

            1. 3

              Do you already know about TSAN? https://golang.org/doc/articles/race_detector.html

              1. 1

                TSAN and the Go race detector are similar (I think Dmitry Vyukov was involved in creating TSAN in addition to the Go race detector), and they work by instrumenting points in the code in ways similar to what I mentioned. But I’m interested in going further and applying these techniques to actually cause different scheduled interleavings (ideally with deterministic replay) to gain more confidence in the implementation. Just running TSAN or go -race will only catch synchronization issues that your execution happens to stumble on while running an instrumented binary (TSAN has false positives because it can’t reason about bare or implicit memory fences; go -race doesn’t have this issue because you can’t express those memory semantics in native Go). Sometimes the mere instrumentation of the binary causes timing changes that prevent race conditions from popping out as often as they would in production. I want to force diverse executions (ideally fully exhausting the interleaving space for small bounded executions) in addition to just detecting issues.
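                To make “fully exhaust the interleaving space” concrete, here’s a hypothetical toy sketch (not from any of the tools above): it enumerates every schedule of two two-step “threads” performing a non-atomic increment, modeled as step lists rather than real threads, and counts the schedules that lose an update.

                ```rust
                // All ways to interleave `a` steps of thread 0 with `b` steps of thread 1.
                fn interleavings(a: usize, b: usize) -> Vec<Vec<usize>> {
                    if a == 0 { return vec![vec![1; b]]; }
                    if b == 0 { return vec![vec![0; a]]; }
                    let mut out = vec![];
                    for mut s in interleavings(a - 1, b) { s.insert(0, 0); out.push(s); }
                    for mut s in interleavings(a, b - 1) { s.insert(0, 1); out.push(s); }
                    out
                }

                fn main() {
                    let all = interleavings(2, 2);
                    let mut buggy = 0;
                    for sched in &all {
                        // each "thread" does a non-atomic increment: read, then write read+1
                        let mut shared = 0;
                        let mut local = [0, 0];
                        let mut step = [0, 0];
                        for &t in sched {
                            if step[t] == 0 { local[t] = shared; } else { shared = local[t] + 1; }
                            step[t] += 1;
                        }
                        if shared != 2 { buggy += 1; } // a lost update
                    }
                    assert_eq!(all.len(), 6);
                    println!("{} of {} schedules lose an update", buggy, all.len());
                }
                ```

                With real threads you can’t pick the schedule directly like this, which is why controllable scheduling at communication points matters.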

                1. 1

                  Are you thinking of something like Jepsen?

                  1. 2

                    Jepsen is thousands to millions of times slower than in-process network partition simulation, takes a month to stand up even if you already have Clojure skills, and it does not produce deterministically replayable regression tests for the issues it finds. It’s fairly expensive to run on every commit, and you can’t afford to block devs on Jepsen runs in most cases. Here’s an example of what I’m talking about for network testing applied to a CASPaxos implementation. In 10 seconds it will find way more bugs than Jepsen usually will in 10 hours, so you can actually have devs run the test locally before opening a pull request.

                    1. 1

                      What you’re describing reminds me of the FoundationDB simulation testing talk.

                      They instrumented their C++ codebase to remove all sources of non-determinism when running under test, so they could test their distributed database and replay failures, stressing it in exactly the way you’ve described.

                      Let me try to find a link.

                      Edit: I believe this is it: video

                      1. 2

                        Yeah! This talk was really inspiring to me, and I’ve been pushing to test more things in this way. There have been some advances since that talk that address some of the complexity downsides he mentions, particularly lineage-driven fault injection. LDFI is the perfect way to make a vast fault space tractably testable: it looks at what goes right and then introduces failures from there.

            1. 6

              This is pretty far off-topic, and most likely to result in a bunch of yelling back and forth between True Believers.

              Flagged.

              EDIT:

              OP didn’t even bother to link to the claimed “increasing evidence”. This is a bait thread. Please don’t.

              1. 17

                Shrug. I find the complete lack of political awareness at most of the tech companies I’ve worked at to be rather frustrating and I welcome an occasional thread on these topics in this venue.

                1. 13

                  It’s possible that many of your coworkers are more politically aware than they let on, and deliberately avoid talking about it in the workplace in order to avoid conflict with people who they need to work with in order to continue earning money.

                  1. 1

                    All work is political. “Jesus take the wheel” for your impact on the world through your employment decisions is nihilistic.

                    1. 8

                      Not trumpeting all your political views in the workplace does not mean completely ignoring political incentives for employment or other decisions. I’m not sure what made you think GP is advocating that.

                2. 3

                  Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened. Perhaps your mental model needs updating… maybe your colleagues are better equipped to discuss broad topics politely than you previously imagined?

                  1. 4

                    Obviously “off-topic-ness” is subjective, but so far your prediction re: yelling back and forth hasn’t happened.

                    Probably because everyone on this site is good and right-thinking — or knows well enough to keep his head down and his mouth shut.

                    (Which has nothing to do with the truth of either side’s beliefs; regardless of truth, why cause trouble for no gain?)

                    1. 5

                      To me, the people on this site definitely handle these discussions better. Hard to say how much better, given that’s subjective. Let’s try for objective criteria: there are fewer flame wars, more people stick to the facts as they see them versus posting comments that are pure noise, and the moderation techniques usually reduce the worst stuff without censoring or erasing civil dissenters. If those metrics are sound, then the Lobsters community is objectively better at political discussions than many sites.

                    1. 5

                      These all seem to say one thing: climate change is going to be worse faster than some other prediction said. But that does not even remotely address your claim that “organized human life might not be possible by the end of the century and possibly sooner”. What on earth makes you think you know anything about what conditions humans need to organize?

                      1. 1

                        This is a good point. I guess my “evidence” would be past civilization collapse as a result of environmental destruction like what happened on Easter Island.

                  1. 6

                    People don’t go nearly far enough with generative testing IMO. I’ve had really great success using it to test distributed systems: a generator seeds clusters with client requests at specific time steps, and I also generate a schedule of network weather. For the client requests that receive responses, it’s often useful when testing consensus algorithms etc… to check that they linearize.

                    For concurrent systems, I generate small sets (2–4) of operations against the full concurrent API, run them on multiple threads, and record their return values. If it’s impossible to then find a single sequential schedule that has the same return values for the same requests by running different permutations of the concurrent ops on a single thread, then I just found a violation of atomicity.
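                    As a toy sketch of that atomicity check (the counter API and the two-op history are invented for illustration; a real harness would generate the operations and search all permutations):

                    ```rust
                    use std::sync::atomic::{AtomicI64, Ordering};
                    use std::sync::Arc;
                    use std::thread;

                    #[derive(Clone, Copy)]
                    enum Op { Add(i64) }

                    // run the ops one at a time on a single thread, recording return values
                    fn apply_seq(ops: &[Op]) -> Vec<i64> {
                        let mut state = 0;
                        ops.iter()
                            .map(|op| match op {
                                Op::Add(n) => { state += n; state }
                            })
                            .collect()
                    }

                    fn main() {
                        let ops = [Op::Add(1), Op::Add(2)];
                        let counter = Arc::new(AtomicI64::new(0));

                        // run the same ops concurrently, recording each op's return value
                        let handles: Vec<_> = ops.iter().map(|&op| {
                            let c = Arc::clone(&counter);
                            thread::spawn(move || match op {
                                // fetch_add returns the previous value; previous + n is
                                // what an atomic increment reports back to its caller
                                Op::Add(n) => c.fetch_add(n, Ordering::SeqCst) + n,
                            })
                        }).collect();
                        let results: Vec<i64> =
                            handles.into_iter().map(|h| h.join().unwrap()).collect();

                        // the history is atomic iff SOME sequential order explains the returns
                        let explained = apply_seq(&ops) == results
                            || apply_seq(&[ops[1], ops[0]]) == vec![results[1], results[0]];
                        assert!(explained, "atomicity violation: no sequential order matches");
                        println!("results {:?} are explainable sequentially", results);
                    }
                    ```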

                    For systems that write to disk, I have a file interface that, when run in testing mode, records a log of writes and fsync calls. I’ll generate operations against the API that persists data, plus a crash time at which writes since the last fsync call are either partially applied or completely deleted. If the system is in an inconsistent state upon restart, then I just found incorrect usage of the filesystem (so many databases fail to do this correctly).
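                    A hypothetical bare-bones version of such a file interface, with the crash point chosen by hand instead of generated:

                    ```rust
                    // durable: bytes guaranteed to survive a crash;
                    // unsynced: bytes written since the last fsync
                    #[derive(Default)]
                    struct SimFile {
                        durable: Vec<u8>,
                        unsynced: Vec<u8>,
                    }

                    impl SimFile {
                        fn write(&mut self, bytes: &[u8]) {
                            self.unsynced.extend_from_slice(bytes);
                        }
                        fn fsync(&mut self) {
                            self.durable.append(&mut self.unsynced);
                        }
                        // simulate a crash: an arbitrary prefix of unsynced writes survives
                        fn crash(&mut self, surviving: usize) {
                            self.unsynced.truncate(surviving);
                            self.durable.append(&mut self.unsynced);
                        }
                    }

                    fn main() {
                        let mut f = SimFile::default();
                        f.write(b"hello ");
                        f.fsync();
                        f.write(b"world");
                        f.crash(2); // only "wo" reached disk before the crash
                        assert_eq!(f.durable, b"hello wo".to_vec());
                        // recovery code must hold its invariants for any such prefix
                        println!("durable contents: {:?}", String::from_utf8_lossy(&f.durable));
                    }
                    ```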

                    People tend to just test things that adhere to a function signature, and you get great results for very little effort by doing this, but there are mountains of gold if you get more creative with bigger and messier systems.

                    1. 2

                      Have you looked at secure scuttlebutt or dat? They are currently being used successfully for helping periodically-connected communities exchange information asynchronously, like remote villages and sailors etc…

                      1. 1

                        We’re working on an implementation similar to scuttlebutt but on top of the C zyre libs for our desktop app, looking at scuttlebutt for possible usage on our mobile app.

                        I haven’t seen dat before, doing a bit of reading on that to see if we can use it for some things.

                      1. 3

                        Hey icefall, one thing that would complement this presentation is a page listing every paper and tool in it with links to them.

                        1. 8

                          This is a good idea! Here’s a nice sketch that contains most of them and the two that aren’t on there are the ALICE paper and Simple Testing Can Prevent Most Critical Failures. I’ll cut a summary blog post that goes into these when I have a few hours!

                        1. 7

                          It depends what you mean by resiliency. I tend to work on things with strong consistency requirements, and to be honest I think the way most people build and talk about distributed systems engineering is pretty gross and unprincipled.

                          Why is Jepsen so successful against most systems, despite it being so incredibly slow to actually exercise communication interleaving patterns? People are building systems from a fundamentally broken perspective where they are not actually considering the realistic conditions that their systems will be running in.

                          In my opinion, the proper response to this should be to ask how we can simulate realistic networks (and filesystems for that matter) on our laptops as quickly as possible, without requiring engineers to work with new tools.

                          My approach is to use quickcheck to generate partitions + client requests and implement participants as things that implement an interface that usually looks like:

                          • receive(at, from, msg) -> [(to, msg)]
                          • tick(at) -> [(to, msg)]

                          This way a distributed algorithm can be single-stepped in accelerated time: for each outbound message, we use the current state of the network weather to assign an arrival time or to drop it. Stick the messages in a priority queue and iterate until none are in flight.

                          Then as the “property” in property testing, ensure that linearizability holds for all client requests.

                          With something like this, every engineer can get a few thousand jepsen-like runs in a couple seconds before even opening a pull request. They don’t have to use any tools other than their language’s standard test support. You can write the simulator once and it has very high reuse value, since everything just implements the interface you chose. Way higher bug:cpu cycle ratio than jepsen.
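                          A hypothetical stripped-down version of that simulator loop, with two nodes, a trivial ping/pong participant, and a fixed latency standing in for generated network weather:

                          ```rust
                          use std::cmp::Reverse;
                          use std::collections::BinaryHeap;

                          // (arrival time, destination, payload)
                          type Msg = (u64, usize, String);

                          trait Participant {
                              // deliver a message; returns outbound (destination, payload) pairs
                              fn receive(&mut self, at: u64, from: usize, msg: &str) -> Vec<(usize, String)>;
                          }

                          struct Echo { received: Vec<String> }

                          impl Participant for Echo {
                              fn receive(&mut self, _at: u64, from: usize, msg: &str) -> Vec<(usize, String)> {
                                  self.received.push(msg.to_string());
                                  if msg == "ping" { vec![(from, "pong".into())] } else { vec![] }
                              }
                          }

                          fn main() {
                              let mut nodes = vec![Echo { received: vec![] }, Echo { received: vec![] }];
                              // min-heap of in-flight messages, ordered by arrival time
                              let mut queue: BinaryHeap<Reverse<Msg>> = BinaryHeap::new();
                              queue.push(Reverse((1, 1, "ping".into()))); // node 0 pings node 1 at t=1
                              while let Some(Reverse((at, to, msg))) = queue.pop() {
                                  let from = 1 - to; // fine for this two-node toy network
                                  for (dest, out) in nodes[to].receive(at, from, &msg) {
                                      // a real harness would consult the network weather here;
                                      // we just assign a fixed 1-tick latency
                                      queue.push(Reverse((at + 1, dest, out)));
                                  }
                              }
                              assert_eq!(nodes[0].received, vec!["pong"]);
                              println!("simulation drained; node 0 saw {:?}", nodes[0].received);
                          }
                          ```

                          The `tick` half of the interface is omitted here; in the real loop it would fire timers between message deliveries.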

                          This does not replace jepsen, as jepsen is still important for catching end-to-end issues in an “as-deployed” configuration. But I really do think jepsen is being totally misused.

                          We should build things in a way that allows us to quickly, cheaply measure whether they will be successful in the actual environments that we expect them to perform in.

                          Maximize introspectability. Everything is broken to some extent, so be sympathetic to future selves that have to debug it in production while failing spectacularly and causing everyone to freak out.

                          One kind of concurrency that few seem to consider until it’s time to upgrade: multiple versions of your code may be live in some situations. Did you build your system in a way that ensures this is safe? Did you build your system in a way that allows you to un-deploy a new version if it fails unexpectedly? Or did you build in points of no return?

                          One reason why I don’t do distributed systems fault injection consulting anymore is because of the egos of people who can’t accept their babies have problems. That got tiring really quickly. The #1 most important thing to building a reliable system is being humble. Everything we do is broken. That’s OK. So many engineers who learn how to open sockets begin to think of themselves as infallible rockstars. It’s really hard to build systems that work with these people.

                          1. 3

                            At first I was rolling my eyes but then realized I was being an elitist. Infrastructure must be sympathetic to its users. Part of that IS forward-compatibility, but security issues like this trump those concerns IMO.

                            1. 7

                              This reminds me a bit of a technique Joe Armstrong mentioned, maybe in an in-person conversation at a conference or maybe more publicly, I forget. He said that he will throw away code that takes him more than one day to write, and either start over the next day or do something more important. His justification was that if it takes longer, the approach is probably shit. At the time, I thought it was a pretty extreme approach, and just made a mental note of it and moved on.

                              After spending the last year+ making a reasonably well tested database, I’ve caught myself taking on this approach, arriving at it after an endless series of devastatingly complicated bugs that deeply humbled me. It was quite a surprise when I realized it was exactly what Joe had mentioned, which I didn’t really believe was an effective strategy at the time.

                              Why do this? For me, it keeps the complexity manageable. If I can’t keep the whole thing in my head when I’m creating it in one shot, there’s very little chance I’ll be able to debug an issue in it in under one or two days. I will create far more bugs when I can’t wield the model of it easily in my mind.

                              How long do you want to spend debugging an issue in a system? Combine this approach with bwk’s “debugging is twice as hard as writing a program in the first place” rule of thumb and throw the whole thing away when you get to 1/2 of your desired debugging budget!

                              1. 5

                                Transactions for sled! I’ve been going through the literature and arriving at a concurrency control scheme loosely based on Cicada, but with a little less complexity around timestamp synchronization, while keeping a clear path toward less centralized timestamp allocation in the future if 100+ core machines become targets. As a distributed systems person working on a main-memory database, it has been super interesting to see techniques from the distributed world being applied recently to reduce the coordination costs of transaction techniques. I’ve been holding off on releasing sled as alpha because of a few known rare data-loss bugs, but I think it might be better to just roll forward and be clear that it still has bugs, which will be the case whether I know about them or not, because it’s still so young.

                                1. 8

                                  I’m working on two projects in my spare time right now:

                                  • going through Type-Driven Development With Idris, to learn Idris for the dependent-types goodness

                                  • trying to teach myself the nuts and bolts of type theory by implementing Hindley-Milner type checking in the toy programming language interpreter I’ve been working on for a while. I’ve found a few resources about exactly how to go about doing this (most notably, this Haskell conference talk and this section from Stephen Diehl’s incomplete tutorial on implementing a Haskell-like functional programming language). I’ve actually had a bit of trouble translating the code from those resources, which is in Haskell and assumes a particular design for the AST data structures, to my own interpreter, which is in Rust and has a different AST design. If anyone is knowledgeable about actually implementing Hindley-Milner in a real programming language, I’d love to get some advice.

                                  1. 4

                                    I’ve got that book sitting in front of me right now! It’s so cool how you can specify protocols and valid state transitions so nicely in idris! I’m fantasizing about using it as the basis for a smart contract language.

                                    1. 4

                                      I tried going through that book, too, but I was not convinced by the examples it used. I recall one example which added the runtime length of an array to the type system. Nice, but you still needed to do (runtime) checks for the length…

                                      1. 2

                                        You don’t always need to do runtime checks for the length. For example, if the length of a vector is n you can use a data type Fin n to represent an index. If you ask a user for an input, you’ll have to check it there - but once you have a Fin n you can pass it around your program without checking it. It’s always a valid index.

                                      2. 1

                                        If anyone is knowledgeable about actually implementing Hindley-Milner in a real programming language…

                                        I have experience with doing it in Haskell, but it seems like you’re implying that Haskell is not a real programming language…

                                        1. 1

                                          Haha, far from it; I’ve written ostensibly useful code in Haskell myself. There are a number of impedance mismatches between Haskell-style code and Rust-style code that have made it hard for me to take the example Haskell code I’ve seen for Hindley-Milner and apply it to my own project.

                                      1. 9

                                        I forsake C as someone who works with it all the time. The author of this post makes the point that it’s worth knowing. I think that’s totally true if you interact with low-level systems.

                                        I definitely don’t buy the point about distributed systems usually requiring C for performance reasons. As a distributed systems engineer, most of my peers work in memory-safe languages, with a few things along data paths being in C, and every once in a while people may peer into the kernel to reason about an issue in the networking stack. But I’d imagine that most people who bill themselves as distributed systems engineers today are terrible at C, and it probably doesn’t hurt them very much.

                                        When I forsake C, I don’t advocate for ignorance of it. I advocate for learning all you can about memory corruption, and for being honest as a biased human who is building things for other biased humans and with other biased humans. C is a great language for learning about bugs and exploitation techniques. There is too much macho bullshit from prominent C engineers, and it catches on with incompetent engineers who make the world a more dangerous place.

                                        1. 4

                                          Many timeseries projects today seem to be borrowing techniques from Gorilla, particularly around compression, so it’s pretty relevant.

                                          1. 11

                                            Thank you for the wonderful comments last week.

                                            I wrote an Earley parser. And a Pratt parser. The Pratt parser is what I’ve been looking for all this time: a modular recursive descent parser. What it lacks in formalism it makes up with in brevity and power-to-weight.

                                            Now, I need to choose a host language. I’d like to pick Rust, but I’m not sure it has a ready-made GC solution right now, and I don’t want to go down that rabbit hole. That leaves C++, JVM, or OTP. Any thoughts?

                                            1. 3

                                              What kind of language are you looking to interpret/execute? The three platforms you mention all have really different tradeoffs.

                                              1. 3

                                                A Lisp-esque language under the hood with a non-Lisp syntax on top. The idea is that the functional paradigm can subsume the other two big paradigms (imperative/logic). I can use the CEK machine for proper tail call handling, so that isn’t a requirement of the host. The big thing I’m looking for is a GC (whether lib or built-in) and a language I like that I can target it with.

                                              2. 2

                                                For Rust, you can wrap everything in an Rc, or an Arc if you have multiple threads; if you want tracing GC you can use this; if you just need epoch-style reclamation there’s crossbeam-epoch; and if you just need hazard pointers there’s conc. I’ve had a lot of success with crossbeam-epoch in lock-free systems I’ve built.

                                                1. 1

                                                  Rc (and friends) would need cycle detection, no? Maybe the thing to do is just use Rc and do research on cycle-detection algorithms to see if they are hard or not.

                                                  I looked at Epoch and hazard pointers and wasn’t sure if they were ok as a general GC. I need to do more reading. Thanks!

                                                  1. 2

                                                    Yeah, you can create memory leaks with Rc cycles in rust. But this is rarely an issue in most use cases. Rust memory can feel a little confusing at first, but cycles tend not to come up once you learn some different idioms for structuring things in non-cyclical ways.

                                                    For example, if you want to build a DAG, you can quickly implement it with a HashMap from ID to Node, where ID is some monotonic counter that you maintain. Each Node can contain Vecs of incoming and outgoing edges. You can implement your own Rc-like thing that tracks the sum of indegree and outdegree, and when it reaches 0 you just remove the Node from the containing HashMap.

                                                    For the cases where performance or concurrency concerns rule out this approach (which are rare, and should not be pursued until this is measured to be a bottleneck), you can always write Rust like C: unsafe pointers, Box::into_raw, dereferencing inside unsafe blocks, and freeing by calling Box::from_raw (and then drop() on that if you want to be explicit about what’s happening, though it will be dropped implicitly when it goes out of scope). Use mutexes on shared state until… basically always, but if you REALLY want to go lock-free, that’s when you can benefit from things like crossbeam-epoch to handle freeing of memory that has been detached from mutable shared state but may still be in use by another thread.
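                                                    A hypothetical bare-bones version of that HashMap-based DAG (all names invented; the degree tracking is collapsed into a single check):

                                                    ```rust
                                                    use std::collections::HashMap;

                                                    #[derive(Default)]
                                                    struct Node { incoming: Vec<u64>, outgoing: Vec<u64> }

                                                    #[derive(Default)]
                                                    struct Dag { next_id: u64, nodes: HashMap<u64, Node> }

                                                    impl Dag {
                                                        fn add_node(&mut self) -> u64 {
                                                            let id = self.next_id; // monotonic counter as the ID
                                                            self.next_id += 1;
                                                            self.nodes.insert(id, Node::default());
                                                            id
                                                        }
                                                        fn add_edge(&mut self, from: u64, to: u64) {
                                                            self.nodes.get_mut(&from).unwrap().outgoing.push(to);
                                                            self.nodes.get_mut(&to).unwrap().incoming.push(from);
                                                        }
                                                        // a node with indegree + outdegree of 0 can leave the map
                                                        fn remove_if_disconnected(&mut self, id: u64) -> bool {
                                                            let degree = self.nodes[&id].incoming.len()
                                                                + self.nodes[&id].outgoing.len();
                                                            if degree == 0 { self.nodes.remove(&id); true } else { false }
                                                        }
                                                    }

                                                    fn main() {
                                                        let mut dag = Dag::default();
                                                        let (a, b, c) = (dag.add_node(), dag.add_node(), dag.add_node());
                                                        dag.add_edge(a, b);
                                                        assert!(!dag.remove_if_disconnected(a)); // a still has an edge
                                                        assert!(dag.remove_if_disconnected(c));  // c was never connected
                                                        println!("nodes remaining: {}", dag.nodes.len());
                                                    }
                                                    ```

                                                    No Rc, so no cycles to leak: edges are plain IDs, and a dangling ID just fails a lookup instead of keeping memory alive.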

                                                    Feel free to shoot me an email if you’re curious about how something can be done in Rust! I know it can be overwhelming when you’re starting to build things in it, and I’m happy to help newcomers get past the things I banged my head against the wall for days trying to learn :)

                                                2. 2

                                                  FWIW, many languages written in C or C++ use arenas to hold the nodes that result from parsing. For example, CPython uses this strategy, and I’m pretty sure V8 does too. So you don’t manage each node individually, which would be a large load on the memory allocator/garbage collector – you put them all in a big arena and then free them all at once.

                                                  1. 2

                                                    Save the earth, use C++ or OTP.

                                                    1. 1

                                                      You also have Go and .NET Core as possible host runtimes.

                                                      1. 1

                                                        What about Nim? It seems to be a memory-safe language with a low-latency GC and macros, and it compiles to C. That said, the Schemes are ideal if you’re building a language with a Lisp underneath, since they start that way.

                                                      1. 8

                                                        Rushing to get my lock-free, bw-tree-backed embedded database in Rust to an alpha state before FOSDEM next weekend, where I hope to encourage a few people to give it a shot for low-impact workloads. Thinking about ways of applying real-time scheduling to threads in a concurrent property testing library I’m writing, to tease out bugs in the bw-tree and get results similar to PULSE when used with Quviq’s QuickCheck for Erlang. I will name my first child “Determinism” after the past few months of intense debugging…

                                                        1. 34

                                                          The recruiters who perform the first layer of filtering will usually have a set of buzzwords or technologies that they have been told are associated with candidates worth talking to. This might include things like prometheus, kubernetes, golang, aws, puppet, etc… If you are applying for a specific job, try to figure out what stack they use, and familiarize yourself with it by working through a tutorial, so that you can mention at least a basic level of familiarity with tech that the recruiters will be often filtering for. To the good companies that want SRE’s who are curious enough to dig into unfamiliar territory, being open about your previous unfamiliarity and willingness to dive in anyway can be a strong positive signal. But recruiters don’t always share the same cultural values around this as managers or future teammates, so use your discretion about how open you are about this with the recruiter. Some teams really value curiosity, but the recruiters often don’t get that message, and will hear that you’ve only done a quick tutorial on something and will mash the “reject” button in their candidate pipeline management interface.

                                                          When I hire for SRE teams, I care about one thing above all else: ability to drill into unfamiliar systems that are behaving strangely. General curiosity and demonstrations of learning new things are possible indicators of this mindset to me. I actually usually prefer to hire people who are more curious than people who are less curious and more experienced because the curious people will be getting more out of the job on a personal level, and I love teaching. A lot of teams don’t prioritize teaching though, due to mismanagement, incompetence, laziness, or fear-driven mindsets that are afraid a newcomer will outperform them. Avoid these places like the plague. They are the norm, unfortunately, but they will not help you grow at a rapid pace, and if you can afford to keep playing the interview lottery, you should really hold off until you get a place that gives you a strong signal about mentorship.

                                                          I test for drilling proficiency by asking questions about tools that are common and that candidates can be assumed to have some basic familiarity with, like linux or the web. When we get to a part they are unfamiliar with, I let them know that they can use me as Google to ask how to drill into specific behaviors. I ask about how things can fail: what are the different ways a file can fail to be created, etc… (they can still use me as Google to look into the specific implementation details of things, or we can strace stuff together, etc…). Basically, I try to show them windows they can use to peer into the system I’m asking about if they are not already familiar with it, and then I try to get a sense of how creatively they can use these new tools. That’s how the job will look, so that’s how I try to make the interview.
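                                                          As an illustrative sketch (not part of the interview itself), the file-creation question has plenty of concrete answers you can observe directly. Here is a small Python script demonstrating two of the failure modes a candidate might drill into with strace: a missing parent directory (ENOENT) and a target path that is already a directory (EISDIR, on POSIX):

```python
import errno
import os
import tempfile

seen = []

# Failure mode 1: a component of the parent path does not exist -> ENOENT.
try:
    open(os.path.join(tempfile.gettempdir(), "no_such_dir_xyz", "f"), "w")
except OSError as e:
    seen.append(e.errno)

# Failure mode 2: the target path is already a directory -> EISDIR (POSIX).
try:
    open(tempfile.gettempdir(), "w")
except OSError as e:
    seen.append(e.errno)

assert seen == [errno.ENOENT, errno.EISDIR]
print("observed:", [errno.errorcode[e] for e in seen])
```

                                                          Other answers in the same spirit: permission denied on the parent directory (EACCES), too many open file descriptors (EMFILE), or a full disk (ENOSPC).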

                                                          Most people suck at interviewing. They will ask you minutiae about tools you may not be experienced with yet. It’s important to keep a calm head, let them know where your current level of experience with those specific technologies is, and then explain how, in the real world, you would dive into the problem to understand and solve it. You exist to solve problems; deflect bad interview trivia questions with confidence in your ability to solve problems by drilling into them. If the team sucks at interviewing, that’s on them, and that team is less likely to be made up of experienced people who are also good to work with. People skills are the most important engineering skills.

                                                          If you want a laundry list of tech to get familiar with for modern SRE-like jobs:

                                                          • linux fundamentals, what’s an inode, what’s a process, what’s a socket
                                                          • AWS / GCE
                                                          • kubernetes
                                                          • prometheus

                                                          Books:

                                                          Different teams will prioritize coding skills differently. The most common thing I see recruiters filtering for is making sure you have one scripting and one compiled language under your belt. Golang is a good bet, as it’s taken over the pop infrastructure world.

                                                          Have fun :)

                                                          1. 2

                                                            That was really helpful!

                                                          1. 3

                                                            Raw sockets are the strcpy of the current age of terribly-written distributed systems. People stand no chance of finding bugs in the systems they create on their own laptops. The fact that slow black-box partition testing, with its incredibly low ratio of bugs found per CPU cycle spent, is so successful at finding bugs should be a screaming red alarm that we need to write our systems in ways that are more amenable to testing on our laptops in the first place.

                                                            That means having a pluggable transport layer that can be backed by real sockets in production, by a simulator that messes up traffic according to the asynchronous network model, or by a hit-tracing fuzzer. If you’re using raw sockets in your core serving logic, your architecture is broken, and you will have far more bugs and higher costs to fix the few bugs you do discover.
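                                                            As a hedged sketch of what such a pluggable transport might look like (the `Transport`/`SimTransport` names and API here are hypothetical, not from any real library), here is a minimal in-process simulator that drops and reorders messages under a fixed seed, so the same "network" misbehavior can be replayed deterministically on a laptop:

```python
import random


class Transport:
    """Minimal pluggable transport interface (hypothetical sketch).

    Production code would implement this with real sockets; tests plug in
    a simulator instead."""

    def send(self, dst, msg): ...
    def poll(self, node): ...


class SimTransport(Transport):
    """In-process simulator: delivers messages with seeded drops and
    reordering, per the asynchronous network model (no delivery-order
    or delivery-time guarantees)."""

    def __init__(self, seed, drop_rate=0.2):
        self.rng = random.Random(seed)
        self.inboxes = {}
        self.drop_rate = drop_rate

    def send(self, dst, msg):
        if self.rng.random() < self.drop_rate:
            return  # simulated drop / partition
        box = self.inboxes.setdefault(dst, [])
        # insert at a random position to simulate reordering
        box.insert(self.rng.randrange(len(box) + 1), msg)

    def poll(self, node):
        box = self.inboxes.get(node, [])
        return box.pop(0) if box else None


# Serving logic written against Transport, not raw sockets, can now be
# exercised deterministically: the same seed yields the same schedule.
net = SimTransport(seed=42)
for i in range(10):
    net.send("replica-b", ("put", i))
delivered = []
while (m := net.poll("replica-b")) is not None:
    delivered.append(m)
assert len(delivered) <= 10  # some messages may have been dropped
print("delivered", len(delivered), "of 10 messages")
```

                                                            The key design point is that the serving logic never knows which implementation it is talking to, so a bug found under the simulator is a bug in the logic, reproducible from the seed.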

                                                            1. 7

                                                              I recommend looking for a university that does this kind of verification, either to learn there or to work on one of their projects. I suspect it’s very helpful to have experienced people guide you through your first year or two of verifying anything significant.

                                                              In any case, here’s a write-up from one of the best in the field with advice and one book reference. The other books people mention are Certified Programming with Dependent Types by Chlipala and Software Foundations. If picking based on tooling, Coq and HOL (esp. Isabelle) are used on the best projects in software, with ACL2 being used most for hardware.

                                                              It also helps to see what lightweight verification is like, whether you need some motivation, a fallback, or just something to tell you a thing ain’t worth proving. Alloy (see its site) or TLA+ (learntla.com) are best for that imho.

                                                              1. 2

                                                                Thanks Nick. I suspect it would be helpful to have experienced mentorship as well. It’s certainly a challenging, and large, area. Working with a university sounds like a great idea.

                                                                Thanks for the link! I suppose it’s time to get back to writing algorithms and proving things ;)

                                                                1. 4

                                                                  Well, it might also help to read them to see what tools you even want to use. As always, I suggest something you will enjoy, in a category with practical applications immediately or down the line. There’s more practical stuff now than ever. Formal methods still ain’t at Github’s level, though. Still more to do. :)

                                                                  Here’s a few areas of research to consider for what to verify:

                                                                  1. Obviously, a formal semantics, compiler, or optimization pass for a specific language that’s worth the time. Rust and Nim come to mind. People also keep building important stuff in C++, whose formalization was only partly done. Haskell has a formal semantics done but no verified compiler. Maybe a useful Scheme like Racket or Chicken.

                                                                  2. Tooling for automated proof of safety, termination, and so on for programs in a language like the above with minimal specs, a la SPARK or Frama-C. Optionally, working on SAT/SMT solvers to get such languages closer to 100% on more problems with fewer schemes like ghost code. There’s lots of potential there.

                                                                  3. Verifying models used in actual languages for safe/effective concurrency (eg Eiffel SCOOP) and/or parallelism (eg Cray Chapel or Taft’s ParaSail). Possibly a mockup of such features in an existing language with macros, plus a verified translator to regular code.

                                                                  4. Verifying client-server or peer-to-peer protocols that have a lot of uptake for various properties. Optionally, making something like Verdi, which already does this, easier for non-experts to use, or increasing its automation.

                                                                  5. Verifying important data structures and sequential algorithms for correctness. Just make sure there’s a C implementation with x86 and ARM7-9 compatibility. If it performs well, people can wrap it in any language with C FFI.

                                                                  6. GUIs and graphics stacks. That area occasionally gets work, but not much of it. Graphics drivers are notorious for crashing. Just automated verification of them for safety, like Microsoft’s SLAM, plus absence of race conditions, might be helpful. Full verification of an OpenGL stack or something might also be interesting. For GUIs, something like Nitpicker would be easy, whereas for GUI programming a DSL compiling to a major toolkit would be the route. Maybe an OpenCL model. Who knows.

                                                                  7. Increasing the usability, automation, and performance of any tool people have used to do anything on the lists above. There are already lots of people increasing their assurance. Cost of verification is the more important problem right now, though. The lightweight methods need more power; the heavyweight methods need more automation. I once speculated about, and then found, a HOL-to-FOL translator used to do the latter. That, the use of model checkers to filter candidates before proving, and the results of SAT/SMT tooling in general suggest there’s lots of potential here.

                                                                  So, there’s you some ideas that might have immediate or long-term impact on problems that matter. There should also be something in that list you find fun, and maybe something you already know a lot about. That helps. Either way, there should be potential for projects worth spending a lot of time on.

                                                                  1. 2

                                                                    Thanks for the list. Programming languages are interesting to me and partly what drives my interest here, especially tools like Disel.

                                                                    I’m guessing #7, automation, will be increasingly interesting the further along I get here.

                                                                  2. 4

                                                                    If you’d like to practice on a juicy target, I invite you to join an effort to eradicate bugs in a database I’m working on in my free time, sled! It has a simple interface, but lots of optimizations that really need to be reliable. Nobody wants to use a database that very quickly deletes their data :P

                                                                    I’ve been working extensively with property testing and a few tricks for shaking out interesting thread interleavings, and these approaches have yielded enough bugs to keep me busy for the last 6 months, but it’s time to really ramp up the rigor.

                                                                    These are the approaches I believe will lead to the discovery of interesting bugs:

                                                                    • formally specify the lock-free algorithms in use for the IO buffer and tree using TLA+, Alloy, Spin, Iris, etc…, with the goal of identifying concerns that have not been considered in the implementation
                                                                    • reproduce the functionality of Quviq’s Erlang PULSE scheduler using either ptrace or SCHED_FIFO/SCHED_RR in a test harness for parallel property testing as a Rust library, bringing some of the awesome features of Quviq’s QuickCheck into the Rust world
                                                                    • implement a concurrency testing library for Rust that uses ptrace and Z3 to get a user-friendly, practical implementation of Maximal Causality Reduction, identifying a minimal set of relevant interleaving schedules, and then use ptrace to schedule the threads according to the generated schedules to suss out crashes or violations of specified invariants
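                                                                    The interleaving-scheduling idea can be illustrated with a toy model (all names here are hypothetical, and this enumerates schedules exhaustively rather than reducing them as MCR would): model each thread as a list of atomic steps, enumerate every interleaving in-process, and check an invariant. This finds the classic lost-update race without ever spawning real threads:

```python
from itertools import combinations


def run(schedule, steps_a, steps_b):
    """Execute one interleaving of the two threads' steps and return x."""
    state = {"x": 0, "ra": None, "rb": None}
    ia = ib = 0
    for who in schedule:
        if who == "a":
            steps_a[ia](state)
            ia += 1
        else:
            steps_b[ib](state)
            ib += 1
    return state["x"]


def load(reg):
    # read shared x into a thread-local register
    return lambda s: s.__setitem__(reg, s["x"])


def store_inc(reg):
    # write register + 1 back to shared x
    return lambda s: s.__setitem__("x", s[reg] + 1)


# Each thread performs a non-atomic x += 1: a load followed by a store.
steps_a = [load("ra"), store_inc("ra")]
steps_b = [load("rb"), store_inc("rb")]

# Enumerate all interleavings by choosing which positions thread "a" runs in.
results = set()
n = len(steps_a) + len(steps_b)
for a_positions in combinations(range(n), len(steps_a)):
    schedule = ["b"] * n
    for p in a_positions:
        schedule[p] = "a"
    results.add(run(schedule, steps_a, steps_b))

# Some interleavings lose an update: x ends at 1 instead of 2.
assert results == {1, 2}
print("final values of x across all interleavings:", sorted(results))
```

                                                                    The real tooling replaces this exhaustive enumeration with an SMT-guided search over only the causally distinct schedules, and drives actual threads instead of a state dictionary, but the invariant-checking shape is the same.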

                                                                    Future directions include:

                                                                    • building a versioned transactional store on top of sled
                                                                    • building a horizontally scalable linearizable kv store on top of sled
                                                                    • building a location-agnostic (phones, multi-dc, PoP’s) store powered by OT & CRDT’s on top of sled

                                                                    The future focus on distributed systems will involve lots of interesting simulation, as well as an attempt to unify concurrency testing and distributed systems simulation. This is sort of a holy grail for me, and I hope to create tooling that lets people build significantly more reliable distributed systems, even more than the databases themselves.

                                                                    Let me know if any of these directions sound like things you’d be interested in collaborating on! You can find my email on github if so. Having this stuff on my github has resulted in a bunch of interesting people reaching out about jobs, and I haven’t been asked to do a single technical interview after referring companies to the project to see my output. This is a 100% non-commercial endeavor at this point, but I see it as earning interesting future job opportunities at the very least. I can’t tell you whether commercial formal methods people will appreciate your work on these systems, but this is a real system that fills a real pain point (existing embedded db’s have either crappy read perf or crappy write perf, are generally confusing to configure, and often have poor consistency guarantees), and applying advanced testing techniques to it should actually save people from facing huge issues.

                                                                    1. 1

                                                                      I may have to dig into this. I find having practical examples to chew on while learning quite valuable. Your work looks great, congrats on your success!

                                                                  3. 2

                                                                    “I recommend looking for a University that does it to learn or work on one of their projects. I suspect it’s very helpful to have experienced people to help you through your first year or two of verifying anything significant.”

                                                                    True, you can find some of the courses and lectures in a list.

                                                                  1. 6

                                                                    Making lots of progress toward stabilizing my rust lock-free bw-tree! Hopefully I’ll have an alpha out soon :) The goal for the next week is to have stable disk utilization while under extreme contention and fragmentation. Now that crash testing and interleaving tests have teased out much of the low-hanging fruit, it will soon be time to turn my attention to dynamic instrumentation for teasing out exotic race conditions :] If anyone is curious about lock-free high performance stateful systems in rust, feel free to ping me, and I’d love to spend some time teaching potential collaborators. There are a ton of really juicy database features that I’d love to delegate to other people who are curious about learning how to build them!