1. 2

    Have you looked at secure scuttlebutt or dat? They are currently being used successfully for helping periodically-connected communities exchange information asynchronously, like remote villages and sailors etc…

    1. 1

      We’re working on an implementation similar to scuttlebutt but on top of the C zyre libs for our desktop app, looking at scuttlebutt for possible usage on our mobile app.

      I haven’t seen dat before, doing a bit of reading on that to see if we can use it for some things.

    1. 3

      Hey icefall, one thing that would complement this presentation is a page listing every paper and tool in it with links to them.

      1. 8

        This is a good idea! Here’s a nice sketch that contains most of them and the two that aren’t on there are the ALICE paper and Simple Testing Can Prevent Most Critical Failures. I’ll cut a summary blog post that goes into these when I have a few hours!

        1. 4
      1. 7

        It depends what you mean by resiliency. I tend to work on things with strong consistency requirements, and to be honest I think the way most people build and talk about distributed systems engineering is pretty gross and unprincipled.

        Why is Jepsen so successful against most systems, despite it being so incredibly slow to actually exercise communication interleaving patterns? People are building systems from a fundamentally broken perspective where they are not actually considering the realistic conditions that their systems will be running in.

        In my opinion, the proper response to this should be to ask how we can simulate realistic networks (and filesystems for that matter) on our laptops as quickly as possible, without requiring engineers to work with new tools.

        My approach is to use quickcheck to generate partitions + client requests and implement participants as things that implement an interface that usually looks like:

        • receive(at, from, msg) -> [(to, msg)]
        • tick(at) -> [(to, msg)]

        And this way a distributed algorithm can be single-stepped in accelerated time, and for each outbound message we use the current state of network weather to assign an arrival time / drop it. Stick it in a priority queue and iterate over this until no messages are in flight.
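A minimal sketch of this kind of simulator in Rust. The trait name, the message type, and the fixed one-tick `weather` function are all illustrative assumptions; a real harness would draw delays and drops from a seeded RNG, model partitions, and also drive `tick`:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

type Msg = u64;

// Participants implement this interface; a fuller version would also
// have tick(at) -> [(to, msg)] for timers and retries.
trait Node {
    fn receive(&mut self, at: u64, from: usize, msg: Msg) -> Vec<(usize, Msg)>;
}

// "Network weather": here a fixed one-tick delay for every hop. A real
// harness would randomize delays and return None to drop messages.
fn weather(_from: usize, _to: usize) -> Option<u64> {
    Some(1)
}

// Single-step the cluster in accelerated time: pop the next arrival from
// a priority queue and iterate until no messages are in flight.
fn simulate(nodes: &mut [Box<dyn Node>], initial: Vec<(u64, usize, usize, Msg)>) -> u64 {
    let mut in_flight: BinaryHeap<Reverse<(u64, usize, usize, Msg)>> =
        initial.into_iter().map(Reverse).collect();
    let mut now = 0;
    while let Some(Reverse((at, to, from, msg))) = in_flight.pop() {
        now = at;
        for (next_to, out) in nodes[to].receive(at, from, msg) {
            if let Some(delay) = weather(to, next_to) {
                in_flight.push(Reverse((at + delay, next_to, to, out)));
            }
        }
    }
    now
}

// Toy participant: bounces a decrementing counter back to the sender.
struct Echo;
impl Node for Echo {
    fn receive(&mut self, _at: u64, from: usize, msg: Msg) -> Vec<(usize, Msg)> {
        if msg == 0 { vec![] } else { vec![(from, msg - 1)] }
    }
}
```

A linearizability checker over the client operations recorded during the run would then serve as the property checked after each simulated history.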

        Then as the “property” in property testing, ensure that linearizability holds for all client requests.

        With something like this, every engineer can get a few thousand jepsen-like runs in a couple seconds before even opening a pull request. They don’t have to use any tools other than their language’s standard test support. You can write the simulator once and it has very high reuse value, since everything just implements the interface you chose. Way higher bug:cpu cycle ratio than jepsen.

        This does not replace jepsen, as jepsen is still important for catching end-to-end issues in an “as-deployed” configuration. But I really do think jepsen is being totally misused.

        We should build things in a way that allows us to quickly, cheaply measure whether they will be successful in the actual environments that we expect them to perform in.

        Maximize introspectability. Everything is broken to some extent, so be sympathetic to future selves that have to debug it in production while failing spectacularly and causing everyone to freak out.

        One kind of concurrency that few seem to consider until it’s time to upgrade: multiple versions of your code may be live in some situations. Did you build your system in a way that ensures this is safe? Did you build your system in a way that allows you to un-deploy a new version if it fails unexpectedly? Or did you build in points of no return?

        One reason why I don’t do distributed systems fault injection consulting anymore is because of the egos of people who can’t accept their babies have problems. That got tiring really quickly. The #1 most important thing to building a reliable system is being humble. Everything we do is broken. That’s OK. So many engineers who learn how to open sockets begin to think of themselves as infallible rockstars. It’s really hard to build systems that work with these people.

        1. 3

          At first I was rolling my eyes but then realized I was being an elitist. Infrastructure must be sympathetic to its users. Part of that IS forward-compatibility, but security issues like this trump those concerns IMO.

          1. 7

            This reminds me a bit of a technique Joe Armstrong mentioned, maybe in an in-person conversation at a conference or maybe more publicly, I forget. He said that he will throw away code that takes him more than one day to write, and either start over the next day or do something more important. His justification was that if it takes longer, the approach is probably shit. At the time, I thought it was a pretty extreme approach, and just made a mental note of it and moved on.

            After spending the last year+ making a reasonably well tested database, I’ve caught myself adopting this same approach, arriving at it after an endless series of devastatingly complicated bugs that have deeply humbled me. It was quite a surprise when I realized it was exactly what Joe had mentioned, a strategy I didn’t really believe was effective at the time.

            Why do this? For me, it keeps the complexity manageable. If I can’t keep the whole thing in my head when I’m creating it in one shot, there’s very little chance I’ll be able to debug an issue in it in under one or two days. I will create far more bugs when I can’t wield the model of it easily in my mind.

            How long do you want to spend debugging an issue in a system? Combine this approach with bwk’s “debugging is twice as hard as writing a program in the first place” rule of thumb and throw the whole thing away when you get to 1/2 of your desired debugging budget!

            1. 5

              Transactions for sled! I’ve been going through the literature and arriving at a concurrency control scheme loosely based on Cicada, but with a little less complexity around timestamp synchronization, while keeping a clear path toward less centralized timestamp allocation in the future if 100+ core machines become targets. As a distributed systems person working on a main-memory database, it has been super interesting to see techniques from the distributed world being applied recently to reduce the coordination costs of transaction techniques. I’ve been holding off on releasing sled as alpha because of a few known rare data loss bugs, but I think it might be better to just roll forward and be clear that it still has bugs, which will be the case whether I know about them or not, because it’s still so young.

              1. 8

                I’m working on two projects in my spare time right now:

                -going through Type-Driven Development With Idris, to learn Idris for the dependent-types goodness

                -trying to teach myself the nuts and bolts of type theory by implementing Hindley-Milner type checking in the toy programming language interpreter I’ve been working on for a while. I’ve found a few resources about exactly how to go about doing this (most notably, this Haskell conference talk and this section from Stephen Diehl’s incomplete tutorial on implementing a Haskell-like functional programming language). I’ve actually had a bit of trouble translating the code from those resources, which is in Haskell and assumes a particular design for the AST data structures, to my own interpreter, which is in Rust and has a different AST design. If anyone is knowledgeable about actually implementing Hindley-Milner in a real programming language, I’d love to get some advice.
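For what it’s worth, here’s roughly what the core of HM — unification with an occurs check over a substitution map — can look like in Rust, independent of any particular AST design. All type and function names here are illustrative, not a prescription:

```rust
use std::collections::HashMap;

// A minimal type representation for the unifier to work over.
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Var(usize),                  // a unification variable
    Con(String),                 // a type constructor like Int or Bool
    Arrow(Box<Type>, Box<Type>), // a function type t1 -> t2
}

type Subst = HashMap<usize, Type>;

// Replace solved variables in `t` using the substitution so far.
fn apply(s: &Subst, t: &Type) -> Type {
    match t {
        Type::Var(v) => match s.get(v) {
            Some(bound) => apply(s, bound),
            None => t.clone(),
        },
        Type::Con(_) => t.clone(),
        Type::Arrow(a, b) => Type::Arrow(Box::new(apply(s, a)), Box::new(apply(s, b))),
    }
}

// Does variable v occur in t? Binding v to a type containing v would
// create an infinite type, so unification must reject that case.
fn occurs(v: usize, t: &Type, s: &Subst) -> bool {
    match apply(s, t) {
        Type::Var(w) => v == w,
        Type::Con(_) => false,
        Type::Arrow(a, b) => occurs(v, &a, s) || occurs(v, &b, s),
    }
}

// Unify two types, extending the substitution in place.
fn unify(t1: &Type, t2: &Type, s: &mut Subst) -> Result<(), String> {
    let (t1, t2) = (apply(s, t1), apply(s, t2));
    if t1 == t2 {
        return Ok(());
    }
    match (&t1, &t2) {
        (Type::Var(v), _) => {
            if occurs(*v, &t2, s) {
                Err("occurs check failed: infinite type".to_string())
            } else {
                s.insert(*v, t2.clone());
                Ok(())
            }
        }
        (_, Type::Var(_)) => unify(&t2, &t1, s),
        (Type::Con(a), Type::Con(b)) if a == b => Ok(()),
        (Type::Arrow(a1, b1), Type::Arrow(a2, b2)) => {
            unify(a1, a2, s)?;
            unify(b1, b2, s)
        }
        _ => Err(format!("cannot unify {:?} with {:?}", t1, t2)),
    }
}
```

The main Rust-vs-Haskell wrinkle is ownership: threading one mutable `Subst` through the traversal tends to be easier than the immutable substitution-composition style most Haskell tutorials use.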

                1. 4

                  I’ve got that book sitting in front of me right now! It’s so cool how you can specify protocols and valid state transitions so nicely in idris! I’m fantasizing about using it as the basis for a smart contract language.

                  1. 4

                    I tried going through that book, too, but I was not convinced by the examples it used. I recall one example which added the runtime length of an array to the type system. Nice, but you still needed to do (runtime) checks for the length…

                    1. 2

                      You don’t always need to do runtime checks for the length. For example, if the length of a vector is n you can use a data type Fin n to represent an index. If you ask a user for an input, you’ll have to check it there - but once you have a Fin n you can pass it around your program without checking it. It’s always a valid index.
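For intuition, here’s a loose Rust analogue of `Fin n` using const generics. Rust can’t express full dependent types, so this only captures the “check once at the boundary, then trust the type” idea; the names are illustrative:

```rust
// An index type that, once constructed, is statically known to be a
// valid index into any array of length N.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Fin<const N: usize>(usize);

impl<const N: usize> Fin<N> {
    // The single runtime check happens here, at the boundary.
    fn new(i: usize) -> Option<Self> {
        if i < N { Some(Fin(i)) } else { None }
    }
}

// Indexing an [T; N] with a Fin<N> can never be out of bounds, so no
// further checks (and no panics) are needed downstream.
fn get<T: Copy, const N: usize>(xs: &[T; N], i: Fin<N>) -> T {
    xs[i.0]
}
```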

                    2. 1

                      If anyone is knowledgeable about actually implementing Hindley-Milner in a real programming language…

                      I have experience with doing it in Haskell, but it seems like you’re implying that Haskell is not a real programming language…

                      1. 1

                        Haha, far from it; I’ve written ostensibly-useful code in Haskell myself. There are a number of impedance mismatches between Haskell-style code and Rust-style code that have made it hard for me to take the example Haskell code I’ve seen for Hindley-Milner and apply it to my own project.

                    1. 9

                      I forsake C as someone who works with it all the time. The author of this post makes the point that it’s worth knowing. I think that’s totally true if you interact with low-level systems.

                      I definitely don’t buy the point about distributed systems usually requiring C because of performance reasons. As a distributed systems engineer, most of my peers work in memory safe languages, with a few things along data paths being in C, and every once in a while people may peer into the kernel to reason about an issue in the networking stack, but I’d imagine that most people who bill themselves as distributed systems engineers today are terrible at C and it probably doesn’t hurt them very much.

                      When I forsake C, I don’t advocate ignorance of it. I advocate learning all you can about memory corruption, and being honest as a biased human who is building things for other biased humans and with other biased humans. C is a great language to use for learning about bugs and exploitation techniques. There is too much macho bullshit from prominent C engineers, and it catches on with incompetent engineers who make the world a more dangerous place.

                      1. 4

                        Many timeseries projects today seem to be borrowing techniques from Gorilla, particularly around compression, so it’s pretty relevant.
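For a flavor of the timestamp side of Gorilla’s compression, here’s a sketch of delta-of-delta encoding in Rust. The real format then bit-packs each value by magnitude, so regularly spaced points cost a single bit each; this simplified sketch (function name is mine) stops at the integer stream:

```rust
// Gorilla stores the delta of deltas between timestamps: for points
// arriving at a regular interval, the value is almost always 0.
fn delta_of_deltas(timestamps: &[i64]) -> Vec<i64> {
    let mut out = Vec::new();
    let mut prev_delta = 0;
    for w in timestamps.windows(2) {
        let delta = w[1] - w[0];
        out.push(delta - prev_delta);
        prev_delta = delta;
    }
    out
}
```

For points at a steady 60-unit interval with one point a unit late, the stream is `[60, 0, 0, 1]` — mostly zeros, which is what makes the subsequent bit-packing so effective.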

                        1. 11

                          Thank you for the wonderful comments last week.

                          I wrote an Earley parser. And a Pratt parser. The Pratt parser is what I’ve been looking for all this time: a modular recursive descent parser. What it lacks in formalism it makes up for in brevity and power-to-weight.

                          Now, I need to choose a host language. I’d like to pick Rust, but I’m not sure it has a ready-made GC solution right now, and I don’t want to go down that rabbit hole. That leaves C++, JVM, or OTP. Any thoughts?

                          1. 3

                            What kind of language are you looking to interpret/execute? The three platforms you mention all have really different tradeoffs.

                            1. 3

                              A Lisp-esque language under the hood with a non-Lisp syntax on top. The idea is that the functional paradigm can subsume the other two big paradigms (imperative/logic). I can use the CEK machine for proper tail call handling, so that isn’t a requirement of the host. The big thing I’m looking for is a GC (whether lib or built-in) and a language I like that I can target it with.

                            2. 2

                              For Rust, you can wrap everything in an Rc, or an Arc if you have multiple threads. If you want tracing GC you can use this, if you just need epoch-style reclamation there’s crossbeam-epoch, and if you just need hazard pointers there’s conc. I’ve had a lot of success with crossbeam-epoch in lock-free systems I’ve built.

                              1. 1

                                Rc (and friends) would need cycle detection, no? Maybe the thing to do is just use Rc and do research on cycle-detection algorithms to see if they are hard or not.

                                I looked at Epoch and hazard pointers and wasn’t sure if they were ok as a general GC. I need to do more reading. Thanks!

                                1. 2

                                  Yeah, you can create memory leaks with Rc cycles in rust. But this is rarely an issue in most use cases. Rust memory can feel a little confusing at first, but cycles tend not to come up once you learn some different idioms for structuring things in non-cyclical ways.

                                  For example, if you want to build a DAG, you can quickly implement it with a HashMap from ID to Node, where ID is some monotonic counter that you maintain. Each Node can contain Vec’s of incoming and outgoing edges. You can implement your own RC-like thing that tracks the sum of indegree and outdegree, and when it reaches 0, you just remove the Node out of the containing hashmap. For the cases where performance or concurrency concerns rule out this approach (which are rare and should not be pursued until this is measured to be a bottleneck) you can always write Rust like C with unsafe pointers, Box::into_raw, dereferencing inside unsafe blocks, and free’ing by calling Box::from_raw (actually calling drop() on that if you want to be explicit about what’s happening, but it will be dropped implicitly when it goes out of scope). Use mutexes on shared state until… basically always, but if you REALLY want to go lock-free, that’s when you can benefit from things like crossbeam-epoch to handle freeing of memory that has been detached from mutable shared state but may still be in use by another thread.

                                  Feel free to shoot me an email if you’re curious about how something can be done in Rust! I know it can be overwhelming when you’re starting to build things in it, and I’m happy to help newcomers get past the things I banged my head against the wall for days trying to learn :)

                              2. 2

                                FWIW, many languages written in C or C++ use arenas to hold the nodes that result from parsing. For example, CPython uses this strategy. I’m pretty sure V8 does too. So you don’t manage each node individually, which is a large load on the memory allocator/garbage collector – you put them all in a big arena and then free them all at once.
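A minimal index-based arena sketch in Rust (names are illustrative): nodes are pushed into one Vec, refer to each other by index, and are all freed together when the arena is dropped.

```rust
// One Vec owns every node; a NodeId is just an index into it.
struct Arena<T> {
    nodes: Vec<T>,
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct NodeId(usize);

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }
    fn alloc(&mut self, node: T) -> NodeId {
        self.nodes.push(node);
        NodeId(self.nodes.len() - 1)
    }
    fn get(&self, id: NodeId) -> &T {
        &self.nodes[id.0]
    }
}

// The AST then holds NodeIds instead of boxes or raw pointers.
enum Expr {
    Num(i64),
    Add(NodeId, NodeId),
}

fn eval(arena: &Arena<Expr>, id: NodeId) -> i64 {
    match arena.get(id) {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(arena, *a) + eval(arena, *b),
    }
}
```

Dropping the `Arena` frees the whole tree in one shot, which is the “free them all at once” behavior the parent comment describes.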

                                1. 2

                                  Save the earth, use C++ or OTP.

                                  1. 1

                                    You also have Go and .NET Core as possible host runtimes.

                                    1. 1

                                      What about Nim? It seems to be a memory-safe language with low-latency GC and macros, and it compiles to C. That said, the Schemes are ideal if you’re doing language building with a Lisp thing underneath, since they start that way.

                                    1. 8

                                      Rushing to get my lock-free rust bw-tree-backed embedded database to an alpha state before FOSDEM next weekend, where I hope to encourage a few people to give it a shot for low-impact workloads. Thinking about ways of applying real-time scheduling to threads in a concurrent property testing library I’m writing to tease out bugs in the bw tree to get similar results to PULSE when used with quviq’s quickcheck for erlang. I will name my first child “Determinism” after the past few months of intense debugging…

                                      1. 34

                                        The recruiters who perform the first layer of filtering will usually have a set of buzzwords or technologies that they have been told are associated with candidates worth talking to. This might include things like prometheus, kubernetes, golang, aws, puppet, etc… If you are applying for a specific job, try to figure out what stack they use, and familiarize yourself with it by working through a tutorial, so that you can mention at least a basic level of familiarity with tech that the recruiters will often be filtering for. To the good companies that want SREs who are curious enough to dig into unfamiliar territory, being open about your previous unfamiliarity and willingness to dive in anyway can be a strong positive signal. But recruiters don’t always share the same cultural values around this as managers or future teammates, so use your discretion about how open you are about this with the recruiter. Some teams really value curiosity, but the recruiters often don’t get that message, and will hear that you’ve only done a quick tutorial on something and will mash the “reject” button in their candidate pipeline management interface.

                                        When I hire for SRE teams, I care about one thing above all else: ability to drill into unfamiliar systems that are behaving strangely. General curiosity and demonstrations of learning new things are possible indicators of this mindset to me. I actually usually prefer to hire people who are more curious than people who are less curious and more experienced because the curious people will be getting more out of the job on a personal level, and I love teaching. A lot of teams don’t prioritize teaching though, due to mismanagement, incompetence, laziness, or fear-driven mindsets that are afraid a newcomer will outperform them. Avoid these places like the plague. They are the norm, unfortunately, but they will not help you grow at a rapid pace, and if you can afford to keep playing the interview lottery, you should really hold off until you get a place that gives you a strong signal about mentorship.

                                        I test for drilling proficiency by asking questions about tools that are common and that candidates can be assumed to have some basic familiarity with, like Linux or the web, and when we get to a part that they are unfamiliar with, I let them know that they can use me as Google to ask questions about how they can drill into specific behaviors. I ask about how things can fail. What are different ways a file can fail to be created, etc… (they can still use me as Google to look into the specific implementation details of things, or we can strace stuff together, etc…). Basically, I try to show them windows they can use to peer into the parts of the system I’m asking about if they are not already familiar, and then I try to get a sense of how creatively they can use these new tools I’m giving them. That’s how the job will look, so that’s how I try to make the interview.

                                        Most people suck at interviewing. They will ask you about minutiae of tools that you may not be experienced with yet. It’s important to keep a calm head, let them know where your current level of experience is with these specific technologies, and then explain to them how, in the real world, you would dive into the problem to understand and solve it. You exist to solve problems, so deflect bad interview trivia questions with confidence in your ability to solve problems by drilling into them. If the team sucks at interviewing, that’s on them, and the team is less likely to be made up of experienced people who are also good to work with. People skills are the most important engineering skills.

                                        If you want a laundry list of tech to get familiar with for modern SRE-like jobs:

                                        • linux fundamentals, what’s an inode, what’s a process, what’s a socket
                                        • AWS / GCE
                                        • kubernetes
                                        • prometheus


                                        Different teams will prioritize coding skills differently. The most common thing I see recruiters filtering for is making sure you have one scripting and one compiled language under your belt. Golang is a good bet, as it’s taken over the pop infrastructure world.

                                        Have fun :)

                                        1. 2

                                          That was really helpful!

                                        1. 3

                                          Raw sockets are the strcpy of the current age of terribly-written distributed systems. People stand no chance of finding bugs in the systems they create on their own laptops. The fact that slow, incredibly low bug:cpu cycle black-box partition testing is so successful in finding bugs should be a screaming red alarm that we need to write our systems in ways that are more amenable to testing on our laptops in the first place.

                                          That means having a pluggable transport layer that can be used with real sockets in production, a simulator that messes up traffic according to the asynchronous network model, or a hit-tracing fuzzer. If you’re using raw sockets in your core serving logic, your architecture is broken, and you will have far more bugs and higher costs to fix the few bugs you discover.
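One way that pluggable transport boundary can look in Rust, sketched with made-up names: core logic only ever sees the trait, production binds it to real sockets, and tests bind it to a deterministic in-memory simulator.

```rust
use std::collections::VecDeque;

// Core serving logic is written against this trait, never raw sockets.
trait Transport {
    fn send(&mut self, to: u64, msg: Vec<u8>);
    fn recv(&mut self) -> Option<(u64, Vec<u8>)>;
}

// Test-time implementation: a lossy in-memory queue that loops sends
// back for inspection. A production implementation would wrap
// TcpStream/UdpSocket behind the same trait.
struct SimTransport {
    inbox: VecDeque<(u64, Vec<u8>)>,
    drop_every: usize, // deterministic "weather": drop every Nth send
    sent: usize,
}

impl Transport for SimTransport {
    fn send(&mut self, to: u64, msg: Vec<u8>) {
        self.sent += 1;
        if self.drop_every == 0 || self.sent % self.drop_every != 0 {
            self.inbox.push_back((to, msg));
        }
    }
    fn recv(&mut self) -> Option<(u64, Vec<u8>)> {
        self.inbox.pop_front()
    }
}
```

The simulator’s drop/delay policy can then be driven by a quickcheck-style generator, so thousands of adverse network schedules run on a laptop before any real socket is opened.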

                                          1. 7

                                            I recommend looking for a University that does it to learn or work on one of their projects. I suspect it’s very helpful to have experienced people to help you through your first year or two of verifying anything significant.

                                            In any case, here’s a write-up from one of the best in the field with advice and one book reference. The other books people mention are Certified Programming with Dependent Types by Chlipala and Software Foundations. If picking based on tooling, Coq and HOL (esp. Isabelle) are used on the best projects in software, with ACL2 being used most for hardware.

                                            It also helps to see what lightweight verification is like if you need some motivation, a fallback, or just to be told that something ain’t worth proving. Alloy (see the site) or TLA+ (learntla.com) are best for that imho.

                                            1. 2

                                              Thanks Nick. I suspect it would be helpful to have experienced mentorship as well. It’s certainly a challenging - and large - area. Working with a University sounds like a great idea.

                                              Thanks for the link! I suppose it’s time to get back to writing algorithms and proving things ;)

                                              1. 4

                                                Well, it might also help to read them to see what tools you even want to use. As always, I suggest something you will enjoy in a category with practical applications immediately or down the line. There’s more practical stuff now than ever. Formal methods still ain’t at Github’s level, though. Still more to do. :)

                                                Here’s a few areas of research to consider for what to verify:

                                                1. Obviously, a formal semantics, compiler, or optimization for a specific language worth the time. Rust and Nim come to mind. People also keep building important stuff in C++, which was only partly done. Haskell had a formal semantics done but no verified compiler. Maybe a useful Scheme like Racket or Chicken.

                                                2. Tooling for automated proof of safety, termination, and so on for programs in a language like above with minimal specs a la SPARK or Frama-C. Optionally working on SAT/SMT solvers to get such languages closer to 100% on more problems with less schemes like ghost code. There’s lots of potential there.

                                                3. Verifying models used in actual languages for safe/effective concurrency (eg Eiffel SCOOP) and/or parallelism (eg Cray Chapel or Taft’s ParaSail). Possibly a mockup in existing language with macros of such features with verified translator to regular code.

                                                4. Verifying client-server or peer-to-peer protocols that have a lot of uptake for various properties. Optionally, making something like Verdi that already does it easier to use for non-experts or increased automation.

                                                5. Verifying important data structures and sequential algorithms for correctness. Just make sure there’s a C implementation with x86 and ARM7-9 compatibility. If it performs well, people can wrap it in any language with C FFI.

                                                6. GUIs and graphics stacks. That occasionally gets work, but not much of it. Graphics drivers are notorious for crashing. Just automated verification of them for safety, like Microsoft’s SLAM, and freedom from race conditions might be helpful. Full verification of an OpenGL stack or something might also be interesting. For GUIs, something like Nitpicker would be easy, whereas for GUI programming a DSL compiling to a major toolkit would be the route. Maybe an OpenCL model. Who knows.

                                                7. Increasing usability, automation, and performance of any tool people have used to do anything on the lists above. There are already lots of people increasing their assurance. Cost of verification is a more important problem right now, though. The lightweight methods need more power. The heavyweight methods need more automation. I did once speculate about, and then find, a HOL-to-FOL translator used to do the latter. That, the use of model checkers to filter before proving, and the results of SAT/SMT tooling in general suggest there’s lots of potential here.

                                                So, there’s you some ideas that might have immediate or long-term impact on problems that matter. There should also be something in that list you find fun. Maybe also something you know a lot about already. That can help. Either should have potential for projects worth spending a lot of time on.

                                                1. 2

                                                  Thanks for the list. Programming languages are interesting to me and partly what drives my interest here, especially tools like Disel.

                                                  I’m guessing #7, automation, will be increasingly interesting the further along I get here.

                                                2. 4

                                                  If you’d like to practice on a juicy target, I invite you to join an effort to eradicate bugs in a database I’m working on in my free time, sled! It has a simple interface, but lots of optimizations that really need to be reliable. Nobody wants to use a database that very quickly deletes their data :P

                                                  I’ve been working extensively with property testing and a few tricks for shaking out interesting thread interleavings, and these approaches have yielded enough bugs to keep me busy for the last 6 months, but it’s time to really ramp up the rigor.

                                                  These are the approaches I believe will lead to the discovery of interesting bugs:

                                                  • formally specify the lock-free algorithms in use for the IO buffer and tree using TLA+, alloy, spin, iris etc… with the goal of identifying concerns that have not been considered in the implementation
                                                  • reproduce the functionality of quviq’s erlang PULSE scheduler using either ptrace or SCHED_FIFO/SCHED_RR in a test harness for performing parallel property testing as a rust library. Bring some of the awesome features of quviq’s quickcheck into the rust world.
                                                  • implement a concurrency testing library for rust that utilizes ptrace and z3 to get a nice user-friendly practical implementation of Maximal Causality Reduction to identify a minimal set of relevant interleaving schedules, and then use ptrace to schedule the threads according to the generated schedules to suss out crashes or violations of specified invariants.

                                                  Future directions include:

                                                  • building a versioned transactional store on top of sled
                                                  • building a horizontally scalable linearizable kv store on top of sled
                                                  • building a location-agnostic (phones, multi-dc, PoP’s) store powered by OT & CRDT’s on top of sled

                                                  The future focus on distributed systems will involve lots of interesting simulation, as well as an attempt to unify concurrency testing and distributed systems simulation. This is sort of a holy grail for me, and I hope to create tooling that lets people build significantly more reliable distributed systems, even more than the databases themselves.

                                                  Let me know if any of these directions sound like things you would be interested in collaborating on! You can find my email on github if so. Having this stuff on my github has resulted in a bunch of interesting people reaching out about jobs, and I haven’t been asked to do a single technical interview after referring companies to the project to see my output. This is a 100% non-commercial endeavor at this point, but I see it as earning interesting future job opportunities at the very least. I can’t tell you if commercial formal methods people will appreciate your work on these systems or not, but this is a real system that fills a real pain point (existing embedded db’s either have crappy read perf or crappy write perf, are generally confusing for people to configure, and often have poor consistency guarantees), and applying advanced testing techniques to this should actually save people from facing huge issues.

                                                  1. 1

                                                    I may have to dig with this. I find having practical examples to chew on while learning quite valuable. Your work looks great, congrats on your success!

                                                3. 2

                                                  “I recommend looking for a University that does it to learn or work on one of their projects. I suspect it’s very helpful to have experienced people to help you through your first year or two of verifying anything significant.”

                                                  True, you can find some of the courses and lectures in a list.

                                                1. 6

                                                  Making lots of progress toward stabilizing my rust lock-free bw-tree! Hopefully I’ll have an alpha out soon :) The goal for the next week is to have stable disk utilization while under extreme contention and fragmentation. Now that crash testing and interleaving tests have teased out much of the low-hanging fruit, it will soon be time to turn my attention to dynamic instrumentation for teasing out exotic race conditions :] If anyone is curious about lock-free high performance stateful systems in rust, feel free to ping me, and I’d love to spend some time teaching potential collaborators. There are a ton of really juicy database features that I’d love to delegate to other people who are curious about learning how to build them!

                                                  1. 30

                                                    I’ve worked part-time for about six years of my career. I started it because I’d repeatedly burned out of full-time jobs. Working 3 days/week was great for me, far more rewarding than the added salary I passed on could have been. Aside from lower work anxiety, I had time to write two books, give three conference talks, get engaged, get married, take up several hobbies, and enjoy life thoroughly. My work has been overwhelmingly better: I stay out of rabbit holes, I recognize deep patterns, I prioritize ruthlessly, I deliver the things my users didn’t realize they needed. It’s not magic, it’s just downtime for my unconscious to noodle around with problems without pressure.

                                                    I think working part time is a hugely valuable experience for anyone who doesn’t have a pressing need for dollars in the door (e.g. to pay off US medical bills or student loans). There are plenty of blogs out there on frugal living + investing (I recommend MrMoneyMustache and the Bogleheads wiki), so developers can easily live comfortably and still save significantly towards retirement.

                                                    1. 5

                                                      I’m trying to pull back my working hours to part-time as well. Unfortunately many companies seem to want full-time or nothing. I’ve switched over to consulting to give me more freedom; we’ll see how that goes. I’m taking around 1.5 months off from work right now, which is great. For the first few weeks it felt awkward to have no reason to do anything at any particular time, but after a while it’s become really pleasant.

                                                      1. 12

                                                        About a year and a half ago I stopped working full-time, and it’s been really wonderful. I found I can work 2 months on a SF salary and live for a year in Berlin pretty comfortably. Sometimes I pick up more consulting work when I want structure, and sometimes I think about moving back to NYC where I would have to work a little more regularly, but I wouldn’t change anything about how I’ve spent my time up until now. I’ve been able to dive really deeply into a bunch of things I would never have had the time or energy to pursue if I were still a wageslave. The things I’ve built in my free time have also turned into tons of job opportunities, and I’ve stopped doing technical interviews now that people can just look at work I put on github and stuff. So, it can lead to lots of nice career things, too. I don’t want to stop engineering, but I am quite happy to live life outside of a shitty startup office a bit more.

                                                        Almost no jobs will appreciate it when you tell them you’d like to work less. But if you go into a new thing with clear expectations set, I’ve found it to be much easier.

                                                        1. 6

                                                          This is awesome! How do you go about getting consulting work - do you look for clients, or do they approach you? Did you have a ramp-up period before you felt comfortable that you’d have enough consulting work when you need it?

                                                          1. 3

                                                            I think most opportunities come my way because I genuinely enjoy talking to people about systems and engineering, and when I don’t have a full-time job I can spend more time communicating about those things. It’s networking, but for something that doesn’t feel gross to me. I am lucky to have this alignment between my personal interests and what businesses currently value. My current gig came from a comment I made here on lobste.rs ;)

                                                            A key to being comfortable is having enough runway where I know I will be OK for a while if I don’t find any work. This means being careful about burn rate. I consider potential purchases and recurring obligations in terms of how much life without work I’m giving up to have them. When my friends from work were increasing their rent to keep up with 30% of their salaries (or more) I was building the buffer that would keep me calm without work. They are worth a lot more money than me now, but I’ve been able to grow in ways that I’m extremely grateful for. Also after quitting my last full-time job I went through a period of intentional “unlearning of engineer-in-a-fun-city spending habits” which gave me a lot of peace of mind, by tripling my runway.

                                                            When I decided to stop working full-time, I didn’t know if it was going to just be a long break or a whole new arrangement. After getting over burnout from the SF startup I was at, I cold-emailed a company doing interesting work to me, and they enthusiastically agreed to a 10 hour/wk contract. That showed me that I might be able to keep it going.

                                                            When you pay 1/7 the rent, even a small trickle of engineering work feels like a geyser.

                                                            1. 1

                                                              Thanks, this is an excellent approach.

                                                        2. 3

                                                          Unfortunately many companies seem to want full-time or nothing. I’ve switched over to consulting to give me more freedom, we’ll see how that goes.

                                                          While this is true, as Mike points out in the interview it’s possible to convince some companies, some of the time, to hire you part-time anyway. It’s much more effort, and you need to be willing to push back much harder. But it can be done. Since it’s not the default, you really want to mention the part-time bit only after the company has committed to hiring you.

                                                      1. 2

                                                        I think it’s important for any kind of infrastructure decision to evaluate how commonly other people do similar things with the tool. Assuming a uniform bug density across the functionality space, how much of that functionality are you going to be the first one to ever exercise? The more you’re using something in a way you haven’t heard of others using it, the more you need to be willing and able to fix the system yourself, or to wait on the maintainers to fix things. Just because a big name is behind the tool doesn’t mean it gets used at all in the way you intend to use it.

                                                        Do bugs get fixed at all anymore?

                                                        Are there people from outside the origin organization that contribute to an ecosystem of supporting tooling? Will the entire ecosystem become legacy when the hot SF startup that built it dies / cancels the project?

                                                        Was it built by someone who has ever been on-call before? Or do you have to write telemetry into the system before you can safely deploy it?

                                                        Am I going to have to teach the organization everything about it, or is there a rich ecosystem of existing documentation?

                                                        Do the creators support graceful transitions across major upgrades, or do you need to write your own export/import tooling when bugfixes you depend on lie on the other side of the great migration?

                                                        All this should be considered after actually determining semantic fitness, of course!

                                                        1. 49

                                                          Thanks very much for this detailed response.

                                                          Free speech and public moderation is a core community value on this site. After all, the site was founded in response to capricious, arbitrary, and unannounced moderation on HN.

                                                          That said, I think that jcs might have been a bit too light in his hand in moderation and I approve of pushcx trying to improve the quality of discussion here.

                                                          1. 26

                                                            I agree, especially with this sentiment:

                                                            Communities like Usenet, 4chan, and YouTube with little to no human moderation sink into useless garbage.

                                                            As much as technologists want to believe that the right software will enable the virtues of humanity to shine through, it’s ultimately a human problem that can’t be solved with a technological solution. I wish I could find it now, but there was a study on unmoderated vs. moderated hate speech in online communities which found that moderating hate speech does, in fact, extinguish it, as opposed to it finding other avenues.

                                                            Moderation does matter.

                                                            1. 8

                                                              At Tumblr I think we solved it (mostly) mechanically. I left in 2015 so some of this may have changed, but we felt very strongly that “sunlight is the best disinfectant” and we did not delete content unless it was child porn, self-harm, or gore. You can still go see terrorist cat blogs posting under #catsofjihad. The community was great about shutting down those who were abusive, and the mechanics of response were important for that. Once you created a post, anyone could reference your original, even if you deleted it. Tumblr threads are not trees but the particular chain of responses favored by whoever decided to respond, which gives responders full control over the context and makes responses far clearer. You lose the full picture, but gain clarity of context. This was backed up by a support team that was usually great about taking action when abuse was happening. The clarity of context is what I believe gave the community the tools it needed to effectively self-police. This is something I believe is sorely missing from other platforms.

                                                              1. 3

                                                                we felt very strongly that “sunlight is the best disinfectant”

                                                                This is, unfortunately, naïve, even to the point of being malicious: always in consequence, even if not in intent. And it is exponentially more so online, where vectors for abuse are numerous and practically zero-cost.


                                                                1. 6

                                                                  I don’t think your link has anything to do with the mechanism in my comment. Twitter gives total contextual power to the content creator, not to the responders. A malicious user deletes the tweet and the responses lose their power. This mechanism totally strips the community’s ability to self-police. I’m in agreement with the author of the link you posted, and I think twitter has made terrible trade-offs. There’s a reason tumblr has tended to be the place where many members of marginalized communities find the first group of people that accepts them, and these communities flourished there. That doesn’t happen with G+.

                                                                  1. 5

                                                                    There’s a reason tumblr has tended to be the place where many members of marginalized communities find the first group of people that accepts them, and these communities flourished there.

                                                                    To be fair, this only applied if the community was not sufficiently odious that Tumblr nuked them.

                                                                    1. 1

                                                                      What do you mean? We let terrorists post anything that was legal…

                                                                      1. 5

                                                                        If memory serves there were a bunch of tags and adult blogs (some of whom were legitimately scummy folks doing commerce) that got rounded up and purged. I think around the time of Gamergate there were also some blogs purged, notably [REDACTED–see edit].

                                                                        (Do yourself a favor: don’t search for that last one. There is nothing there that will make you a happier person. It’s some of the most hateful and misogynistic stuff posted online. Again, do not read it. I found it by accident through webcomics circles, and saw it descend pretty immediately into awful shit.)

                                                                        EDIT: On second thought, I’m going to actually censor that last one. They’re shitty human beings and I don’t want to drive traffic their way, even for the curious.

                                                                        1. 3

                                                                          I mind this kind of censorship, be it self-imposed or by moderators: I believe it is everyone’s own judgement to see things for themselves and learn about them. Why mention it in the description but not allow further research? In my opinion, your post is no longer credible because I cannot verify it independently.

                                                                          (Edit: Reverted autocorrect. Perhaps add public record of post edits?)

                                                                          1. 1

                                                                            I remember a few cases where certain tags were blocked on mobile because if one of the app store testers found porn using a test of the app it could lead to them rejecting an update. Once, this unfortunately included tags like bisexual, gay and lesbian. It was not a fun decision to make, because it hurt the communities we tried so hard to protect, but Apple was holding the business hostage with this, and we chose to block lgbt tags in the short term and figure out something else later. There was significant blowback.

                                                                            We often mass-blocked clusters of associated spammers, which is uncontroversial as far as I know. Other controversial mass-blocks may have happened, but I don’t remember them.

                                                              2. 7

                                                                I agree, on both points. Transparency has always been a core goal of the site, and should continue to be. And I also think it makes sense to do at least slightly more moderation than there has historically been.

                                                                1. 1

                                                                  I personally think the quality is/was fine and don’t care if there are flame wars as long as they’re not just 4chan style insults.

                                                                1. 1

                                                                  I like parts of the transaction model. It’s how I would/do do it - the commit collection, batching and asynchronous reply model.

                                                                  That said, why are we still using a decades-old domain-specific language from 1979 (SQL) to interact with databases?

                                                                  A language that has an extremely poor impedance match with “data”, when we have a perfectly good language from two decades earlier… 1958 (LISP) that does a better job, and doesn’t require an ad-hoc query planner that tries (and fails) to outsmart the person planning the query.

                                                                  Not only that but clearly someone didn’t get the memo that relational database models are so “1995”.

                                                                  I applaud the efforts, and cynicism aside it really looks like they are doing their best here, and appreciate there is still some time to go before SQL falls away.

                                                                  Unfortunately I can’t really like the fact that the authors are working this hard on what really is a dead paradigm.

                                                                  Very well made paper however.

                                                                  Obviously this comment comes off as authoritative, snarky and denigrating to the people who did the work.

                                                                  That’s really not my intention however; it’s more like

                                                                  “just sayin…”

                                                                  1. 4

                                                                    Probably because it’s the least dead paradigm in the data world. Tools are useful to the extent that they can be employed to solve problems, and the first part of that is minimizing the number of new concepts the user must learn before they can solve problems.

                                                                    1. 1

                                                                      Just to go further in agreement in relation to your comment.

                                                                      SELECT * FROM Customers WHERE Balance < 0

                                                                      I just made that up so it probably isn’t valid SQL. It’s years since I wrote a lot of SQL.

                                                                      Sure, that is easy and as you said, it helps people get going and solve problems.

                                                                      But look what they just did. They learned a DSL that wasn’t needed. Transducers like Map/Filter/Reduce (or better) are much clearer for them and the machine.

                                                                      Furthermore those translate to many other languages and compose much more easily.
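To illustrate the point, here is a toy sketch (not production code; the table, field names, and data are made up for the example) of the same query over an in-memory “Customers” collection using plain filter/reduce operators in Python:

```python
from functools import reduce

# Hypothetical in-memory "Customers" table; names and balances are invented.
customers = [
    {"name": "Ada", "balance": 100},
    {"name": "Bob", "balance": -25},
    {"name": "Cy", "balance": -5},
]

# Roughly: SELECT * FROM Customers WHERE Balance < 0
overdrawn = list(filter(lambda c: c["balance"] < 0, customers))

# The same operators compose without a new DSL, e.g. totaling what is owed:
total_owed = reduce(lambda acc, c: acc + c["balance"], overdrawn, 0)
```

The filter/reduce pipeline composes with ordinary functions and carries over to most languages, which is the portability argument being made here.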

                                                                      I’m not convinced it is easier to learn SQL than just learning operators based on basic set theory.

                                                                      Not only that, but extending SQL to graph databases, geospatial etc.

                                                                      Sure, it can be done, and has been, but only at the cost of vendor specific lock-in or extremely obtuse syntax.

                                                                      I used SQL a great deal back in the day, before I “saw the light”. It’s just that there are things that are equally expressive but compose much better and honestly aren’t that much more difficult.

                                                                      I think that the problem is a matter of how principles are introduced.

                                                                      Oh you need to do data? You need to use SQL.

                                                                      It doesn’t work like that.

                                                                      1. 0

                                                                        Yes I get that and you are right of course.

                                                                        When SQL started it really seemed quite good and worked awesomely.

                                                                        Then it got extended - a lot.

                                                                        It got extended to such a great extent that when I look at SQL now, I feel like the thousands of people who have spent so much time in learning it properly have been painted into a corner, and I really feel sorry for them.

                                                                        I have good friends who are top-notch SQL DBAs, but they can’t transfer their skills easily. They are upset and have been going through the five stages of grief for some time.

                                                                        Data is not flat relational tables anymore, (you could argue it never was), and I really feel bad that they really did do “Computer Science” to a high level on a very obscure technology that is destined to the same fate as COBOL.

                                                                        Obviously they get paid a lot. So there is that.

                                                                    1. 4

                                                                      TiDB doesn’t get much attention in the English-speaking infrastructure world, but they deserve more. They have been putting a lot of effort into reliability for the last year and a half, and are using things like namazu and simulation to great effect. More people building distributed systems need to know about simulation, as it’s one of the best ways to get your laptop to behave more like an actual (possibly partitioned) cluster without having to slowly spin up a bunch of containers/VMs. Simulation can get very high bugs-found-per-CPU-cycle ratios.
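For a flavor of what simulation means here, a minimal sketch in Python (hypothetical names and interface, following the receive/tick participant shape described earlier in the thread): in-flight messages are reordered, delivered, or dropped deterministically from a seed, so any failing interleaving can be replayed exactly:

```python
import random

class Node:
    """Toy participant: records what it sees and sends nothing back."""
    def __init__(self, name):
        self.name = name
        self.seen = []

    def receive(self, at, src, msg):
        self.seen.append((at, src, msg))
        return []  # a real node would return [(to, msg)] replies here

    def tick(self, at):
        return []  # timer-driven messages would go here

def simulate(nodes, inflight, partitions, steps, seed=0):
    """Deterministically reorder, deliver, or drop in-flight messages.
    `partitions` is a set of (src, dst) pairs that cannot communicate."""
    rng = random.Random(seed)  # seeded RNG: failures replay exactly
    for at in range(steps):
        rng.shuffle(inflight)  # explore message reorderings
        next_round = []
        for src, dst, msg in inflight:
            if (src, dst) in partitions:
                continue  # dropped by the partition
            for to, m in nodes[dst].receive(at, src, msg):
                next_round.append((dst, to, m))
        for name, node in nodes.items():
            for to, m in node.tick(at):
                next_round.append((name, to, m))
        inflight = next_round
```

A property-based test can then generate `partitions` and client requests (e.g. with a quickcheck-style library) and assert invariants over every node’s log, all in-process and far faster than spinning up real machines.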