1. 11

    Title is sort of the wrong message.

    The right message is that the critical region should be as small as possible, and should only protect against racy access to shared resources.

    Everything else in the critical region that doesn’t need to be there increases contention without adding value.

    1. 3

      I think that’s a fair summary. Having worked with Go for a while now, I think it’s interesting to try to formalize the guiding principles of writing robust thread-safe code. Keeping the critical section as small as possible is a well-known idea, and yet I still see examples of people locking around I/O, which might be an accident of the lock/defer unlock pattern.
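
      A minimal Go sketch of how that accident happens (all names hypothetical; fetch stands in for any slow, blocking call): deferring the unlock at the top of the function quietly stretches the critical section across the I/O, while doing the I/O first keeps the lock around just the shared-state update.

      type cache struct {
          mu    sync.Mutex
          items map[string]string
      }

      // fetch stands in for any slow, blocking call (network, disk, ...).
      func fetch(key string) (string, error) { return "value for " + key, nil }

      // Accidental: defer holds the lock for the whole function, I/O included.
      func (c *cache) refreshSlow(key string) error {
          c.mu.Lock()
          defer c.mu.Unlock() // held until return, across the fetch below

          v, err := fetch(key) // blocking I/O inside the critical section
          if err != nil {
              return err
          }
          c.items[key] = v
          return nil
      }

      // Better: do the I/O unlocked; only the map write needs protection.
      func (c *cache) refresh(key string) error {
          v, err := fetch(key)
          if err != nil {
              return err
          }
          c.mu.Lock()
          c.items[key] = v
          c.mu.Unlock()
          return nil
      }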

      1. 3

        If the I/O is itself the contended-for shared resource, e.g. interleaving the outputs from multiple threads into a single output, then yes, the I/O should also be “locked around”.

        The point is not I/O versus non-I/O; the point is shared resources: any shared resource that, as /u/tscs37 said below, needs to be accessed atomically.

        It is very useful to think in terms of invariants, i.e. relationships that must always hold between data items.

        If a thread race may break the invariant (either by leaving the variables inconsistent relative to each other, or by providing a thread with an inconsistent snapshot of them) then you must lock around them to protect the invariant.
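
        A made-up Go sketch: if the invariant is that total must always equal len(items), both writes need to happen inside one critical section, or another thread can catch them mid-update.

        type registry struct {
            mu    sync.Mutex
            items []string
            total int // invariant: total == len(items)
        }

        func (r *registry) add(item string) {
            r.mu.Lock()
            r.items = append(r.items, item)
            r.total++ // both writes under the one lock; no thread sees the invariant broken
            r.mu.Unlock()
        }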

        However, locking a larger region than absolutely necessary doesn’t make you safer; it merely, as you rightly observed, increases (sometimes drastically) the latency of your system.

        It also, as /u/bio_end_io_t observed below, increases the opportunities for things to go very wrong, e.g. deadlocks and priority inversions.

        which might be an accident of the lock/defer unlock pattern.

        In C, which lacks the nifty Go “defer” thing, or the C++ RAII, or the Ruby “ensure”, I have taken to introducing an entire new scope for each lock, e.g.:

        void myFunc(void)
        {
            DoStart();

            lock();{            /* the critical section starts here...  */
                DoA();
                DoB();
            };unlock();         /* ...and is closed, visibly, here      */

            DoEnd();
        }
        

        And then make sure as little as possible is in that scope.

        Sort of makes it in your face obvious which is the critical section. (Which is especially visually handy if you need to take two nested locks, and allows you to rapidly check that you always take them in the same order everywhere.)

        And Obvious is a Very Good Thing when dealing with multithreaded code.
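
        In Go, a function literal gets you much the same in-your-face scope, with defer doing the unlocking. A sketch, assuming a shared mu sync.Mutex and the same hypothetical helpers:

        func myFunc() {
            doStart()

            func() { // the critical section is exactly this block
                mu.Lock()
                defer mu.Unlock()
                doA()
                doB()
            }()

            doEnd()
        }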

        1. 2

          This is a fantastic response. Thank you for the thoughtful comments.

      2. 3

        I agree that keeping critical regions small and not locking around I/O are good rules of thumb.

        There are some subtleties, though, since you need to know what exactly “as small as possible” means. For example, in general you don’t want to call kmalloc inside a spinlock because it may block. Of course, you can pass the GFP_ATOMIC flag to kmalloc to avoid the block, but in most cases you can allocate outside the critical section anyway.

        So really, you want to avoid blocking inside a critical region, but use something like a mutex or semaphore that can release the CPU if there is a chance you will need to sleep inside the critical region, because quite frankly, that can happen. I/O tends to be something that can block, so if you NEED locking around I/O, avoid using spinlocks.
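
        The same shape holds in Go terms; a rough sketch with invented names, where all the blocking work happens before the lock is taken:

        func publish(mu *sync.Mutex, m map[string][]byte, key string, size int) {
            buf := make([]byte, size) // allocate outside the critical section
            // ... fill buf, do any other blocking preparation here ...

            mu.Lock()
            m[key] = buf // the lock guards only the shared-map write
            mu.Unlock()
        }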

        1. 2

          There are two sides to it. There is of course the case where you need to keep the critical section as small as possible, and I think that covers the majority of code.

          There is also code that needs to operate atomically, in transactions. This mainly affects anything that needs to process a lot of data while holding the lock for it. In that case you want the lock to be as wide as possible, so as not to have to re-enter the locked area all the time.
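
          A rough sketch of the two shapes (item and apply are stand-ins for the real data and work):

          func applyBatch(mu *sync.Mutex, batch []item) {
              // Fine-grained: pays the lock/unlock cost per item, and other
              // goroutines can observe the batch half-applied.
              for _, it := range batch {
                  mu.Lock()
                  apply(it)
                  mu.Unlock()
              }
          }

          func applyBatchAtomic(mu *sync.Mutex, batch []item) {
              // Transactional: one wide critical section, the whole batch is atomic.
              mu.Lock()
              defer mu.Unlock()
              for _, it := range batch {
                  apply(it)
              }
          }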

        1. 2

          Just a note, it would be helpful to preface code samples with the language that they’re written in (in this case, Go). Thanks!

          1. 1

            Good point. I’ll update the post.

          1. 2

            I can’t help but disagree with the “accept interfaces, return structs” thing. Especially if you’re exposing those functions publicly. I’m probably dumb and just doing this very wrong, but I’ve lived through many cases where mocking dependencies is harder than it needs to be because I need to provide a struct as the result of something. I mean, especially when it’s an external package, I don’t necessarily want to be tightly bound to the struct that is exposed and would generally much rather use an interface. Am I doing this wrong?

            1. 4

              If you mock dependencies, you’re creating a mock implementation of a narrowly-scoped interface as an input to a function, not a copy of a struct as the output of, e.g., a constructor.

              Concretely, if you have a function

              func process(s3 *aws.S3) error {
                  x, err := s3.activate()
                  if err != nil {
                      return errors.Wrap(err, "activation failed")
                  }
                  if err := x.signal(); err != nil {
                      return errors.Wrap(err, "signal failed")
                  }
                  return nil
              }
              

              it should instead be

              +type activator interface{ activate() (xtype, error) }
              +
              -func process(s3 *aws.S3) error {
              +func process(a activator) error {
              

              but aws.NewS3 can and should continue to return *aws.S3.
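
              Call sites then stay concrete. A sketch, assuming aws.NewS3 takes some config value cfg:

              s3 := aws.NewS3(cfg) // the constructor returns the concrete *aws.S3
              if err := process(s3); err != nil { // which satisfies activator implicitly
                  log.Fatal(err)
              }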

              1. 2

                I had the same impulse when starting Go and friends rightly warned me away from it.

                If you write functions that take interfaces and that return interfaces, your mock implementations start returning their own mocks, which gets brittle very fast. Instead, by having interfaces as inputs and concrete types as return values, your mock implementations can simply return a plain struct and all is well.
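
                A made-up sketch, reusing the activator shape from above; the mock satisfies the input interface and hands back a plain struct, so no mock ever needs to return another mock:

                type result struct{ ID string } // concrete return type

                type activator interface {
                    activate() (result, error)
                }

                // The mock implements the input interface and returns a plain struct.
                type fakeActivator struct{}

                func (fakeActivator) activate() (result, error) {
                    return result{ID: "test"}, nil
                }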

                But I think it’s worth not taking someone else’s word for it, and trying both approaches and seeing how it goes.

                1. 3

                  I agree, but in effect most non-trivial libraries (stuff like Vault, for example) return structs that expose functions that return structs. Now, if I need to access that second layer of structure in my code, the appropriate design under those directions would seem to be to declare a function whose sole job is to accept the first level of structure as an interface and spit back the second layer (which still leaves me with a top-level function that is hard to test).

                  1. 1

                    Looking at Vault’s GoDoc, I see Auth.Token, which sounds like the kind of API you’re describing. Sometimes it might be a matter of how you approach the test: instead of mocking heavily, you run an instance of Vault in the test and ensure your code is interacting with Vault correctly.

                    1. 1

                      I’m not against this, technically, but it’s not necessarily practical. Take databases, for example. Or external services? Vault is one thing, but what if the thing you depend on is an external service that you can’t mock cleanly, and that depends on another service that it can’t mock cleanly? I don’t have a solution for this; I realize that exposing an object as an interface is not especially practical either, and that it’s weird to decide what to expose through that interface. The inverse, to me, is equally weird for other reasons.

                2. 2

                  I’m probably dumb and just doing this very wrong

                  If so, then I’m right there with you. If I followed @peterbourgon’s advice, then my code would be an unsalvageable dumpster fire. Defining a billion ad hoc interfaces everywhere very quickly becomes unmaintainable in my experience. I’ve certainly tried to do it.

                  See also: https://lobste.rs/s/c984tz/note_on_using_go_interfaces_well#c_mye0mj

                  1. 2

                    All advice has a context, and the context for mine is that you’re dealing with mostly- or completely-opaque structs for the behavior (methods) they implement, and not for the data (fields) they contain. If you’re doing mostly the latter, then interfaces don’t help you; GetFoo() Foo is an antipattern most of the time. In those cases, you can take the whole original struct, if you use a lot of the fields; or the specific fields you need, if you only take one or two; or, rarely, re-define your own subset of the struct and copy fields over before making calls.
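
                    E.g., a made-up illustration of the data case:

                    // Data boundary: take the one or two fields you need...
                    func label(name string, width int) string {
                        return fmt.Sprintf("%-*s", width, name)
                    }

                    // ...rather than a bag of getters, which is usually the antipattern:
                    // type labeled interface { GetName() string; GetWidth() int }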

                    But when I’m in the former situation, doing as I’ve illustrated above has almost always improved my code, making it less fragile, easier to understand, and much easier to test: really the opposite of a dumpster fire. And you certainly don’t want a billion ad-hoc interfaces; generally you define consumer interfaces (contracts) at the major behavior boundaries of your package, where package unit tests naturally make sense.

                    1. 1

                      I do suspect we are thinking of different things, but it is actually hard to tease it apart in a short Internet conversation. :-) I am indeed dealing with completely-opaque structs, and those structs generally represent a collection of methods that provide access to some external service (s3 being a decent example, but it could be PostgreSQL or Elasticsearch), where their underlying state is probably something like a connection pool. Defining consumer interfaces was completely unmaintainable because their number quickly becomes overwhelming. Some of these types get a lot of use in a lot of code, and tend to be built at program initialization and live for the life of the program. At a certain point, using consumer interfaces just felt like busy work. “Oh I need to use this other method with Elasticsearch but the interface defined a few layers up doesn’t include it, so let’s just add it.” What ends up happening is that the consumer interfaces are just an ad hoc collection of methods that wind up becoming a reflection of specific implementation strategies.

                1. 2

                  I’ve responded to this sort of advice in the past: https://lobste.rs/s/9ots3k/advanced_testing_techniques_go#c_1ueuxw

                  My TL;DR is, “this advice isn’t nearly nuanced enough.” See below. Thanks for the correction @enocom!

                  1. 2

                    I think you’ve identified another valid use of interfaces that is distinct from that of the small interface.

                    My post isn’t as well written as it should be, but I was trying to identify that usage you’ve written about with the phrase, “An interface ensures implementations honor a contract.”

                    The Store is a great example because there are many reasons to want an in-memory store, a postgres store, etc. And one must use a single interface to ensure all the various implementations honor the contract.
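
                    A sketch of that shape (signatures invented), with a compile-time check that an implementation honors the contract:

                    type Record struct{ ID string }

                    // One contract, many implementations.
                    type Store interface {
                        Get(id string) (Record, error)
                        Put(r Record) error
                    }

                    // In-memory implementation, handy for tests.
                    type MemStore struct{ m map[string]Record }

                    func (s *MemStore) Get(id string) (Record, error) { return s.m[id], nil }
                    func (s *MemStore) Put(r Record) error            { s.m[r.ID] = r; return nil }

                    var _ Store = (*MemStore)(nil) // compile-time contract check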

                    1. 2

                      Oh great! I did read that part of your post but didn’t quite line it up with my use case. :-) Re-reading it now, I see that I completely skipped over the second paragraph of that section. My bad.

                      1. 3

                        I can’t tell though if the store use case is a special exception to the small interface rule, or if it’s simply a class of uses. And that of course begs the question, when does the small interface rule actually apply?

                        I wonder if you’ve encountered this problem with things other than the store.

                        1. 3

                          Thus far, I think the only time I’ve ever used a large interface is for store-like things. I’m not quite sure how to categorize it. One interesting bit is that I almost never have occasion to provide multiple implementations beyond the “real” one that hits a database somewhere and the in-memory one. This seems distinct from typical use of interfaces, which encourages a more open world where it’s common for many types to implement the same interface. This leads me to believe that in other programming languages, I might actually express this pattern as a sum type. Not sure.

                  1. 13

                    aw man I thought this was for the protocol

                    1. 2

                      ^^;

                      1. 2

                        likewise