1. 10
  1.  

  2. 4

    I’ve seen this pattern crop up in a few places, including (IIRC) OAuth2.

    Some years ago I drew some ire from a few teams by writing a stub OAuth server that randomly returned one of: a single item, an empty array, null, an array with one item, or an array with many items.

    Those were all valid ways of returning certain items. So my stub did all of them, to catch out nonconformant clients.
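
    Roughly the idea, sketched in Go (not the actual stub; the endpoint, the "items" field, and the payloads here are invented for illustration):

    package main

    import (
        "log"
        "math/rand"
        "net/http"
    )

    // Every body below is a valid way of returning "some items"; serving one at
    // random flushes out clients that only handle the plain-array case.
    var variants = []string{
        `{"items": {"id": 1}}`,              // a single item
        `{"items": []}`,                     // an empty array
        `{"items": null}`,                   // null
        `{"items": [{"id": 1}]}`,            // an array with one item
        `{"items": [{"id": 1}, {"id": 2}]}`, // an array with many items
    }

    func main() {
        http.HandleFunc("/stub", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            w.Write([]byte(variants[rand.Intn(len(variants))]))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }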

    It surprised me that so many devs considered “writing a client for $PROTOCOL” to be “writing a client for the exact subset of $PROTOCOL our current vendor happens to support”.

    1. 6

      I’ve used Go for a few things and enjoyed a lot about it, but my god is it hilariously bad at parsing JSON. I’m convinced that 50% of the reason Google created protobufs was to avoid it.

      Personally I would just use gjson in this case, because it’s helpful in a lot of additional situations.
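
      For example, a small sketch of what that looks like with gjson (github.com/tidwall/gjson); the documents and the "tags" field are invented for illustration:

      package main

      import (
          "fmt"

          "github.com/tidwall/gjson"
      )

      // tagNames copes with a field that may hold either a single tag object or an array of them.
      func tagNames(doc string) []string {
          v := gjson.Get(doc, "tags")
          if v.IsArray() {
              var names []string
              for _, item := range v.Array() {
                  names = append(names, item.Get("name").String())
              }
              return names
          }
          // Not an array: treat it as a single tag object.
          return []string{v.Get("name").String()}
      }

      func main() {
          fmt.Println(tagNames(`{"tags": {"name": "solo"}}`))
          fmt.Println(tagNames(`{"tags": [{"name": "a"}, {"name": "b"}]}`))
      }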

      1. 6

        Well, part of this is that JSON, being schemaless, is hilariously bad in its own way. As it happens, protobuf fixes that too.

        1. 1

          At the cost of the openness of HTTP APIs.

          Personally I’d still rather deal with brain-dead API contracts like this (a single item or an array, indeed!) than lose the openness that HTTP APIs afford.

        2. 2

          You’re not the only one; half the Go questions on Stack Overflow seem to be about parsing JSON. Okay, maybe not half, but certainly a lot.

          1. 1

            If they actually do generics for 2.0, that’s going to go a long way toward fixing this pain point. Looking forward to it.

        3. 2

          I keep one eye on Go in case it starts to look like a fun language to work with, but this seems quite awkward and not too easy to read, which is disappointing seeing as readability was one of Google’s design goals.

          Recently I’ve found that languages with pattern matching make this kind of dynamic parsing as pleasant as I’ve ever experienced. Example:

          defmodule TagParser do
            # Assumes a %Tag{} struct is defined elsewhere.
            def parse(%{"id" => id, "name" => name, "created_at" => created_at}),
              do: %Tag{id: id, name: name, created_at: created_at}

            # Walk a decoded list, parsing each element; the empty list ends the recursion.
            def parse([]), do: []
            def parse([x | xs]), do: [parse(x) | parse(xs)]

            # A binary is the raw JSON string: decode it, then parse the result.
            def parse(s) when is_binary(s), do: Poison.decode!(s) |> parse()
          end
          

          If you aren’t familiar with pattern matching (and pattern matching in function arguments), this does the following:

          1. The input string won’t match the map pattern in the first parse clause, or the list patterns in the next two, so it’s picked up by the last clause, which uses the Poison JSON library to decode it and then passes the result back to parse.
          2. This time we might have a single decoded JSON object (an Elixir map), so we match the fields we’re interested in and create the Tag struct we wanted - all done.
          3. If not, we have a list of tags, so we recurse over it: the first clause handles each individual tag object, the cons clause walks the rest of the list, and the empty-list clause ends the recursion. If you’re not a fan of recursion in your code, you can call Enum.map instead.

          C# is also evolving to have useful pattern matching, so I’m hoping it’ll become common in the well-known programming languages soon.

          1. 1

            Alternatively, you could try to parse one struct form and then fall back on the other. But that does end up going through the whole string twice in the worst case.
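
            Something like this rough sketch, assuming a Tag struct along the lines of the one being discussed (field names invented):

            package main

            import (
                "encoding/json"
                "fmt"
            )

            type Tag struct {
                ID   int    `json:"id"`
                Name string `json:"name"`
            }

            // parseTags tries the array form first and falls back to the
            // single-object form, scanning the input twice in the worst case.
            func parseTags(data []byte) ([]Tag, error) {
                var many []Tag
                if err := json.Unmarshal(data, &many); err == nil {
                    return many, nil
                }
                var one Tag
                if err := json.Unmarshal(data, &one); err != nil {
                    return nil, err
                }
                return []Tag{one}, nil
            }

            func main() {
                fmt.Println(parseTags([]byte(`{"id": 1, "name": "solo"}`)))
                fmt.Println(parseTags([]byte(`[{"id": 1, "name": "a"}]`)))
            }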

            1. 1

              Would it be incorrect if the encoding/json unmarshaller handled this case?

              i.e. when attempting to unmarshal into an array or slice and the JSON isn’t a list, fall back to treating it as a single-element array/slice?

              A quick look suggests this is the error path:

              https://github.com/golang/go/blob/master/src/encoding/json/decode.go#L523

              1. 1

                This is a pretty solid way of doing it, though I wonder whether it’s documented that Go will strip whitespace here.

                For anyone who wants to play around with it, I’ve thrown up the code on https://play.golang.org/p/ZLHGLkhNH-4

                It’s a little less bulletproof, but for simple fields you can unmarshal to an interface{} and check whether you got a string or a slice, though that’s not super fun either and doesn’t work for complex values (because everything ends up being a map[string]interface{}). Alternatively, there’s json.RawMessage, which lets you say “I’ll check what this is later”, but that falls into similar issues where you have to actually check it later. I think RawMessage is good when you need to unmarshal to different types depending on a specific field, but I guess it doesn’t really help here.
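
                For what it’s worth, a rough sketch of the interface{} type-switch idea for a simple field that might be a string or a list of strings; the field name and documents are invented, and note the list actually arrives as []interface{}:

                package main

                import (
                    "encoding/json"
                    "fmt"
                )

                // names handles a "name" field that may be a single string or a list of strings.
                func names(data []byte) ([]string, error) {
                    var doc struct {
                        Name interface{} `json:"name"`
                    }
                    if err := json.Unmarshal(data, &doc); err != nil {
                        return nil, err
                    }
                    switch v := doc.Name.(type) {
                    case string:
                        return []string{v}, nil
                    case []interface{}:
                        // JSON arrays decode to []interface{}, so each element still needs a check.
                        out := make([]string, 0, len(v))
                        for _, item := range v {
                            s, ok := item.(string)
                            if !ok {
                                return nil, fmt.Errorf("unexpected element %T", item)
                            }
                            out = append(out, s)
                        }
                        return out, nil
                    default:
                        return nil, fmt.Errorf("unexpected type %T", v)
                    }
                }

                func main() {
                    fmt.Println(names([]byte(`{"name": "solo"}`)))
                    fmt.Println(names([]byte(`{"name": ["a", "b"]}`)))
                }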

                That was a bit more stream of consciousness than I was hoping for, but I guess I just mean to say that this post is pretty useful and definitely better than the alternatives for nested data.

                1. 2

                  More succinctly: the unmarshalmany thing isn’t really necessary; you can just point to the tags slice directly. For unmarshalone you can just directly allocate a slice of one element and point to element 0. https://play.golang.org/p/6s6FQIMGf6j

                  It’s even easier if you just define some type whose underlying type is a slice of tags, like this: https://play.golang.org/p/qtEGuc-DGmw . I generally do it that way because there’s less nesting for the consumer.
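
                  Roughly, the slice-type approach looks like this; a sketch rather than the actual playground code, with invented Tag fields:

                  package main

                  import (
                      "bytes"
                      "encoding/json"
                      "fmt"
                  )

                  type Tag struct {
                      ID   int    `json:"id"`
                      Name string `json:"name"`
                  }

                  // Tags is a named slice type, so the one-or-many handling lives on the type itself.
                  type Tags []Tag

                  func (t *Tags) UnmarshalJSON(data []byte) error {
                      data = bytes.TrimSpace(data)
                      if len(data) > 0 && data[0] == '[' {
                          // Already an array: decode into a plain []Tag so this
                          // method isn't invoked again recursively.
                          var many []Tag
                          if err := json.Unmarshal(data, &many); err != nil {
                              return err
                          }
                          *t = many
                          return nil
                      }
                      // Single object: allocate a one-element slice and decode into element 0.
                      *t = make(Tags, 1)
                      return json.Unmarshal(data, &(*t)[0])
                  }

                  func main() {
                      var one, many Tags
                      _ = json.Unmarshal([]byte(`{"id": 1, "name": "solo"}`), &one)
                      _ = json.Unmarshal([]byte(`[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]`), &many)
                      fmt.Println(one, many)
                  }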

                  1. 1

                    Yep agreed, that’s how I usually do it too. My attempt: https://play.golang.org/p/OSzWge2z5u1

                    1. 2

                      Huh, I don’t think it had occurred to me that you could convert the pointer inline (line 48) to avoid the recursion; I like that.
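
                      (For anyone reading along, the inline conversion presumably looks something like this; a guess at the trick rather than a copy of the playground code, reusing the invented Tag/Tags types from the sketch a few comments up.)

                      func (t *Tags) UnmarshalJSON(data []byte) error {
                          data = bytes.TrimSpace(data)
                          if len(data) > 0 && data[0] == '[' {
                              // *Tags converted to the plain *[]Tag: no custom
                              // UnmarshalJSON, so no recursion and no temporary slice.
                              return json.Unmarshal(data, (*[]Tag)(t))
                          }
                          *t = make(Tags, 1)
                          return json.Unmarshal(data, &(*t)[0])
                      }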