1. 51

    I’m prepping for a work meeting in an hour I can’t miss, but I’ve taken the afternoon off to rush finishing the fix for this. @355e3b has free time before then and has jumped in to help. Sorry to be vague about the in-progress security issue, but I trust y’all understand how that goes. I’ll leave a response to this comment in a few hours with full info.

    In the meantime I’ve made the rate limiting much stricter. I’m sorry for the inconvenience and will revert as part of the fix.

    1. 40

      So, the thing I was talking around is that the vulnerability was potentially much worse, and we were trying to be thorough about it. Hunter (@355e3b) and I dropped as much as we could to finish that work today.

      To start, there was some kibitzing in chat, so to be explicit: we do think this was potentially a viable attack. An attacker could register a VPS at Digital Ocean to minimize network jitter, the dominant factor in this attack. (Other things that might introduce jitter can multiply the cost of the attack but don’t fix it.) We already see regular probing from other DO VPSs.

      A few weeks before soatok’s report, some of that probing prompted me to implement rate limiting to deal with unrelated abuse, which also happened to prevent effective abuse of this potential vulnerability. It was easy to extend to password resets. An attacker who can only test four tokens per minute isn’t going to get very far.

      Since this was reported, I was concerned that this kind of network timing attack could be used against other tokens. The most important of these is called session_token and is used to authenticate logged-in users on every request. Because this covers every endpoint and not just the password reset flow, it would not be subject to the 4 req/min rate limit and could lead to account takeover. We assumed this was a viable attack and Hunter wrote an in-depth mitigation for it.

      This afternoon I worked on finishing the mitigation and testing the attack, which required a bigger block of time than I could previously devote to it. I was unable to get the timing attack to work against session_token (usually pretty straightforward on localhost!) and pulled my hair out for a bit. Eventually I recognized that the attacker can’t execute the attack against session_token because Rails stores the session as an encrypted, serialized value in the cookie. The session cookie isn’t the value of the session_token, it’s an encrypted blob that can’t be edited by an attacker without being invalidated. It’s a little embarrassing that I conflated the two, but I don’t much mind making an error in the direction of being over-cautious.

      In short, I was uncommunicative because I’d been proceeding the last couple weeks on a much more paranoid footing than was needed. My thanks and apologies to Hunter for his wasted effort; I’m sorry I didn’t have the time to write a proof-of-concept to test this first.

      The other thing I’ve been doing the last few weeks has to do with the “sister sites” that have adopted the Lobsters codebase. Lobsters is a site that’s open source for transparency, not to help start and run other sites. It’s really rewarding to see the code used this way, but nobody has the spare attention to run that potential project. Mostly this means we don’t take feature requests, but in this case it also means we don’t have a way to contact the site admins. There is no mailing list or forum, even for security issues. I’ve been in the process of finding admin emails and collecting translators (many sister sites aren’t in English) so we could notify them before the vulnerability was published. If you recently got a DM from me asking you to play translator, you can ignore it, I’m proceeding from here in English for urgency’s sake.

      Now that I’m confident there isn’t a worse vulnerability, I’ve reverted the temporary stricter rate limiting. I’m sorry to anyone I inconvenienced; I know that if you read the site by opening several stories as tabs, you probably hit this.

      I think all the exciting bits are over, but am happy to be corrected if any volunteer spots something I missed, or any other issue. My email is on my profile.

      My thanks to soatok for alerting us to the potential vulnerability.

      1. 8

        Thanks as always for the transparency and also the significant effort you continue to put into the site.

        1. 5

          To add to this, the encryption is AES-256-GCM and the underlying implementation is OpenSSL so there’s no timing attack on the encryption either.

          1. 5

            As someone who runs a fetch loop (intentionally single-threaded, though multithreaded when prefetching upstream stories) before skimming everything in a text editor, a separate thanks for the transparent and friendly throttling implementation. To say the least, not all emergency-throttling implementations are so clear about the exact changes one needs to make in the fetcher. Thanks.

            1. 2

              I know if you read the site by opening several stories as tabs you probably hit this.

              I did indeed hit this! I thought it was strange, and just figured it must have been my fault and I must have had a VPN going that was using a popular IP. Didn’t think about it much but glad you explained it.

              1. 1

                I was unable to get the timing attack to work against session_token (usually pretty straightforward on localhost!) and pulled my hair out for a bit

                Did you implement the timing attack described in the article and have success with it?

                1. 3

                  No. I took it as plausible but easily addressed with rate limiting. I only tried to repro the more serious attack I feared against session_token.

            1. 1

              Because those methods are for astronaut architects, and the only two methods you should ever use in reality are GET and POST. If you’re using Rails and it comes with PUT and DELETE specified, okay, I guess don’t go out of your way to redefine them, but the other verbs are pointless. There are many ways of communicating something to an HTTP server:

              • the URL path
              • query parameters
              • headers
              • the request body (for POST)

              You don’t need yet another, fifth way of communicating random bits of information. Use GET if it’s idempotent. Use POST if it is not idempotent. That’s a well-defined semantic. DELETE in most cases should mean “set the is-hidden flag to true” anyway, which is a POST. PUT means even less than DELETE.

              1. 2

                A POST can often be idempotent. A PUT should almost never be idempotent. This is a useful distinction in many cases and especially so when there is middleware involved. There are an unknown number of firewalls and other services between your browser and the server. They make decisions based on the semantic meanings of these methods, and pretending they aren’t there will eventually lead to odd bugs that are difficult to track down.

                1. 2

                  An operation being potentially mutative, rather than idempotent, is perhaps a more appropriate consideration here. Mutative operations can still be idempotent of course, something like

                  func (a *Account) SetBalance(target int) {
                    a.currentBalance = target
                  }

                  GET requests should not be mutative under any circumstances, while POSTs may be (although they can also be implemented idempotently).
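                  To make the contrast concrete, here’s that toy Account fleshed out in Go (the Deposit counterpart is hypothetical, added for illustration):

```go
package main

import "fmt"

type Account struct{ currentBalance int }

// SetBalance is mutative but idempotent: applying it twice leaves the
// account in the same state as applying it once.
func (a *Account) SetBalance(target int) { a.currentBalance = target }

// Deposit is mutative and not idempotent: replaying it changes state again.
func (a *Account) Deposit(amount int) { a.currentBalance += amount }

func main() {
	a := &Account{}
	a.SetBalance(100)
	a.SetBalance(100)
	fmt.Println(a.currentBalance) // 100: a retry is harmless

	a.Deposit(100)
	a.Deposit(100)
	fmt.Println(a.currentBalance) // 300: a retry double-charges
}
```

                  That retry behavior is exactly what intermediaries reason about when they see the method.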

                  1. 1

                    What middleware is going to replay a POST request? POST response, maybe, but a request?

                    1. 1

                      It’s the reverse:

                      A POST can often be idempotent.


                      A PUT should almost never be idempotent.


                      1. 1

                        You are correct. I transposed them in my head apparently. The larger point still stands though.

                    2. 1

                      Because those methods are for astronaut architects

                      Most major APIs (e.g. GitHub’s) use PUT and DELETE. I’m not against criticisms of the whole paradigm, but in practice these distinctions are completely standard in the industry and not the special domain of “astronaut architects”.

                    1. 2

                      Do people use fzf at the CLI?

                      I use

                      alias gco='git checkout $(git branch | grep -v $(git rev-parse --abbrev-ref HEAD) | fzf)'
                      1. 2

                        The same applies to using bool in something like a gRPC request (or any typed request); much of the time you really want an enum: “unknown”, “set_true”, “set_false”.
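                        Roughly what that looks like outside of protobuf, sketched in Go (all names here are made up for illustration):

```go
package main

import "fmt"

// BoolUpdate distinguishes "not provided" from an explicit true/false,
// which a bare bool cannot: its zero value false is ambiguous.
type BoolUpdate int

const (
	Unknown BoolUpdate = iota // field absent: leave the current value alone
	SetTrue
	SetFalse
)

// apply returns the new value of a flag given the requested update.
func apply(current bool, u BoolUpdate) bool {
	switch u {
	case SetTrue:
		return true
	case SetFalse:
		return false
	default:
		return current // Unknown: no change requested
	}
}

func main() {
	fmt.Println(apply(true, Unknown))  // true: untouched
	fmt.Println(apply(true, SetFalse)) // false: explicit update
}
```

                        With a plain bool, an omitted field and an explicit false arrive as the same wire value.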

                        1. 17

                          Working in a large Go codebase with thousands of tests written using testify, I have absolutely no desire to switch everything to t.Error() to avoid whatever the pitfalls of testify that the author seems horrified by. The tests run extremely fast with single-line asserts without my having to implement every single eq type. Consider:

                          resp, err := SomeGRPCCall()
                          require.NoError(t, err, "unexpected error making the call") // fail the test immediately with a useful message if there is an error
                          assert.Equal(t, expectedThis, resp.GetThis(), "expected this to match")
                          assert.Equal(t, expectedThat, resp.GetThat(), "expected that to match")

                          With perhaps half a dozen or more fields in some gRPC responses. This could easily double or more in size without any discernible advantage in performance or clarity, and with oodles of boilerplate.

                          Some of the matchers feel a little too rspec-y for me (assert.NotZero() for example) but generally it provides a very nice way to avoid writing a LOT of code. It’s also nice to be able to differentiate between assert and require, so that if you break a test in a way that doesn’t segfault, you can see all the assert failures in a single run.

                          With respect to type safety, I agree that the type conversion approach is suboptimal, but this is plainly a problem with Go’s lack of generics (imo). I would love the type system to allow me to say assert.Equals(t, this, that) and fail to compile if the types did not match; I do not have any interest whatsoever in something like myCustomEqStrings(this, that) and myCustomEqInt64(this, that) and so on ad infinitum in a production environment in which I am actually expected to deliver working software from time to time.
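                          For what it’s worth, Go generics (added in 1.18, after much of this tooling was designed) can express roughly that signature; a minimal sketch, not testify’s actual API:

```go
package main

import "fmt"

// Equal compiles only when both arguments have the same comparable type:
// Equal("a", int64(1)) is a compile-time error, not a runtime type check.
func Equal[T comparable](expected, actual T) bool {
	return expected == actual
}

func main() {
	fmt.Println(Equal(int64(42), int64(42))) // true
	fmt.Println(Equal("this", "that"))       // false
	// Equal("this", int64(1)) // does not compile: mismatched types
}
```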

                          If you are happy with testify but looking for a decent way to mock/fake calls made to your dependencies, we have found counterfeiter to be worth the while. You can declare an interface for your dependencies (good practice anyway) and then generate fakes against it to simulate responses of various types. Very useful if you work with microservices.

                          1. 14

                            The reason I started using testify is because the error messages were better than the built in messages, and I didn’t like repeating the if statement condition in every error message. I’m not sure if this is still the case though.

                            One thing I don’t like about the testify lib is the lack of consistency in parameter order (actual and expected).

                            1. 3

                              One thing I don’t like about the testify lib is the lack of consistency in parameter order (actual and expected).

                              assert.Len(t, collection, length, message)

                              bothers me a lot

                              1. 1

                                Isn’t that one correct? Collection is the “actual” and length is the “expected”.

                                1. 3

                                  I don’t know if “correct” is really the appropriate word to use here, but no, it is inconsistent with most other methods. For example: https://pkg.go.dev/github.com/stretchr/testify@v1.7.0/require#Equal

                                  1. 2

                                    Oh haha my bad. I misread the parent comment as claiming “actual, expected” is the prominent order but it’s indeed the reverse.

                              2. 2

                                I like using libs like testify for the same reason: when a test fails, the output is helpful. Multiline strings are diffed. JSON inconsistencies are highlighted. Nested values with subtle differences are compared. It’s those features that make a huge difference.

                                I think testify has evolved over time in ways it shouldn’t, like inconsistent arguments and functions that bloat the API, but it’s still great imo.

                                Out of my own desire to explore ideas I’ve been building my own assertion library, inspired by testify, minimalistic, but useful for the apps I build: https://4d63.com/test. I don’t expect to build something better, but to understand the tradeoffs, the decision-making process, and how this stuff works.

                              1. 3

                                The only time you need to consider a client-server setup is:

                                Where you have multiple physical machines accessing the same database server over a network. In this setup you have a shared database between multiple clients.

                                So…. an enormous percentage of mature production applications?

                                1. 8

                                  Which is a tiny percentage of all database deployments.

                                  1. 1

                                    I think saying “this tool, which is unfit for most production usages, is the only tool you need in most cases” is quite disingenuous at best.

                                    1. 2

                                      The average person is using 30-40 SQLite databases somewhere between waking up and leaving their house.

                                      It’s good to know when not to use SQLite, but denying that SQLite runs this planet is rejecting reality.

                                1. 24

                                  I am confused about why the REST crowd is all over gRPC and the like. I thought the reason REST became a thing was that they didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol, and all of a sudden everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.

                                  Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.

                                  1. 14

                                    Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with these semantics, actual binary RPC is mostly an improvement.

                                    (Also, everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language.)

                                    1. 14

                                      I imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.

                                      But a lot of the time the bottleneck is around delivering new functionality. RPC-style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail halfway through), whereas more hypermedia-style interfaces require a bit more planning. Or at least thinking in a way that I’ve not seen often.

                                      1. 10

                                        There has never been much, if anything at all, hypermedia-specific about HTTP. It’s just a simple text-based stateless protocol on top of TCP. In this day and age, that alone buys anyone more than any binary protocol. I can’t see why anyone would want to use a binary protocol over a human-readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisation, which I don’t think are common even among tech giants.

                                        Virtually every computing device has a TCP/IP stack these days; $2 microcontrollers have it. Text protocols were a luxury in the days when each kilobyte came at a high cost. We are 20-30 years past that time. Today, even in the IoT world, HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.

                                        I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because it was JSON over HTTP, which are easy-to-grasp and reliable technologies, not because of the alleged architectural advantages, as you well pointed out.

                                        SOAP does provide “just making a function call”; I think the reason it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling, and implementations in new programming languages were demanding. I think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I’m picturing an old embarrassing beast no one wants to touch, along the lines of GWT, App Engine, etc.

                                        1. 9

                                          I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations.

                                          Those are not rare situations, believe me. Binary protocols can be much more efficient, in bandwidth and code complexity. In version 2 of the product I work on we switched from a REST-based protocol to a binary one and greatly increased performance.

                                          As for bandwidth, I still remember a major customer doing their own WireShark analysis of our protocol and asking us to shave off some data from the connection setup phase, because they really, really needed the lowest possible bandwidth.

                                          1. 2

                                            hypermedia specific about HTTP

                                            Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.

                                            But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.

                                            just a simple text based stateless protocol

                                            The protocol might have comparatively few elements, but that’s just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings, so in some sense it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.

                                            herd is taking the wrong direction again

                                            I really don’t think that’s a helpful framing. Folks aren’t paid to ship something that’s elegant, they’re paid to ship things that work, so they’ll not want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.

                                            By and large gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML based mechanisms.

                                        2. 3

                                          Cargo Cult/Flavour of the Week/Stockholm Syndrome.

                                          A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.

                                          And then there’s the ‘tech’ worker generic “well if it works for google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.

                                          1. 2

                                            Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.

                                            However, I personally dislike REST. I don’t think it’s a good interface, and I prefer functions and actions over forcing them (even if sometimes very well) into modifying a model or resource. But it also really depends on the use case. There certainly is standard CRUD stuff where it’s the perfect design, and that’s the most frequent use case!

                                            However I was really unhappy when SOAP essentially killed RPC style Interfaces because it brought problems that are not inherent in RPC interfaces.

                                            I really liked JSON-RPC as a minimal approach. Sadly this didn’t really pick up (only much later, inside Bitcoin, etc.). This led to lots of ecosystems and designs being built around REST.

                                            Something that has also been very noticeable with REST being the de-facto standard way of doing APIs is that oftentimes it’s not really followed. Many, I would say most, REST APIs have very RPC-style parts. There’s also a lot of mixing up of HTTP+JSON with REST, and of RPC with protobufs (or at least some binary format). Sometimes those “mixed”-pattern HTTP interfaces have very good reasons to be as they are. Sometimes “late” feature additions simply don’t fit into the well-designed REST API and one would have to break a lot of rules anyway, leading to the question of whether the last bits are worth preserving for their cost. But that’s a very specific situation that typically only arises years into a project, often triggered by the business side of things.

                                            I was happy about gRPC because it made people give RPC another shot. At the same time I am pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while probably well designed in one way or another, they come at a huge price, essentially turning them into a big hack, which is also a reason why there are so many grpc-alikes now. None, as far as I know, has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.

                                            Anyways, while I don’t think RPC (and certainly not gRPC) is the answer to everything, I also don’t think RESTful services are, nor GraphQL.

                                            I really would have liked to see what JSON-RPC would have turned into if it had gotten more traction, because I can imagine it serving many applications that now use REST. But this is more a curiosity about an alternate reality.

                                            So I think that, like all Google projects (Go, Tensorflow, Kubernetes, early Angular, Flutter, …), there is a huge cargo-cult mentality around gRPC. I do however think that there’s quite a lot of people who would have loved to do it themselves, if that could guarantee that it would not be a single person or company using it.

                                            I also think the cargo cult is partly the reason for contenders not picking up. In cases where I use RPC over REST I certainly default to gRPC, simply because there’s an ecosystem. I think a competitor would have a chance, though, if it managed a way simpler implementation, which most do.

                                            1. 1

                                              I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema to define schemas (like the Protobuf or GraphQL IDL), used in more places.

                                              1. 2

                                                Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just JSON is that you can encode type information in the request/response. Granted, you’re working with an extremely limited type system (derived from golang’s also-limited one), but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth, rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.

                                                1. 1

                                                  Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.

                                          1. 1

                                             Great. To get something similar (manually grokking a ginormous JSON blob) I was using irb; this is awesome. Thanks!

                                            1. 9

                                              Having done volunteer moderation elsewhere in the past, I can say it is truly thankless work. Be kind to your mods: their best-case scenario is “nobody knows they exist”, and more commonly various different people are mad at them for irreconcilable reasons.

                                              I have no idea if he’s interested but I’d nominate @calpaterson as he’s one of the few folks I know personally who post here, and he’s principled enough to do it properly.

                                              1. 8

                                                All but one of these function to make reviews stricter - i.e. reject more stuff - but none of them are really ways to make reviews more effective. Effective does not always mean “stricter”.

                                                My personal experience is that there is a strong bias among programmers toward feeling good that they rejected stuff and “were a tough reviewer!”, never considering the false-positive side, where you end up asking people to make a bunch of time-consuming changes based on perceived strictness that are not really worthwhile. Talking about preventing “sub-optimally factored code” sounds like micromanagement at the review stage and does start to ring alarm bells for me!

                                                This one in particular really rubs me the wrong way:

                                                To review a change, it is important to agree on the problem to solve. Ask the author to supply a problem statement if it is not presented in the change. Only once you understand the problem statement you can evaluate the quality of a solution. The solution could be over- or under-engineered, implemented in the wrong place, or you could even disagree with the problem statement altogether.

                                                If you are ever in the situation where someone has spent time implementing something and their entire approach gets rejected at the final review stage you should absolutely not be patting yourself on the back for that. You need to immediately introspect about why your team process has spent time producing code that was fundamentally wrong from the beginning.

                                                1. 6

                                                  If you are ever in the situation where someone has spent time implementing something and their entire approach gets rejected at the final review stage you should absolutely not be patting yourself on the back for that.

                                                  I think there’s some nuance here in terms of where the change came from. In open source, it’s fairly common for someone to show up and drop a PR on your project with little or no explanation. That doesn’t make it bad, and it doesn’t make the person who submitted it bad. However, understanding the problem they are trying to solve is still important and may just need to happen in the code review. On the other hand, if we’re talking about a corporate setting, or a PR submitted by a regular contributor to a particular open source project, then it should already have been discussed (and if it wasn’t, there are probably larger communication issues).

                                                  1. 2

                                                    From personal experience, I’ve helped someone maintain a friendly fork of an open source project because they wanted a change that I definitely did not want upstream, but which solved a real problem for them (working around a bug in something else that was specific to their deployment). I think ‘this is a great implementation of what it does, but we don’t want it’ is a fine outcome for open source code review.

                                                  2. 4

                                                    I’ve definitely met some people who went too far (in my opinion) when it comes to requesting changes in code review. I prefer to conduct code review in person or in a freeform chat, rather than gerrit / PR conversations / etc. because I think there’s a bit of nuance to it. In my opinion things should go something like the following:

                                                    • Major bugs / functional issues: no real question here, send it back to be fixed.

                                                    • Unhandled edge case: fix if possible, but an assert guarding against the unhandled condition, and a TODO comment explaining why you might actually need it at some point, are an acceptable alternative if there’s more important work to be done.

                                                    • Violation of something explicitly set out in the company style guide: fix unless emergency.

                                                    • Refactoring opportunity / optimization opportunity / “I wouldn’t do it that way”: discuss, don’t insist. Sometimes the reviewee will agree that you have the better idea and they’ll go and make the change. Sometimes they’ll prove you wrong because they learned something during implementation that you overlooked from your lofty position. Sometimes they’ll agree that you made a good suggestion, but you’ll agree together that now isn’t the time to squeeze in the change (but the reviewee learned something, and you can come back to it later). And sometimes you’ll just agree to disagree on a matter of style. I think a lot of senior devs push too hard for having things done exactly their way, and end up wasting time and goodwill on things that are ultimately immaterial.

                                                    1. 3

                                                      I have almost word-for-word the exact same outlook. One trick I use (which it sounds like you already know) is to just phrase pretty much every piece of feedback as a question rather than a demand. Most people are good-natured and will either make the change you’re implying or will explain why not (often convincingly). One of the very worst things you can do is get up on your high horse about something in the code before hearing about it from the author.

                                                      1. 1

                                                        Yeah, I tend towards that as well. I don’t hold fast to it, but I agree. Even in the most straightforward cases, “hey, won’t the code blow up here if this variable is less than zero?” sets a much better tone than “this is broken.” Not to mention, it makes me look better in case I made a mistake and there isn’t actually a bug.

                                                    2. 3

                                                      On the contrary, I have a really hard time rejecting stuff. I suspect a lot of people in the industry are probably similar to me (shy, imposter syndrome, etc). I’m sure there are just as many on the other side of the spectrum (socially tone-deaf, cocksure, etc).

                                                      It has to be pretty “bad” for me to reject it or ask for a large refactoring of their approach, for better or worse. Most of the time, if I don’t see actual errors/bugs in the approach, I try my best not to shit on the whole approach just because it’s not the way I like it done.

                                                      1. 1

                                                        On the contrary, I have a really hard time rejecting stuff. I suspect a lot of people in the industry are probably similar to me (shy, imposter syndrome, etc).

                                                        I have reviewed code from, and submitted my own code to, hundreds of other developers, and what I observed is that your suspicion is off. By and large, the majority, think 90%+, get high on power and are eager to reject patches for everything and nothing. Whitespace, bracket style, indentation, naming conventions, linting, and other secondary details always end up taking up something like 75% of the work involved in code reviews. Grandparent is spot on.

                                                        To make the assertion that everyone is in a position to review everyone’s code is a very silly premise, and it can only result in nonsense like this. Then we get sprinkled with posts like this one on “how to do it right,” as if the author has the magical formula and everyone else is doing it wrong.

                                                        On the occasions when I had ownership of the code, I ended up asking fellow developers not to submit code for review unless they felt uncertain about it or needed additional input for some specific reason.

                                                        “At least two pairs of eyes,” “a last line of safety before the main trunk,” and other silly mantras are nonsense meant to compensate for a lack of competence or code quality. You can throw 20 pairs of eyes at it; if those 20 are bad developers, it’s worse than just one pair. There is no shortcut to competence.

                                                        Furthermore, the idea that the whole codebase can be reviewed bit by bit doesn’t make sense. You can have 20 patches that all look perfectly fine by themselves but are a disaster when you put them together.

                                                        An experiment for anyone to try: gather the rejection ratios of your coworkers and compare them with how you would rate each of them in terms of skill/competence. These are usually inversely proportional.

                                                        1. 1

                                                          Whitespace, bracket style, indentation, naming conventions, linting, and other secondary details always end up taking up something like 75% of the work involved in code reviews.

                                                          Other than naming conventions, humans shouldn’t be wasting their time reviewing any of those things anyway; automated tools should be rejecting those things before a code review request even goes out.

                                                          On teams I’ve been on where we’ve had automated checks for machine-detectable violations of the project’s style conventions, code reviews have tended to stay much more focused on the stuff that matters.

                                                      2. 1

                                                        If you are ever in the situation where someone has spent time implementing something and their entire approach gets rejected at the final review stage you should absolutely not be patting yourself on the back for that. You need to immediately introspect about why your team process has spent time producing code that was fundamentally wrong from the beginning.

                                                        I’d agree strongly with this. Having implementation approach be part of the PR feedback cycle makes development inordinately time-consuming and adversarial; technical approach needs to be either (ideally) ingrained into the day-to-day communication on your team through some mechanism like Slack, daily sync meetings, etc., or else formally deliberated on before the time is spent to build a complete solution. It’s better to take the time to have a half-hour meeting than it is to let a two-day change be implemented in an unacceptable fashion.

                                                        1. 2

                                                          It’s better to take the time to have a half-hour meeting than it is to let a two-day change be implemented in an unacceptable fashion.

                                                          Doesn’t that depend on how often the problem happens? It is, I think, not better to have a few dozen half-hour meetings over the course of a year when you would only have prevented one or two rejected PRs. Those meetings don’t come at zero cost to the team.

                                                          1. 1

                                                            It does depend on how often the problem happens, but I think the probabilities you’re implicitly assuming aren’t quite right (IME, of course). Disagreements over approach at review stage are quite common if the person was not involved at the design stage, largely because there are just a lot of ways to do something. In the worst cases a senior or lead developer will be too busy to attend design sessions but will sit as gatekeeper on PRs. That tends to lead to lots of rework and a very slow effective pace of improvement.

                                                            To make it explicit with some numbers: getting a design agreed usually only takes a 10-15 minute discussion between 2-3 people, one of whom can do the work and one of whom can review. If you skip that, I would say you have a ~15% chance, and that’s conservative, of drama raised at review stage and rework. Solving for x: if your piece of work takes more than ~5 hours, you should always have the design session. Obviously this is approximate maths; I just hope to give a sense of when the conversation is worthwhile and when it’s not.

                                                      1. 4

                                                        Go’s learning curve is amazing. I was writing real, useful code in Go in just a few hours and had successfully written a real, “large” application in a few days. It’s great.

                                                        It’s a shame that it seems like all the momentum is with Rust now. It almost feels like there’s no point in learning anything other than Rust given the trajectory of things…(I’m sad because I prefer Go to Rust).

                                                        1. 2

                                                          It’s a mistake to conflate online “buzz” with real momentum. I’d argue that Go has less buzz precisely because it has lots of momentum. Rust doesn’t have much momentum right now, it has acceleration.

                                                          1. 1

                                                            Rust doesn’t have much momentum right now, it has acceleration.

                                                            That’s a really elegant way to put it.

                                                            I do feel like every job posting/offer I come across has C/C++/Python/Rust listed, and not Go. I realize that there’s very likely selection bias at play given the kind of jobs I look at.

                                                            EDIT: Okay, so yeah, perception vs reality. I went to Stack Overflow and did a job search. 109 for Golang, 18 for Rust (and I’m sure there’s overlap there).

                                                          2. 2

                                                            What momentum? I must be reading all the wrong sources of things, I don’t see Rust as on an upward trajectory myself (maybe I’m just biased), I don’t even see it on the TIOBE index where Go is at #14. :D

                                                            1. 2

                                                              Rust is at 26 right now. It jumped into the top 20 within the past year and has bounced up and down since.

                                                              The momentum I’m talking about is that Rust has been picked up by Microsoft as an acceptable language, Linus has intimated that he might be willing to allow Rust in the kernel, I’ve seen a lot of “why we switched Product X to Rust”, and so on. Go has been “stable” for over a decade now, so it’s got an enormous head start, but I feel like Rust is going to rapidly outpace it.

                                                              Again, they’re both fine languages, I just feel like I’m gonna need to learn Rust for the sake of my future career.

                                                              1. 3

                                                                Rust will never have the “market adoption” of Go. Not to say it’s not worth learning! Or that it’s not the right tool for many jobs. It’s a great language.

                                                            2. 2

                                                              I think it depends on what you want to write and where you want to write it. A lot of big name companies are doing things in Go these days, it’s hardly fallen out of favor in that sense.

                                                              1. 1

                                                                Go’s learning curve is smooth, but it feels really like … work. Even though I only worked with it for a few months two years ago, that was the lasting impression Go made on me. It feels like a tool to get stuff done. The connotation isn’t negative, but I don’t see myself using it for fun stuff on the side. Since then I’ve wanted to take the time to learn more Go, but for now I’m waiting for Go 2 to drop because my knowledge is dated (I was never exposed to Go modules at the time, for example). It feels like a nice tool to have, but I won’t have fun creating with it.

                                                                On the momentum side, Go still has a lot and Rust has gained a fair share of its own, but they serve really different fields, for different reasons IMHO, and for teams/companies with different needs and approaches. Nothing to be sad about; your hammer doesn’t have to fit every nail.

                                                              1. 11

                                                                Do you know either language?

                                                                Use the one that you know.

                                                                If you love working with Python, then I’d suggest Go. Rust would likely be too much cognitive overhead for little benefit.

                                                                1. 5

                                                                  Interesting. I would have the exact opposite reaction. Go will make you think about raw pointers and that kind of thing, whereas in Rust you can write at a high level, like Python.

                                                                  Totally agree use the one you know unless you have a desire to learn a new one.

                                                                  1. 18

                                                                    If pointers are too much cognitive load then Rust’s lifetimes and ownership juggling is going to be way worse. I’d say that the comparison is more that Python and Go are not particularly functional languages, while Rust obviously is (and that’s the appeal of it to people who like functional languages).

                                                                    If Rust is faster for a given use case that’s a more like-for-like basis for comparison, but then you might want to use Fortran to go even faster depending on the use case. ;-)

                                                                    1. 4

                                                                      Admittedly I’ve invested time in becoming comfortable with Rust, but I actually concur – after gaining some familiarity, Rust feels much higher level than Go.

                                                                      1. 6

                                                                        Rust can definitely operate at a higher level than Go, but it also has a lot more cognitive overhead.

                                                                    2. 5

                                                                      Pointers in Go are pretty trivial to work with, and I say this coming to Go from Ruby. Basically your only concern with pointers is “they can be nil”; otherwise you barely need to differentiate between pointers and structs if you’re writing idiomatically (that is a gross oversimplification and there are performance implications at times, but it’s extremely unlike C/C++ in this respect).

                                                                  1. 1

                                                                    I’d like an operating system I can run on a laptop that I can:

                                                                    • leave plugged in without the battery swelling
                                                                    • run google hangouts, zoom, goto meeting, and whatever other video conferencing tools I need for work and family
                                                                    • run slack
                                                                    • run 1password (or another non-lastpass password manager)
                                                                    • use a real package manager on
                                                                    • program in ruby, go, java, python, c, etc without horrendous amounts of pain

                                                                    I guess Ubuntu is my best bet? I still have yet to find a single linux-using coworker who can reliably join google hangouts video calls though (and I have no control over what video tool I use for work, but I CAN control my OS)

                                                                    1. 1

                                                                      Give Parrot OS a try.

                                                                      1. 1

                                                                        I’m guessing macOS is not on the cards?

                                                                        1. 1

                                                                          I mean, this is a Macintosh?

                                                                          1. 2

                                                                            I’m specifically frustrated with my nearly $3000 2018 macbook pro which has a swollen battery and cannot simultaneously handle a video call, slack and my IDE. I do recognize I’m being grumpy though.

                                                                            1. 2

                                                                              lol fair enough.

                                                                              1. 1

                                                                                To be fair, that’s probably an issue with the video calling software, Slack, and your IDE. Though IDEs at least have an excuse for eating some resources.

                                                                                My tip is to use Slack in a browser tab if you can, though this doesn’t let you make calls through it.

                                                                          2. 1

                                                                            I run Fedora on a Thinkpad and carry an iPad for slack and other text/video conferencing tools. Oddly enough, I landed here out of frustration with Apple laptops released since 2016. I’d been happily using Mac laptops since the ’90s, prior to that.

                                                                            The only place Fedora falls down for me (for the items on your list) is the reliability of conferencing tools. I’d say the ones you list are fine for me about 80% of the time, but soaking up the rest is worth the cost of the low-end iPad I use to do it. Hangouts/Meet, Zoom, Goto Meeting, Teams, etc. all work well there. Plus the camera is better, and Slack sucks less on iPad than anywhere else I’ve used it. Slack and Discord don’t chew battery there the way they did on Mac and Linux, either.

                                                                            My non-LastPass password manager of choice is Bitwarden, FWIW. It did a decent job importing my large, very heavily used, 11-year-old (all my passwords since 2007!) 1Password database, but there were quite a few empty entries to clean up after the initial import.

                                                                          1. 7

                                                                            Having worked on a lot of Rails codebases and teams, big and small, I think this article is pretty spot on. Once your codebase or team is big enough that you find yourself getting surprised by things like

                                                                            counters = %i(one two three four five)
                                                                            counters.each do |counter|
                                                                              define_method("do_#{counter}_things") do |*args|
                                                                                # whatever horrible things happen here
                                                                              end
                                                                            end

                                                                            etc… you’ve outgrown rails.

                                                                            1. 7

                                                                              This is my litmus test for “has this person had to maintain code they wrote years ago”.

                                                                              I don’t think I’ve yet worked with anyone who can answer yes but also wants me to maintain code that can’t be found via grep.

                                                                              1. 3

                                                                                What unholy beast is that. I mean. Seriously. Wtf is that?

                                                                                1. 4

                                                                                  It’s gruesome, and I’ve seen a version of it (using define_method in a loop interpolating symbols into strings for completely ungreppable results) in at least 3 different large old codebases, where “large” is “50KLOC+” and “old” is “5+ years production”

                                                                                  There are a lot of ways to write unmaintainable code in a lot of languages/frameworks, but if I ever were to take a Rails job again, I would specifically ask to grep the codebase for define_method and look for this prior to taking the job. It’s such a smell.

                                                                                  1. 2

                                                                                    I don’t understand why it’s so normalized in Rails to define methods on the fly. You could do that monstrosity easily in most interpreted languages, but there’s a reason people don’t! In Rails, it’s just normal.

                                                                                    1. 4

                                                                                      It’s been a long time since I’ve read the ActiveRecord source so it may no longer be this way, but there was a lot of this style of metaprogramming in the early versions (Rails 1.x and 2.x) of the Rails source, ActiveRecord in particular, and I think it influenced a lot of the early adopters and sort of became idiomatic.

                                                                                2. 1

                                                                                  Who the fuck writes code like this?


                                                                                  1. 1

                                                                                    The time between “just discovered ruby lets you do that” and “just realized why I shouldn’t” varies from person to person; I’ve seen it last anywhere between a week and a year.

                                                                                1. 1

                                                                                  I don’t understand why you wouldn’t want a timeout for the secondary request… But, naturally, this functionality is not exposed (in the context pkg), and the cancellation aspects are a bit tricky…

                                                                                  1. 1

                                                                                    One reason is if you’re delegating asynchronous calls (something with side effects, say a cleanup process) but still want to preserve the original context for observability/tracing purposes.

                                                                                    1. 1

                                                                                      SOOOO. I think the thing I missed is that you can still call context.WithCancel() on this dummy context. So, nothing should be lost.

                                                                                  1. 6

                                                                                    Hey, this is a really interesting problem that I think lots of companies hit without noticing. However, I think there are simpler ways of tackling it; I figured I could give a few pointers on how your code might be simpler and smaller.

                                                                                    Firstly, with your DisconnectContext you can eliminate a lot of the stubbed fields by embedding the parent context rather than stubbing them, e.g.:

                                                                                    type DisconnectContext struct {
                                                                                      context.Context // embedded parent supplies every method by default
                                                                                    }

                                                                                    then only overriding the methods whose behaviour you need to change (https://golangbyexample.com/embedding-interfaces-go/).

                                                                                    Edit: I just deleted some stuff about wrapping in another WithCancel to avoid the DisconnectContext from this comment, because I was wrong!

                                                                                    Here’s the example I used to prove myself wrong: https://play.golang.org/p/ynNwc-otFGO

                                                                                    1. 5

                                                                                      Thanks! I intentionally didn’t use embedding so that I could explicitly show the delegation of Value to the parent. Otherwise, that would be the only method missing from the impl and might be lost to the reader.

                                                                                      1. 2

                                                                                        This is roughly what we do at DO. The main thing is that your DisconnectContext wraps its parent but no-ops on Done().

                                                                                      1. 9

                                                                                        The case against OOP is not that it is impossible to write good OO code, but that OOP encourages people to abuse bad design and bad patterns, which the author seems to concede. The article makes an argument similar to someone who says it’s possible to write eg Ruby without type errors: of course it’s possible, but the language makes it harder than other languages, which is the whole brunt of the criticism.

                                                                                        1. 9

                                                                                          That only holds if you’re defining OOP to be something like Java circa 2000.

                                                                                          If you want to say “Java circa X encourages bad design”, I would concede that (see my other comment in this thread).

                                                                                          I would not concede “OOP encourages bad design”, because the space is so big and there are simply too many programs under that umbrella.

                                                                                          You could similarly define FP to be the subset of FP that allocates too much memory, is slow, and hard to modify, but that wouldn’t make much sense either.

                                                                                          I’ll point back toward the same comment: OOP and FP have largely converged – they are both about being principled about state and I/O, and not littering it all over your code.


                                                                                          “Immutable objects” were a thing 15 years ago and continue to be – and to not acknowledge that is like not acknowledging anything that’s happened in functional programming in 15 years (e.g. think Haskell or OCaml 15 years ago).

                                                                                          I understand that some people are still working with bad old codebases designed before that period. I think that is just the curse of “being used,” à la Bjarne Stroustrup (there are two kinds of languages: the ones people complain about and the ones nobody uses).

                                                                                          1. 4

                                                                                            That only holds if you’re defining OOP to be something like Java circa 2000.

                                                                                            I think it holds pretty true of Rails in 2020 as well - idiomatic rails is a giant conflation of behavior and state where every model/controller/etc inherits from a framework class.

                                                                                            1. 2

                                                                                              OK, I don’t have any experience with Rails, but that doesn’t surprise me too much. I am more of a Python person, and Ruby code definitely seems to lean more on classes/objects, for better or worse.

                                                                                              I like this comment: https://news.ycombinator.com/item?id=24012090

                                                                                              If there is anything in CS which should be renamed simply because people have completely incorrect assumptions from the name, it’s OOP. Everyone has something different in mind when they think about it, and almost none of it has anything to do with late binding, encapsulation, or message passing.

                                                                                              Basically I think it is more useful to criticize Java or Rails specifically, rather than OOP, which is a big nebulous concept. (And ironically I don’t even agree that those are the 3 main things about OOP :) )

                                                                                              Although Java and Rails may share some problems with regard to maintainability, I also think they have completely distinct problems as well. Putting dynamically typed OO and statically typed OO under the same umbrella and making statements about them does seem like a stretch to me.

                                                                                              1. 2

                                                                                                Basically I think it is more useful to criticize Java or Rails specifically, rather than OOP, which is a big nebulous concept. (And ironically I don’t even agree that those are the 3 main things about OOP :) )

                                                                                                Those things quoted are not the 3 main things about OOP. The main things about OOP are inheritance, polymorphism, encapsulation and abstraction. (https://www.oreilly.com/library/view/vbnet-language-in/0596003080/ch04s02.html) At some point when you start defining OOP as not-these-things you’re just committing a “no true Scotsman” fallacy. And it’s entirely fair to criticize a language based on its dominant idioms and paradigms: writing code based around mutable state and subtype polymorphism isn’t just something you CAN do in Java or Ruby, it’s what you’re largely expected to do. The fact that even these languages have recently provided mechanisms to avoid doing this isn’t a counterargument to “OOP is bad”; it’s evidence that even mainstream OOP languages can’t sustain large professional codebases with the design patterns foisted on them by the prevailing tendencies of their respective ecosystems.

                                                                                        1. 3

                                                                                          My take is that the claimed advantages are not really delivered by OOP. From the website:

                                                                                          • Encapsulation: Most OOP languages leak trivially avoidable implementation details to the caller (like whether some member is a field or a no-arg method).
                                                                                          • Inheritance: Usually not that useful.
                                                                                          • Polymorphism: Parametric polymorphism is often superior to OOP’s subtype polymorphism.
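                                          To make the polymorphism bullet concrete, here is a minimal Java sketch (hypothetical names, not from any codebase discussed in this thread) contrasting subtype polymorphism with parametric polymorphism (generics):

```java
import java.util.List;

public class PolymorphismDemo {
    // Subtype polymorphism: accepts any Object. The caller's type
    // information is erased at the boundary -- the result comes back
    // as Object and must be cast. (It also fails for List<String>,
    // since List<String> is not a subtype of List<Object>.)
    static Object firstSubtype(List<Object> items) {
        return items.get(0);
    }

    // Parametric polymorphism: the type parameter T is preserved,
    // so no casts are needed and the compiler checks every use.
    static <T> T firstParametric(List<T> items) {
        return items.get(0);
    }

    public static void main(String[] args) {
        List<String> names = List.of("ada", "grace");
        // Object o = firstSubtype(names);   // does not even compile
        String s = firstParametric(names);   // type-checked, no cast
        System.out.println(s);               // prints "ada"
    }
}
```

The generic version gives the caller back exactly the type it put in, which is the sense in which parametric polymorphism is often "superior" here: it expresses the same reuse without discarding static type information.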

                                                                                          I don’t have anything against OOP, but it is definitely oversold.

                                                                                          In the end, just stop making everything mutable, and I can largely live with it.

                                                                                          1. 0

                                                                                            In the end, just stop making everything mutable, and I can largely live with it.

                                                                                            I largely agree with this sentiment, but I feel that since objects are essentially a behavior/mutable state combination, this sentiment is also very anti-OOP.

                                                                                            1. 4

                                              Object-oriented design does not imply mutability; rather, most implementations of OO happen to fuse behavior with mutable state. As you mentioned, this creates the perception that OO must be implemented that way. (In fact, few languages even took the problems of mutability seriously until very recently!)

                                                                                              OTOH, most languages feature immutable String classes in their standard library, and there’s nothing non-OOP-y about them. They’re fundamentally values, and have the semantics of values by being immutable. OOP is not opposed to value semantics. IMO, the sweet spot of OOP is defaulting to values everywhere, with a few islands of mutable state loosely connected together (see Gary Bernhardt’s Boundaries talk for further discussion).
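                                              As a concrete sketch of that "objects as values" sweet spot, here is a hypothetical immutable Java value type (a made-up Money record, purely illustrative):

```java
// A minimal OO value type: it has behavior (plus), but like String it
// is immutable and has value semantics -- records generate
// equals/hashCode over the components automatically.
record Money(long cents, String currency) {
    // "Mutating" operations return a new value instead of editing state.
    Money plus(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(cents + other.cents, currency);
    }
}
```

Because every operation returns a fresh value, instances can be shared freely between those loosely connected islands of mutable state without defensive copying: `new Money(100, "USD").plus(new Money(250, "USD"))` compares equal to `new Money(350, "USD")`.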

                                                                                          1. 12

                                            I don’t generally look at programmers as personally inspirational or “heroes” (not out of cynicism, but mostly because programming is a career to me, and a career is only a part of a meaningful life), but antirez is the first exception I always make. I remember getting started in the field professionally a decade ago, working with redis and reading Stack Overflow questions, and seeing replies from him on nearly all of them. It still blows my mind that he could seemingly singlehandedly be the author, designer, and community support for such a ubiquitous thing. A prolific coder who genuinely put himself into redis, which has been useful to me my entire career, including as recently as this week. Kudos on a job well done!