1. 18

    Definitely way too complicated. Nomad (https://nomadproject.io/) is what we chose because it is so operationally simple. You can wrap your head around it easily.

    1. 7

      I haven’t used either in production yet, but isn’t the use case of Nomad much more restricted than Kubernetes? It’s only the scheduling part, and it leaves it to the user to define, for example, ingress through a load balancer and so on?

      1. 10

        Yes, load balancing is your problem. Nomad is ONLY a task scheduler across a cluster of machines, which is why it’s not rocket science.

        You say: I need X CPU and X memory, I need these files out on disk (or this Docker image), and run this command.

        It will enforce that your task gets exactly X memory, X CPU, and X disk, so you can’t over-provision at the task level.

        It handles batch (i.e. cron) and Spark workloads, system jobs (run on every node), and services (any long-running task). For instance, with Nomad batch jobs you can almost entirely replace Celery and other distributed task queues, in a platform- and language-agnostic way!

        I’m not sure I’d say the use-case is much more restricted, since you can do load balancing and all the other things k8s does, but you use specialized tools for these things:

        • For HTTPS traffic you can use Fabio, Traefik, HAProxy, Nginx, etc.
        • For TCP traffic you can use Fabio, Relayd, etc.

        These are outside of Nomad’s scope, except that you can run those jobs inside of Nomad just fine.

        edit: and it’s all declarative, a total win.

        1. 1

          Why not HAProxy for TCP too?

          1. 1

            I don’t actually use HAProxy, so I can’t really comment on whether it does TCP as well; if it does, AWESOME. I was definitely not trying to be limiting, hence the etc. at the end of both of those.

            We use Nginx and Relayd.

            1. 2

              It does TCP. See the reliability and security sections of the web site to see why you might want it.

              1. 2

                Thanks!

      2. 4

        Oooh, the fact that it’s by HashiCorp is a good sign. I’ll have to read up on this. Thanks!

      1. 5

        Some extra details from the linked bug report and commit:

        • What about triple-dot ranges that exclude the end value? The bug report said “I don’t think ary[1...] (exclusive) is meaningful.” Despite this, I see from the commit that endless ranges with triple-dot were implemented. There is only one test case that uses the range in a way other than inspecting its attributes, and that test shows that a 3...nil range iterates over the same numbers as a 3..nil range when passed to Array#fill.

        • What about infinite ranges in the other direction, (..0) and (nil..0)? They could theoretically be used for checking whether a number is at most 0, for example. Well, they are not part of this feature because it would be too hard to implement in Ruby’s grammar:

          It is better to have ary[..1] as a consistency. But it will cause a shift/reduce conflict. Anyway, ary[0..1] looks not so bad to me since it have no cursed negative index. So I don’t push it.

        1. 2

          Couldn’t you apply a unary minus to the infinite range and get the same thing? Or am I missing something?

          1. 1

            Good point, I hadn’t even thought of the negative infinity case.

          1. 1

            Interesting. I wonder if this is in response to a flood of new programmers from bootcamps, or perhaps non-technical users? It could very well be just a new learning tool, as well, but it seems like a fairly simple webapp to use for most established developers.

            1. 8

              The imperative and declarative versions are, in two cases, semantically different.

              Execute something on every element with map: the for-loop doesn’t do anything with the return value from performSomething, whereas map puts the results into an array. Consider using forEach instead of map, e.g. array.forEach(performSomething).

              Finding a single element in the array: the use of a return in the for-loop causes an early exit, whereas filter will iterate through the entire array.

              Iterate over an array to count a property of each item: I can only nit-pick and say the declarative implementation as-written is more verbose than the imperative one. Please consider: items.reduce((result, item) => result + item.content.value, 0)
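
              To make the first two points concrete, here’s a rough sketch (performSomething and the items shape are just stand-ins borrowed from the article’s examples):

              const items = [
                { content: { value: 1 } },
                { content: { value: 2 } },
                { content: { value: 3 } },
              ];

              const performSomething = (item) => console.log(item.content.value);

              // Side effects only: forEach discards return values, while map would build an unused array.
              items.forEach(performSomething);

              // Early exit: find stops at the first match, whereas filter scans the whole array.
              const firstBig = items.find((item) => item.content.value > 1);

              // Summing a property with reduce, as suggested above.
              const total = items.reduce((result, item) => result + item.content.value, 0);

              console.log(firstBig, total); // { content: { value: 2 } } 6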

              1. 3

                Finding a single element in the array: the use of a return in the for-loop causes an early exit, whereas filter will iterate through the entire array.

                There’s also Array.prototype.find().
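
                A quick sketch reusing the items shape from the post: items.find(item => item.content.value > 1) returns the first matching element (or undefined if nothing matches) and stops scanning as soon as it finds one.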

                With regards to this improvement:

                items.reduce((result, item) => result + item.content.value, 0)

                If you want to get (probably too) fancy with destructuring, you could do:

                items.reduce((result, { content: { value } }) => result + value, 0)

                1. 1

                  items.reduce((result, { content: { value } }) => result + value, 0)

                  That’s clever, thanks! I will include this example.

                  1. 1

                    There’s also Array.prototype.find().

                    Is Array.prototype.find() ES5?

                    1. 1

                      I’m still not sure what aligns to ES5 or ES6, but as per https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find:

                      This method has been added to the ECMAScript 2015 specification and may not be available in all JavaScript implementations yet.

                      1. 2

                        ES6 and ES2015 are two names for the same thing. ES5 would be ES2009 under the new naming scheme.

                  2. 1

                    Good points, thanks for the feedback. I’m editing the post to incorporate it.

                  1. 2

                    Just a note, it would be helpful to preface code samples with the language that they’re written in (in this case, Go). Thanks!

                    1. 1

                      Good point. I’ll update the post.

                    1. 5

                      Another “quirks” question: did you find any unexpected quirks of Go that made writing this emulator harder or easier?

                      1. 5

                        In this particular case, it feels like the code isn’t too far from what C code would be: here are some basic data structures and here are some functions that operate on them, mostly at the bit level. No fancy concurrency models or exciting constructs. I think, given that this is an inherently low-level program, most niceties from Go weren’t immediately needed.

                        I did use some inner functions/closures and hash maps, but could’ve just as well done without them. The bottom line is that the language didn’t get in the way, but I didn’t feel like it was enormously helpful, other than making it easier to declare dependencies and handling the build process for me.

                        1. 4

                          Did you run into any issues with GC pauses? That’s one of the things people worry about when building latency-sensitive applications in Go.

                          1. 3

                            Not the OP, but I would assume this kind of application generates very little garbage in normal operation.

                            1. 2

                              The GC pauses are so minuscule now, in the latest releases of Go, that there should be no latency issues even for realtime use. And it’s always possible to allocate a blob of memory at the start of the program and just use that, to avoid GC in the first place.

                              1. 2

                                The garbage collector hasn’t been an issue either. Out of the box, I had to add artificial delays to slow things down and maintain the frame rate, so I haven’t done much performance tuning/profiling. I am interested in scenarios where this would be critical though.

                                1. 1

                                  Go’s GC pauses are sub-millisecond so it’s not an issue.

                              2. 3

                                Interested in this as well. I’ve been toying with the idea of writing a CHIP-8 emulator in Go and would love to hear what the experience of writing emulators is like.

                                1. 3

                                  I did exactly this as a project to learn Go! I used channels in order to control the clock speed and the timer frequency and it ended up being a really nice solution. The only real hangup I had was fighting with the compiler with respect to types and casting, but having type checking overall was a good thing.

                                1. 2

                                  I love how Reddit is the new Digg and HN is the new Slashdot at the top left XD

                                  Can’t seem to get GIFs to work in my sig, though T_T Is it just [img]?

                                  1. 3

                                    You need the ! before a link to turn it into an image.

                                    edit: Contrary to what it says, it’s NOT BBCode, but rather this newfangled “markdown” some Mac nerd invented. I bet it won’t catch on, it’s not HTML-like enough.

                                    1. 2

                                      Thanks!

                                      Oh wow, that gif is obnoxiously huge. Hell yeah.

                                      EDIT - I changed my mind, too much

                                  1. 8

                                    “Not only that, any code examples in rustdoc are treated like tests and get executed during compilation!”

                                    This is brilliant. First time I’ve heard of it. ZeD on Hacker News said Python can do it, too, with seangransee offering this example.

                                    Aside from being a good idea, it looks like it should also be a new requirement for high-assurance systems, where all examples of system usage would be executable. It’s often implicit there anyway, since you’re already supposed to have both tests of all functions and consistency with the documentation. At the least, it could be an extra check on these.

                                    1. 14

                                      I’m over the moon about doctests. Elixir has them too; that’s where I saw the light. In the past, I’ve gone to annoying lengths to ensure that code examples in documentation don’t go stale; it’s a great feeling to have first-class support for that workflow.

                                      1. 2

                                        I’ve found that in Elixir having both doctests and normal tests is quite handy: one as a quick sanity check and a way of demonstrating an API, and the other as a full way of confirming behavior. The use of tags to turn on and off different sets of tests is also not well supported (to my knowledge) with doctests.

                                        1. 3

                                          AFAIK turning Elixir doctests on and off is not supported. That bothers me just a bit, because there are times when I’d like to show some non-testable code (e.g., demonstrating a request to an external network service) in the exact same syntax that my doctests use (iex>, etc).

                                      2. 5

                                        I think the first place I saw that was R, where it’s used pervasively in the package repository (CRAN). In fact the output of example REPL sessions is generated by running the code, so besides being used as tests that are flagged if they fail to run entirely, it also keeps the examples up to date with minor changes to output format, etc., which can otherwise get out of date even when the code continues to work.

                                        1. 4
                                          1. 4

                                            I’ve found that in practice I write far fewer testable Go examples than I do in Rust code. In Rust, I just drop into Markdown, write the code, and I’m done. It’s incredibly low friction.

                                          2. 2

                                            D achieves the same thing the other way round. Instead of treating documentation like code, D has unit test code blocks and can treat them as documentation. This makes it easier for the tooling, since syntax highlighting etc. applies to the unit tests as well.

                                            This is not a really critical difference, but if you design a new language, I would suggest the D approach as slightly better.

                                            1. 2

                                              I’ve generated documentation from tests before (in Ruby’s rspec). Just thinking about it, isn’t D’s approach prone to the same weakness as the comments that they use to generate the docs? That is, when you use comments or non-executable code (in my case it was string descriptors) to generate docs, they can get out of date with the actual code, whereas if you write tests inside documentation, the docs may grow stale but the tests will break.

                                          1. 6

                                            Just a note: I found the term Model-Based Testing a bit distracting - then again, I come from a Rails background. I think “Generative Testing in Rust with QuickCheck” would have been more helpful with no prior knowledge of QuickCheck.

                                            This also set me off into exploring QuickCheck. For those who don’t know, the most helpful thing I saw to help understand it was watching this video that showed off test.check, a QuickCheck implementation in Clojure: https://www.youtube.com/watch?v=u0TkAw8QqrQ

                                            Basically, it’s a way to generate random data and data structures (within certain bounds that you define) to be used as inputs in testing your application logic. Something I was also confused about: it seems like people run QuickCheck as a step separate from their non-generative specs to identify specific edge cases, and then add those edge cases as regression tests to their overall test suite. In some generative testing libraries I saw after poking around, they’re even run as part of the test suite, though I’m not sure how I feel about that - couldn’t that result in missing a case locally that then fails on CI due to different inputs?
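
                                            For a concrete flavor in JavaScript (this uses fast-check, a QuickCheck-style library, rather than the Rust crate from the post, so treat it as a rough sketch):

                                            const fc = require('fast-check');

                                            // Property: reversing an array twice gives back the original array.
                                            // fast-check generates many random integer arrays as inputs and shrinks
                                            // any failing case down to a minimal counterexample.
                                            fc.assert(
                                              fc.property(fc.array(fc.integer()), (arr) => {
                                                const roundTripped = [...arr].reverse().reverse();
                                                return JSON.stringify(roundTripped) === JSON.stringify(arr);
                                              })
                                            );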

                                            1. 5

                                              In the past, it was called specification-based (the QuickCheck paper said “specifications”), model-based, or contract-based… test generation, depending on which crowd you were listening to. Recently, most posts call it property-based testing. All are accurate given they’re technically true and have prior work. The superset would probably be specification-based, since what they use in each is a specification. Formal specifications are also the oldest of these techniques.

                                              Generative is ambiguous since it sounds like it just means automated. All the test generators are automated to some degree. So, we name them from where the process starts. As far as naming goes, even I started using property-based testing instead of specification-based testing as the default, to go with the flow. I still use the others if the discussion is already using those words, though. For instance, I might say…

                                              1. Spec-based if we’re talking formal specifications

                                              2. Model-based if we’re talking Alloy models.

                                              3. Contract-based if we’re talking Design-by-Contract, Eiffel, or Ada/SPARK since that’s their language.

                                              4. Property-based if talking to people using popular languages or something since they’ll find helpful stuff if they Google with that.

                                              1. 2

                                                Thanks for the background information!

                                              2. 2

                                                There’s also been some work done to save failing inputs for later retest. I’ve used that to do test driven development with properties.

                                                 I know that’s supported in version 2 of the original QuickCheck, and I’m almost certain Python’s Hypothesis supports it; not sure about others.
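
                                                 For what it’s worth, fast-check in JavaScript seems to take the replay route rather than keeping a database: a failing run reports its seed (and counterexample path), and pinning the seed reproduces the same inputs. A rough sketch:

                                                 const fc = require('fast-check');

                                                 // Pinning the seed makes the generated inputs deterministic, so the same
                                                 // cases run locally and on CI, and a reported failure can be replayed exactly.
                                                 fc.assert(
                                                   fc.property(fc.array(fc.integer()), (arr) => {
                                                     const sorted = [...arr].sort((a, b) => a - b);
                                                     return sorted.length === arr.length; // property: sorting preserves length
                                                   }),
                                                   { seed: 20180314 }
                                                 );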

                                                1. 2

                                                  If you have a QuickCheck implementation that permits easy testing of a concrete test case, grab it and use it. Once upon a time, QC found a bug. Keep that concrete test case and add it to your regression test suite. Randomized testing means that you don’t really know when randomness will create that same concrete test case again. But if your regression suite includes the concrete test case, you are assured that your regression suite will always check that scenario.

                                                  In the Erlang QuickCheck implementations (the commercial version from Quviq AB in Sweden and also the open “PropEr” package), there’s a subtlety in saving a concrete test case. I assume it’s the same feature/problem with Rust’s implementation of QC. The problem is: re-executing the test assumes that the test model doesn’t change. If you’re actively developing & changing the QC model today, then you may unwittingly change the behavior of re-executing a concrete test that was added to your regression test suite last year. If you’re aware of that feature/problem, then you can change your process/documentation/etc to cope with it.

                                                  1. 2

                                                    That’s probably because the first prototype for this required the random value as input to the value generator. I know that because I wrote it, and pushed for its inclusion in the second version of QuickCheck.

                                                     Nowadays there are libraries that will generate the actual value in such a way that you can copy and paste it into a source file.

                                                     I’ve heard that Hypothesis in Python keeps a database of failed inputs; not sure if anything else has that feature.

                                                    1. 2

                                                      Randomness is only one place where things can go wrong with saved concrete test cases.

                                                      For example (not a very realistic one), let’s extend the Rust example of testing a tree data structure. The failing concrete test case was: ([Insert(0, 192), Insert(0, 200), Get(0)])

                                                      Let’s now assume that X months later, the Insert behavior of the tree changes so that existing keys will not be replaced. (Perhaps a new operation, Replace, was added.) It’s very likely that re-execution of our 3-step regression test case will fail. A slightly different failure would happen if yesterday’s Insert were removed from the API and replaced by InsertNew and Replace operations. I’m probably preaching to the choir, but … software drifts, and testing (in any form) needs to drift with it.

                                                      1. 1

                                                         That’s an excellent point; I have no idea how to automate that. You’d have to somehow notice that the semantics changed and flush all the saved test inputs, which sounds like more work than gain.

                                                         This is great info. Any other thoughts on how saved inputs could go wrong?

                                                        1. 2

                                                          Ouch, sorry I didn’t see your reply over the weekend. I can’t think of other, significantly different problems. I guess I’d merely add a caution that “semantics changed” drift between app/library code & test code isn’t the only type of drift to worry about.

                                                           If you change the implementation, and the property test is validating a property of the implementation, you have more opportunity for drift. Take, for example, a check at the end of a test case: when testing a hash table by deleting all elements, “all buckets in the hash have lists of length zero” could be a desirable property. The test actually peeks into the hash table data structure and checks all the buckets and their lists. The original implementation had a fixed number of buckets; a later version has a variable number of buckets. Some bit of test code may or may not actually be examining all possible buckets.

                                                           It’s a contrived example, and one that doesn’t apply only to historical, failing test cases. But it’s the best I can think of at the moment. ^_^

                                                          -Scott

                                                2. 1

                                                  In some generative testing libraries I saw after poking around, they’re even run as part of the test suite, though I’m not sure how I feel about that - couldn’t that result in missing a case locally that then fails on CI due to different inputs?

                                                  This is a potential problem with property-based testing, but to turn the question around - if you’re writing unit tests by hand, how do you know you didn’t miss a case?

                                                  That’s why you use them together.

                                                  1. 2

                                                     I understand using property-based testing to find edge cases, but including it in the test suite seems to introduce a lot of uncertainty as to whether your build will succeed, and potentially how much time it will take to run the tests. Granted, finding edge cases is important regardless of when you find them; I’d just be more comfortable running the property-based tests as a separate step, though I’d be happy to be convinced otherwise.

                                                    1. 1

                                                       Correct me if I’m misunderstanding you. If the testing is part of the build cycle, a build failure will likely indicate the software didn’t work as intended. You’ll also have a report waiting for you on what to fix. If it’s taking too much time, you can put a limit on how much time is spent per module, per project, or globally on test generation during a build. For instance, it’s common for folks using provers like SPARK Ada’s or model checkers for C to put a limit of 1-2 minutes per file so the drawback of those tools (potentially unlimited runtime) doesn’t hold the work up. Also, if it takes lots of running time to verify their code, maybe they need to change their code or tooling to fix that.

                                                      1. 2

                                                        No, I think your understanding is correct, and that’s definitely part of the point of running specs in the build process. I guess I’m just operating from advice I got early on to keep specs as deterministic as possible. I don’t remember where I got this advice, but here’s a blog post: https://martinfowler.com/articles/nonDeterminism.html

                                                        He also recommends this, which is what I would instinctively want to do with property-based testing:

                                                        If you have non-deterministic tests keep them in a different test suite to your healthy tests.

                                                        Though the nondeterministic tests Fowler is talking about seem to be nondeterministic for different reasons than one would encounter when setting out to do property-based testing:

                                                        • Lack of Isolation
                                                        • Asynchronous Behavior
                                                        • Remote Services
                                                        • Time
                                                        1. 2

                                                           Just going by the problem in his intro, I remember that many people use property-based testing as a separate pass from regression tests, with some failures found by PBT becoming new regression tests. The regression tests themselves are static. I’d guess they were run before PBT as well, the logic being that one should knock out obvious, quick-to-test problems before running methods that spend lots of time looking for non-obvious problems. Again, I’m just guessing they’d do it in that order since I don’t know people’s setups. It’s what I’d do.

                                                          1. 2

                                                            Ah, okay, so separating regression tests from PBT does seem to be a common thing.

                                                1. 2

                                                   Considering the pros of defer, there seems to be very little use for async.

                                                  if you specify both, async takes precedence on modern browsers, while older browsers that support defer but not async will fallback to defer.

                                                  So why would you use async at all? And why do newer browsers even bother to support it?

                                                  1. 7

                                                     There are some (admittedly edge-case) scenarios where async is still desirable. For example, on the BBC News homepage, if we loaded our scripts with defer, a bunch of non-defer third-party scripts would execute first and make the page feel slow. We used async instead, so that parsing isn’t blocked while the script is being fetched, but it is blocked once the script is available, so we can enhance the page and make it feel complete without having to wait for other scripts to execute first.

                                                    1. 2

                                                       Thanks for writing this. I found very little information online on when async is the better choice, so I added a bit more information to cover this scenario 👍🏼

                                                  1. 2

                                                     This week, Don and I discuss our history with in-office and remote work, why junior devs might reconsider working outside the office, and how requiring folks to work in your proximity is a trait of managerial vanity.

                                                    1. 2

                                                       I couldn’t find it in the show notes and I can’t listen right now, but I’m curious about your reasoning for why junior developers would want to consider in-office work. I agree 100%; I’m just curious what you think about it.

                                                      As an aside, my wife’s work has a system where she basically gains 1 remote work day every 3 months (well, up to 2 days, but it’s a Big Old Company), which I think is a pretty reasonable way to onboard people to the company and business while also allowing freedom.

                                                      1. 5

                                                        Two reasons, for me:

                                                        1. Learning. Pairing face-to-face with someone, or being able to ask questions in person, is a much faster way to learn things. The ability to interrupt the more experienced person is a big advantage to the learner, and remote interruption is harder to do. The fidelity of the conversation is also stronger in the office than it is remote.

                                                        2. Politics. Maneuvering office politics, getting a promotion, moving up the ladder, etc., is much easier to do face-to-face at a lot of firms. There’s a bias that so many managers have toward a person being in front of them instead of through a chat program.

                                                        You’re right, there’s nothing wrong with your wife’s mix of remote and in-office work, but for folks trying to break into a career, I think there are certain advantages that on-site gives them.

                                                        1. 3

                                                          I’m in total agreement. #1 applies not just to technical learning but to business domain learning as well. I hadn’t thought of #2, though; that’s a great point!

                                                          1. 1

                                                            I mean, #2 is kind of gross, right? Why shouldn’t advancement be strictly on merit? But if only things worked that way…

                                                            I love working remote, so I think it’s a great goal for folks to go after. However, when I was talking to my students about it, I wanted them to consider the side effects of trying remote for their first gigs, which are usually critical for future success.

                                                            1. 2

                                                              It is gross, but being aware of and understanding the problem lets you engage with it instead of tripping over it, unawares. Politics is gross, in general, but educating ourselves and engaging in the process helps mitigate nasty surprises in the future, which is why I’m glad you mentioned it.

                                                    1. 22

                                                      Comments really aren’t a “code smell.”

                                                      1. 16

                                                        Nothing stinks quite like uncommented complicated code.

                                                        1. 7

                                                          Exactly! The code of Margaret Hamilton, whom the author cites, is itself full of comments. Possibly more comments than source code. Which, if you’re sending a ship with the processing power of a toothbrush to the Moon, is a great idea.

                                                          1. 10

                                                            This code is not readable on its own; if it were possible to use meaningful variable and function names, most of those comments could be removed. It’s also quite likely that every detail of the program was decided before writing the code. In a modern codebase things are always evolving and comments can get left behind.

                                                            1. 5

                                                              This is my fear with comments. I code in a team of 2, so we don’t really comment stuff. I know it’s bad, but we’re a team of two, and we kind of know the whole code anyway.

                                                              We also don’t write tests. We’re bad people.

                                                              1. 4

                                                                Oh man, save yourself some pain and write unit tests. You don’t need 100% test coverage; even non-zero coverage of basic functionality will save you so much time. If you don’t know how to use test frameworks then you don’t have to bother: just write one big main file with a function per test you want to do, and call them all in main. That’s basically what test frameworks are, so if you need a low barrier to entry then don’t bother learning one yet, just do something. If you program in a language with a REPL you can literally just save the stuff you use to manually test into a file so you don’t have to type it more than once.
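
                                                                For example, a minimal no-framework setup in JavaScript might look something like this (add is just a stand-in for whatever you’d actually test):

                                                                const assert = require('assert');

                                                                // Code under test (stand-in for your real module).
                                                                function add(a, b) {
                                                                  return a + b;
                                                                }

                                                                // One plain function per test case.
                                                                function testAddsPositiveNumbers() {
                                                                  assert.strictEqual(add(2, 3), 5);
                                                                }

                                                                function testAddsNegativeNumbers() {
                                                                  assert.strictEqual(add(-2, -3), -5);
                                                                }

                                                                // "main": run every test; any failed assertion throws and fails the run.
                                                                [testAddsPositiveNumbers, testAddsNegativeNumbers].forEach((test) => {
                                                                  test();
                                                                  console.log('ok - ' + test.name);
                                                                });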

                                                                I personally couldn’t develop without unit tests. You can manually test the application by doing something that hits the code path you just changed, which is time-consuming and tedious, especially to do repeatedly, or you can write a little bit of code that calls the code and run it with zero effort, every time, all the time, for the rest of forever. Even a small sanity test of the happy path is better than nothing; you can at least check your code doesn’t blatantly fuck up with normal input and save yourself the round trip through the application.

                                                                If I had to code without unit tests I’d quit. And I have worked on teams that didn’t want to unit test, so I had out-of-tree tests I wrote for myself. The number of bugs I fixed a couple of hours after someone else committed was mind-boggling.

                                                                1. 4

                                                                  How do you even develop without unit tests?

                                                                  I’d avoid this kind of shaming, especially since the commenter has already noted (in a self-deprecating manner) that they’re aware of the stigma associated with not using tests.

                                                                  If the intent is to encourage the use of tests, I would put your last paragraph first and focus on how it would help GP.

                                                                  1. 3

                                                                    Revised, thank you for the feedback. 😊

                                                                  2. 2

                                                                    Depends on the language and coding style though. I wrote a 25,000-line game in C++ without a single test, and I never had a regression. I obviously had occasional bugs in new code, but they’re unavoidable either way. Now my preferred language is Haskell, and I feel the need for tests even less. I generally prefer correct-by-construction to correct-by-our-tests-pass. My purpose isn’t to discredit tests, though; it’s just that not every codebase has as much need for them.

                                                                    1. 2

                                                                      I’m just self-taught and kind of out of my depth on it. I had a dev friend who did integration tests, and they were really brittle and slowed us down a lot. Are unit tests not as bad at slowing down a small team of two devs who are both self-taught? We’re good / mediocre / we build good stuff (I consider us hackers) but we don’t have a ton of time.

                                                                      1. 1

                                                                        Unit tests don’t have to slow things down like integration tests. In your situation, I’d wait until the next bug comes up; then, instead of fixing the bug immediately, I’d write a test that reproduces the bug. Usually doing that helps narrow down where the bug is, and after fixing it, the test passes and (here’s the cool part) you will never see that bug again.

                                                                        1. 1

                                                                          That’s what I was told about integration tests, but I had to set up all these extra dependencies so that the integration tests continued to work every time we added an external service… we’d have to mock it or shit would break.

                                                                          I’m assuming that since unit tests don’t run like that, they don’t have external dependencies like that? You’d mock on a component-by-component basis, and wouldn’t have to mock unrelated shit just to keep them running… hmm… maybe I will.

                                                                          Any unit testing video series you’d recommend that I could watch as a noob to get started? Or anything like that?

                                                                      2. 1

                                                                        I second saving yourself pain by writing tests! I’ve avoided lots of rakes with a handful of tests.

                                                                    2. 2

                                                                      What makes everybody think that the programmers who change code so that it no longer matches the comments they just used to understand it will somehow write code so clear you don’t need comments to understand it?

                                                                      1. 1

                                                                        Often people write code like total = price * 1.10 #This is tax, which can be rewritten as total = price * TAX. A lot of comments like that can be removed by just putting the information in the actual code.

                                                                        1. 2

                                                                          I’m not suggesting it can’t be done; I’m suggesting it won’t be done.

                                                                    3. 4

                                                                      I’ll also correct the article to say a team did the code and review, per the reports I read. She describes it here in “Apollo Beginnings” as a team with a lot of freedom and management backing, with the unusual requirement to get the software right the first time. Unfortunately, that’s a rare environment to work in.

                                                                    4. 5

                                                                      You can’t write test coverage for a comment. You can’t have your compiler warn you that a comment is inaccurate.

                                                                      If you have no tests, and your code is full of dead paths, you can’t even perceive the risk posed by an errant, out-of-date, or unintentionally misleading comment.

                                                                      Sometimes they’re necessary. But the best default advice to a ‘mediocre’ developer is to write better code, not add more comments.

                                                                      1. 5

                                                                        You can’t write test coverage for a comment. You can’t have your compiler warn you that a comment is inaccurate.

                                                                        https://docs.python.org/3/library/doctest.html

                                                                        If you have no tests, and your code is full of dead paths, you can’t even perceive the risk posed by an errant, out-of-date, or unintentionally misleading comment.

                                                                        If you have no tests or comments, you have no way of knowing whether your code actually matches your spec anyway.

                                                                        Sometimes they’re necessary. But the best default advice to a ‘mediocre’ developer is to write better code, not add more comments.

                                                                        That’s like saying that the best default advice to a ‘mediocre’ developer is to write less buggy code, not add unit tests.

                                                                        1. 2

                                                                          doctest is great for testing comments that include code, but nothing else… If a comment says “Framework X is expecting variable foo in JSON format inside the array bar.” I would be inclined to believe it at first and then test the hypothesis that the comment is wrong. That’s the danger of comments.

                                                                          1. 1

                                                                            A couple of times today I caught myself committing deleted or changed lines without deleting or changing the associated comment. Luckily I could go back and fix things so that the comments weren’t complete nonsense. Sometimes though they escape detection.

                                                                        2. 2

                                                                          Once the code is cleaned up as much as possible and is still hard to understand, or if something is tricky, comments help a lot!

                                                                          I guess the author was talking about comments that could be removed by making the code cleaner.

                                                                          Maybe it depends on what motivates one to add comments; there might be good reasons as well.

                                                                          1. 2

                                                                            True.

                                                                            But comments that are wrong or out of date stink like dead rats.

                                                                            I view asserts as “executable comments” that are never out of date. Sometimes they are wrong… but testing will tell you that.

                                                                            If a plain comment is wrong… nothing will tell you except a very long, very Bad Day at work.

                                                                            1. 7

                                                                              But comments that are wrong or out of date stink like dead rats.

                                                                              Valuable comments are something along the lines of “this looks weird, but I did it because of [historical reason that is likely to be forgotten] even though [other implementation] looks like the more obvious solution at first glance; it wouldn’t have worked because [rationale].”

                                                                              The longer I spend working with old codebases, the more I’ve come to treasure such comments. But comments that just explain what the code is doing rather than why are suspect.

                                                                          1. 1

                                                                            The “Pay other people to audit your code” link https://wemake.services/meta/rsdp/auditions/ is broken. Is there really such a service? I’m mostly looking for “code review trades” though.

                                                                            1. 4

                                                                              Sorry about that. Here’s the correct link: https://wemake.services/meta/rsdp/audits/

                                                                              1. 1

                                                                                Thanks!

                                                                              2. 1

                                                                                Pretty confident the url is supposed to end in /audits/ instead of /auditions/

                                                                              1. 5

                                                                                Full disclosure: I work at Heroku.

                                                                                I started writing something more focused but eventually it turned into a brain dump. It would be shorter and to the point if I had more time.


                                                                                My team is responsible for managing Postgres/Redis/Kafka operations at the company and our setup is a little… *ahem* different. We never touch the UI and rely entirely on AWS APIs for our day-to-day operations. Requests for servers come in via HTTP API calls and we provision object representations of higher-level “Services” that contain servers, which contain EC2 instances, volumes, security groups, elastic IPs, etc.

                                                                                My team is maybe 20(?) or so people, depending on who you ask, and we own the entire provision -> operation -> maintenance -> retirement lifecycle. Other teams have abstracted some things for us, so we’re building on the backs of giants who have built on the backs of giants.

                                                                                Part of our model is “shared nothing most of the time”. In the event of a failure, we don’t try and recover disks or instances. Our backup strategy revolves around getting information off the disks as quickly as possible and onto something more durable. This is S3 in our case and we treat pretty much everything else as ephemeral.

                                                                                What I do and what you are looking to do are a little different, but my advice would be to investigate the native tools AWS gives you and try to work with them as much as possible. There are cases where you need to roll your own tools, but you should have a good reason for that (aside from the usual tech contrarian opinions). "I don’t trust AWS" or "vendor lock-in" aren’t really things you should consider at an early stage. Get off the ground and worry about the details when you have the revenue to support those ideas. You have to build a wheel before you can build an interstate system.

                                                                                Keep ephemeralization in mind. Continue doing more with less until you are doing almost nothing. If you have an operation that AWS can handle for you, just let them. Your objective is to build a business and serve customers, not to build the world’s best infrastructure. Keep UX and self-service in mind. If your developers can’t easily push code changes, recover from bad deploys and scale their apps then you have a problem.

                                                                                Look into AWS CodeDeploy, ECS, Lambda, RDS, NLBs, etc. Make sure you understand the AWS networking model as working with VPCs and Security Groups can be quite complex. Don’t rely entirely on their policy simulator as it can be quite confusing at times. Build things in staging and TEST TEST TEST.

                                                                                Give each developer their own sub-account for staging/test environments. Some of AWS’s account features make this really easy. Don’t ever use root credentials. Keep an eye on trusted advisor to make sure developers aren’t spinning up dozens of r4.xlarge instances that do nothing (or worse, mine bitcoin).

                                                                                MAKE SURE YOUR S3 BUCKETS AREN’T WORLD WRITABLE. This happens more than you think. You’ve gotta pen test your network to make sure you set it up correctly.

                                                                                Learn the core concepts and learn to script as much as possible. The UI should only be used for experimentation early on; after that, you should take the time to teach a computer how to do this stuff. Consider immutability as much as possible. It is far better to throw something away and replace it in AWS land than to try to bring it back online if the root cause isn’t quickly apparent.

                                                                                Remember that AWS serves loads of customers and they have to prioritize. If your startup is only paying a few thousand a month then don’t expect immediate responses. They’ll do their best but at that stage, you are pretty much on your own. If you can afford Enterprise Support then pay for it. Money well spent.

                                                                                Use Reserved Instances as much as possible. You save a ton of money that way and once you get to a certain size AWS will likely start to cut bulk discount deals.


                                                                                If this all sounds scary and you are building a basic web app or API, do yourself a favor and use Heroku (or similar) to get started. If your organization doesn’t have the resources to bring on people to build and manage this full-time, you’re doing yourself a disservice by trying anyway. I learned that the hard way at a previous job when I had a CTO who was allergic to the idea of PaaS.

                                                                                That’s just my $0.02.

                                                                                1. 2

                                                                                  Do you mind my asking what the most painful parts of using AWS at Heroku are, if not already covered in your (very thorough) write-up?

                                                                                  1. 4

                                                                                    Hmm… Where to begin? I’m not an expert in all of these things but I’ve often heard complaints about the following:

                                                                                    • Insufficient capacity issues with instance types, which involve making support calls to AWS to help us limp along. Smaller regions have this issue quite often.
                                                                                    • Lack of transitive routing with VPC peering, which makes our Private Spaces product a bit cumbersome. Private Links may help but we’re still investigating.
                                                                                    • STS credentials expiring during long data uploads, which means we need to switch to IAM credentials, which have hard limits.
                                                                                    • CloudWatch being way too expensive for our use case, so we have to poll a lot of our instances to determine events. We’ve spoken with them a few times about what we are trying to do and it is simply a use case they aren’t accounting for right now. Maybe someday. The current pricing structure may have been feasible when more of Heroku was multi-tenant, but that isn’t the case anymore. I’ll accept that as a tradeoff.

                                                                                    Those are at least the most recent sticking points. We’ve been lucky enough to get in a room with some AWS developers in the past and it was reassuring to hear things like “we know all about it” and “we’re working on a solution”. They’re a huge organization and can be slow to make changes but I genuinely believe they are doing their best.

                                                                                    1. 3

                                                                                      Oh, oh, oh!

                                                                                      People not understanding that CPU credits on t2 instances are a thing. AWS gives you part-time access to the full power of a CPU on their cheaper instances but throttles you down if you use too much. It is nice for use-cases where bursting is required but will break your app like nobody’s business if you keep your instance under high load. There is a reason t2s are so cheap (~$40/month with on-demand pricing for a t2.medium).

                                                                                      You get what you pay for.

                                                                                      1. 2

                                                                                        Fascinating, thank you for the write-ups!

                                                                                  1. 1

                                                                                    Random observation, but he passed a day before Einstein’s birthday (which happens to be Pi Day).

                                                                                    1. 2

                                                                                      He died on March 14th.

                                                                                      1. 1

                                                                                        Ah, I assumed it was the 13th due to the publication date, but I didn’t account for timezones. Looks like it happened the day of Einstein’s birth date.

                                                                                        1. 2

                                                                                          Aye - early morning Cambridge, England time. :-)

                                                                                          1. 1

                                                                                            Which fortuitously right now is the same as UTC, so there’s absolutely no doubt it was Mar 14.

                                                                                    1. 8
                                                                                      1. Learning me a Haskell. I’ve no interest in programming in Haskell, but it’s interesting to see the origin of some of the idiots I see in Rust. And I want to proceed from there to Learn Me An Agda so I can get some formal methods chops.

                                                                                      2. My dormant github portfolio.

                                                                                      1. 4

                                                                                        Did you mean to say “idiots in Rust” ? 😛

                                                                                        1. 4

                                                                                          No. The only idiot I see in Rust is me. I meant idioms.

                                                                                          1. 2

                                                                                            They’d be the ones out of habit trying to profile the garbage collection of their Rust app, gripe about standard library’s pervasive lack of referential transparency, or insist Simon Peyton Jones does a talk on every new feature. That I haven’t seen these people doesn’t mean they don’t exist. ;)

                                                                                          2. 3

                                                                                            If you’re interested in formal methods, may I recommend starting with Alloy? It’s a very simple, but very powerful, formal specification language in the style of Z or B.

                                                                                            1. 1

                                                                                              It’s on my laptop and I have Daniel Jackson’s book. Agda, Alloy and TLA+ are what I’m concentrating on.

                                                                                          1. 12

                                                                                            His post reminded me how good it was to be “alone”*. The most productive and meaningful work I did in my life was when my internet access and other resources were pretty limited.

                                                                                            During university I remember wget-ing entire documentation sets onto a 1.44 MB floppy to read and study over the weekend, because I didn’t have internet access. I’ve also implemented two important projects clean-room style, with no references other than the ones provided.

                                                                                            It’s on my TODO list to rent a hut in the woods, without internet or cellphone access, and take my concentration flow to the next level.

                                                                                            *By “alone” I mostly mean being offline and not reachable.

                                                                                            1. 4

                                                                                              My favorite development time is spent on buses, where I get a few hours of “leave me the fuck alone” and “internet connection too poor for anything but IRC and documentation lookup”.

                                                                                              I kinda want to do a long train ride in the US for similar purposes.

                                                                                              1. 5

                                                                                                My favorite development time is biking through town and taking extended forest walks. I think I fix most bugs offline.

                                                                                                1. 3

                                                                                                  That reminds me of the Amtrak writer’s retreat (EDIT - I guess it was called a “residency”) that they ran a while back: http://blog.amtrak.com/amtrak-residency/

                                                                                                  1. 2

                                                                                                    You can get that benefit in rural areas, too, if you don’t bring a smartphone with you. People often discuss the drawbacks of rural life: being isolated from jobs, few crowds, no good Internet, etc. But when you need to focus or relax, all that stuff being far away can make it easy.

                                                                                                    1. 1

                                                                                                      My experience wasn’t all that great for writing as I found it difficult to type without mistakes. Other than that, it was not a bad trip.

                                                                                                  1. 12

                                                                                                    I hadn’t heard about the current promotion practices before this article. Obviously, I’m not believing it until I hear from more Googlers, but it sounds believable given how normal businesses work. I found this one interesting since it (a) is more a talk than a full-on rant, (b) leads the author to understand what the business relationship really is, and (c) shows how promotion incentives create some of the worst problems in legacy tech. The cartoons were great, too.

                                                                                                    1. 26

                                                                                                      Yes, this is pretty accurate, both from my time at Google and from past public criticisms like this one from 2010:

                                                                                                      What if I tried to design a promotion system to piss off as many employees as possible? What characteristics would it have?

                                                                                                      • No pleasant surprises. In other words, you can only be disappointed if you didn’t get a promotion, you can’t be pleasantly surprised by a promotion.
                                                                                                      • Create unhappiness by dependence on scarce resources. In other words, gate promotions based on scarce resources so that even people who would otherwise be qualified could become disgruntled through no fault of their own.
                                                                                                      • Eliminate accountability from people who make the promotion decisions (e.g., through a committee). That way, promotion decisions can seem arbitrary.
                                                                                                      • Ensure that promotions are competitive races between all qualified candidates. This ensures that people who manipulate that packet in such a way as to have the best looking packets will win over people who are trying to get feedback and improve, which is supposedly the point behind all these feedback systems.

                                                                                                      When I looked at Google’s promotion system through this lens, I was very impressed. It seemed as though the system was designed to create disgruntled employees out of people who might otherwise be perfectly happy.

                                                                                                      1. 2

                                                                                                        After reading that, I had a thought - promotion schedules are very similar to games, and probably just as likely to be engaging and rewarding. That is - incredibly hit or miss and incredibly hard to perfect.

                                                                                                        1. 3

                                                                                                          In my experience, people who really win at those games recognize that it is, indeed, a game, and go find loopholes in the rules. For example, in medium-large companies, I’ve seen people who optimize for glorious notoriety get promoted much faster than people who optimize for actual quality work and “exceeding expectations”. I’m not the only one seeing that; I even did it accidentally once by becoming super friendly with my direct managers: things went super smoothly, I got stellar raises, and I got promoted extra quickly (with not much to show to warrant that promotion, frankly). I’m pretty sure I got that promotion just by asking for it and having good standing with my manager.

                                                                                                          I think the “upgrade by committee” might be a way to try and mitigate this kind of stuff, but I’d wager there are ways around that too. Games have rules, rules have loopholes.

                                                                                                      2. 10

                                                                                                        Just wanted to chime in and say thanks for linking this and the tl;dr. I saw this posted on the orange site and skipped it. You posting it here and taking the effort to summarize made me actually read through it - I don’t regret it.

                                                                                                        Great read, well deserved upvote! :)

                                                                                                        1. 3

                                                                                                          Thanks for the feedback. I’ve tried to be really selective about bringing articles, and sometimes comments, over here from there that y’all will like, without you having to read all the fluff. There was a ton of fluff today, but there were worthwhile comments and articles buried in it.

                                                                                                          Now we have the good article over here, an abstract, and a low-noise comment section. That’s how I plan to do them in the future, especially for rant-prone topics.

                                                                                                        2. 10

                                                                                                          I worked at Google. My experience was notably atypical, and I wasn’t there long enough to get a first-hand sense of how the promotion system works, but what he described rings true, and he captured Google culture well.

                                                                                                          Corporations in general (I don’t think Google is worse, in this regard, than any other) have a problem whereby they separate work into “male work” and “female work”. The stuff that causes pain downstream but makes the numbers look better and results in brag points one can defend in front of executives is “male work”. The actual upkeep of the business– resolving conflicts, defining culture, helping other people succeed, cleaning up messes, mentoring– is “female work” that executards don’t like to see people spending serious time on.

                                                                                                          OP spent too much time, from the promotion committee’s perspective, on female work that doesn’t directly affect the bottom line.

                                                                                                          I hope it’s obvious that I don’t wish to defend this mentality. In addition to the subconscious misogyny, it’s fascistic and short-sighted.

                                                                                                        1. 5

                                                                                                          I learned that I feel a lot better as a human being by going into an office. It helps separate personal and work lives, physically and mentally.