Threads for pondidum

  1. 3

    The only thing that I find a bit misleading about this tool is the examples given for file size changes; in the Go version it compares against the golang:1.15 image with the application built in, rather than a multistage build which is just a base container and the built binary.

    1. 5

      My company (reaktor.com) always gives feedback on interviews, and from what I’ve seen and heard, applicants really appreciate it.

      It helps show how we value honesty; there is no hiding behind vague rejections.

      1. 10

        My personal progression for systems is to start with something like Heroku, and only when that starts being insufficient, move to something like containers-as-a-service (ECS etc.), and when that starts being insufficient, move towards something like Nomad on EC2.

        There’s a lot to be said for using Packer to make AMIs and rolling them in an ASG too.

        1. 6

          Agree - I feel like prescribing ECS as an entry-level tier is really overkill if you’re just a small team trying to find product-market fit. Also, it’s not like Heroku is in any way a dead end; there’s very little lock-in (except for maybe an addiction to convenience) so if you ever want to migrate to Docker you just add a few more lines to your Procfile and rename it to Dockerfile.

          Last time I used ECS it still required quite a bit of configuration to get up and running (configuring an ingress, availability zones, RDS, ECR, setting up CI, etc.). Also, there were a few common use cases that were surprisingly difficult to set up, such as running a one-off command on deploy (e.g. for triggering database migrations) or getting logs shipped to Kibana, things which can be done with literally a single line of config or a few clicks of the mouse on Heroku.

          TBH I’d rather run on regular compute nodes on something like Digital Ocean and deploy with Ansible than use ECS. Kubernetes and ECS feel like solutions to the problems of managing a compute fleet; most people don’t have a compute fleet, but by using Kubernetes they get the same level of complexity anyway.

          1. 4

            My biggest difficulty with Heroku has been the pricing; at least for our use cases it felt pretty intense (but maybe I wasn’t properly comparing it with our existing cloud bill).

            I mean, I think the service is truly amazing, but it’s a tough sell sometimes.

            1. 1

              Agreed - Heroku is really expensive once you go beyond the starter tiers. ECS and Kubernetes may be cheaper on paper, but what you’re really doing is trading hosting fees for man-hours. At a certain point that trade makes perfect sense (when you’ve got enough manpower), but I’ve seen several instances of people making the switch without realizing how costly it would be to do their own SRE.

            2. 2

              I’ve always been surprised that the Packer -> AMI -> ASG approach isn’t more popular. (I mean, I get why it isn’t.) It can take you really, really far. There was even a (short-lived) startup built around the idea, Skyliner. It’s not very efficient from a bin-packing/k8s-scheduling POV, but efficiency is not a high priority until you are at a scale where margins cause you real pain. So we’re in a place today where it is under-theorized, under-documented, and under-tooled as an ops discipline, compared to more complex alternatives. Too bad; it’s the best medium-scale approach I know about.

            1. 3

              Alternatively, distributed tracing (even without the distribution) is a great replacement for logging. The same information from your logs can be stuffed into a span; you still have the information locally, and when you deploy your app, that same information is available to help troubleshoot production.

              Not to mention, setting typed properties (e.g. found_user: true) makes the information smaller (in bytes), more filterable (where found_user = false vs text-searching for a missing log line), and richer (it’s part of a trace of one action in a process, or of multiple processes) than text logs.

              So while I think console.log logging can be helpful, it would be better to have tracing set up so that your debug data is more useful, imo.
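
              As a concrete sketch of the above in Go, using the standard OpenTelemetry packages (the tracer name and the lookup function here are placeholders of mine, not from any particular app):

                  package main

                  import (
                      "context"

                      "go.opentelemetry.io/otel"
                      "go.opentelemetry.io/otel/attribute"
                  )

                  // findUser records its outcome as typed span attributes
                  // instead of writing a log line.
                  func findUser(ctx context.Context, id string) bool {
                      _, span := otel.Tracer("app").Start(ctx, "findUser")
                      defer span.End()

                      found := lookup(id) // hypothetical lookup

                      span.SetAttributes(
                          attribute.Bool("found_user", found), // filterable: found_user = false
                          attribute.String("user_id", id),
                      )
                      return found
                  }

                  // lookup stands in for a real user query.
                  func lookup(id string) bool { return id != "" }

                  func main() { findUser(context.Background(), "42") }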

              1. 2

                What is this “distributed tracing” thing? The only links I found were something about microservices.

                1. 1

                  Honeycomb is a good implementation of distributed tracing.

                  1. 1

                    Not surprising; it is a really helpful thing to have in that space. But you don’t have to have microservices: distributed tracing is handy even in standard 3-layer architectures. I picture it as the child of the debugger and the logs. It’s especially powerful combined with APMs.

                    You can look into Jaeger, Zipkin, and OpenTelemetry. Or commercially there’s a ton, from New Relic to Epsagon.

                1. 6

                  Htmx, to me, seems like a simpler version of Hotwire/Turbolinks/etc., so I am interested in giving it a try soon.

                  1. 4

                    Ya - it’s the successor to intercooler.js, which predates Hotwire/Turbolinks/etc.

                    I loathe doing most frontend work, but when something I’m doing calls for dynamism, htmx (and formerly intercooler) is what I reach for for simple stuff.

                    1. 1

                      it’s very nice, you should try it if you can

                    1. 2

                      Although it doesn’t compare against Postgres, maybe this paper comparing Neo4j with two commercial RDBMSs could be interesting? http://ceur-ws.org/Vol-1810/GraphQ_paper_01.pdf

                      It also contains some pointers to other benchmark papers at the end.

                      1. 1

                        This is the paper I was looking for! Thank you so much!

                        As you say, not Postgres; I probably muddled that with something else about Postgres (possibly the comparison-to-MongoDB presentation, thinking about it).

                        1. 1

                          You’re welcome! I just noticed that “RDBMS A” and “RDBMS B” do not stand for two different commercial RDBMSs, but rather two different ways to organize edges in the same RDBMS.

                      1. 9

                        https://tls.ulfheim.net/ - TLS explained (not quite a full app, but still)

                        https://devdocs.io/ - offline searchable language reference docs

                        https://app.diagrams.net/ - diagram drawing tool

                        https://nginx.viraptor.info/ - nginx location matching tester (mine)

                        1. 1

                          The nginx one looks really useful, thanks!

                          1. 1

                            whoa, that diagram tool is very impressive

                          1. 2

                            I remember skimming it but can’t seem to find it now either, sorry. FWIW though, YMMV, and just my opinion, etc., but: most of the value I get from using Neo4j a lot is not its raw speed, though that’s very good for my purposes, but natively using Cypher/CQL. I find it so much more natural than SQL & tables for modelling not only long chains of relationships but in fact pretty much any relational data, and it’s plenty fast enough. IMHO, etc.

                            1. 2

                              Interesting! Cypher is on my list of things to have a look at, as I’ve only seen it in a book about Neo4j, and not had any practical experience with it.

                              Having said that, most services I interact with have Postgres (or similar) as their primary datastore, so when we need to do something, I’d rather try and not introduce a new datastore unless it’s actually necessary - hence looking for this paper again!

                              1. 2

                                I guess you guys are aware of https://github.com/apache/incubator-age ?

                                1. 2

                                  I was not aware, thank you so much!

                                  1. 2

                                    V interesting, thanks. I’d seen AgensGraph before and if this supports all that AG does but as a Postgres extension rather than a fork that would be awesome. I don’t think it can support all the apoc.* procedures that make Neo4j so comprehensive though so I wouldn’t be able to port existing projects to it - but for new projects or ones which require multi-model or mostly straight-up graph querying then it could be really powerful given how widespread and operationally strong PG is. Nice one.

                                  2. 1

                                    Fair enough!

                                1. 2

                                  I don’t think this is the exact paper you’re looking for, but a paper in a similar theme is “Scalability! But at what COST?”, a perennial favorite with my team. It is a fairly snarky exploration of how the pursuit of “scalability” for its own sake has meant many distributed processing systems are in fact slower than a reasonably fast naive implementation running on a single thread.

                                  https://www.usenix.org/system/files/conference/hotos15/hotos15-paper-mcsherry.pdf

                                  1. 2

                                    Yes, this is a good paper!

                                  1. 7

                                    I like the concept of the events described in this post, but they feel closer to audit events to me, rather than the kind of data I used to want from logging. I say used to want, as for the last year or so I have been removing/replacing logging with OpenTelemetry Tracing, which gives me all the information logs would give me, along with call graphs, filterable properties, method timings, and the ability to visualise in multiple tools easily.

                                    Depending on the project, I have used Honeycomb, New Relic, Zipkin, and Jaeger, to name a few. They all have their own strengths and weaknesses, but the fact that they can all ingest the same data is a massive benefit to me.

                                    I really can’t see myself wanting to go back to using logs (structured or not) over this.

                                    1. 7

                                      From experience, you cannot rely on URL discovery to constrain the operations a client performs.

                                      If you return URLs like /deposit/account_id/amount in a particular response, you can expect developers to construct those, even if your documentation explicitly says you have to chase through previous steps, have them returned to you, and then follow them.

                                      I.e. developers don’t like walking multiple requests to discover which operations are currently permissible and their associated URLs; they destructure them to find semantic meaning and then use them on the fly.

                                      I think the only way around this would be to have opaque URLs (/opaque/some-uuid) which must be discovered by a client, and are mapped into the semantic space by some other layer. This seems like it doesn’t add value.

                                      1. 2

                                        I once had to write an API for consumption where constructing URLs wasn’t allowed, but obviously people did it anyway.

                                        So I wrote a middleware which, for every URL in the application, would rewrite responses to use GUIDs/hashes/random data for parts of the URL. It kept a cache in memory so you could hit the same nonsense URL repeatedly, but the cache was cleared nightly and on app restart.

                                        As long as consumers followed from the root document, they had no problems. And the ones which complained were politely pointed to the contract they signed about how the API was to be used.
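
                                        A rough sketch of that idea in Go (my own reconstruction, not the actual middleware): hand out random tokens in place of real paths, map them back on the way in, and clearing the map breaks anyone who constructed URLs by hand.

                                            package main

                                            import (
                                                "crypto/rand"
                                                "encoding/hex"
                                                "net/http"
                                                "sync"
                                            )

                                            // toReal maps opaque tokens back to real paths.
                                            var (
                                                mu     sync.RWMutex
                                                toReal = map[string]string{}
                                            )

                                            // tokenize returns an opaque path standing in for a real one,
                                            // for use when rewriting URLs in outgoing responses.
                                            func tokenize(realPath string) string {
                                                b := make([]byte, 16)
                                                rand.Read(b)
                                                token := "/opaque/" + hex.EncodeToString(b)
                                                mu.Lock()
                                                toReal[token] = realPath
                                                mu.Unlock()
                                                return token
                                            }

                                            // middleware resolves opaque paths to real ones before routing.
                                            func middleware(next http.Handler) http.Handler {
                                                return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                                                    mu.RLock()
                                                    realPath, ok := toReal[r.URL.Path]
                                                    mu.RUnlock()
                                                    if ok {
                                                        r.URL.Path = realPath
                                                    }
                                                    next.ServeHTTP(w, r)
                                                })
                                            }

                                            func main() {
                                                handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                                                    // Responses would advertise tokenize()d URLs here.
                                                    w.Write([]byte(r.URL.Path))
                                                })
                                                http.ListenAndServe(":8080", middleware(handler))
                                            }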

                                      1. 2

                                        I like the Dockerfile pattern there for Go, copying just the go.mod and go.sum files, and downloading those, before copying the rest of the source, to take proper advantage of Docker layer caching when iterating on source during development.

                                        That’s a definite “duh, why aren’t I doing this?” moment.
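
                                        For reference, a minimal sketch of that pattern (the image tags and paths here are illustrative, not the article’s exact Dockerfile):

                                            FROM golang:1.15 AS build
                                            WORKDIR /app
                                            # Module files first: this layer, and the download below,
                                            # stay cached until go.mod or go.sum actually change.
                                            COPY go.mod go.sum ./
                                            RUN go mod download
                                            # Source edits only invalidate the layers from here down.
                                            COPY . .
                                            RUN CGO_ENABLED=0 go build -o /main .

                                            FROM scratch
                                            COPY --from=build /main /main
                                            ENTRYPOINT ["/main"]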

                                        1. 4

                                          Go already has a cache for incremental builds and downloads, and builds are reproducible. Using Docker layers in this case is redundant and is a major slowdown whenever some core input changes (e.g. go.mod and go.sum). That is especially true for large projects with lots of generated code since updating a single dependency will re-download everything and then rebuild the whole project from scratch.

                                          Instead, with modern Docker releases, you can let Go manage its own cache.

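                                          # The cache mounts persist Go's build and module caches across image
                                          # builds, so only changed packages are recompiled or re-downloaded.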
                                          RUN \
                                              --mount=type=cache,id=gocache,target=/root/.cache/go-build \
                                              --mount=type=cache,id=gomodcache,target=/go/pkg/mod \
                                            go install ./cmd/...
                                          

                                          I’d also suggest using gcr.io/distroless/static as a base for the final image unless you need a full-blown distro.

                                          1. 2

                                            Those are some great points, especially the distroless usage.

                                            With regard to the cache mount type, I have to manage that cache between build agents to make it useful, whereas with a multistage build I can push the intermediate stages after a build, and pull them before the next build to get a populated cache.

                                            I had problems doing this with BuildKit before, but I should revisit it again soon.

                                          2. 2

                                            Thanks!

                                            I spend a lot of time in various companies fixing their Dockerfiles for speed/size/content. I originally did this split with Node.js containers… then I realised I could make my Go ones significantly faster when not changing deps too :)

                                          1. 2

                                            This actually gave me the little shove needed to go and write (and publish!) the blog post idea which has been knocking around in my head for the last few days!

                                            1. 2

                                              For whatever reason, it was the Casio calculators that were popular in the UK. The only language for programming was the normal calculator language - no BASIC or assembly. There were only 422 bytes of memory on Casio’s original model, but you could do more with that than would be possible in the same amount of assembly. There were weird tricks for optimising programmes for size. I had Connect 4 working in barely more than 100 bytes.

                                              1. 3

                                                The TI calculators were on the approved list for GCSE and A-level maths in the ’90s. The vast majority of the functionality of the calculator was completely irrelevant to (or actively unhelpful for) the course. It could solve quadratic equations via a repeated approximation method, which was useful for checking your answers, but pretty much everything else was useless.

                                                I had a TI-86 (which was like the TI-85 but with more RAM and, for some reason, much slower screen updates). I never really got into the whole coding-on-the-calculator thing. I had had a Psion Series 3 for several years by the time I got the TI-86 and the Psion ran a full multitasking OS and came with a decent keyboard for writing code on the device. I wrote a load of programs on that machine and the TI calculator seemed quite clunky in comparison.

                                                The XKCD in the article really resonated with me. I can run GNU Octave on a bargain basement Android phone and have something vastly more powerful than the TI calculator for less money. They seem to make their money from the fact that exams place an upper limit on what a calculator can do in your exam. This, in turn, annoys me because it’s pretty clear that the exams aren’t actually measuring useful skills.

                                                I spent a year in maths lessons going from being able to solve differential equations in a thousand times as long as it would take a computer to being able to solve them in a hundred times as long. If I ever need to solve a differential equation now, I’ll reach for Matlab / Mathematica / Octave, because then I’ll definitely get the right answer, and even installing the tool from scratch to solve a single equation will probably take less time than solving it on paper. Being able to construct the right equation to represent a problem is a useful skill. Being able to understand the underlying theory that leads to things like the chain rule is useful in some contexts (though that isn’t actually taught until university in the UK: at school they’re just magic recipes). Being able to run an algorithm slowly with a piece of paper and a pencil is as useful as being able to write with a quill pen: it may make you some friends at a historical re-enactment event but that’s about it.

                                                1. 1

                                                  I remember getting programs written in C onto my Casio calculator (maybe a 9750? It’s in my storage somewhere still). I was an active member of the casiocalc.org forum, and vaguely recall one of the French members teaching me how to do the C programs on the calculator. All I can recall is that it was done on the computer, with a link cable (which was handmade).

                                                  That must have been in 1999 or so, based on which school I was in. Now I feel old.

                                                1. 4

                                                  First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.

                                                  It’s not putting binaries in /app. It’s a multistage Docker build: the first stage puts the source in /app, and the second stage takes the compiled binary and puts it in the default workdir of the scratch image, which I guess is /, which is still not ideal, I suppose.

                                                  1. 1

                                                    Trunk-based development works if

                                                    1. You assume your users will always be using the latest release of your software AND/OR
                                                    2. You will never support (bugfix) older releases of your software

                                                    If you need to support different versions of your software, feature branches are very useful.

                                                    1. 2

                                                      For 1, I would solve it with feature flags, but 2 is a legitimate concern, I think.

                                                      Supporting different versions with feature branches sounds much like long-lived branches, which I would avoid, again solving with feature flags.
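
                                                      At its simplest, a feature flag is just a guarded code path; a tiny hypothetical sketch in Go:

                                                          package main

                                                          import (
                                                              "fmt"
                                                              "os"
                                                          )

                                                          // enabled reads a flag from the environment; a real system
                                                          // would use a config store or a flag service instead.
                                                          func enabled(name string) bool {
                                                              return os.Getenv("FLAG_"+name) == "true"
                                                          }

                                                          func main() {
                                                              if enabled("NEW_CHECKOUT") {
                                                                  fmt.Println("new checkout flow")
                                                              } else {
                                                                  fmt.Println("old checkout flow")
                                                              }
                                                          }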

                                                    1. 33

                                                        At a previous job, one of the other developers and I used to have a sort of competition over who could remove the most lines of code per week, while doing normal work.

                                                      I remember winning by 1.2 million lines one week when I found a second, unused, copy of a service embedded in another service’s repo.

                                                      I do kind of miss that friendly competition

                                                      1. 8

                                                        I do kind of miss that friendly competition

                                                        Now I’m imagining that the competition turned unfriendly and your coworker began to delete production services in order to beat your line count…

                                                        1. 11

                                                          If no one complained, give this guy a medal!

                                                          1. 7

                                                            This is also known as the “scream test” when used to measure whether or not a given thing was in use at all. ❤️

                                                            1. 2

                                                              Sooooo … how long do you wait before declaring an absence of screams?

                                                              (Quietly considers EOFY processes that run once per year …)

                                                          2. 2

                                                            If production kept operating correctly…why not? :)

                                                            1. 6

                                                                Delete your backup system and production will keep operating correctly.

                                                              1. 6

                                                                If that doesn’t raise any alerts, then it wasn’t operating correctly in the first place.

                                                                1. 12

                                                                  Delete the alerts

                                                          3. 6

                                                            I once worked in an office with a similar culture. The competition ended when someone found a header file containing a dictionary as a C array in some test code. They replaced it with a short function that reads the dictionary that comes with the distro, and removed 700,000 lines.

                                                            1. 6

                                                              My favourite kinds of diff to review:

                                                              1. More features, less code
                                                              2. No functionality change, less code
                                                              3. More features, same amount of code

                                                              One of the reasons that I hate C so much is that it’s incredibly rare to see diffs of any of these kinds in a language that doesn’t provide rich types. It’s common in C++ to review diffs that move duplicated code into a template or lambda (you may end up with the same amount of object code at the end, but you have less source to review and bugs are easier to fix).
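
                                                              In Go terms, the same kind of diff: replacing duplicated per-type helpers with one generic function (a hypothetical sketch, not from any real codebase):

                                                                  package main

                                                                  import "fmt"

                                                                  // Before: smallerInt(a, b int) and smallerFloat(a, b float64),
                                                                  // duplicated per type. After: one generic function, less code
                                                                  // to review, and a fix lands in exactly one place.
                                                                  func smaller[T int | float64](a, b T) T {
                                                                      if a < b {
                                                                          return a
                                                                      }
                                                                      return b
                                                                  }

                                                                  func main() {
                                                                      fmt.Println(smaller(1, 2), smaller(2.5, 1.5))
                                                                  }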

                                                              1. 1

                                                                Hehe, we had that at a previous job as well, but the highest score was in the thousands; 1.2 million is awesome, hahahaha

                                                              1. 20

                                                                While I am not a systemd fan, I have to say this was a very useful and well-written article.

                                                                The LoadCredential= option will be especially useful to me, thanks!

                                                                1. 5

                                                                  It’s always nice to see recommendations for structured logging over random string messages, but for new projects I’d rather lean on OpenTelemetry tracing/spans instead.

                                                                  1. 3

                                                                    The best interview I had was also at my current workplace.

                                                                    • informal chat/culture
                                                                    • emailed a small take home task (a few hours max)
                                                                    • technical interview which was reviewing the take home work together and some open ended questions

                                                                    All in all, I really liked how it was done. There’s only a few minor things I would have changed, but we are constantly improving and refining the process to try and remove bias and make it better for all parties involved.

                                                                    1. 2

                                                                      Small take home task combined with open-ended questions based on how the candidate designed and implemented the solution is, in my opinion, the best interview format.

                                                                      It is an opportunity for both the interviewer and interviewee to learn and discuss the pros and cons of a given design. If done right, both parties come out feeling that the time spent was valuable, even if it does not lead to an offer.

                                                                      1. 1

                                                                        My favourite is similar - small job-relevant take home task, followed by in person refactoring of that task (‘the requirements changed’), then some open ended questions. It actually doesn’t take all that long compared to what some companies do, but I think it gets you to the knowledge you need quickly.

                                                                      You have to sell people on the role, though, or they tend not to be prepared to do a take home task.