Threads for Tenzer

  1. 8

    I found this to be interesting reading, and it made me reflect on my latest career move, where I went from being a line SRE to becoming a staff DevOps engineer. With the change, my area of expertise is wider (which I enjoy) and I’ve got much more say over the direction we are going in - part of that is also down to the company size.

    1. 3

      Whenever you write data to S3, you basically need a plan for when and how it is going to be deleted again. It’s such a pain to clean up files in S3 retroactively that it should be avoided at all costs; otherwise you often end up with a massive, expensive bucket or a big semi-manual cleanup job.
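      One way to avoid that retroactive cleanup is to attach a lifecycle rule to the bucket up front. A minimal sketch with boto3 (the bucket name, rule ID and prefix below are placeholders):

```python
# Expire objects under a prefix after 30 days using an S3 lifecycle rule.
# The rule structure matches what boto3's
# put_bucket_lifecycle_configuration expects.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-tmp-after-30-days",
            "Filter": {"Prefix": "tmp/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }
    ]
}

def apply_lifecycle(bucket_name):
    # Requires boto3 and AWS credentials to actually run.
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name, LifecycleConfiguration=lifecycle
    )
```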

      1. 3

        See also https://sembr.org/.

        1. 1

          That’s terrific: thanks.

        1. 2

          I save them in a file, encrypt it and rename it to have a UUID as the file name. I then upload the file to B2 and S3 and store the UUID with the login for the site in my password manager.

          It at least means you need access to the storage service, the encryption key, and the ability to map the files to the specific sites (plus the usernames and passwords) in order to make use of the backup keys.
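          The UUID-naming step can be sketched in a few lines of Python (the function name and layout here are made up for illustration; the encryption itself isn’t shown):

```python
import shutil
import uuid
from pathlib import Path

def store_with_uuid_name(encrypted_file, dest_dir):
    """Copy an already-encrypted backup file into dest_dir under a random
    UUID file name, and return the UUID to record next to the site's
    login in the password manager."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    name = str(uuid.uuid4())
    shutil.copy2(encrypted_file, dest / name)
    return name
```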

          1. 8

            Tools exist that can help find and optimise struct orders for you. I just found https://github.com/orijtech/structslop.
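            The underlying issue is alignment padding, which isn’t Go-specific. A quick illustration of the same principle using Python’s ctypes (sizes assume a typical 64-bit platform):

```python
import ctypes

# A 1-byte field before an 8-byte field forces 7 bytes of padding, and
# the trailing 1-byte field pads the struct out to a multiple of 8.
class Unordered(ctypes.Structure):
    _fields_ = [("flag", ctypes.c_int8),
                ("count", ctypes.c_int64),
                ("kind", ctypes.c_int8)]

# Putting the widest field first lets the small fields share one slot.
class Ordered(ctypes.Structure):
    _fields_ = [("count", ctypes.c_int64),
                ("flag", ctypes.c_int8),
                ("kind", ctypes.c_int8)]
```

            On x86-64 this typically comes out to 24 bytes for Unordered vs. 16 for Ordered; structslop automates spotting reorderings like this in Go code.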

            1. 1

              Years ago I made a plugin for Sublime Text for doing some basic formatting of data structures you might print using print(). It’s nice not having to import extra things in order to get readable formatting, and sometimes you already get big Python dicts output in log lines which you’d like to be able to read more easily.

              The plugin works well enough for me to use it regularly, but it could do with better handling of stuff like datetime.datetime objects, as they currently get split over many lines. I just haven’t spent the time on making it handle cases like that better.
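              For comparison, the standard library’s pprint gets part of the way there without a plugin; what the plugin adds is applying this kind of formatting to text already sitting in the editor:

```python
from pprint import pformat

# pformat wraps nested structures at a configurable width, which is
# often enough to make a dict dumped into a log line readable again.
record = {"user": "tenzer", "tags": ["sre", "devops"],
          "meta": {"created": "2020-11-01", "active": True}}
print(pformat(record, width=40))
```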

              1. 8

                Which exacerbated another problem: uWSGI is so confusing. It’s amazing software, no doubt, but it ships with dozens and dozens of options you can tweak. So many options meant plenty of levers to twist around, but the lack of clear documentation meant that we were frequently left guessing the true intention of a given flag.

                This is the biggest drawback of uWSGI, to the point where we are looking at using something else in our stack. I personally have good experience with Gunicorn. It has good documentation and what seems like a reasonable feature set compared to uWSGI - which, for instance, has not just one but two different async task runner capabilities built in!

                I would have liked it if the original post had talked a bit about how the load on the system was balanced (IO vs. CPU), as that would have an impact on other people looking to benefit from the work they performed.
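                For reference, Gunicorn is configured with a plain Python file, which keeps the surface area small compared to uWSGI’s option sprawl. A minimal gunicorn.conf.py sketch (values are illustrative and should be tuned to whether the workload is IO- or CPU-bound):

```python
# gunicorn.conf.py
import multiprocessing

bind = "0.0.0.0:8000"
# Common rule of thumb for CPU-bound apps; IO-bound apps may instead
# want fewer processes with an async worker_class such as "gevent".
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "sync"
timeout = 30
accesslog = "-"  # write access logs to stdout
```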

                1. 10

                  I like hey for its simplicity.

                  1. 5

                    I love hey, but recently found it being unmaintained and switched to oha.

                  1. 5

                    I remember a few domains with A records at the top level; I think .dk had one at some point.

                    1. 3

                      Correct. You used to be able to type “dk.” in a browser and you would then be redirected to the website of DK Hostmaster who manages the .dk TLD.

                      I found it to be quite useful as a quick way to get to their site.

                    1. 2

                      I remember seeing Vector a while back but forgot about it. It seems very handy for having just one utility which can combine metrics collection/generation, like what Telegraf does, and logs collection, like Logstash/Filebeat/… does.

                      Are there any other tools out there which can handle both cases which are worth looking at? A use case I can think of right now is to use it in a sidecar container in a Kubernetes pod running Nginx, for collecting logs and generating metrics from them at the same time.

                      1. 3

                        I recently found out about https://www.benthos.dev which seems pretty similar to what Vector achieves. It’s written in Go (Vector is written in Rust).

                        But for your usecase, I think Vector can do the job, out of the box.

                        1. 1

                          Thanks! I’ll check it out.

                          1. 1

                            It turns out Vector and Benthos share a developer, who gave some background into what the priorities are for each of them here: https://github.com/Jeffail/benthos/issues/359#issuecomment-573438855.

                      1. 17

                        One note with the “Passing request to Consul services” section: when you use a DNS name in proxy_pass, that name is only ever resolved at startup. If the IP address the domain name points to could ever change you should define an upstream. Nginx only does normal DNS TTL / refresh in upstreams.
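                        For illustration, the variable form that forces re-resolution looks like this (the Consul service name and resolver address are placeholders):

```nginx
server {
    listen 80;

    # With a variable in proxy_pass, Nginx re-resolves the name per the
    # DNS TTL (capped by "valid") instead of freezing the IP at startup.
    resolver 127.0.0.1:8600 valid=10s;

    location / {
        set $backend "http://service.consul:8080";
        proxy_pass $backend;
    }
}
```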

                        1. 6

                          This is rarely mentioned and you learn it the hard way. @alexdzyoba I think it’s a good thing to add to your great article :)

                          1. 4

                            I have previously documented a workaround for this (although, really the open source version should just support this): https://tenzer.dk/nginx-with-dynamic-upstreams/.

                            1. 1

                              Ah yes, I had seen the variable hack once before! Both the variable hack and upstreams need the resolver set, so both will do the trick.

                            2. 2

                              Thanks, everyone! I’ve updated my post and put in links to this thread and the post by Tenzer.

                              1. 2

                                Thanks for pointing that out! But what about the “valid” option in the resolver directive? It should control the TTL for the DNS cache.

                                1. 2

                                  It does, but proxy_pass doesn’t use DNS TTL, it only ever resolves once, unless you use a variable or an upstream. The Nginx docs aren’t very clear, but the resolver setting says it’s used for upstreams.

                              1. 5

                                I know the point of the post is to show how you could go about working on an unfamiliar code base, but I thought I would mention that Nginx is already able to return a plain string in a response using the return directive: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#return.

                                Perhaps it would be interesting as a follow up to see how it implements the functionality and if it uses some higher-level methods to do it?
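                                For anyone curious, the directive is a one-liner (sketch):

```nginx
location /ping {
    # ngx_http_rewrite_module's return: a status code plus optional body
    return 200 "pong\n";
}
```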

                                1. 1

                                  Oh nice, thanks! I should have remembered this. Yes that could be good to look into.

                                1. 3

                                  I miss working with Sun hardware. They were things of beauty and care and really good design.

                                  I had a bunch of the V245’s predecessors, the V240s (as well as V210, V490 and probably some others I’m forgetting), which were purple-fronted. That purple front pulled down to reveal the drive bays and the key slot for locking the machine on or off. The lid was hinged in such a way that you could lift just the front few inches to access the intake fan array and hot-swap them. Lifting the entire lid off revealed a very well laid out machine. Everything was in its right place, easy to access, with a diagram of how to remove covering parts or screws to access things.

                                  In contrast I often found HP ProLiant machines a cramped and tangled mess.

                                  Working on Sun machines was much like working on Apple machines. The software and hardware were in pretty good harmony. Lights-out access was great (especially compared to HP ILO, which requires a license).

                                  A couple of servers we had were similar to this V245 - the T2000 - which we also installed graphics cards in (because my boss was weird and insisted that the management servers could be used as local desktops in the DC if necessary), so we had the full CDE experience locally on a server :-)

                                  1. 2

                                    One of my first career jobs started as working on the Sun hardware you described doing hardware maintenance. Drive and PSU swaps, memory/CPU replacements, etc. I always liked working on them, the colors and design gave them personality and always looked cool lined up in racks in data centers.

                                    There are some days I miss that type of work; getting the morning email with failures, checking in-house stock and putting in warranty claims to get parts we didn’t have on hand, then driving around the rest of the day to the different DCs doing the repairs. Wasn’t exciting or really challenging, but made me appreciate how much work goes into keeping the hardware layer online.

                                    1. 2

                                      Sun servers were always nice to work with. The clear lime green tabs on everything made it easy to work out what you were supposed to lift/press.

                                      While I’ve mostly got experience with the cheaper X2100 and X2200s (x86), I also got to play with the nicer T5120 and T5240 SPARC servers. They were beauties and a joy compared to SuperMicro servers and, to a lesser extent, Fujitsu.

                                      1. 1

                                        and, to a lesser extent, Fujitsu.

                                        Which is actually kinda funny because some of Sun’s servers were rebranded Fujitsu machines. The M-series were all Fujitsu (you could buy the same server as Sun-branded or Fujitsu-branded) and were just as nice to work with and had hot-swappable everything.

                                    1. 1

                                      It looks like the Lobsters Twitter bot has stopped posting stories. Might it be related to this?

                                      1. 1

                                        I am wondering if it would be possible to just dump the target and source records into one of the new breed of so-called authenticated databases, e.g. [1].

                                        If the data is inserted in the same order, the root hash would be the same, if the records were, indeed, identical (or at least this is my understanding).

                                        [1] https://github.com/hoytech/quadrable
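                                        A toy version of that order-sensitive fingerprinting idea (this only chains hashes; a real authenticated database like Quadrable builds a Merkle tree, so it can also prove which records differ):

```python
import hashlib

def root_hash(records):
    """Chain-hash records in insertion order: two sources that contain
    identical records in identical order produce identical digests."""
    h = hashlib.sha256()
    for record in records:
        h.update(hashlib.sha256(record).digest())
    return h.hexdigest()
```

                                        Comparing root_hash of the target and source record dumps tells you whether they match, though this simplified version can’t tell you which records differ.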

                                        1. 1

                                          That would have the downside that all the data would have to both be read from disk and sent over the network, which would slow it down.

                                          I’m also not sure how well it would work for the use case of figuring out what records are different.

                                        1. 2

                                          I also used to be a person who would update apps manually, just so I could read the changes and know what new things to expect from each update. I don’t do that anymore, however, due to this exact problem. There’s just no point in doing it any more.

                                          The worst part is that I’ve seen this pattern reused in changelogs on open source projects as well. I can’t remember which one it was right now, otherwise I would have provided a link.

                                          1. 5

                                            I recently bought a Logitech StreamCam (1080p60) with a good f/2.0 aperture and very good image and sound quality. While browsing the market, it seemed to be the only sensible choice (who needs 4K when no service transmits at that resolution?). Much more important to me than image quality is a high framerate (i.e. 60 FPS). The image quality of the StreamCam is really good and it works natively with the basic UVC drivers in Linux, even though there still seems to exist a race condition in its firmware that will probably be fixed soon.

                                            Anyway, I’d pick that if anyone asked me for a recommendation. The next step basically seems to be to use a DSLR and a USB-HDMI-streamer, but this is just overkill for meetings. I’d think about it if I was a high-ranking executive in a company being in meetings all day or something.

                                            1. 3

                                              Anyway, I’d pick that if anyone asked me for a recommendation. The next step basically seems to be to use a DSLR and a USB-HDMI-streamer, but this is just overkill for meetings. I’d think about it if I was a high-ranking executive in a company being in meetings all day or something.

                                              Some Canon cameras have a webcam driver, and I’ve been using it on and off (with a Rebel T6) since it came out earlier this year. The trade-off is you get great optics (I love how I look in a wide-angle lens, and a big lens makes the lighting vs. sensitivity & shutter speed trade-off less pronounced), but the weight of those optics (450g body + 385g lens) makes it less flexible to position than a normal webcam (Logitech C920 is 162g) on an articulated arm. I have done a few calls that really benefit from a big studio-type setup where I can spend time setting it up, but the daily or weekly meetings don’t really need that.

                                              1. 2

                                                I use a 2012 vintage Canon EOS M (which I picked up off Ebay for ~$150) and a cheap 1080p USB HDMI capture card. I was drawn to the EOS M because of its great support for Magic Lantern (an Open Source camera firmware) which allows for clean HDMI output.

                                                1. 1

                                                  Oh wow, that’s a very cute camera :3

                                                  Mine sadly doesn’t have Magic Lantern yet, and no clean HDMI either.

                                                2. 1

                                                  Is that by using the EOS Webcam Utility or do the newer models come with webcam support built in?

                                                  I’ve previously tried the webcam utility beta version with a camera when it came out (can’t remember if it was the 6D or 550D/Rebel T2i I used) and it wasn’t really usable since there was a very noticeable delay in the video input which wasn’t the case for the audio (it came directly into the laptop).

                                                  I didn’t look into ways of delaying the audio to sync it up with the video, but I guess it could have been usable if I got that worked out.

                                                  1. 1

                                                    It’s the webcam utility, yeah. I didn’t notice any difference in latency between the camera and a USB mic or Bluetooth headphones.

                                              1. 19

                                                OK, I did science to it.

                                                I’ve got a 2019 MBP 16” with its built-in display set to HiDPI 720p, plus a pair of external 4K displays each at 60Hz, which is a decent amount of baseline work for the WindowServer process.

                                                In this experiment I observed the temperature, fan speeds, and 1m load average with iStat Menus. I observed the WindowServer CPU usage with Activity Monitor (set to 5s between updates). I took notes in Tot. Besides that, my work environment includes some background tools I left alone, and I ran no other user-facing applications besides Finder.

                                                First I restarted my computer and waited for its temperature and fans to settle, which I speculated would finish off any at-boot work. I then rebooted again, just to be sure any at-boot once-per-day work would not interfere with measurements.

                                                In this first trial, in Activity Monitor, I did see GoogleSoftwareUpdate start and finish its work. I waited for the temperature and fan to settle, and then found WindowServer CPU usage varying in a range of about 6%-9% over five samples.

                                                For a second trial, I followed these instructions to prevent launchd from running Keystone. I then rebooted again and followed the same procedure. This time I did not see GoogleSoftwareUpdate running in Activity Monitor. When the system settled, it settled to the same temperature, about the same fan speed, and about the same load averages. WindowServer CPU usage was again in the 6%-9% range over five samples.

                                                For a third trial, I followed Brichter’s instructions to remove both Chrome and Keystone. (I regret not capturing any version numbers, but I know I had made sure Chrome was current sometime this past workweek.) Again, the system settled at about the same measurements as before, and WindowServer CPU usage was in the 6%-9% range.


                                                In my measurements on an idle Mac after boot, removing Keystone and Chrome did not produce an observable difference in WindowServer CPU usage.

                                                Several people on Brichter’s Twitter thread have reported that they do see a difference. But those people have said, in the first place, they saw high WindowServer CPU usage before removing Chrome, and I did not.

                                                I can only imagine this is a “now you see it, now you don’t” kind of bug. It’s probably not malicious, but if others confirm it, it should be fixed.

                                                1. 5

                                                  Thanks for doing such a thorough test. I saw the responses from people on Twitter saying that it helped them, so I thought I would share the page. I’ve yet to do any testing on my own machine to see if it makes a difference.

                                                  1. 5

                                                    My pleasure. Oh, this isn’t a refutation of your post; I don’t doubt the problem is real. Just trying to locate it.

                                                1. 2

                                                  For work I’m optimising our CI and deployment pipelines to make them faster.

                                                  Outside of work I’ve been working on writing a Prometheus exporter for my Google Nest thermostat. They changed the API available for it earlier in the year (and started charging $5 for access to it), so none of the existing exporters work anymore, hence I have to write a new one.
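                                                  The exporter side is mostly about serving Prometheus’ text exposition format on /metrics; a sketch of the kind of output involved (the metric names and values here are made up, and the real exporter has to pull readings from the new Nest API first):

```python
def render_metrics(temperature_c, target_c):
    # Prometheus text exposition format: HELP/TYPE comment lines
    # followed by one "name value" sample line per metric.
    lines = [
        "# HELP nest_temperature_celsius Current ambient temperature.",
        "# TYPE nest_temperature_celsius gauge",
        f"nest_temperature_celsius {temperature_c}",
        "# HELP nest_target_temperature_celsius Thermostat set point.",
        "# TYPE nest_target_temperature_celsius gauge",
        f"nest_target_temperature_celsius {target_c}",
    ]
    return "\n".join(lines) + "\n"
```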