1. 49

  2. 15

    Congrats and great work. I’d also like to highlight the pull request, as it’s a nice read in its own right!

    edit: @pushcx do you have stats pre & post update? I’m curious what impact this had on system load and performance if you have metrics for that.

    1. 2

      Didn’t track any stats beyond what alynpost linked already, no.

    2. 11

      Congratulations and thank you, this is excellent work and your write-up is exquisite.

      It’s recommended that after enabling each option you restart the server, and in some cases give it some time before continuing the upgrade to make sure everything is OK. Normally this wouldn’t have been as bad as it was, but I didn’t have access to the server and merges needed to be approved each time.

      It turned out you didn’t need this, but should you or anyone else want to do integration testing for Ruby upgrades or feature development, a test system is available. I’m happy to add accounts to it for work like this and help you through bootstrapping.

      1. 5

        Three small things I’d add: 1. thomas0 was not previously a user of Lobsters; 2. I mailed him a handful of 2” Lobsters stickers to say thanks; 3. issue 509 is maybe the best overview as we broke this down into multiple PRs to review/deploy fewer moving parts at one time.

        1. 1

          Nice! What did the stickers look like?

          1. 2

            They look like the site logo in the upper-left of every page, but more tangible. I added a description to the about page recently in the hopes that someone suggests a service that’ll print and ship them.

        2. 3

          Congratulations and thank you very much for this awesome contribution!

          1. 3

            It would be interesting to read about how the site is deployed and some performance stats. Is any of that published anywhere?

            1. 4

              The site is deployed via an Ansible playbook. Can you say more about what you mean by performance stats? In June 2018 pushcx published activity stats, and in August 2018 I gave a presentation that went into that activity in more detail. Is that what you’re looking for, or did you have something else in mind?
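
              For context, a deploy step of that kind might look roughly like the following Ansible task sketch (paths and the service name here are my illustrative assumptions, not the actual lobsters-ansible contents):

              ```yaml
              # Hypothetical sketch of a Rails deploy task list; paths and service name are assumed.
              - name: Check out the application
                git:
                  repo: https://github.com/lobsters/lobsters.git
                  dest: /srv/lobsters
                  version: master

              - name: Install gems
                command: bundle install --deployment
                args:
                  chdir: /srv/lobsters

              - name: Run database migrations
                command: bundle exec rake db:migrate
                args:
                  chdir: /srv/lobsters

              - name: Restart the application server
                service:
                  name: lobsters
                  state: restarted
              ```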

              1. 3

                I would be interested in seeing CPU/mem stats from before and after the 5.2 migration. Pure curiosity!

                1. 2

                  Alas, I don’t have any quantitative data to share here. I’ve been able to solve memory pressure by sampling and making plausible guesses from there. I can say that the memory the Ruby work queue uses is load dependent: over the day the memory size of that process set grows and shrinks based on how busy the site is.

                  That doesn’t really answer your question though. If we were going to collect memory timeseries data, would you recommend any particular tool? collectd?
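
                  For what it’s worth, even without collectd a crude timeseries can be had by periodically sampling `ps`. A minimal Ruby sketch, where the function names and CSV layout are mine and not anything in the lobsters repo:

                  ```ruby
                  # Hypothetical sketch (all names are mine): turn `ps -o pid=,rss=` output
                  # into [pid, rss_in_kib] pairs, and append them as CSV rows with a timestamp.
                  def parse_ps_rss(ps_output)
                    ps_output.each_line.map { |line| line.split.first(2).map(&:to_i) }
                  end

                  def record_sample(io, ps_output, now = Time.now.to_i)
                    parse_ps_rss(ps_output).each { |pid, rss| io.puts "#{now},#{pid},#{rss}" }
                  end

                  # Usage (would sample the Ruby processes once a minute):
                  #   loop do
                  #     record_sample(File.open("ruby_rss.csv", "a"), `ps -C ruby -o pid=,rss=`)
                  #     sleep 60
                  #   end
                  ```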

                  1. 2

                    I would recommend Prometheus. It’s easy to get running and well suited to collecting data, graphing it, and doing full monitoring/alerting.
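
                    A minimal setup of that kind, assuming node_exporter running on the host (the port and interval below are just the defaults, not anything agreed for lobste.rs):

                    ```yaml
                    # prometheus.yml: scrape host CPU/memory metrics from node_exporter every 15s.
                    global:
                      scrape_interval: 15s

                    scrape_configs:
                      - job_name: node
                        static_configs:
                          - targets: ['localhost:9100']  # node_exporter's default port
                    ```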

                    1. 4

                      We already discussed monitoring on the ansible repository. Lobsters is generously hosted by @alynpost’s company, and adding Prometheus would give us a good way to visualize metrics, but it would also consume CPU/memory that would otherwise be available to Rails and MariaDB.

                      Alan then said that the Nagios setup used at Prgmr could handle the monitoring instead, giving us monitoring without those downsides!

                      1. 1

                        Building on @jstoja’s reply: we definitely have external monitoring for lobste.rs (the username and password are both ‘guest’). That Nagios instance is hooked up to prgmr.com’s pager rotation, so we get paged if the site goes down.