1. 1

    At the start of the year I was designated lead for our metrics, monitoring, alerting, and observability project, so the conference lineup this year is focused on those topics.

    I went to GrafanaCon this year in Amsterdam already, which was great.

    Next up is Monitorama in Portland, Oregon. Really looking forward to it this year.

    I think single-track conferences have quickly become my favorite type of conference. You don’t have to choose which talk to attend based on which single-paragraph blurb sounds the most interesting. There aren’t ~100 speakers, so the organizers can be pickier about talk selection, and I’ve found that the quality of talks is much higher.

    1. 15

      A company I worked at a few years back migrated an internal service to MongoDB. It took a 9-node MongoDB cluster to replace a single, old, untuned instance of Postgres where Postgres was the “bottleneck”. Besides the migration being a slog, the only outstanding thing I recall is that, when I asked about the performance difference between the “old” solution and the MongoDB solution, the developers all said it was unfair to compare the two solutions because they’re so different.

      I asked them flat out how long the task took before and how long it took after the migration - it went from 5 hours on Postgres to 7.5 hours on the MongoDB cluster.

      I was there for only a few months after that migration, but while I was there they never did get that run time down to where Postgres was.

      1. 3

        A friend of mine and fellow lobster @squarism blogged a few years back about how he devlogs. In the few years since reading it and getting it set up locally, I’ve tried pretty hard to log the highlights of my day and/or any useful information I pick up when diving into a project or issue. It’s especially handy for recalling what I did in my weekly retrospective meeting and/or daily standup. The file is synced to Dropbox so it’s generally available on all of my devices, although I probably have about a dozen swap files that I need to clean out every few months.

        1. 22

          I wonder what this means for Mir, their Wayland competitor?

          It would be a shame if they dumped 4 years of time and money into Mir when they could have been dumping it into Wayland.

          1. 35

            From the Ars article at https://arstechnica.com/information-technology/2017/04/ubuntu-unity-is-dead-desktop-will-switch-back-to-gnome-next-year/:

            By switching to GNOME, Canonical is also giving up on Mir and moving to the Wayland display server, another contender for replacing the X window system.

            1. 4

              It will also be interesting to see what happens to Snap packages.

              1. 8

                Those are specifically mentioned in the article as something they plan to keep:

                The choice, ultimately, is to invest in the areas which are contributing to the growth of the company. Those are Ubuntu itself, for desktops, servers and VMs, our cloud infrastructure products (OpenStack and Kubernetes) our cloud operations capabilities (MAAS, LXD, Juju, BootStack), and our IoT story in snaps and Ubuntu Core.

                Snaps are designed to run on distros other than Ubuntu, so they’re pretty much completely independent of Unity.

                1. 1

                  Yes, but they are developed by Ubuntu developers, the same as Mir.

                  1. 2

                    It’s probably safe to assume that any Ubuntu project unrelated to Unity will continue development.

              2. 3

                They already dumped Upstart and now Unity, so why not Mir? If Mir has advantages over Wayland, please let me know, because I know very little about the differences between the two display servers (protocols).

                I am also pretty happy about the announcement because it looks like Canonical won’t keep developing an in-house alternative for everything; instead they will put the effort into improving an existing solution, which will hopefully benefit anyone using GNOME, even those not running Ubuntu. In other words, this decision is good news for the Linux desktop.

              1. 2

                I’m curious whether a locked-down CSP header would help prevent sites from being exploited, although I guess it would depend on where the JS got loaded from. If the attacker were able to get malicious JS served from the site itself or an approved origin, then this would still be exploitable.
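
                For concreteness, a locked-down policy along these lines (directive values here are illustrative, not a vetted recommendation) might look like:

                ```
                Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self'
                ```

                This blocks scripts from third-party origins entirely, but `script-src 'self'` still trusts anything served from the site’s own origin, which is exactly the remaining gap mentioned above.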

                1. 1

                  It would certainly reduce the attack surface. I think using something like uMatrix is also a good idea.

                1. 11

                  Keybase’s command line tools are alright - I still prefer using good old gpg. The service itself is nice since I think it makes PGP more accessible.

                  On the other hand, KBFS, the encrypted filesystem they have in beta, is pretty awesome. Rather than walking someone through the Keybase toolset or the GPG toolset (roll a keypair, publish their public key, etc.), you have them sign up, verify themselves, and install the package, and then you’re ready to share files securely with them or with multiple people. Now all I have to do to share sensitive info with someone is create a directory (mkdir /keybase/private/me,them) and drop the file in there. Sharing with multiple people is the same process, except the directory would be /keybase/private/me,them,third_person.

                  It’s pretty neat.
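
                  The workflow above can be sketched roughly like this (usernames are made up, and the actual filesystem operations are commented out since they assume the Keybase client is installed and /keybase is mounted):

                  ```shell
                  # Hypothetical usernames for illustration
                  me="alice"
                  them="bob"

                  # KBFS shared folders are named by the comma-joined usernames
                  shared_dir="/keybase/private/${me},${them}"
                  echo "$shared_dir"

                  # With the Keybase daemon running, sharing is just:
                  # mkdir -p "$shared_dir"
                  # cp sensitive.txt "$shared_dir"/
                  ```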

                  1. 3

                    I’m gonna have to look into KBFS now; it seems pretty slick. PGP doesn’t make file sharing hard, if you know PGP, but it would certainly be nice to not have to explain PGP to people before sharing a sensitive file.

                  1. 3

                    For most “human” secrets, such as 3rd-party site credentials, repo signing keys, etc., we use 1Password for Teams.

                    For application secrets, such as an application’s DB credentials, we store encrypted secrets inside HashiCorp’s Consul, which the applications fetch and decrypt with a simple library. Keys are pushed via Chef encrypted data bags (for now). We are more than likely going to swap this out for HashiCorp’s Vault in the medium term.

                    For larger secrets, or application secrets that need to be handed from one human to another without being stored, we use good old GPG.
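
                    The GPG hand-off amounts to something like the following sketch (the recipient address and filenames are made up, and the actual gpg invocations are commented out since they assume the recipient’s public key has already been imported):

                    ```shell
                    # Hypothetical recipient and file, for illustration only
                    recipient="them@example.com"
                    secret="db_password.txt"
                    echo "sharing ${secret} with ${recipient}"

                    # Sender encrypts to the recipient's public key:
                    # gpg --encrypt --armor --recipient "$recipient" "$secret"
                    # ...sends the resulting ${secret}.asc over any channel...
                    # Recipient decrypts with their private key:
                    # gpg --decrypt "${secret}.asc" > "$secret"
                    ```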