The post takes you through the decisions and principles behind the signify utility. This is particularly useful if you plan on setting up your own signing infrastructure.
As always, Ted doesn’t let us down: a great all-around post, and great timing, since we were in the process of setting up our own signing procedure :)
Auto-partitioning was one of the last thorns in the unattended installation process. Up until now you could either accept the default layout or you had to create a new bsd.rd image with customized install.sh/install.sub scripts to do that.
The post presents a sample usage of this new feature. If you do lots of OpenBSD it’s worth taking a look.
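For reference, the autopartitioning is driven by a plain-text disklabel(8)-style template; the sizes below are made up for illustration, and the exact syntax is documented in disklabel(8):

```
# mountpoint  min[-max]  [percent of remaining space]
/       1G
swap    2G
/usr    4G-*   30%
/home   1G-*   40%
```

As I recall, the install.conf response file can then point at it with an answer along the lines of `URL to autopartitioning template for disklabel = http://192.168.1.1/template` (the host and path here are placeholders).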
The thing that jumped out at me was, “How is their environment so heterogeneous?” Multiple kernel versions, multiple versions of Xen, and Xen running in different modes; I’m surprised they found it.
This might seem odd, however it’s not unusual. A couple of reasons that come to mind:
Not necessarily valid reasons for the case at hand, but just to give some ideas.
I noticed that too; often the version of Xen is out of your control if you’re running on top of IaaS, but the Linux version? I am very curious as to why you would run that many different kernels.
This is speculation, but I wonder if that helps in some ways. Once you’ve found an environment where it happens, and one where it doesn’t, you can narrow your search to the differences between the two.
Great post, sums up most of the arguments about structured logging and best practices, although mostly for central log collection.
Don’t let the first paragraph scare you away from this post (systemd and the journal are mentioned); it’s well worth reading.
Well, it was a bit of a weird post too. The author seemed to be arguing for structured logging (which is great), but also oddly arguing for logging in a binary format; yet the conclusion seemed to be that you should store your logs in a log-processing/introspection engine which may itself use a binary format (like logstash, splunk, whatever). Eh? Well sure.
I think when most people talk about “you should store logs in text format”, they are talking about archival logging. I don’t consider introspection engines as really being archival. Just have them ingest the log.
I am ALL FOR structured logging, and/or logging in some parseable text format (json, netstrings). I am “ok” with logging in well-understood/popular binary formats (like protobufs, thrift, capnproto), as long as the schema is stored close by and clearly identified. Less optimal, but maybe you really want to save time on future ingestion if you do it often. However, I am generally not in favor of bespoke binary formats for archival logging.
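To make the “parseable text” point concrete, here is a minimal sketch in Python (the formatter and field names are mine, not from the post) that emits one JSON object per log line, so any future reader can ingest it with nothing more than a JSON parser:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "msg": record.getMessage(),
            # merge any structured fields the caller attached via extra=
            **getattr(record, "fields", {}),
        })

logger = logging.getLogger("demo")
logger.propagate = False
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

# each call produces one self-describing, greppable, parseable line
logger.warning("disk almost full", extra={"fields": {"mount": "/var", "pct": 91}})
```

The same record is readable by eye and trivially machine-parseable, which is the property the archival argument hinges on.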
Text has a nice property of being relatively readable and easily discernible for future developers, perhaps even if the original system that used it is long gone. It might not be pretty, and the format may change over time, but still the barrier is relatively low.
Custom binary formats, on the other hand, may have to be reverse engineered (which can be a fun process in its own right, I suppose) before even determining if the data is relevant or useful, and there may also be many different versions/revisions/etc.
I have seen big systems crash and burn hard, and being able to replay easily understood archival data saved the day. If your data isn’t that important, or is only transient, then it doesn’t really matter much either way I guess.
I (author here) wouldn’t recommend either a proprietary logging format or something which isn’t well understood and documented. Custom formats can be incredibly useful, if used well. Having the tools and documentation for the format is essential regardless, whatever the storage format.
Great post even for the non-Postgres users out there (like me). As a MySQL user, most of the slides made me go “aha! that would be nice to have”.
I normally work with Postgres, but I’ve had to use MySQL this year. The documentation for the .NET client library is pretty bad, and the database itself is limited compared to Postgres.
It’s only a tiny project but I managed to run into MySQL limitations anyway. For example, generating a set of dates requires hacks (instead of the generate_series available in Postgres), expressions as default values aren’t supported, and there are no arrays, so I have to create an extra table. Not to mention the insane data conversion rules, which I think might still be enabled by default.
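For the generate_series point: in Postgres it is a one-liner, while the classic workaround elsewhere is joining against a numbers table. A rough sketch of that workaround, using SQLite here only to keep the example self-contained (MySQL spells the date arithmetic differently, e.g. with DATE_ADD):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Postgres one-liner for comparison:
#   SELECT generate_series(date '2015-01-01', date '2015-01-07', interval '1 day');
# The workaround: a numbers table plus per-row date arithmetic.
con.execute("CREATE TABLE numbers (n INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO numbers VALUES (?)", [(i,) for i in range(7)])
dates = [d for (d,) in con.execute(
    "SELECT date('2015-01-01', '+' || n || ' days') FROM numbers ORDER BY n")]
print(dates[0], dates[-1])  # 2015-01-01 2015-01-07
```

The hack works, but you have to pre-populate (and size) the numbers table yourself, which is exactly the kind of ceremony generate_series removes.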
We’ve been using divert(4) sockets (at work) for some time now without any problems. The simplicity and performance benefits of divert(4) sockets make them ideal for certain cases, such as running a DNSBL check before passing the connection to a service, temporarily disabling certain protocol features (à la Heartbleed), or instantly and automatically blocking connection attempts to non-existent services.
divert sockets are also used by some of the default daemons of OpenBSD (ftp-proxy(8), tftp-proxy(8)).
We have made public a small set of tools called pf-diverters that demonstrate some of the use cases. Also note that these tools are not meant for production use.
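For a flavor of what the DNSBL-style case looks like on the pf.conf side (the rule, interface, and port here are illustrative, not taken from pf-diverters): a rule hands matching traffic to a local daemon, which runs its check and decides whether to pass the connection on to the real service:

```
# illustrative pf.conf rule: hand inbound SMTP to a local checker first
pass in on egress proto tcp to port smtp divert-to 127.0.0.1 port 8025
```

This is the same divert-to mechanism that ftp-proxy(8) relies on.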
I liked the post even though I have no experience with Go (other than the occasional reading of a post like this), so take the following comment lightly.
I find a lot of statements that could sound “wrong” to a sysadmin, but I can see the logic behind them as a developer.
For instance, statements like these:
As a developer, a “warning” usually means that my app will probably continue working. However, as a sysadmin, I’m the guy who takes care of those warnings and has to make sure they don’t happen.
I can’t say with confidence that I grasped what the “Lets talk about fatal & error” sections were talking about, as they seem to be specific to the internals of Go.
However, I’d add one more item to the list of things you should log:
3. Things that system administrators and the guys who actually RUN your software on their servers care about.
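To make that third item concrete, a sketch of the kind of log line I mean (the function and field names are mine): give the operator what they need to act, not just what the developer needed to debug:

```python
import logging

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("app")

def handle_upstream_failure(upstream, attempt, max_attempts, retry_in):
    # The operator reading this at 3am wants to know: which dependency
    # failed, how bad it is, and whether the system recovers on its own.
    log.warning(
        "upstream=%s unreachable attempt=%d/%d retrying_in=%ds",
        upstream, attempt, max_attempts, retry_in,
    )

handle_upstream_failure("billing-db", 2, 5, 30)
```

A bare "connection refused" with a stack trace tells the operator almost none of that.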
<rant> I think one casualty of the DevOps era is that many teams don’t appreciate what has to go into a service to make it operable. For many teams it probably is only a big pain point once or twice a year, so maybe it’s fine, but I have seen a tendency for teams to think “we’re the developers, we’re the operators, we know what’s going on in the service so we don’t need lots of metrics or logging like operators would need”. Unfortunately, many people overestimate their ability to debug a complex system without help. I’ve seen several devopsy talks about how to do it, and they often bring up “lots of metrics and logging” as if it’s a new idea. Now you have Docker being all the rage, where everything is even more blackboxed.
Many people are doing it well and figuring these things out, etc. But I think there is a loud (hopefully) minority that might be good developers but have no idea about operations and are giving other people advice. </rant>