1. 10
  1.  

  2. 1

    I’m glad to receive a link to my own article, but I do disagree somewhat with what is said in this one.

    The specific example of cron/NFS is in fact a hard dependency: cron runs reboot tasks when it starts, and if they need NFS mounts, then those mounts should be a hard requirement of cron; “ordering” is not sufficient.

    The implied issue is that cron doesn’t need the NFS mounts once it’s run those tasks, so the dependency “becomes false” at that point. If I understand the argument correctly, it is: seeing as “the system as a whole wants both”, you could use a startup ordering to avoid leaving a lingering dependency once the @reboot jobs have run, while still ensuring that NFS mounts are available before cron starts. This is true, but it would be fragile and racy. For instance, nothing would prevent the NFS mounts from being unmounted, even with the co-operation of the service manager, just after crond begins execution but before it has started (or while it is midway through) running the @reboot tasks.
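    The distinction can be sketched in systemd terms (the mount unit name here is hypothetical; an ordering-only relationship uses After= alone, while a hard dependency adds Requires=):

    ```ini
    # Hypothetical drop-in for cron.service.
    # With only "After=", cron merely starts later than the mount *if both
    # happen to be started*; nothing stops the mount from going away afterwards.
    # Adding "Requires=" makes starting cron pull the mount in and keeps
    # the relationship for cron's whole lifetime.
    [Unit]
    Requires=mnt-nfs.mount
    After=mnt-nfs.mount
    ```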

    In my eyes there are two ways to solve it properly: separate cron boot tasks from regular cron so that you can run them separately (that would mean changing cron or using some other tool), or have the cron boot tasks work by starting short-running services (which can then list NFS mounts as a dependency). The latter requires that non-privileged users be allowed to start services, though, and that opens a can of worms. I feel that ultimately the example just illustrates the problems inherent in cron’s @reboot mechanism.
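    The second approach might look something like this as a systemd unit (the names and paths are made up for illustration):

    ```ini
    # boot-task.service (hypothetical): an @reboot job recast as a
    # short-running service that declares its real dependency itself.
    [Unit]
    Description=One-shot boot task that needs an NFS mount
    RequiresMountsFor=/mnt/nfs

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/boot-task.sh

    [Install]
    WantedBy=multi-user.target
    ```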

    (Not to mention that there’s a pre-existing problem: for cron, “reboot” just means “cron started”. If you stop and restart cron, those reboot tasks will all run again…)

    1. 2

      Belatedly (as the author of the linked-to article): in our environment, NFS mounts are a hard dependency of those specific @reboot cron jobs, but not of cron in general. In fact we specifically want cron to run even if NFS mounts are not there, because one of the system cron jobs is an NFS mount updater that will, as a side effect, keep retrying NFS mounts that didn’t work the first time. Unfortunately there is no good way to express this in any current init system that I know of, and @reboot cron jobs are the best way we could come up with to let users start their own services on boot without having to involve sysadmins to add, modify, and remove them.

      (With sufficient determination we could write our own service for this, which people could register with and modify, and in that service we could get all of the dependencies right. But we’re reluctant to write local software, such a service would clearly be security-sensitive, and @reboot works well enough in our environment.)

      1. 1

        But it’s not a dependency of cron, it’s a dependency of these particular tasks. Cron, the package that contains the service definition, has no idea what you put into your crontab.

        Yes, it’s a problem in cron. This is why there’s movement towards just dropping cron in favor of integrating task scheduling into service managers. Apple’s launchd was probably first, systemd of course has timers too, and the “most Unix” solution is in Void Linux (and now runit-faster for FreeBSD): snooze processes run as runit services. In all these cases, each scheduled task can have its own dependencies.

        (Of course the boot tasks then are just short-running services as you described.)
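        As a concrete sketch, a runit service directory driving one scheduled task via snooze might contain a run script like this (the service and job names are hypothetical):

        ```sh
        #!/bin/sh
        # /etc/sv/nightly-backup/run (hypothetical runit service)
        # This task's own dependency check: if the NFS-mount service isn't
        # up yet, exit; runit will simply rerun this script shortly.
        sv check nfs-mounts >/dev/null || exit 1
        # snooze sleeps until the next 03:00, then execs the job; when the
        # job exits, runit restarts the service and it waits for the next day.
        exec snooze -H3 /usr/local/bin/nightly-backup
        ```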

        1. 1

          But it’s not a dependency of cron, it’s a dependency of these particular tasks

          Agreed, but if you’re the sysadmin and know that cron jobs are using some service/unit, then you’d better make sure that the cron service is configured with an appropriate dependency. At least, that’s how I view it. Without knowing more about the particular system in question, I’m not sure we can say much more about how it should be configured. I agree that cron isn’t perfect, particularly for “on boot” tasks, but at least it’s a secure way of allowing unprivileged users to set up their own time-based tasks. (I guess it’s an open question whether that should really be allowed anyway.)

        2. 1

          I was also confused by that, but from the discussion in the comments, I think the reason they don’t want it to be a hard dependency is that, in their setup, some machines typically have NFS configured and some don’t. In the case where the machine would start NFS anyway, they want an ordering dependency so it starts before cron. But if NFS wasn’t otherwise configured to start on that machine, then cron should start up without trying to start NFS.

          1. 1

            Yes, that accords with the comments below the post:

            On machines with NFS mounts and Apache running, we want Apache to start after the NFS mounts; however, we don’t want either NFS mounts or Apache to start the other unless they’re explicitly enabled. If we don’t want to have to customize dependencies on a per-machine basis, this must be a before/after relationship, because neither service implies the other.

            The problem is that “don’t want to have to customize dependencies” essentially means “we are ok with the dependencies being incomplete on some machines if it means we can have the same dependency configuration on all machines”. That seems like the wrong approach to me; you should just bite the bullet and configure your machines correctly. You already have to explicitly enable the services you want on each machine anyway.
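            For what it’s worth, the fleet-wide before/after-only relationship described in the quote corresponds to a systemd drop-in along these lines (the Apache unit name varies by distribution; this is a sketch):

            ```ini
            # apache2.service.d/ordering.conf (hypothetical, identical on every machine)
            [Unit]
            # Pure ordering: if remote filesystems are enabled on this machine
            # they come up first, but enabling Apache never pulls them in.
            After=remote-fs.target
            ```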

            1. 1

              This gets into the philosophy of fleet management. As someone who manages a relatively small fleet by current standards (we only have a hundred machines or so), my view is that the less you have to do and remember for specific combinations of configurations, the better; as much as possible you want to be able to treat individual configuration options as independent, avoiding a combinatorial explosion of special cases. So it’s much better to be able to set a before/after relationship once, globally, than to have to remember that a machine with both NFS mounts and Apache needs a special additional customization. Pragmatically, you’re much more likely to forget that such special cases exist and thus set up machines with things missing (and such missing cases may be hard to debug, since they can work most of the time or have only small symptoms when things go wrong).

              (One way to think of it is that it is a building blocks approach versus a custom building approach. Building blocks is easier.)