1. 13
  1. 4

    I’ve always found it annoying that Red Hat offers no way to upgrade across major versions; to me, this is an essential feature. There seems to be some (very limited) support for it nowadays, but it’s nothing compared to, for example, the Debian upgrade story.

    1. 6

      Red Hat brings out new major versions every ~5 years and supports them for 10 years. After that many years, imho it’s better to re-install, if only to make sure there are no dependencies someone installed by hand. This will make your life easier by reducing technical debt. At least that’s the theory…

      1. 1

        And it’s pretty rare to have a system live longer than 10 years in an enterprise environment.

        1. 12

          Yeah, you’d think so. You’d really think so.

          (pours another shot)

          1. 1

            Well, obviously there are going to be small exceptions, but can anyone produce an example of a 1000+ system datacenter running 10+ year old systems in production? Most of my background is HPC, and that would have been quite rare to see because of power inefficiency.

            1. 1

              In the HPC world, that may be true. In a typical enterprise, it’s nothing of the sort.

              In a typical medium-sized enterprise, you have multiple datacenters filled with some mix of modern and “legacy” hardware in each. All of this is managed by separate teams operating in their own little silos. Projects come and go based on which middle managers impressed a C-level exec last week on the golf course. Even in a particularly profitable year when the purse strings are loosened up enough to modernize most of the infrastructure, there’s that one fucking server that’s responsible for some highly business-critical task but the person who knew the task and wrote the software (in friggen Delphi or something, probably) retired five years ago. Nobody wants to touch it because there’s no documentation on it and the source code was lost when IT re-imaged his desktop PC after he left. Many have tried to virtualize it or at least upgrade the OS but all have failed. The last time it went down in the middle of the day, the CEO of the company came down personally from the seventh floor just to yell at a room full of IT managers for two hours with the conference room door deliberately left open. The best anyone can do about it now is monitor some opaque queue status built into the thing, have some spare hardware handy, and make sure all the backups still run nightly.

              Yes, a company could hire a consultant to come in and disassemble the code to figure out how it works, and then possibly write a more maintainable clone for it. But that would introduce risk to whatever business process it manages and it would cost a lot more money than just keeping the old thing chugging along a little while longer, which is already working fine and, much more importantly, has already been paid for.

              That’s the enterprise I know, anyway.

              1. 1

                I believe Google had this problem and ended up installing Debian in place over each Red Hat box. https://www.usenix.org/node/177348

            2. 1

              Physical systems? Yes. That was the great thing about applications running directly on physical servers: server warranty expired -> the application had to be installed somewhere else, most likely with a new OS and a newer application version. Now with virtualization, the VMs simply get migrated to a new cluster when the hardware is EOL. Aaand of course the application is important enough that management accepts the system running even though there haven’t been security patches for years…

            3. 1

              In OpenBSD a similar upgrade is easy and relatively painless. In my opinion that’s one of the benefits of developing a coherent system with a unified, carefully maintained set of tools, all built by the same team. In GNU/Linux, many of the basic userland programs don’t share a maintainer and aren’t developed as parts of a single whole.
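
              To make that concrete, a release-to-release OpenBSD upgrade looks roughly like the sketch below (an illustration only; the per-release upgrade guide on openbsd.org is the authoritative procedure):

                  # boot the new release's bsd.rd and choose (U)pgrade at the prompt;
                  # then, once the upgraded system is back up:
                  sysmerge       # merge local changes in /etc with the new defaults
                  pkg_add -u     # update installed packages to the new release's versions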

              1. 3

                I don’t think you understand: this has nothing to do with the operating system itself. If you leave any system running with users that can access it, bad things will happen. They will put small shell scripts on it that control mission-critical functionality without you knowing, store important data on it, (ab)use it to access another system, …

                While I agree that being able to do upgrades could in theory be handy, I believe periodically wiping a system and replacing it ends up being better. It all depends on your environment/job of course, but I’ve seen my fair share of 8+ year old systems that were never re-created and were accessible by almost everyone in the company. Shutting them down will probably cause downtime somewhere else, or someone will complain about their data becoming inaccessible. This is no fun…

            4. 4

              This is how ‘enterprise’ in the Red Hat world works.

              You can upgrade FreeBSD from 5.3-RELEASE, in several steps, all the way up to the latest 11.2-RELEASE, but you cannot upgrade Red Hat (or CentOS) from 6.9 to 7.5, because NOT.
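
              For reference, each hop with freebsd-update looks roughly like this (a sketch only; 11.2-RELEASE is just the example target, very old releases such as 5.3 predate binary upgrades and need a source-based upgrade first, and large jumps usually go via intermediate releases):

                  freebsd-update -r 11.2-RELEASE upgrade   # fetch the target release
                  freebsd-update install                   # install the new kernel
                  shutdown -r now                          # reboot onto the new kernel
                  freebsd-update install                   # install the new userland
                  pkg upgrade                              # upgrade third-party packages
                  freebsd-update install                   # final pass to clean up old libraries, if prompted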

              1. 2

                Looks like upgrading a RHEL 6 server to RHEL 7 on x86_64 is supported.
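
                Going from memory of that article, the supported path uses the Preupgrade Assistant followed by the Red Hat Upgrade Tool, roughly as sketched below (hedged: the tool invocation, flags and repo URL here are from memory and placeholders, so check the article before relying on them):

                    preupg                                   # run the pre-upgrade assessment and review the report
                    redhat-upgrade-tool --network 7.0 \
                        --instrepo http://mirror.example.com/rhel7/os/x86_64/   # placeholder repo URL
                    reboot                                   # the actual upgrade runs during this reboot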

                1. 1

                  Have you checked the details?

                  • Limited package groups: The upgrade process handles only the following package groups and packages: Minimal (@minimal), Base (@base), Web Server (@web-server), DHCP Server, File Server (@nfs-server), CIFS File Server and Print Server (@print-server). Although upgrades of other packages and groups are not supported, in some cases, packages can be uninstalled from the RHEL 6 system and reinstalled on the upgraded RHEL 7 system without a problem. See the table below.

                  So no, you cannot compare that to freebsd-update and/or pkg upgrade on FreeBSD, which work in ANY condition and with all packages/states supported.

                  By the way, it’s only an ‘additional’ article in the knowledge base; it’s not part of the official Red Hat documentation.

                2. 1

                  Well, the modern way of working is immutable infrastructure (or at least scripted and therefore fastish to recreate) anyway, so that should be a moot point. And yeah, I know, in reality it is not :/
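
                  For what it’s worth, “scripted to recreate” doesn’t have to mean full immutable infrastructure; even a single unattended-install command gets most of the way there. A purely illustrative sketch, with made-up name, URL and kickstart file:

                      # rebuild a VM from scratch instead of upgrading it in place
                      virt-install \
                        --name app01 --memory 4096 --vcpus 2 --disk size=40 \
                        --location http://mirror.example.com/centos/7/os/x86_64/ \
                        --initrd-inject ./ks.cfg \
                        --extra-args "inst.ks=file:/ks.cfg console=ttyS0" \
                        --graphics none --noreboot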

                3. 1

                  Red Hat Enterprise Linux 7 redefined the operating system

                  Ow ouch that marketing speak

                  Yum 4, the next generation of the Yum package manager

                  Huh, what happened to dnf?

                  1. 1

                    Yum 4 is dnf 4.x.
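
                    On a RHEL 8 / Fedora style box you can see this directly (assuming the stock packaging; paths could differ): yum is just a pointer to dnf now.

                        ls -l /usr/bin/yum /usr/bin/dnf   # both typically resolve to dnf-3
                        dnf --version                     # reports a 4.x version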