1. 1

    This brought back some good memories. I can remember using Sourcer (albeit an illegal copy) when I was at school in the early 90s and learning x86 assembly language. When I started using the internet, Andrew Schulman was one of the first people I emailed (and he even replied!).

    1. 4

      Had to smile when I saw jcs’ first two screenshots, from the late 90s/early 2000s. I’m pretty certain I have a directory of desktop screenshots I took in the mid-late 90s, complete with various incarnations of fvwm, AfterStep, WindowMaker and Bowman (the window manager that really kicked off the NeXT-lookalike craze).

      1. 5

        Right? It takes me back to when I’d spend hours browsing themes.org (RIP)

        1. 5

          I’m still using WindowMaker, to this day. At some point about 10 years ago I took a detour through wmii and ratpoison land, but it never stuck, and afterwards I found very few window managers come even close to matching WindowMaker’s speed and ergonomics. I tend to have a lot of windows open (datasheets, reference manuals, code windows, debuggers, etc.), so “modern” interfaces, with flat windows and fat titlebars, are pretty much impossible to manage. On the other hand, the “layouts” are too fluid (thanks to many of these docs being PDFs with various font sizes, margins, etc.) to meaningfully manage with a tiling WM; I don’t have enough monitors to make that work :).

          1. 1

            I tried it many times but I never understood how to really manage windows with it. Somehow things were always on top of each other and it never felt ergonomic. I guess I never read a good tutorial.

            1. 1

              I think you did understand it; there’s not much about it to understand if you’ve used any “mainstream” UIs like Windows. It’s just one of those things that everyone has different preferences about :). Things end up (more or less) on top of each other by design, and you can sort of impose some order using window icons and window shading (double-click on the titlebar and the window is “folded” underneath it). It’s obviously not as neat as tiling WMs, but the way I work usually isn’t neat, either. With tiling WMs, I had the opposite problem: I had so many things open at once that showing all of them at once was impractical, and trying to get them to fit into workspaces just meant I ended up spending forever going between workspaces.

              There are a lot of things about WindowMaker that I would improve (the dock is one of them; I always wanted something closer to AmiDock, for example), but it works well enough that I never got around to actually writing any of that.

              1. 1

                I see, that makes sense. I try to work differently, so I guess that is why it never worked for me.

        2. 1

          I wish I had screenshots of my WindowMaker setup from the late 90s. There was a site that had these amazing gothic themes for WM, but I don’t have any of my files from those days.

          1. 1

            After my post I tried to find my old screenshots but it’s proved to be a bit more difficult than I thought - my fileserver home directory has ~25 years of cruft, some of it not that well organised. I’ve not managed to find them yet :(

            1. 1

              Yeah, I went and looked, and I have school work back to 1994(!), but nothing like a home directory with my WindowMaker themes.

              1. 2

                It would be really cool if you could find it, as in, I’m pretty sure the folks on the WM mailing list would love to hear about it! One of the things the Window Maker community laments is the disappearance of Freshmeat’s theme repository (which was itself a superset of themes.org’s archive if memory serves me right). It hosted more than 3,000 WindowMaker themes, very few of which were mirrored elsewhere. If you still have some of these, you may be the only one who still has them!

        1. 1

          Great to see Slackware is still around - if I’m not mistaken it’s the oldest surviving Linux distribution? I moved away from it a long time ago, but I can still remember downloading version 2.1 onto 5.25” disks back in 1994…

          1. 6

            I discovered, read and enjoyed this paper recently, and found that it crystallised one of the fundamental design axioms for me in a way that stuck in my head, so I wrote it up briefly here yesterday: Unix tooling - join, don’t extend.

            1. 4

              It was from your blog post that I found the original paper :)

              1. 4

                That’s made my lunchtime :-)

            1. 1

              Skimming quickly through this, it seems to me that it must be quite an old article, because the workstation RAM it mentions is in the range of 256MB to 1GB. However, I didn’t see the date of publication.

              1. 3

                Yeah, was thinking the same. I can see the pkgsrc documentation was accessed in October 2004 so I guess the document must be from around that timeframe?

                Update: Ah, found this related presentation by Jan Schaumann and it’s from EuroBSDCon 2004.

                1. 3

                  The real question is how well it all holds up two decades later. Is that system even still in use? I wonder.

                  1. 2

                    Whoa, a KDE 2.x screenshot! That brings back memories!

                1. 1

                  Was this work part of the effort that led to the iPhone?

                  1. 1

                    No, I don’t believe so - the iPhone predates this activity by a few years (the iPhone was announced in January 2007; this paper is from 2012, although the work was done when Snow Leopard was still in development, so 2008/2009). It may well have assisted in the port of macOS to ARM though - the author later went to work for Apple after he graduated.

                  1. 3

                    Glad to see work is progressing on enhancing ZFS support - for so long NetBSD support was stagnating a bit.

                    1. 4

                      A lot of us have been using it in production for the past year (or longer).

                    1. 3

                      Thanks, enjoyed reading that. For those looking for something cheaper, Mellanox is another option - second-hand hardware can be found cheaply on eBay. Look for ConnectX-2 and ConnectX-3 cards, which support both 40Gbps InfiniBand and 10Gbps Ethernet.

                      1. 7

                        I love Fantasque Sans Mono; it’s so damn cheerful and twee every time I look at a terminal or editor.

                        1. 5

                          l and I look identical in that font :(

                          1. 26

                            As in the font used on lobste.rs which made your comment a bit hard to parse ;)

                            1. 1

                              yeah, fonts shouldn’t introduce ambiguity by displaying different characters the same way.

                            2. 6

                              l and I

                              Perhaps I’m missing something, but if I type them in the code sample input box on Compute Cuter (selecting Fantasque Sans Mono) they look different to me?

                              1. 3

                                I also see clearly identifiable glyphs for each when I try that. The I has top and bottom serifs, the l has a leftward head and rightward tail (don’t know what you call em), and only the | is just a line.

                              2. 1

                                Honestly, when is that ever a real issue? You’ve got syntax highlighting, spellcheck, reference checks; even a bad typist wouldn’t accidentally press the wrong key. You know to use mostly meaningful variable names, and you’ve never used L as an index variable… So maybe if you’re copying base64 data manually, but why?

                                1. 9

                                  My friend, whose name is Iurii, started spelling his name with all-lowercase letters because people called him Lurii. Fonts that make those indistinguishable even in lowercase would strip him of his last-resort measure for getting people to read his name correctly. (Of course, spelling it Yuriy solves the issue, but Iurii is how his name is written in his ID documents, so it’s not always an option.)

                                  1. 2

                                    It could be, and it’s not just limited to I, l, and 1. That’s why in C, when I have a long integer literal, I postfix it with ‘L’: 1234L. Doing it that way makes it stand out more than 1234l does. And if I have an unsigned long literal, I use a lowercase ‘u’: 5123123545uL. That way, the ‘u’ stands out, compared to 5123123545UL or 5123123545ul.

                                  2. 1

                                    cf

                                1. 3

                                  Since I need to run a lot of x86_64 VMs, the M1 isn’t for me yet…but if and when it becomes a viable thing for me, my main concern is going to be thermals. I have a 2019 16” MBP and it just runs hot when plugged into an external monitor. The area above the touchbar is painfully hot to the touch and the fans go full blast at the slightest provocation.

                                  I’d like something that isn’t going to melt and doesn’t sound like a jet taking off when I provide a moderate workload…

                                  1. 2

                                    It’s a bit of a painful one to work through, but this thread on the MacRumors forums has some hints on how to solve the excessive heat when using an external monitor.

                                    1. 1

                                      Do you need to run the VMs locally? Since we’re all in working-from-home mode, I’ve got my work laptop as my primary machine, but any engineering work is done on a remote VM (either on my work desktop, which is a nice 10-core Xeon, or in the cloud). I basically need a web browser, Office, and a terminal on my laptop. Neither my MacBook Pro nor my Surface Book 2 have particularly great battery life running VMs, so I tend to shut them down when I’m mobile, meaning that the places I’d actually run them are even more restricted than the places where I can connect to a remote VM.

                                      1. 1

                                        Unfortunately, yeah. The product we ship is itself sometimes shipped as a VM image, and being able to build and tinker locally is a huge timesaver. Maybe in the future, when we’re doing more business in the cloud, it will be different but until then, I’m pretty stuck on x86_64 hardware.

                                        1. 1

                                          Genuine question, why does it need to be local? I use entr and rsync to keep remote systems in sync on file saves/changes, and set up emacs to just run stuff remotely for compile-command. Works a treat and the laptop never gets warm.

                                          This lets you edit locally but “do crap” remotely, cloud or not. In my case the “cloud” is really just a bunch of servers I rsync to and then run stuff on via ssh. Yeah, you could use syncthing etc… but damned if I’m going to sync my entire git checkout and .git tree when all I need is a few k of source that rsync can keep in sync easily if prodded.
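
                                          For anyone curious, a minimal sketch of that loop, with invented names (the host “buildhost”, the paths, and the make target are all hypothetical; entr’s -s flag runs its argument via $SHELL on every change):

                                          ```sh
                                          # Watch the local sources; on each save, push the changes with rsync and build remotely.
                                          # "buildhost" and all paths are placeholders.
                                          find src -name '*.[ch]' | entr -s '
                                            rsync -az --delete --exclude=.git ./ buildhost:~/work/project/ &&
                                            ssh buildhost "make -C ~/work/project"
                                          '
                                          ```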

                                          1. 1

                                            I mean, it’s not a law of the Universe or anything, but it’s significantly easier. These images are fairly huge and while we do have a beefy shared system running test VMs, it’s loaded down enough that there’s not a lot of spare capacity to run 4-6 VMs per user so that system is used for testing only.

                                            And then, finally, there’s the issue that I don’t want to have to have two machines when I can make do with one. :)

                                            1. 1

                                              Gotcha, just curious. I try to keep VMs off my laptop in general if I can get away with it. Lets me just chill in coffee shops (back when that was a thing) longer on battery power alone.

                                              My goal is generally: be kind to the battery; plus, with tmux+mosh I can have the remote machine run stuff, then move locations and have things happen in the background. But if it’s resource constraints, that makes more sense.

                                    1. 1

                                      I was hoping for how he got the touchpad working properly. I have to use an external keyboard and mouse to dual boot properly. I installed Ubuntu 20.04 in June. If I can get past the hardware compatibility issues, I will switch completely. Highly recommended if you don’t mind the external hardware dependencies. Btw, it is so, so much faster than native macOS. It is insanely fast. Everything is instant. I didn’t realize all of the sluggishness on my 2019 16-inch MacBook Pro until I used Ubuntu for 2 seconds.

                                      1. 1

                                        I’m pleasantly surprised to read that the 16” MBP is reasonably well supported by Ubuntu. According to this list, the touchpad should work, you just need to apply some patches.

                                        1. 1

                                          Thanks, I will take a look at applying those patches, when I did it earlier this summer I don’t think a few of the patches worked at all. Worth trying again.

                                      1. 11

                                        I wonder what they’re going to do for the Mac Pro class of hardware.

                                        Trying to fit more RAM, more CPU cores, and a beefier GPU all into one package doesn’t seem realistic, especially as a one-size-fits-all chip isn’t going to make sense for all kinds of pro uses.

                                        1. 7

                                          It’s going to be interesting to see what they do (if anything at all) with a 250W TDP or so. Like, 64? 128? 196 cores? I’m also interested in seeing how they scale their GPU cores up.

                                          1. 2

                                            There’s NUMA, though I don’t think Darwin supports that right now.

                                            1. 2

                                              Darwin does support the trashcan Mac Pros, right? They have two CPUs, and that’s a bona fide NUMA system.

                                              1. 3

                                                Trashcan Mac Pros (MacPro6,1) are single CPU - it’s the earlier “cheesegrater” (MacPro1,1-5,1) that are dual CPU. I do believe they are NUMA - similar-era x86 servers certainly are.

                                                1. 1

                                                  Ah, you’re right, sorry for the confusion.

                                            2. 2

                                              Trying to fit more RAM, more CPU cores, and a beefier GPU all into one package doesn’t seem realistic

                                              I have heard this repeatedly from various people – but I don’t have any idea why this would be the case. Is there an inherent limit on SoC package size?

                                              1. 1

                                                I’d assume they’ll just support non-integrated RAM - as there will be space and cooling available.

                                              1. 3

                                                Great news. Rather curiously I see there’s an SGI release - I thought the SGI port was discontinued after 6.7 (the port page still says as much)?

                                                1. 1

                                                  Thanks - nice review.

                                                  Did you investigate migrating data from Tiny Tiny RSS to Miniflux (I don’t mean OPML, I mean starred posts, etc)? I’ve looked at migrating to something else for a variety of reasons, one of which is the maintainer’s quite poor attitude. He’s notoriously picky and downright rude at times, and others have raised further criticisms.

                                                  1. 1

                                                    No, I have not - stars are not a feature I use.

                                                  1. 1

                                                    I’m saving this thread for the future since I’ll be switching from Linux to a MacBook. Can anyone suggest how to get a tiling window manager on a MacBook?

                                                    1. 3

                                                      You can install Linux or BSD on a Mac, or you can virtualize to run them, but macOS itself has no ecosystem of alternative window managers. It does not run X.org or provide any comparable abstraction layer for swapping window managers; it has its own window server that’s tightly integrated with the Mac SDK.

                                                      1. 2

                                                        I haven’t been able to find a tiling window manager on the Mac that doesn’t end up feeling like fighting the platform. I use them exclusively on X, but I don’t bother any longer on the Mac.

                                                        1. 1

                                                          I’ve not used them myself, but Amethyst and yabai are options.

                                                        1. 1

                                                          There are many lists of favourite macOS apps, but here are a few of my favourites that I haven’t seen mentioned here yet, in no particular order:

                                                          • TotalSpaces (paid): Enhances the built-in Spaces functionality. I use it primarily because it removes the annoying transition animation. Requires SIP to be partially disabled.
                                                          • AltTab (open source): Makes app switching a bit more usable (and, dare I say it, Windows-like).
                                                          • Wallpaperer (free, not open source): Automatically updates desktop wallpaper from a chosen Subreddit.
                                                          • Carbon Copy Cloner (paid): Backup utility, far superior to Time Machine.
                                                          • IINA (open source): More Mac-like media player. Alternative to VLC.
                                                          • MacUpdater (paid): Keeps third-party apps updated.
                                                          • Reeder (paid): RSS reader, also available for iOS.
                                                          • MailMate (paid): GUI mail application, supports Markdown for HTML mail.
                                                          • Syntax Highlight (open source): QuickLook extension to view source code, with configurable syntax highlighting.
                                                          • MacVim (open source): Vim with a native macOS GUI.
                                                          • Emacs for macOS (open source): a native macOS build of GNU Emacs.
                                                          1. 5

                                                            Items marked in italics are not free as in beer

                                                            • I would recommend Alacritty over iTerm2 - if you’re happy to use tmux or screen for tab/window management, it’s better in nearly every way.
                                                            • Scroll Reverser to allow natural scrolling with the trackpad but not with a mouse, since you can’t set them independently by default.
                                                            • Muse for better music display/control on the touchbar.
                                                            • Bartender to manage all your menu bar icons.
                                                            • Iina is an alternative to QuickTime Player with support for a lot more formats.
                                                            • Skim : Preview :: Iina : QuickTime Player.
                                                            • Contexts : Command-Tab :: Skim : Preview.
                                                            • Monit for system stats monitoring.
                                                            • Magnet for window snapping.
                                                            1. 3

                                                              Alacritty is fine, but the profiles of iTerm2 are very useful to avoid nested tmux sessions when connecting to remote machines, unless you want lots of terminal windows open. Having used both extensively, I can’t say one is significantly better than the other as a terminal emulator.

                                                              1. 1

                                                                the profiles of iTerm2 are very useful to avoid nested tmux sessions when connecting to remote machines, unless you want lots of terminal windows open

                                                                Interesting, I haven’t come across a need for that in my daily use of Alacritty. Can you elaborate more on that use case?

                                                                I can’t say one is significantly better than the other as a terminal emulator

                                                                That’s fair enough :) I think “one is better than the other” is subjective, anyway, not least because it’s affected so strongly by personal priorities (I probably should have added “in my opinion” to my original comment). I value Alacritty’s fast response times so much that that single thing alone makes iTerm2 unusable for me, but I can understand that isn’t as important for everyone.

                                                                1. 2

                                                                  I connect to a couple of machines that have long-running tmux sessions in them, so if I use Alacritty (which doesn’t have tabs) and use tmux in Alacritty for tab/window management, I end up with a nested tmux session when I connect. It’s a workable situation, but all the ways to deal with it are not great. With iTerm2, I can set up a profile and bind it to a hotkey to connect to the machine in a tab.

                                                                  With respect to response times, the only terminal emulator whose response time I’ve actually found noticeable was Terminal.app. You really can’t go wrong with either Alacritty or iTerm2.

                                                              2. 3

                                                                Another open source alternative terminal is Kitty. I’m a recent convert after being a long-term iTerm2 user. IMHO, it’s lighter and less bloated and has all of the features I need. I did try Alacritty for a little while but found Kitty a little more polished.

                                                              1. 1

                                                                  Trying to figure out some way to deliver our Java application to our clients. Been looking at Citrix, but it’s damn expensive. Now experimenting with Parallels.

                                                                1. 2

                                                                  Why do you need virtualization software to deliver a Java application, since the JVM is supposed to run on most things? Is it a security thing?

                                                                  1. 1

                                                                    Yes. Extra layer of security. And so that we don’t have to ship a JRE with it to the users.

                                                                  2. 2

                                                                    Not sure of your requirements, but I’ve had good experiences using AWS AppStream (WorkSpaces is an option if you need a full desktop outside the browser).

                                                                  1. 2

                                                                      Docker has become like a “universal server executable” by now. It does not matter if other container systems can do the same; if you want 100 developers at your company to quickly have the same development environment, tell them to pull the versioned build image from the local Docker registry server, and most of them should be able to do this without any major issues. Want to quickly set up a service somewhere? They are likely to support Docker too. FreeBSD jails are unique to FreeBSD.
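
                                                                      As a sketch of that onboarding step (registry host, image name, and tag are invented for illustration):

                                                                      ```sh
                                                                      # Fetch the team's pinned build environment from an internal registry,
                                                                      # then run it with the current checkout mounted in.
                                                                      docker pull registry.example.internal/devenv:2020.11
                                                                      docker run --rm -it -v "$PWD:/src" -w /src registry.example.internal/devenv:2020.11
                                                                      ```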

                                                                    Also, most modern Linux distros now place binaries in just /usr/bin, then add symlinks from /bin,/sbin and /usr/sbin. Why FreeBSD thinks it’s a good idea to place binaries in these six directories: /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin and /usr/local/sbin, is beyond me. What does it solve that a modern Linux distro can not?

                                                                      I get the separation between “system packages” and “user installed packages that are also available to other users”, but this should IMO be handled by a package manager, not by letting users place stray files outside of /home. I assume that’s why FreeBSD has an emphasis on being able to reset everything to the “base system” - because /usr/local may easily become a mess?

                                                                    1. 4

                                                                      Docker has become like a “universal server executable” by now.

                                                                        I must admit - and many other people from the FreeBSD ‘world’ admit this too - that Docker’s tooling (not the technology behind it) was the reason Docker got so much traction. IMHO it’s a pity that Jails management was not given a similar treatment (maybe a little more thought out). But from what I heard, Docker containers run only on Linux, so that does not make them portable. CentOS to Ubuntu migration does not count :p

                                                                      Also, most modern Linux distros now place binaries in just /usr/bin.

                                                                        That is the separation between Base System binaries (in the /{bin,sbin} and /usr/{bin,sbin} dirs) and Third Party Packages maintained by pkg(8), located in the /usr/local/{bin,sbin} dirs.

                                                                        We all know the differences between bin (user) and sbin (admin) binaries, but in FreeBSD there is also a more UFS-related division. When there was only UFS in the FreeBSD world, /bin and /sbin were available at boot before /usr was mounted - this is the historical (and, in UFS setups, useful) distinction between the /{bin,sbin} and /usr/{bin,sbin} dirs. In ZFS setups it does not matter, as all files are on the ZFS pool anyway.

                                                                      I get the separation between “system packages” and “user installed packages that are also available to other users” (…)

                                                                      Users on FreeBSD are not allowed to install packages, only root can do that.

                                                                      I assume that’s why FreeBSD has an emphasis of being able to reset everything to the “base system” because /usr/local may easily become a mess?

                                                                        It’s not about ‘mess’ in /usr/local, as the FreeBSD Ports maintainers keep the same logic and order in /usr/ports as in the Base System. It’s about another layer of ‘security’ if you fuck something up. If you break the RPM database in a Linux distribution, you need to reinstall (or face some really heavy repair time). On FreeBSD, when you (for some reason) mess up the packages, you just reset the packages to the ‘zero’ state and the Base System is untouched.

                                                                      Hope that helps.

                                                                      1. 1

                                                                        Thanks for the thoughtful answers! I have considered using FreeBSD on a server or two, since it seems like a very solid and well put together system, while at the same time keeping things minimal and fast.

                                                                          I feel like a good package system in combination with ZFS (or another filesystem with snapshots) makes many of the old ideas and solutions obsolete. Why keep anything in /usr/local if a package system can handle (possibly manually built) packages better? For instance, on Arch Linux, creating a custom package is so easy and fast that the threshold is low enough that you never have to install anything directly into /usr/local. And once a user has done that, the threshold for uploading it to the AUR is equally low, so new users can just use that one.
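
                                                                          To illustrate how low that threshold is, here is a minimal hypothetical PKGBUILD (name, version, and build steps are invented); running makepkg -si next to it builds and installs the package through pacman:

                                                                          ```sh
                                                                          # A PKGBUILD is just a bash script sourced by makepkg.
                                                                          pkgname=mytool            # hypothetical package name
                                                                          pkgver=1.0
                                                                          pkgrel=1
                                                                          pkgdesc="Locally built example tool"
                                                                          arch=('x86_64')
                                                                          license=('MIT')
                                                                          source=("mytool-$pkgver.tar.gz")
                                                                          sha256sums=('SKIP')

                                                                          build() {
                                                                            cd "mytool-$pkgver"
                                                                            make
                                                                          }

                                                                          package() {
                                                                            cd "mytool-$pkgver"
                                                                            make DESTDIR="$pkgdir" PREFIX=/usr install
                                                                          }
                                                                          ```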

                                                                          Similarly, I’m not sure that the additional layer of ‘security’ of having a Base System covers anything that a good package system (like pacman) in combination with filesystem snapshots can’t also cover. Do you have an example?

                                                                        About Docker, AFAIK it can also be used on Windows and macOS, not only Linux. There is some heavy black box lifting going on within those applications, though. :)

                                                                        1. 3

                                                                            One of the things you want to avoid is some package getting the great idea of installing a new compiler, naming it “cc”, and overriding the system compiler, or adding libs/includes in such a way that it becomes super hard to get back to a working system. If some random BSD port adds libc.so to /usr/local/lib, you are not prevented from running programs, because system programs link to stuff in /usr/lib, and not “any random lib everywhere I find .so files”. A well-meaning package system will of course tell you that this new “cc” seems to collide with the packaged system cc, if such a package existed before; but if it didn’t, then the package system might work against you as you try to get back. BSD wants you to be able to use base things without hiccups (which is lovely when things act up), then allow packages to add things on top of that, without risking the base set of files, be it manpages, libs, includes, fsck tools or whatever.

                                                                          1. 1

                                                                              I could not have stressed this better. Thanks.

                                                                            1. 1

                                                                              These seem like theoretical problems to me. Two packages can’t both install cc in /usr/bin on Arch Linux, and that’s the only directory that is in $PATH by default (since the other bin directories are symlinks).

                                                                              Isn’t it better to just install everything on the system with a package manager, so that nothing collides and everything is in its place? Then, if the combined results of a package upgrade should ever go wrong, one can just undo it by using filesystem snapshots.
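
                                                                              As a sketch of that recovery path with ZFS (the dataset name is invented, and in practice a root rollback would be done from a rescue environment; btrfs/snapper gives the same idea on Arch):

                                                                              ```sh
                                                                              # Snapshot the root dataset, try the upgrade, and roll back if it goes wrong.
                                                                              zfs snapshot zroot/ROOT/default@pre-upgrade
                                                                              pacman -Syu || zfs rollback zroot/ROOT/default@pre-upgrade
                                                                              ```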

                                                                              1. 2

                                                                                Indeed, if the package manager is in control of all installed software then it’s not a problem. It’s only relatively recently that this has happened with FreeBSD (Package Base) and the work is still ongoing - before this the base system was distributed purely in tarball format, with the package manager only managing /usr/local.

                                                                                1. 1

                                                                                  It’s quite common on FreeBSD to have both a base-system version with long-term support guarantees in /usr/bin and a newer version from ports in /usr/local/bin. It’s up to you which order you put the two in your PATH, and it’s also up to you whether you install aliases that favour one over the other in specific cases.
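
                                                                                  For example (shell syntax; pinning make to the base version is just an illustration):

                                                                                  ```sh
                                                                                  # Prefer the newer ports versions of tools by default...
                                                                                  export PATH="/usr/local/bin:/usr/local/sbin:$PATH"
                                                                                  # ...but keep one specific command pointing at the long-term-support base version.
                                                                                  alias make=/usr/bin/make
                                                                                  ```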

                                                                                  1. 1

                                                                                    Why would you want that? If you want to use an experimental version, install that and test that it looks good. Or, replace it with the stable version again if your tests didn’t pass.

                                                                                    Or, for example, Firefox and Open Office are available as both stable and fresher versions on Arch Linux, and you can install both at the same time.

                                                                                    If you need some special piece of software to have an old and stable environment, you can use a jail, docker or a VM for that.

                                                                        1. 1

                                                                          Reading those RUN commands makes me want to weep - the one that builds Python itself is 75 lines long. Surely there has to be a better solution than that?

                                                                          1. 2

                                                                            Well, I’d imagine using a multi-stage Dockerfile would help. One could install all the cruft and build in one stage, then copy only the built binaries and required files to a different - clean - stage. I’m wondering if they keep it this way because Docker Hub itself doesn’t provide a way to specify a target stage to create the image from? (No idea, but the last time I checked it didn’t.)
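
                                                                            Locally, at least, the build CLI does let you stop at a named stage (the stage and tag names here are hypothetical):

                                                                            ```sh
                                                                            # Build only the heavyweight compile stage, e.g. for debugging...
                                                                            docker build --target builder -t python-build-debug .
                                                                            # ...or build the final stage, which would COPY just the artefacts out of "builder".
                                                                            docker build -t python-slim .
                                                                            ```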

                                                                            1. 1

                                                                              The commands are combined into a single RUN command to avoid Docker caching the build artefacts in that layer. There looks to be a lot of effort here to limit the layer caching behaviour and clean up build components.

                                                                              I think this is an excellent candidate for a multistage build (https://docs.docker.com/develop/develop-images/multistage-build/) as @dguaraglia mentioned, with all build artefacts, including compilers and build dependencies, jettisoned as soon as the binary build is completed, and the resultant Python binaries and associated libraries moved into a fresh container. With this approach, the layer caching from the build/compile steps isn’t an issue because the whole thing is destroyed. There may be reasons why this approach wasn’t used.

                                                                              I also believe there is value in utilising the OS package managers for this. The constant driver for a lot of the source builds in containers seems to be the desire to access the bleeding edge versions that aren’t available in the distribution released packages of the base OS used for the image. In this example, the binary build process could be moved to a .deb package build of the latest source in an earlier CI step with the result stored/published. These .deb packages could then be installed using apt like the base OS components in the container in the same apt-get step. The additional benefit here is the resultant binary .deb can be used consistently inside multiple containers, or even across legacy VMs without requiring a rebuild.
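
                                                                              A rough sketch of that earlier CI step, using fpm as one way to wrap a source build into a .deb (package name, version, and prefix are all invented):

                                                                              ```sh
                                                                              # One-off CI job: compile once, stage into a scratch root, wrap it as a .deb.
                                                                              ./configure --prefix=/opt/python && make -j"$(nproc)"
                                                                              make DESTDIR=/tmp/pkgroot install
                                                                              fpm -s dir -t deb -n custom-python -v 3.9.0 -C /tmp/pkgroot opt
                                                                              # Later, during the image build (or on a legacy VM):
                                                                              #   apt-get update && apt-get install -y ./custom-python_3.9.0_amd64.deb
                                                                              ```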

                                                                              1. 2

                                                                                Indeed, I can understand why the build has been done like that, and a lot of effort has certainly gone into cleaning up the built artifacts. Maintaining it must be a nightmare though (I’m guessing it probably doesn’t change hugely from Python release to Python release).

                                                                                I agree with your sentiments about using OS package managers. Building, for example, a .deb and having that made available for containerised/non-container use would be much easier to maintain, IMHO. Building Debian packages with all of the associated tooling is a lot easier than stringing everything together in a single RUN command.