
  2. 5

    Real color theory is very complex; this author, however, just gave up and dealt with the HSL color wheel, further teaching another generation that we need not look beyond RGB color spaces, an assumption that, in my opinion, becomes a fallacy once display technology gets better.

    1. 4

      What resources do you recommend as an alternative? I’m interested in color theory and alternate color spaces, but haven’t really found any accessible resources for learning about them.

      1. 13

        I would recommend getting your hands dirty and diving into the theory; there’s lots of handwaving in most available sources. Try to understand the motivation behind the XYZ primaries (to keep it short, they are three imaginary colors that, combined linearly, can create any perceivable colour), which followed from the fact that there are colours which cannot be “mixed” from RGB. I found the five-part video series “Color Vision” by Craig Blackwell to be very helpful and enlightening.

        Now, once you’ve understood that, you have to understand that the “perfect” spectral colors (that is, monochromatic light of different wavelengths) are the “boundaries” of human vision: the convex hull of the XYZ coordinates of all visible spectral colours (modulo lightness) is, so to speak, the realm of human vision. You can get those coordinates here. It turns out these “coordinates” X(λ), Y(λ), Z(λ) are called color matching functions (you can get them there as CSVs).
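
        As a rough illustration of what the color matching functions do, here’s a minimal sketch in Python (the function name is mine, and the sample CMF row holds approximate CIE 1931 values, just for illustration):

        ```python
        # Sketch: color matching functions turn a light spectrum into XYZ.
        # A spectrum S(lambda) is integrated against xbar, ybar, zbar:
        #   X = sum over lambda of S(lambda) * xbar(lambda), likewise Y, Z.

        def spectrum_to_xyz(spectrum, cmf):
            """spectrum: dict wavelength(nm) -> power; cmf: (nm, xbar, ybar, zbar) rows."""
            X = Y = Z = 0.0
            for wl, xbar, ybar, zbar in cmf:
                p = spectrum.get(wl, 0.0)
                X += p * xbar  # Riemann sum, assuming 1 nm sample spacing
                Y += p * ybar
                Z += p * zbar
            return X, Y, Z

        # Monochromatic 550 nm light: its XYZ is just the CMF row at 550 nm.
        cmf = [(550, 0.4334, 0.9950, 0.0087)]  # approximate CIE 1931 values
        print(spectrum_to_xyz({550: 1.0}, cmf))  # -> (0.4334, 0.995, 0.0087)
        ```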

        The final piece of the puzzle is converting these XYZ coordinates to common normalized formats like xyY, or even CIELab or CIELuv, or polar-coordinate forms like HCL(uv). All these formulas can be found here, an excellent resource by Bruce Lindbloom of transformation functions between all kinds of different formats, including XYZ <-> xyY and so on. He is a really awesome guy. Now the only thing left to do is plot the xyY data in ParaView or something, and if you look at just the xy-plot you’ll see the familiar horseshoe. :)
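
        If you’d rather stay in Python than fire up ParaView, here is a minimal sketch (assuming `cmf` holds the (wavelength, xbar, ybar, zbar) rows parsed from the CSVs mentioned above):

        ```python
        # Sketch: XYZ -> xyY for the spectral locus, then plot the horseshoe.
        import matplotlib.pyplot as plt

        def xyz_to_xyY(X, Y, Z):
            s = X + Y + Z
            return X / s, Y / s, Y  # chromaticity (x, y) plus lightness Y

        # For monochromatic light, the XYZ coordinates are just the CMF row.
        xs, ys = [], []
        for wl, xbar, ybar, zbar in cmf:  # cmf parsed from the CSV data
            x, y, _ = xyz_to_xyY(xbar, ybar, zbar)
            xs.append(x)
            ys.append(y)

        plt.plot(xs, ys)
        plt.xlabel("x")
        plt.ylabel("y")
        plt.show()  # the familiar horseshoe (spectral locus) in the xy-plane
        ```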

        If you look at the color transformations from RGB to XYZ, you’ll see that RGB data carries a lot of implicit information, including the reference white and which RGB primaries you actually chose. Most people talking about RGB are talking about sRGB, but you can “invent” an RGB color space easily by just defining three primaries.

        If you manage to convert an RGB coordinate to an XYZ, Luv or Lab coordinate, the latter are all “universal” in describing color perception: these spaces allow you to express any visible colour. With RGB, new colour spaces were defined over the years (Adobe RGB, ProPhoto RGB and so on), moving the primaries a bit to make the range of colours bigger, but in the end there are still quite a lot of colours which just cannot be expressed within the currently “popular” RGB colour spaces.

        There are drawbacks to working with anything other than sRGB coordinates (imaginary colours, no good software backing, lots of superfluous nomenclature, …), but you gain a lot. For instance, CIELuv is perceptually uniform, which means that if you select equidistant colors on a “line” within the uv-diagram, these colours will all be equidistant with regard to perception. If you work with HSL based on sRGB, this does not hold, which is very cumbersome if you want to select nice colours for diagrams and such; one reason is that sRGB has a nonlinear gamma curve. You might want to work with linear RGB then, but if you go that far with your data, you might as well work with the real thing.
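
        To make that “implicit information” concrete, here is a minimal sketch of sRGB -> XYZ; the gamma curve and matrix follow the standard sRGB definition (D65 reference white, sRGB primaries), as tabulated on Lindbloom’s site:

        ```python
        # Sketch: convert an sRGB coordinate to XYZ (D65 reference white).

        def srgb_to_linear(c):
            # Undo the nonlinear sRGB gamma curve (c in [0, 1]).
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

        def srgb_to_xyz(r, g, b):
            r, g, b = (srgb_to_linear(c) for c in (r, g, b))
            # The matrix columns are the XYZ coordinates of the sRGB primaries.
            X = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
            Y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
            Z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
            return X, Y, Z

        print(srgb_to_xyz(1.0, 1.0, 1.0))  # ~(0.9505, 1.0000, 1.0888), D65 white
        ```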

        I hope this introduction was sufficient to get you started. Send me a mail if you want to see some code. I might give a talk about it this year, but it takes some preparation.

        1. 1

          Woah, thanks for the intro - I remember seeing your slcon talk a few months ago on color spaces and farbfeld, which is initially what got me interested. I’ll probably dive into all of it a bit more soon when I start fiddling with 3D graphics again. Thanks for the resources - I’ll definitely check them out :D

        2. 4

          not OP, but https://www.handprint.com/HP/WCL/wcolor.html is probably the most accessible treatment I’ve found.

          1. 1

            Cool I’ll definitely take a look at this - thanks :)

          2. 3

            Color theory is interesting, but (I feel) a bit of a rabbit hole. I learned a little about color spaces a few years back while writing a blog post about how the difference in color space and gamma value between older Mac and Windows computers might have influenced some of the popular color themes.

            I found Danny Pascale’s “A Review of RGB Color Spaces” a useful (if somewhat terse) source of information about computer color spaces.

            1. 1

              Neat, I’ll have to read that paper, thanks for pointing it out

          3. 6

            Are there non-RGB displays coming down the pipeline? It makes sense to design for the displays that exist rather than the ones that might. I think this article is, as the title says, going for practical color theory rather than comprehensive color theory.

            1. 1

              I assume you are referring to displays that can reproduce colours beyond sRGB. Yes, these displays exist already; in fact, the current iPhone and most AMOLED displays fall within that group.

            2. 3

              > Real color theory is very complex

              I think the issue here isn’t really “real color theory” as in “all there is to know about color”, but rather “I’m making a blog; what colors should I make all the stuff in it?” Choosing two colors, then making a light and dark version of each, is a solid strategy for an otherwise ‘design-illiterate’ developer.
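
              A minimal sketch of that strategy, using Python’s colorsys module (which calls the model HLS); the lightness offset is an arbitrary choice:

              ```python
              # Sketch: derive light and dark variants of a colour via HSL lightness.
              import colorsys

              def variants(r, g, b, delta=0.25):
                  h, l, s = colorsys.rgb_to_hls(r, g, b)
                  light = colorsys.hls_to_rgb(h, min(1.0, l + delta), s)
                  dark = colorsys.hls_to_rgb(h, max(0.0, l - delta), s)
                  return light, dark

              print(variants(0.2, 0.4, 0.8))  # light and dark versions of a blue
              ```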

            3. 2

              An interesting tidbit: https://en.wikipedia.org/wiki/SRGB - the gamut an sRGB monitor can display is far smaller than what normal human eyes can perceive.

              This becomes an interesting thought exercise when you consider that images typically reach us through screens these days: we’re literally losing pieces of information between the source reality and our eyeballs.

              If you take up painting, you will, I think, realize (as I did) that the nuances of color in paint exceed what a monitor can display.
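
              You can check the claim directly: a minimal sketch (names mine) that converts XYZ to linear sRGB with the standard inverse sRGB matrix and flags colours that land outside the displayable range:

              ```python
              # Sketch: is an XYZ colour inside the sRGB gamut? Components of
              # the linear-sRGB result outside [0, 1] mean "not displayable".

              def xyz_to_linear_srgb(X, Y, Z):
                  r = 3.2404542 * X - 1.5371385 * Y - 0.4985314 * Z
                  g = -0.9692660 * X + 1.8760108 * Y + 0.0415560 * Z
                  b = 0.0556434 * X - 0.2040259 * Y + 1.0572252 * Z
                  return r, g, b

              def in_srgb_gamut(X, Y, Z):
                  return all(0.0 <= c <= 1.0 for c in xyz_to_linear_srgb(X, Y, Z))

              # Monochromatic 550 nm light (approximate CIE 1931 XYZ): out of gamut.
              print(in_srgb_gamut(0.4334, 0.9950, 0.0087))  # False
              ```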

              1. 5

                Yes, and it’s even more fascinating if you think about it: cameras nowadays are very much capable of capturing all perceivable colours. This information is still present in the raw files, but once you do raw postprocessing, it gets lost.

                This is one reason why, if you take a photo of a sunset with its rich, almost spectral reds and yellows, it turns out dull and almost uninspiring on your screen. Display technology is a joke, and I’m sure that in twenty years we’ll look back at today’s displays the way we now look back at CRT screens of the mid-90s. It’s hard to find good fluorescent emitters for screens, but OLEDs and microLEDs might change this dramatically (research is underway).

                Linux really needs better colour management. Everything assumes sRGB, and we missed the chance in Wayland to do it right. Hell, even GIMP didn’t have proper colour management until a few years ago. As in the Allegory of the Cave, we might be shocked in a few years when we realize that the entire infrastructure needs to be “reworked” before we can actually see the colours our cameras are capturing. Until then, I hope everybody keeps their RAW data so they can actually leverage the screens of the future. After all, nobody would keep only the b/w prints if they had shot the images in colour in the first place.

                1. 1

                  Yes, the infrastructure is nearly completely absent outside of highly specialist equipment and software. You have to ensure the camera picks up everything, but cameras these days render to a screen (the phone’s or an on-device display); then the result has to be examined on the computer. The pipeline is flawed: raw data -> software to display/manipulate it -> device drivers / OS display software -> monitor. And as you manipulate the image, you are adjusting an incorrect file through an incorrect rendering.

                  I would guess that Windows will eat this space alive in about 5 years, as Apple commodifies and shifts downmarket and the Linux world keeps falling off the cliff by locking itself into the server market. :-/