Almost exactly 5 years after Nvidia explicitly asked for collaboration on the subject, to no avail. If anyone is curious, here is a rough breakdown (it takes more than this, but...) of what that would entail: https://www.x.org/wiki/Events/XDC2016/Program/xdc-2016-hdr.pdf
The alliance with nVidia is uneasy; they’re always trying to subvert it. Also the nVidia proposals are always over-engineered. This is a good example; application authors really don’t want to manage monitor-specific metadata, they just want to know that rendering with floats will give good HDR. This cuts the actual amount of proposed work in half, now only requiring changes to Wayland and Mesa so that FP16 is a valid channel depth.
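For illustration, this is roughly what "just render with floats" already looks like from the application side on Vulkan – a sketch assuming VK_EXT_swapchain_colorspace is available, not whatever Wayland/Mesa would actually end up exposing:

```c
/* Sketch: picking an FP16 scRGB surface format for a swapchain.
 * Assumes VK_EXT_swapchain_colorspace; error handling omitted. */
#include <vulkan/vulkan.h>
#include <stdint.h>

VkSurfaceFormatKHR pick_fp16_scrgb(VkPhysicalDevice phys, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfaceFormatsKHR(phys, surface, &count, NULL);
    VkSurfaceFormatKHR formats[64];
    if (count > 64) count = 64;
    vkGetPhysicalDeviceSurfaceFormatsKHR(phys, surface, &count, formats);

    for (uint32_t i = 0; i < count; i++) {
        /* FP16 channels with the linear extended-sRGB ("scRGB") encoding:
         * 1.0f is SDR reference white, values above 1.0f are HDR headroom. */
        if (formats[i].format == VK_FORMAT_R16G16B16A16_SFLOAT &&
            formats[i].colorSpace == VK_COLOR_SPACE_EXTENDED_SRGB_LINEAR_EXT)
            return formats[i];
    }
    return formats[0]; /* fall back to whatever the surface offers */
}
```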
I suppose that now I have to contact RH about the job listing. Curses.
So would IBM, Qualcomm or anyone else. A company with as large a technology gap on such a strong control surface would act just as badly given the opportunity – that doesn't exclude competitive intelligence practices, rather the opposite. Listening to Nvidia doesn't mean that you have to play nice; just accept that they are ahead of the curve, listen to what they have to say, and wait for the day to stab them back.
The problem space is much more involved, even for the most trivial fullscreen case (see how Kodi does it in their DRM backend if you don't believe me) – for a composited desktop, where you have different color spaces, primaries, transfer functions and so on, there is no established solution, and not all the controls are in place yet (backlight and ambient-light sensor especially). The clients do need to know the presentation colourspace versus the content mastering colourspace for games, videos and photos – which are the ones that will need HDR anyhow – and any mismatch that requires tonemapping is a substantial bandwidth hog.
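For reference, the "trivial fullscreen case" boils down to handing the content's mastering metadata to the display, roughly what Kodi's DRM backend does – a sketch against the kernel's HDR_OUTPUT_METADATA UAPI, with property-ID lookup and the atomic commit itself omitted:

```c
/* Sketch: attaching HDR10 mastering metadata to a DRM connector.
 * struct hdr_output_metadata is the kernel's drm_mode.h UAPI struct;
 * constants below follow CTA-861-G. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define EOTF_ST2084           2   /* SMPTE ST 2084 (PQ) */
#define STATIC_METADATA_TYPE1 0

int set_hdr10_metadata(int drm_fd, drmModeAtomicReq *req,
                       uint32_t connector_id, uint32_t hdr_prop_id)
{
    struct hdr_output_metadata meta;
    memset(&meta, 0, sizeof(meta));

    meta.metadata_type = STATIC_METADATA_TYPE1;
    meta.hdmi_metadata_type1.eotf = EOTF_ST2084;
    meta.hdmi_metadata_type1.metadata_type = STATIC_METADATA_TYPE1;
    /* Values come from the content's mastering metadata, in CTA-861-G units:
     * max luminance in cd/m2, min in 0.0001 cd/m2, primaries in 0.00002 steps.
     * Example: a 1000-nit mastering display (primaries left at zero here). */
    meta.hdmi_metadata_type1.max_display_mastering_luminance = 1000;
    meta.hdmi_metadata_type1.min_display_mastering_luminance = 50; /* 0.005 cd/m2 */
    meta.hdmi_metadata_type1.max_cll  = 1000;
    meta.hdmi_metadata_type1.max_fall = 400;

    uint32_t blob_id = 0;
    if (drmModeCreatePropertyBlob(drm_fd, &meta, sizeof(meta), &blob_id))
        return -1;
    /* The driver packs the blob into the HDR InfoFrame / DP SDP on the
     * next atomic commit; the sink then does its own tonemapping. */
    return drmModeAtomicAddProperty(req, connector_id, hdr_prop_id, blob_id) < 0 ? -1 : 0;
}
```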
I worked on this very problem on behalf of Sony for a while, ~10 years ago, and I still do scRGB scanout/composition here on displays from 400 to 1000 nits. FP16 is just the packing format/depth, not what the coordinates *represent*, nor the transfer function(s) and primaries that map between that space and what the hardware can present. It will fight the rest of your composition pipeline (alpha channels and effects) even if you have the bandwidth to composite and scan out in scRGB. EGL wasn't prepared for this (or anything, really), and GL certainly wasn't prepared for this. While that is something for the employer to cover, the actual gear needed for verification (a colorimeter and so on) is far from cheap either.
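To make the "FP16 is just the packing" point concrete: getting from scRGB FP16 (1.0 = 80 cd/m2, BT.709 primaries) to a PQ/BT.2020 signal a display actually accepts still needs a primary conversion plus the ST 2084 transfer function on top – a sketch with the published constants, deliberately without any tonemapping or gamut mapping:

```c
/* Sketch: encoding linear scRGB (FP16 storage, 1.0f == 80 cd/m2, BT.709
 * primaries) as a PQ/BT.2020 signal. Constants are the SMPTE ST 2084 ones
 * and the BT.2087 709->2020 matrix; no tonemapping, so anything the panel
 * cannot reach still has to be mapped somewhere by someone. */
#include <math.h>

static float pq_oetf(float nits)                 /* ST 2084 inverse EOTF */
{
    const float m1 = 2610.0f / 16384.0f;
    const float m2 = 2523.0f / 4096.0f * 128.0f;
    const float c1 = 3424.0f / 4096.0f;
    const float c2 = 2413.0f / 4096.0f * 32.0f;
    const float c3 = 2392.0f / 4096.0f * 32.0f;
    float y = fminf(fmaxf(nits / 10000.0f, 0.0f), 1.0f); /* PQ tops out at 10000 cd/m2 */
    float p = powf(y, m1);
    return powf((c1 + c2 * p) / (1.0f + c3 * p), m2);
}

void scrgb_to_pq2020(const float rgb709[3], float pq2020[3])
{
    /* BT.709 -> BT.2020 primaries (linear light), per ITU-R BT.2087 */
    static const float M[3][3] = {
        { 0.6274f, 0.3293f, 0.0433f },
        { 0.0691f, 0.9195f, 0.0114f },
        { 0.0164f, 0.0880f, 0.8956f },
    };
    for (int i = 0; i < 3; i++) {
        float lin2020 = M[i][0] * rgb709[0] + M[i][1] * rgb709[1] + M[i][2] * rgb709[2];
        pq2020[i] = pq_oetf(lin2020 * 80.0f); /* scRGB 1.0 == 80 cd/m2 */
    }
}
```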
I posted this for two reasons: first, maybe someone is interested; and second, as a heads-up that Red Hat wants to work on HDR support.
Thanks a lot for the link; now I know everything I wanted to know about HDR.