

    in terms of black-hat, I’d say probably a good goal would be https://www.youtube.com/watch?v=CwuEPREECXI . The process would involve using the (thankfully) panning camera to produce a super-resolution image, perspective-correcting the envelopes, and then recovering what information there is.
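
    For the super-resolution step, here is a minimal 1-D shift-and-add sketch. It assumes the per-frame subpixel offsets are already known (in a real pipeline they would be estimated by registering the frames against each other); each low-resolution sample is placed onto an upsampled grid at its shifted position, and overlapping samples are averaged.

    ```python
    def shift_and_add(frames, offsets, scale):
        """Naive shift-and-add super-resolution for 1-D signals.

        frames:  list of low-res sample lists
        offsets: each frame's known subpixel shift, in low-res pixels
        scale:   upsampling factor of the high-res grid
        """
        n = len(frames[0]) * scale
        acc = [0.0] * n   # accumulated sample values per hi-res cell
        cnt = [0] * n     # number of samples landing in each cell
        for frame, off in zip(frames, offsets):
            for i, v in enumerate(frame):
                # Map low-res sample i (shifted by off) onto the hi-res grid.
                j = round((i + off) * scale)
                if 0 <= j < n:
                    acc[j] += v
                    cnt[j] += 1
        return [a / c if c else 0.0 for a, c in zip(acc, cnt)]
    ```

    Two frames of the same scene, offset by half a pixel, interleave to recover a signal at twice the original resolution — which is exactly what a slowly panning camera gives you for free.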


      A heuristic I find myself using often is entropy (in the information theory sense). If your redaction scheme provides more information than there is entropy in the source, I consider it a bad choice. For plain text English, that entropy could be less than 1 bit per character!
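
      As a toy illustration of the heuristic, here is a minimal Python sketch that measures the empirical unigram entropy of a string. Note that a per-character count ignores context, so it overestimates: the sub-1-bit-per-character figure for English comes from modeling longer-range structure, à la Shannon's prediction experiments.

      ```python
      from collections import Counter
      from math import log2

      def char_entropy(text):
          """Empirical Shannon entropy of a string, in bits per character,
          using unigram (single-character) frequencies only."""
          counts = Counter(text)
          n = len(text)
          return -sum((c / n) * log2(c / n) for c in counts.values())

      sample = "the quick brown fox jumps over the lazy dog " * 20
      print(char_entropy(sample))  # roughly 4 bits/char for unigram English
      ```

      If a redaction leaks, say, the pixel width of the hidden word, compare those leaked bits against the (context-aware) entropy of the plaintext to judge whether the redaction is safe.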


        it has a curious bit of coloring in it. […] I’m actually not 100% sure why this happens (and sometimes doesn’t), but it’s an artifact of the rasterization process when text is rendered to screen.

        That coloring specifically comes from subpixel rendering, a technique for increasing the effective horizontal resolution of text by separately setting the brightness of the red, green, and blue subpixels within each pixel.
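
        A minimal sketch of the idea, assuming an RGB-stripe subpixel layout and black text on a white background: render the glyph's coverage at 3× horizontal resolution, then pack each run of three subpixel samples into one pixel's red, green, and blue channels. The function name and toy data are illustrative, not from any real rasterizer.

        ```python
        def subpixel_pack(coverage_row):
            """Pack a row of 3x-horizontal-resolution coverage samples
            (0.0 = background, 1.0 = fully inked) into per-pixel (R, G, B)
            triples, assuming an RGB stripe layout."""
            # Pad so the row length is a multiple of 3 subpixels.
            padded = coverage_row + [0.0] * (-len(coverage_row) % 3)
            pixels = []
            for i in range(0, len(padded), 3):
                r, g, b = padded[i:i + 3]
                # Each channel's brightness tracks its own subpixel's coverage;
                # for black-on-white text, full coverage drives a channel to 0.
                pixels.append(tuple(round(255 * (1 - c)) for c in (r, g, b)))
            return pixels
        ```

        An edge that ends partway through a pixel produces a colored fringe: coverage `[1.0, 1.0, 0.0]` packs to `(0, 0, 255)`, a bluish pixel at the right edge of a stroke — the curious coloring the parent comment noticed.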

        Reasons subpixel rendering is not always used:

        • It might be disabled when rendering to a screen with low pixel density, on which the colored artifacts are too noticeable.
        • It might be disabled if the font renderer is not sure of the layout of the subpixels within each pixel. If an image with such coloring is viewed on a screen with a different subpixel layout, the partially-filled pixels make the image look jagged instead of sharper.
        • Apple removed support for subpixel rendering in macOS after supporting it for years. Apple’s justification was that screens used with macOS generally have high pixel density, so the complexity the technique added to the graphics code was no longer worth the slight gain in sharpness it provided.

          Apple also never used subpixel antialiasing on iOS because you could rotate the device, which would screw it up.