The Grade node's normalize will now set the blackpoint to the lowest value (the closest object distance, typically well over 1) and the whitepoint to 10000000000.0 for infinity.
How does that make the values usefully normalized for use as an alpha mask on a blur, for instance? With the whitepoint out at infinity, everything in the scene gets squashed down near zero.
Maybe if you link the same image's alpha to the mask input of the Grade node itself, so the alpha clips the normalize function?
Anyway, I guess it needs to use the alpha, or treat the mask as a boolean, to find the proper whitepoint.
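To illustrate what I mean, here's a minimal sketch outside of Nuke using NumPy. It treats the alpha as a boolean mask when finding the black/white points, so the background-at-infinity pixels don't collapse the useful range (the function name and the `1e10` infinity sentinel are just assumptions for the example, not anything Nuke exposes):

```python
import numpy as np

def normalize_depth(depth, alpha, inf_value=1e10):
    """Normalize a depth channel to 0-1 using only alpha-covered pixels.

    Pixels where alpha is zero (e.g. the background at 'infinity') are
    ignored when finding the black/white points, so the far plane does
    not squash the useful range down to near zero.
    """
    covered = alpha > 0                 # treat the mask as boolean
    valid = depth[covered]
    valid = valid[valid < inf_value]    # drop any stray infinity values too
    if valid.size == 0:
        return np.zeros_like(depth)
    black, white = valid.min(), valid.max()
    if white == black:
        return np.zeros_like(depth)
    out = (depth - black) / (white - black)
    return np.clip(out, 0.0, 1.0)

# One background pixel at "infinity": without masking, the whitepoint
# would be 1e10 and every foreground value would map to ~0.
depth = np.array([[2.0, 5.0], [8.0, 1e10]])
alpha = np.array([[1.0, 1.0], [1.0, 0.0]])
print(normalize_depth(depth, alpha))
```

With the mask, the covered depths 2..8 map cleanly to 0..1, and the infinity pixel simply clips to 1 instead of dictating the whitepoint.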
Also, this might really depend on analyzing the entire sequence to find the true limits; per-frame limits would flicker as objects enter and leave the frame.
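The sequence-wide version of the same idea would be a pre-pass that scans every frame for one shared black/white pair, then normalizes all frames against it. A rough sketch, again in NumPy with hypothetical names:

```python
import numpy as np

def sequence_limits(depth_frames, alpha_frames, inf_value=1e10):
    """Scan a whole sequence and return one (black, white) pair.

    Normalizing each frame independently makes the result flicker as
    the nearest/farthest objects change; scanning the full sequence
    first gives stable limits to normalize every frame against.
    """
    black, white = np.inf, -np.inf
    for depth, alpha in zip(depth_frames, alpha_frames):
        valid = depth[(alpha > 0) & (depth < inf_value)]
        if valid.size:
            black = min(black, valid.min())
            white = max(white, valid.max())
    return black, white

# Two frames: the closest and farthest covered points sit on
# different frames, so only a full scan finds both limits.
frames = [np.array([[2.0, 1e10]]), np.array([[9.0, 3.0]])]
alphas = [np.array([[1.0, 0.0]]), np.array([[1.0, 1.0]])]
print(sequence_limits(frames, alphas))
```

In Nuke itself something like a CurveTool analysis pass plays this role, but the point is the same: the true limits only exist at the sequence level.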