Denoise based on luminance?


This is the most basic implementation (I don’t have a noisy scene to hand) but this should give you a quick idea of what you’re looking at.

The crucial bits are as follows:

Take an output from your source footage into an RGBToLab node. Lab gives you a lightness channel (L) plus two "colour" axes, A and B. We only need the lightness, so I used a Shuffle node and set the R, G and B outputs all to the input's R channel (A.r, where the L value lives), so the output is a neutral greyscale image that reflects the lightness.

This becomes your mask, which you feed to the Denoiser - note that you might need to tweak the output from the Shuffle node to get the right effect. You can alter the mask by inverting it, applying a threshold, or even colour correcting it to adjust the strength of the effect to your needs.

Don’t forget you can feed the output from the shuffle node to a viewer node to make sure you’re getting a nice greyscale image.
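Outside Natron, the same trick is easy to sketch in numpy if you want to see what the mask actually contains. This is an illustrative sketch, not Natron's API: it approximates Lab lightness with Rec.709 relative luminance (close enough for a denoiser mask) and the `invert`/`threshold` options stand in for Invert/Clamp nodes.

```python
import numpy as np

def luminance_mask(rgb, invert=False, threshold=None):
    """Build a greyscale mask from an RGB image (H x W x 3 floats in 0..1).

    Mimics the RGBToLab -> Shuffle trick: keep only the lightness and
    copy it into R, G and B so the result is a neutral greyscale image.
    Rec.709 relative luminance approximates the Lab L channel here.
    """
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 weights
    if invert:                 # e.g. denoise the shadows instead of highlights
        luma = 1.0 - luma
    if threshold is not None:  # hard cut, like a Threshold/Clamp node
        luma = (luma >= threshold).astype(rgb.dtype)
    return np.repeat(luma[..., None], 3, axis=-1)  # grey: R = G = B = luma
```

Feeding the result to a viewer (or just `imshow`) should show the nice greyscale image mentioned above.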

@omar is more experienced than I am so he might have a more detailed version, but I hope this gets you started.



You have a nice node setup. I would follow your suggestion, but I will say: connect the Shuffle node to the Denoiser's AnalysisMask (or AnalysisSource), not just the Mask. The direct Mask is for blocking out the region of the image where you don't want any denoising to occur. With the AnalysisMask, the whole image still gets denoised; the mask only changes where the noise variance is analysed.

Mask: An optional image to use as a mask. By default, the effect is limited to the non-black areas of the mask.

AnalysisMask: An optional mask for the analysis area. This mask is intersected with the Analysis Rectangle. Non-zero pixels are taken into account in the noise analysis phase.

“If the image has many textured areas, it may be preferable to select an analysis area with flat colors, free from any details, shadows or highlights, to avoid considering texture as noise. The AnalysisMask input can be used to mask the analysis, if the rectangular area is not appropriate. Any non-zero pixels in the mask are taken into account. A good option for the AnalysisMask would be to take the inverse of the output of an edge detector and clamp it correctly so that all pixels near the edges have a value of zero…”
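That quoted recipe (invert an edge detector, clamp so pixels near edges go to zero) can be sketched in a few lines of numpy. This is purely illustrative: a cheap gradient-magnitude detector stands in for Natron's EdgeDetect node, and the `edge_threshold` name is my own invention, not a Natron parameter.

```python
import numpy as np

def analysis_mask(grey, edge_threshold=0.05):
    """Sketch of the AnalysisMask recipe: detect edges, invert, and
    clamp so that all pixels near the edges have a value of zero.

    `grey` is a 2-D float array in 0..1.
    """
    gy, gx = np.gradient(grey)       # cheap stand-in for an edge detector
    edges = np.hypot(gx, gy)         # gradient magnitude
    mask = 1.0 - edges / max(edges.max(), 1e-8)  # invert: flat areas -> 1
    mask[edges > edge_threshold] = 0.0           # clamp: near edges -> 0
    return np.clip(mask, 0.0, 1.0)
```

A flat image comes back all white (analyse everywhere); anywhere there is detail drops to zero, so only the flat areas feed the noise analysis.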

I hope this helps.


Hi Omar,

Is this node setup what you were suggesting? If so, what would be the setup for Shuffle 1 and 2 (the ones after the Denoise nodes)?



Of course, Omar, you're right as ever - in my defence (cough), this was a "quick and dirty" node setup just to get Christopher started. Luma masks like this are incredibly useful for a bunch of reasons, and they're the sort of thing we should all have in the toolbox, don't you agree?


I completely agree Smidoid.


Shuffle node 1 should read like

Shuffle node 2 should read like

Remember that the “A” input of the Shuffle node is the main input. The “B” operates as a Shuffle Channel Copy.

Is this what you are asking? The setup is flowing correctly. For organisation purposes when recombining the channels, I would label Shuffle node 1 as "Y<U joined" and Shuffle node 2 as "(Y<U)<V joined" - that is, if you follow the channel-joining workflow.
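For anyone following along, the joining workflow above boils down to a pair of channel copies. This is a numpy sketch of the idea, not Natron's actual Shuffle API: image A is the main input, and one channel is copied in from image B, exactly like the "B as Shuffle Channel Copy" behaviour described.

```python
import numpy as np

def shuffle_copy(a, b, channel):
    """Mimic a Shuffle channel copy: keep image A ("main" input) and
    replace one channel with the same channel from image B."""
    out = a.copy()
    out[..., channel] = b[..., channel]
    return out

# The two-step join Omar describes (channel 0 = Y, 1 = U, 2 = V):
#   Shuffle 1: "Y<U joined"   -> start from the Y branch, copy in U
#   Shuffle 2: "(Y<U)<V joined" -> copy in V to complete the image
def join_yuv(y_img, u_img, v_img):
    return shuffle_copy(shuffle_copy(y_img, u_img, 1), v_img, 2)
```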



Smidoid, Omar, thank you both for your help. I still have a lot to learn about channels - any tutorial suggestions? I tried recreating the RGBToYUV -> YUVToRGB graph (Maximo's post) and believe I should be getting the same image on both ends, but I'm not. Could one of you please share the contents of the first three Shuffle nodes (YUV)? Thanks.
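As a sanity check on the "same image on both ends" expectation: if the forward and inverse transforms really are inverses, a float roundtrip is lossless. Here's a hedged numpy sketch using the standard Rec.709 Y'PbPr matrix - Natron's RGBToYUV709 node may apply additional scaling or video-range offsets, which would be one explanation for a visible mismatch.

```python
import numpy as np

# Standard Rec.709 RGB -> Y'PbPr coefficients (full range, no offsets).
Kr, Kg, Kb = 0.2126, 0.7152, 0.0722
RGB_TO_YUV709 = np.array([
    [Kr, Kg, Kb],                                     # Y (luma)
    [-Kr / 1.8556, -Kg / 1.8556, (1 - Kb) / 1.8556],  # U (Pb)
    [(1 - Kr) / 1.5748, -Kg / 1.5748, -Kb / 1.5748],  # V (Pr)
])
YUV709_TO_RGB = np.linalg.inv(RGB_TO_YUV709)  # exact inverse

def roundtrip(rgb):
    """RGBToYUV709 followed by YUV709ToRGB should return the input."""
    yuv = rgb @ RGB_TO_YUV709.T
    return yuv @ YUV709_TO_RGB.T
```

If the numbers roundtrip here but not in your graph, look for clamping, premultiplication, or an 8-bit intermediate between the two nodes.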


Omar and I did something similar with Lab colour - it's here somewhere, but I'm buried! Using LAB colour in Natron for colour grading should get you moving.

One weakness of these nodes is that you can't see what's coming out of them - this is something Natron (and others, perhaps?) could learn from Blender. In Blender you can see the available outputs right in the node graph.

Marc (smidoid)


Actually, every node can display its output - you have to click through the other tabs to see the option to view the node. I'm in the park with my daughter now; I'll show it later this evening.

This is your season
Omar Brown
Blessed House Media


Nice one Omar. Have a lovely time with your daughter. Just had mine over so she could make my kitchen smell (not allowed at her mom’s!) :smile:


Here is the example with the Preview option enabled on the node's Node page.



Attached is the RGBToYUV709 graph node setup that you asked for. The script was created in Natron 2.2.5 and you have to supply your own Read node image. The attached image is a screenshot of the Natron script.

RGBToYUV709 - YUV709ToRGB Shuffles.ntp (57.3 KB)


Actually, what I mean (in reference to Blender) is this:

This is a simple example - just a node and a channel separator. You can see how Blender displays all the possible outputs - clearly Hue, Saturation, Value and Alpha in this case. Natron doesn't do this from what I can see, and it really can be a major pain in the fundament. :wink:

These output options are available when the node is added to the tree so you can work quickly and see where everything is going.


Okay!!! Stay tuned. I have already requested that these Blender nodes be converted to PyPlugs. Attached are the work-in-progress PyPlug workflows.


Nice! Crucially, though, this applies to all nodes where more than one output is possible through a split - channels in particular. It's visually more difficult to create a node graph (and later see what you did) unless each output is visible in the chart, a la Blender.


The Shuffle is good for this as a PyPlug extended feature for colour-transform processing needs, but it is not something that is needed for every job. I have asked Fabrice Fernandez to architect this, and hopefully within the next few weeks we will see the results.


Omar, Marc, thanks a lot!


Cool Breeze - no problem.


Hey! Exciting stuff is going on here!
I think one issue is that Natron nodes can't have more than one output.
It's also the case for PyPlugs.

In Nuke I'm not sure that's possible either. In Blender, nodes are more designed to allow this, but on the other hand it quickly turns into a big spaghetti mess when there are too many nodes.


Hey Omar,
Curiously, if you add three Denoise nodes after Y, U and Shuffle 1_3, the Denoise on the U (green) channel acts weird once you press "analyse noise levels" - it changes the colour of the output image. This is using your scene that I downloaded, but I don't think it has to do with your scene. Give it a try yourself if you wish. Not that it matters much - the noise is mostly on the blue channel.