Denoise based on luminance?


This is probably one for Omar or one of the devs but I expect it could help others like me with low-end prosumer cameras.

The problem of noise seems to concentrate in the darker areas of the image - I assume this is because there are simply fewer photons hitting each photosite. (This is similar to the noise we see in 3D raytracers like Cycles.)

I’ve tested this theory by adjusting the sensor amplification (ISO), and although the noise does increase slightly over the ISO range, there is still notable noise even at the lowest settings - primarily in dark areas.

I think an interim solution might be to use a luma mask (hence why I’m calling on @Omar, sorry mate :innocent:) which feathers based on the luminance of the image: giving us stronger denoising in dark areas and progressively weaker application in brighter ones.
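To spell out the sort of feathering I mean, here’s a toy Python sketch (purely illustrative - the `lo`/`hi` thresholds and function names are made up, and none of this is Natron’s actual API):

```python
def rec709_luma(r, g, b):
    """BT.709 luma from linear RGB in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def denoise_weight(luma, lo=0.1, hi=0.6):
    """Feathered mask: full denoise below `lo`, none above `hi`,
    with a smoothstep falloff in between. `lo`/`hi` are invented
    thresholds you'd tune by eye."""
    if luma <= lo:
        return 1.0
    if luma >= hi:
        return 0.0
    t = (hi - luma) / (hi - lo)      # 1 at lo, 0 at hi
    return t * t * (3.0 - 2.0 * t)   # smoothstep

def blend(noisy, denoised, w):
    """Per-pixel mix: w=1 takes the fully denoised value."""
    return w * denoised + (1.0 - w) * noisy
```

In other words: dark pixels take the denoised result, bright pixels keep the original, and the middle gets a smooth mix.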

Of course, it’s quite possible this already happens and I’ve missed it :slight_smile: but no matter how much I tinker with the settings, I’m unable to get good results in dark areas without negatively affecting lighter ones.

All advice gratefully received as ever, this is a real pain in the fundament for me because a lot of my projects seem to feature dark areas which jump out at me like great big throbbing sore… thumbs.



Have you tried separating the colour channels using RGBtoYUV709 and YUV709RGB, then applying the Denoiser node just to the U and V channels? Since the Y channel holds the luminance data, try processing the U (B−Y) and V (R−Y) channels with their own denoise nodes. Let me know if that has any effect.
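For anyone wondering what the 709 split actually does, here’s a rough Python sketch of the BT.709 maths (constants are from the BT.709 spec; the exact scaling the Natron nodes use may differ):

```python
def rgb_to_yuv709(r, g, b):
    """Approximate BT.709 luma/chroma split for RGB in [0, 1].
    Y carries brightness; U and V carry only colour-difference data,
    which is why they can be denoised without softening luma detail."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    u = (b - y) / 1.8556   # B-Y, scaled to roughly [-0.5, 0.5]
    v = (r - y) / 1.5748   # R-Y, scaled likewise
    return y, u, v

def yuv709_to_rgb(y, u, v):
    """Inverse of the split above."""
    r = y + 1.5748 * v
    b = y + 1.8556 * u
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b
```

Note that a neutral grey pixel has U = V = 0, so chroma denoising leaves it completely untouched - all the brightness detail stays in Y.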


Wouldn’t you know - it did! Thank you (again) Omar. Damn it all, old boy, I owe you a lot of beer! :wink:

Awesome sauce. I’ve also added a field recorder (Ninja 2) to get better quality footage. And oh boy, does it!

From 50Mbps/420 to 220Mbps/422 is an amazing transition.


That is great. I was hoping it would work based on what you were trying to do. I want to get the Ninja 2 device for my projects as well. I hope to get one in the fall of this year, or sooner.


Hi, as a newbie in Natron I don’t fully understand how to achieve this.

I’m trying to remove noise produced from a Blender Cycles Render.
A screen-shot revealing the node structure would be extremely helpful.



They have dropped in price dramatically. There’s a design problem (not quite a flaw) that makes the HD caddy an absolute SOB to remove at first - I have to use a vice! Reading the comments, this does get easier with wear, and it’s a small price to pay for excellent output from a prosumer camera.


Hi SunBurn and welcome to Natron.

Noise in Blender’s Cycles renderer is not the same as the noise produced by a camera sensor. Camera noise, in particular, is more prevalent in dark areas (hence why we’re using luma separation to tune the masks).

There are other ways to achieve this in Blender and while this isn’t a Blender forum, I’m happy to help if I can. For example, are you using Blender Cycles to produce an animation or a still? With animations there’s a technique where you can make the noise look like film grain and it’s barely noticeable.

You will always get a little bit of noise in an unbiased raytracer like Blender Cycles because we don’t have computers fast enough to work the way light does, so we cheat. Even the best renderers in the world still have to use tricks to reduce noise to acceptable levels, and even then they need fairly powerful systems. Even with my 32-core Xeon workstation (yes, it did cost an arm and a minor organ) a typical render might take several minutes per frame to reach a tolerable level of noise. That moves into the hours if you really want to clear it, but this is always a trade-off because the relationship between noise and render time isn’t linear.
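That non-linear trade-off is just Monte Carlo convergence: the error falls with the square root of the sample count, so halving the noise costs roughly four times the render time. A toy illustration in plain Python (nothing Cycles-specific; the numbers are invented):

```python
import random

def noisy_estimate(samples, rng):
    """Monte Carlo estimate of the mean of a uniform [0, 1] signal -
    a stand-in for one pixel's light estimate in a path tracer."""
    return sum(rng.random() for _ in range(samples)) / samples

def rms_error(samples, trials=2000, true_value=0.5, seed=1):
    """RMS error of the estimate over many independent trials."""
    rng = random.Random(seed)
    sq = sum((noisy_estimate(samples, rng) - true_value) ** 2
             for _ in range(trials))
    return (sq / trials) ** 0.5

# 4x the samples only halves the noise: the classic 1/sqrt(N) curve.
```

So going from "tolerable" to "clean" really does push render times from minutes into hours.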


Hi Smidoid and thank you for your answer.
I’m quite aware of some of the de-noising methods in Blender, but I’m trying to go the opposite way here.
I’m trying to use photography/video methods in CG Imagery just to test if it works.
What I’ve noticed is that the noise lies in dark areas and/or shallow DOF.
So I was thinking if I could just apply de-noising only in the dark parts and leave everything else intact it would be great and I assume quite fast.
(I did a fast test in Gimp with masks and G'MIC’s de-noising algorithms and it worked to a certain level… so I wanted to test the same in Natron, which is far more efficient for post production.)
For the moment I’m only testing still images but animations testing is in my plans.
If you have any more advice, it will all be gratefully accepted. :slight_smile:


It’s certainly worth a try. TBH, none of the non-commercial denoisers are all that good, simply because the problem is hideously complex. The best denoiser for stills and video (in my opinion) is Neat. It’s available in still and video versions but it’s not cheap. There is a demo available so you can try it out, though.

Video noise is about as truly random as we can imagine (if you understand the mathematics of random numbers, you’ll know this already) because it comes partly from the sensor and partly from the electronics connected to it. If we’re using compressed video (which most of us are) there’s extra DCT noise to contend with too - that’s what gives those annoying blocks. (CineForm, incidentally, uses wavelets rather than DCTs, so it doesn’t suffer from that problem.)

Most (if not all) denoisers work on the assumption that once you find the noise, you apply a varying degree of blur and smush those noisy pixels together. This makes the area more uniform and more pleasing to the eye, which is why you then have to sharpen things to polish off any hard edges that got blurred out. The crudest way to do this in Blender is a blur followed by a sharpen pass in the compositor. It works, but the results aren’t great.
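As a concrete (and deliberately crude) 1D sketch of that blur-then-sharpen idea - a box blur followed by an unsharp-mask pass; real denoisers are far cleverer than this:

```python
import random

def box_blur(signal, radius=1):
    """Average each sample with its neighbours (edges clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp(signal, blurred, amount=0.5):
    """Sharpen by adding back a fraction of what the blur removed."""
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

def denoise_crude(signal):
    """Blur to smush the noise, then sharpen the result a little."""
    blurred = box_blur(signal)
    return unsharp(blurred, box_blur(blurred))

# Demo: a flat 0.5 signal buried in +/-0.1 uniform noise.
_rng = random.Random(0)
noisy = [0.5 + _rng.uniform(-0.1, 0.1) for _ in range(200)]
smoothed = denoise_crude(noisy)
```

The blur knocks the noise variance down, and the unsharp pass claws back some of the edge contrast it destroyed - exactly the trade-off described above.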

The best denoisers are the ones best able to discriminate noise from detail - which, as you might imagine, is easier said than done.

The simple answer (which isn’t always practical) is to start with the best possible image. As the saying goes: “Don’t try to fix in post the stuff you should have done in camera” - or words to that effect.

With Blender we don’t have that choice, so it’s either buy a bigger computer, send it to a render farm, wait longer, or use a denoiser - for stills, anyway.

For animation, however, you have another option which may be enough. I mention this because (and trust me, I’ve tried!) the results are often better than a ham-fisted attempt at denoising.

This panel is your friend. :slight_smile:

The clamp settings are for fireflies, so that’s probably not your issue at this stage - but see that little clock next to Seed? When clicked, it uses a different starting “position” (seed) for the random number generator on each FRAME you render. This has the effect of making the noise appear like film grain, and you’d be amazed how much noise you can tolerate (way more than you would in a still image!)
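To see why the per-frame seed trick works, here’s a toy Python model (nothing here touches Blender; the numbers are invented). With a fresh seed each frame, the eye’s temporal averaging drives the grain toward the true value, whereas a frozen seed just repeats the identical error on every frame:

```python
import random

def render_frame(true_pixel=0.5, noise=0.2, seed=0):
    """One 'frame' of a single pixel with zero-mean render noise."""
    rng = random.Random(seed)
    return true_pixel + rng.uniform(-noise, noise)

def perceived(frames):
    """Crude stand-in for the eye integrating grain over time."""
    return sum(frames) / len(frames)

# Animated seed: a fresh noise pattern every frame (ten seconds at 24 fps).
animated = [render_frame(seed=f) for f in range(240)]
# Static seed: the same error repeats, so averaging gains nothing.
static = [render_frame(seed=0) for _ in range(240)]
```

With the static seed the grain is glued in place and reads as a dirty lens; animated, it shimmers like film grain and mostly averages away.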

You can still use a denoiser to polish this off a little, but it’s rarely necessary.

To look at your particular problem properly: are you up for uploading a sample image so I can give it a go with the masks and set up a project for you?


you should also take a look at the treasure trove of avisynth/vapoursynth denoiser plugins:

sure, some of them are quite outdated, limited to 8bit etc., but not all of them. a few of them work exceptionally well.


We can’t use them in Natron, can we @mash_graz, unless they are OFX plugins? [And I’m on a GNU box, so I’m even more limited.] (Again, this may be my ass-umption. :slight_smile:)


We can’t use them in Natron, can we @mash_graz unless they are OFX plugins [and I’m on a GNU box so I’m even more limited.]?

right now there is no solution available to use avisynth or vapoursynth plugins in natron. it’s always a bit of juggling to use different tools, but in the particular case of denoising, this kind of tool is often the most satisfying solution.

if you prefer the GNU ecosystem, you should look at vapoursynth. it’s a modern rewrite of avisynth, using python as its scripting language and overcoming some important historic limitations of avisynth. many useful plugins have been ported to this fork in the meanwhile. most of it is also available as source code, which could be reused in ofx plugins as well.


Sometimes I wish I’d carried on learning development - but there are only so many hours in the day and my poor brain can only store so much. I’m already considered a polymath… (ironic, because the one thing I’m really poor at is math!)

It’s not so much that I prefer GNU; it’s that I’m trying to prove, with a couple of current projects, that GNU is a viable option for renegade, guerrilla and small-time studios like ours to produce work without (and let’s be honest here) stealing software or deliberately using it outside its licensing terms.

Thanks for those links though - it’s great to find some new stuff. I only discovered MakeHuman quite recently (last year) and that’s made things easier for another project.


Smidoid, thanx once again for your suggestions.
Sample clamping is a vicious friend: it can help a lot in reducing noise and fireflies, but it throws away much of your image’s dynamic range.
Since I’ve been an everyday Blender user for almost a decade, I’ve been digging into the subject for quite some time and have tested almost everything that’s written or proposed out there, as far as the Blender side of things is concerned.
What I’m investigating now is whether Natron could be part of my, let’s call it “de-noising”, work-flow.
Do you have any suggestions for a Natron node set-up that could reduce noise without destroying texture and edge fidelity on still images?
I’m also the owner of a small (“tiny”, to be totally honest) studio, and I’m on the GNU side of things too and would love to stay here, but sometimes clients are crazy demanding when it comes to deadlines, so I try to reduce time and/or budget as efficiently as I can. :wink:
I’m sure you know what I mean.
Any help will be more than appreciated.

If you’re still interested, I can make a typical-case scenario image in 16-bit TIFF format so you can play with it if you have some spare time.


Hello SunBurn,
I’ve tested a bit how to denoise Blender’s renders with Natron, but it was not a full investigation.
I found nothing better than the existing techniques already well known to the Blender community.

As you may already know, there isn’t a perfect solution: when there is too much noise, it’s impossible to recreate information that simply isn’t there in the image.

Neat Video can be quite efficient and affordable, especially if you are a professional. It will give very good results in cases that aren’t too extreme. The denoiser built into Natron is quite good too, but tends to give a blurrier result.

If you give us an example image, we can try to see what we can manage with it - I’m sure it will be interesting for many people.


noise in artificially rendered content is a very special kind of challenge. it shows significant differences to footage recorded on a physical camera sensor. in the latter case, it’s much harder to calculate the motion vectors needed to compensate for temporal NR artifacts; that’s much easier and more effective to achieve in blender via motion vector layers. therefore, i think, blender/cycles noise reduction is a very special case, and you shouldn’t underestimate the recipes discussed in the blender community.


I have to agree with the others here. Cycles noise is very special compared to camera sensor noise. Also, I’m with you on clamping - it’s very badly described in the manual too, since you have to work “backwards” in effect.

Anyhoo, we’re comparing pseudo-random noise on a per-pixel basis (calculated on some simulated surface in an internal dimension) to physical noise which is entirely quantum based. On the face of it, that seems like a similar problem but, rather bizarrely, when you get down to the nitty-gritty, they’re sufficiently different animals to need different fixes. Consider, for example, that a typical piece of H.264 footage has noise compounded by bayer-pattern correction AND 4:2:0 colour compression. Even if we’re lucky enough to get pure, uncompressed 4:4:4, we’ve still got a bayer pattern to contend with. Ironically, this makes the noise slightly more predictable and therefore easier to remove without damaging less noisy areas.

Neat, which Sozap and I both use, employs a profiling technique to identify how the noise appears in the image (as does Natron), but it’s probably more advanced (they’ve been at it longer) and it’s easier to tweak. I’d honestly give it a try. It’s not “cheap”, but you can offset the cost to your clients and it will save your hair!

A couple of other options are a faster workstation (you can pick up used multi-core Xeon machines relatively cheaply now) or sending your render to a commercial farm with the compute power you need. The latter was the option I used when I had a Mac with limited CUDA cores.

Further, have you tried an alternative renderer? I love Cycles, but it’s still not as mature as some of the other options, and you might find that something else gives you a better result. Dario Baldi did a comparison which I’ve found exceptionally useful as a starting point. (I rather fancy Octane, but it’s out of my price range.) Lux isn’t fast, but the output is superb.

If you’re stuck with Cycles for any reason and time isn’t an option [apologies for labouring this], Neat is really your best option. They have a cheaper version for still images using the same technology, minus the temporal prediction.


Thank you guys for your answers, but as far as I understand, Natron isn’t the right tool for what I’m looking for, except with the addition of Neat (I’m going to test that for sure).
Yes I’m aware of all the solutions described here.
I’ve also done extensive tests with different work-flows, which can be found on the Blender Artists forum by searching for “L’Appartement”.
Comparing render engines doesn’t say much to me and Andrew’s article is quite limited and very basic IMHO.
For me, what’s important is the whole work-flow from start to finish: from the client’s blue-prints, images, references, whatever, to the final deliverable format.
I’ve tested some of the commercial 3d software out there and most of the render engines.
I’m aware that Cycles is a young engine and a pure path-tracer, and I know most of its drawbacks, but none of the other engines, commercial or not, is as well integrated into Blender (some of the commercial ones especially look like cheap alphas) and believe me guys, I’ve tested a lot of them (Lux, Yafaray, Vray, Thea, Octane, Corona, Mitsuba).
Anyway, thank you guys, but since this is a Natron forum I’ll stop rambling about Blender. :stuck_out_tongue_winking_eye:


LOL. It’s been informative for all of us, and people chatting in the forums keeps them fresh. If people don’t want to read it, they’ll move on.

I think the answer here is to educate your clients (politely!) as to what’s actually possible this side of a Hollyweird rendering farm. That’s nice work (L’Appartement) right there, and I can see the problem with portal lights - another bloody maddening issue we have to work around. (When I was new, I put a sun lamp outside the window because, well, it’s the sun, right?!)

There’s a reason why the young Terminator in Terminator Genisys didn’t look exactly like Arnie - it’s good, very, very good, but we can still see it’s fake despite the unimaginable resources and time they spent on it.

Blender and Natron are related anyway, since they share some tracking code, so I don’t think people will be too put off by these discussions.

Ultimately, if you animate the scene with a moving camera and animate your random seed, you’ll find you get a pleasant film-like grain which is almost invisible to the eye.

Psychologically, there’s an effect here whose name I can’t recall, but you see it on the Internet all the time in pictures titled “When you see it…”

Noise is like that - when you’re sat on top of the image, it sticks out like a weary willie on a porno shoot (there’s another image you won’t be able to get out of your head). However, when OTHER people see the same image, they won’t perceive what you do - provided you don’t draw attention to it.

On the subject of “When you see it…”, here’s one that’ll kill you (and this thread, more than likely :wink:).

Movie aficionados will probably slap their heads and go “WTF” but it can baffle people for days - and then they realise that he’s laughing at them!


Omar’s suggestion with separate channels looks great. Could Omar or Smidoid share a scene with such a graph? Thank you.