How to do Z blur?


#1

Hi, is there a way to do depth blur in Natron? I haven’t found a blur that takes any kind of control channel, let alone bokeh etc. The latter isn’t as important to me at the moment, a basic zblur would do. Am I missing something or is this not available natively?


#2

You can feed a b/w mask to the Blur node and it will multiply the blur value by whatever value the mask has at a given pixel. If you want to use a raw Z channel, you have to map its distance values to the 0-1 range (a Grade node will do the trick). But note that if you use a non-antialiased Z channel, you'll get ugly artifacts at the edges of your objects.
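For reference, the remap that a Grade node does here is just a linear map plus clamp. A minimal numpy sketch (the `near`/`far` range is an assumption you'd read off your scene; in Natron you'd set them as the Grade node's blackpoint/whitepoint):

```python
import numpy as np

def normalize_depth(z, near, far):
    """Map raw depth values in [near, far] linearly to the 0-1
    range, clamping anything outside, like a Grade node with
    blackpoint=near and whitepoint=far."""
    z = np.asarray(z, dtype=np.float64)
    return np.clip((z - near) / (far - near), 0.0, 1.0)

# 2.0 -> 0.0, 5.0 -> 0.375, 8.0 -> 0.75, 20.0 clamps to 1.0
mask = normalize_depth([2.0, 5.0, 8.0, 20.0], near=2.0, far=10.0)
```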


#3

sergusster, I tried but plugging a mask into a blur node seems to do what it does in Nuke - control a blend between the blurred and non-blurred versions of the input image. I confirmed this by comparing a masked blur and a mask-driven blend of the input and blurred versions - they are identical.

Did I misunderstand which blur node to use?


#4

Yep, sorry, misled you here. It's just a blending factor. Digging into that problem now. I'll post my results here.


#5

The algorithm to do a proper Zblur would require a complicated graph in which the number of nodes depends on the blur parameters, so it is better to code a C++ plugin for that.

We first have to code Convolve, then Lens aperture generators, Bokeh, and finally Zdefocus.

As described here: https://github.com/MrKepzie/Natron/wiki/Google-Summer-of-Code-GSoC-ideas

Zdefocus

The goal of depth-dependent defocus is, given a depth image and an all-in-focus color image, to blur each pixel in the color image depending on its depth. Many algorithms have been proposed in the literature, but some of them produce only coarse approximations of the result. The implementation in Natron should at least correctly handle occlusions by foreground objects.

We propose the following method:

  1. extract “depth slices” from the image, using the depth image. A depth slice is black and transparent everywhere except where the depth is within some range, where it contains the original RGBA data.
  2. blur each slice with the proper blur size (FFT-based convolution is necessary for a proper bokeh effect). The in-focus slice is not blurred.
  3. merge-over each slice, from the back to the front. That way, each blurred slice may occlude objects behind it.

#6

Doesn't it sound clunky to you? I mean, having a proper Zblur (or at least a simple variable blur based on a b/w mask) is essential for decent compositing software. It's a must-have. I know, I know - you can use third-party OFX plugins to get this functionality, but frankly, having BASIC tools out of the box would be much nicer.


#7

A variable blur is exactly the same complexity as a Zblur, except the merge operation will be different.

You can still reproduce the algorithm I gave above as a series of ColorLookup (to generate the mask for a given blur size), Blur, and merge nodes.
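The ColorLookup step amounts to a per-slice weight function of depth. One hypothetical choice (not necessarily what the node-graph version would use) is overlapping triangular ramps, so that the weights across slices sum to 1 at every depth and no pixel is dropped between slices:

```python
def slice_weight(z, center, width):
    """Triangular ramp centered on a slice's depth: 1 at the
    center, falling linearly to 0 at +/- width. Adjacent slices
    spaced `width` apart then sum to 1 at every depth."""
    return max(0.0, 1.0 - abs(z - center) / width)

centers = [0.0, 1.0, 2.0, 3.0]  # arbitrary slice depths for illustration
z = 1.25
weights = [slice_weight(z, c, 1.0) for c in centers]
# weights are [0.0, 0.75, 0.25, 0.0] and sum to 1.0
```

Each weight is what the ColorLookup curve would output for that slice; multiplying the RGBA input by it gives the slice to feed into that slice's Blur node.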

Concerning the notion of “must-have”: almost all nodes are “must-haves”, and maybe you should think twice before making such criticisms…


#8

Thanks for clarifying. I figured it might not have been a top priority as third-party plugins are available and probably produce better results than a native node.

I did think of using seexpr to do more or less what you describe above, but the more I thought about it the less it seemed like a good idea. I’m not even sure there’s a reasonable way to do a blur with seexpr.


#9

And maybe I should not. Because following that logic, I could reproduce every algorithm by doing pixel-by-pixel math with my pocket calculator in one hand and Natron with no nodes at all in the other. When I say “must-have” I mean a tool whose absence makes a given compositing software almost useless in real production, and I can name quite a few right now. I can also name a number of nodes whose presence makes no sense at all. For example, the Dilate and Erode nodes provide exactly the same result. Or the Position node, which is just a much-simplified version of the Transform node, etc. They are here, while at the same time essential nodes for everyday tasks are missing. I'm not criticizing you guys, you're doing a great job! And I understand that you have much more important tasks on the todo list. I just want to point out that it is often good practice to know what your users are eager for.


#10

I tried some experiments with SeExpr, but yeah, it's not worth it. Calculating blur kernels is quite a CPU-intensive task, and there is no way to do it efficiently with one-line expressions nor with a whole Python function :frowning:


#11

I’ll make a first version of Zblur (with gaussian/box/triangle/quadratic kernels only, no convolve) when I find time to do so. I also have a full-time job as a researcher, with teaching activities and a bunch of PhD students to take care of.


#12

So, I did come up with a solution. It uses the method suggested by Frederic: slicing the image into blur planes. It's very easy to use, though I suppose it's not as fast as it would be if written as an OFX plugin. Right now it only supports the Z channel, but I'm planning to add grayscale mask support, then do some testing, and then share it with everyone.


#13

So how did your tests go? Is it available for anyone to download and try? That screenshot of yours looks very interesting.


#14

Right now I am waiting for RC3 or a stable release, due to a minor bug that shakes expressions off of some nodes. Aside from that, there are a few bugs with channel mapping that would be nice to work around, but it won't take much time.


#15

I’d like to bump this, has this been implemented yet? I’m interested in seeing the code for this :slight_smile:


#16

no implementation yet :confused:


#17

@sergusster, any chance of sharing what you did? Either the theory or the partial implementation?


#18

@zpelgrims I think he just recreated the algorithm that Frederic described with built-in Natron nodes:

[quote=“frederic_devernay, post:5, topic:263, full:true”]

Zdefocus

The goal of depth-dependent defocus is, given a depth image and an all-in-focus color image, to blur each pixel in the color image depending on its depth. Many algorithms have been proposed in the literature, but some of them produce only coarse approximations of the result. The implementation in Natron should at least correctly handle occlusions by foreground objects.

We propose the following method:

  1. extract “depth slices” from the image, using the depth image. A depth slice is black and transparent everywhere except where the depth is within some range, where it contains the original RGBA data.
  2. blur each slice with the proper blur size (FFT-based convolution is necessary for a proper bokeh effect). The in-focus slice is not blurred.
  3. merge-over each slice, from the back to the front. That way, each blurred slice may occlude objects behind it.

[/quote]

So there isn't code yet. If I need this at some point, I'll look into doing something with a Shadertoy node; this could be a workaround before Fred or someone else implements it properly in Natron.


#19

There is a shadertoy-based Zblur somewhere in the forums I believe, or maybe in the Natron community PyPlugs, but it is not geometrically or optically correct.
Developing these things takes time, and we don't have much time.


#20

yes indeed !

https://github.com/NatronVFX/natron-plugins/blob/master/Filter/Defocus/README.md

It can give a nice “artistic” result even if it's not accurate. There are two more issues with it:
- the alpha channel is not supported
- even if you can have a variable blur driven by a b/w image, it doesn't work well in some cases where you have a blurry background and a sharp foreground.

The idea of having depth slices may give a better result then.