getRegionsOfInterest and MultiThreading process action


Hi everyone,

I’m new to OFX programming.
I decided to make a first simple image-processing plugin, based on the ofx-MISC invert.cpp example, so I’m following the basic guides, examples, API docs, etc.

I aim to do a simple local averaging (blur) with the spatial radius as the only parameter, to keep things simple.

I would like to know how args.regionOfInterest and the getRegionsOfInterest action should be managed in order to do multithreaded processing.
This is necessary because, to blur a portion of the image per thread, I need to pad each input rectangle with a margin that depends on the patch radius.

I need multithreading because I want to do more expensive processing in the future. I know the whole thing could be done within the render action, and that would be easier.

In the invert example (which is a per-pixel algorithm), we have:

multiThreadProcessImages(OfxRectI procWindow)

this calls the process function, which does the whole processing of a single rectangular portion of the entire dstClip RoD, one call per thread:

    void process(const OfxRectI& procWindow)
    {
        for (int y = procWindow.y1; y < procWindow.y2; y++) {
            for (int x = procWindow.x1; x < procWindow.x2; x++) {
                // ... here the dstPix components are processed ...
            }
        }
    }


So how are the procWindow coords related to the RoI coords in general?
Should I preprocess the image somehow?
Should I pass the RoI coordinate values to the process function, to be able to address pixels outside the procWindow?

I could implement the getRegionsOfInterest action, but now I’m confused about how to use it in the processing step.

Thanks in advance for any reply.


The getRegionsOfInterest action must return the region you need from your input clips.
The getRegionOfDefinition action must return the size of your output.

For plug-ins that do not need to modify the output region, or that don’t need extra pixels from the input, you need not implement these actions; their default behaviour is correct.

They work in canonical coordinates (i.e. not taking pixel aspect ratio and render scale into account).
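For reference, the canonical-to-pixel mapping defined by the OFX spec can be sketched as plain functions (not part of any plug-in class; the spec’s rule is pixel x = canonical x × renderScale.x / pixelAspectRatio, and pixel y = canonical y × renderScale.y):

```cpp
// Canonical -> pixel coordinate conversion, per the OFX coordinate rules.
// The pixel aspect ratio only affects the x axis.
double canonicalToPixelX(double xCanonical, double renderScaleX, double pixelAspectRatio)
{
    return xCanonical * renderScaleX / pixelAspectRatio;
}

double canonicalToPixelY(double yCanonical, double renderScaleY)
{
    return yCanonical * renderScaleY;
}
```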

The render action takes a render window as a parameter: this is an arbitrary rectangle in pixel coordinates that the host asks you to render. It is not necessarily the region of definition (converted to pixel coordinates), unless you flagged your plug-in as not supporting tiles.

This render window may then be split up and rendered by multiple threads, by implementing a processor using the multi-thread suite. Each sub-rect is then passed to the multiThreadProcessImages function, which is called once per thread to render it.
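As a rough illustration of what that splitting looks like (the band strategy below is an assumption for illustration; the actual host/support-library logic may differ), a render window can be cut into horizontal bands, one per thread, each band becoming one procWindow:

```cpp
#include <algorithm>
#include <vector>

// Minimal stand-in for OfxRectI (pixel coordinates, x1/y1 inclusive, x2/y2 exclusive).
struct RectI { int x1, y1, x2, y2; };

// Split a render window into up to nThreads horizontal bands.
// Each band would be handed to one thread as its procWindow.
std::vector<RectI> splitWindow(const RectI& renderWindow, int nThreads)
{
    std::vector<RectI> bands;
    int height = renderWindow.y2 - renderWindow.y1;
    int nBands = std::max(1, std::min(nThreads, height));
    for (int i = 0; i < nBands; ++i) {
        RectI band = renderWindow;
        band.y1 = renderWindow.y1 + (height * i) / nBands;
        band.y2 = renderWindow.y1 + (height * (i + 1)) / nBands;
        bands.push_back(band);
    }
    return bands;
}
```

Note that every band lies inside the render window: this is why a per-thread process function only ever sees coordinates contained in the render window.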


So the RoI is only considered for the input clip, and has nothing to do with the multithreading step?

Then, when the render window is split up into sub-rects,
the multiThreadProcessImages function calls process(procWindow), where procWindow holds the coords of a single sub-rect.

So inside the process function I would have to substitute each dstPix with the average of its local neighbours, implementing something like:

for each dstPix at (x, y) in the procWindow:

    for (int j = -_radius; j <= _radius; j++) {
        for (int i = -_radius; i <= _radius; i++) {
            const PIX *srcPix = (const PIX *)(_srcImg ? _srcImg->getPixelAddress(x + i, y + j) : 0);
            dstPix[0] += srcPix[0];
            dstPix[1] += srcPix[1];
            dstPix[2] += srcPix[2];
            dstPix[3] += srcPix[3];
        }
    }

and then divide the dstPix components by (2*_radius + 1)^2.

When I try to do so, Natron crashes.
The exception report says: "The thread tried to read or write to a virtual address for which it does not have the appropriate access."

So the process function is unable to address pixels outside the procWindow.

How should I do that?



First, in the render action I would check that the returned input image has correct bounds, i.e. it must contain all the pixels you are going to address.
You could also assert that in the pixel processor. Then check that your image has the appropriate number of components and that you do not iterate too far. Writing your first pixel processor is kind of challenging; once you’ve got it, it’s always the same.
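A containment check of this kind might be sketched as follows (RectI is a stand-in for OfxRectI; in a real plug-in you would compare the bounds returned by the input image against the render window expanded by the kernel radius):

```cpp
#include <cassert>

// Minimal stand-in for OfxRectI (x1/y1 inclusive, x2/y2 exclusive).
struct RectI { int x1, y1, x2, y2; };

// Expand a rectangle by `radius` pixels on every side.
RectI pad(const RectI& r, int radius)
{
    return RectI{r.x1 - radius, r.y1 - radius, r.x2 + radius, r.y2 + radius};
}

// True if `inner` lies entirely within `outer`.
bool contains(const RectI& outer, const RectI& inner)
{
    return outer.x1 <= inner.x1 && outer.y1 <= inner.y1 &&
           outer.x2 >= inner.x2 && outer.y2 >= inner.y2;
}

// In the render action, before launching the processor, one could assert:
//   assert(contains(srcImageBounds, pad(renderWindow, _radius)));
```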


I appreciate your help.
I still couldn’t figure out how the RoI affects the renderWindow.

I could verify that the RoI is correct: it has the size of the original image plus the margins,
e.g. roi.x1 = -1, roi.y1 = -1 if kernel_radius = 1.

I flagged an infinite RoD in the getRegionOfDefinition function, just in case. It didn’t make any difference.

The problem appears to be the renderWindow size: it is the same size as the original image.
I thought it would have to be a bigger rect, due to the padding I’m adding with the RoI values,
e.g. the renderWindow should have (x1, y1) = (-1, -1),
but instead it has (x1, y1) = (0, 0).
How can I make getRegionsOfInterest take effect on the renderWindow?

The process() function then receives sub-rects in those renderWindow coords. I printed out the coords of those sub-rects and they seem to be OK too; they are all contained in the renderWindow.
So this doesn’t allow me to access the pixels in the padded RoI either.
If I try to access out of the image bounds, Natron crashes, logically.

Besides, I couldn’t access or cout the args.renderWindow values inside the pixel processor function;
I only have access to the procWindow values.
So I’m constraining the margins manually to avoid crashing when trying to address pixels out of bounds.
How can I get the renderWindow size within the process() function?

Lastly, the blur effect seems to work fine away from the margins, and I can play forward in Natron.
But when I scrub manually over the timeline, Natron crashes with the exception "The thread tried to read or write to a virtual address for which it does not have the appropriate access."

I’m trying to get a bigger renderWindow so I can process the border pixels.

Thanks in advance for any help.


You are missing the point of the getRegionsOfInterest action: its purpose is for a plug-in to declare the rectangle it needs from its input clips.

To specify the region that you will render, you need to implement getRegionOfDefinition instead.

The render window is computed by Natron merely from what you returned from getRegionOfDefinition, clipped to the viewer’s visible portion, etc.

You are allowed to access input pixels over the whole area that you requested in getRegionsOfInterest; that’s the purpose of it…
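Putting it together: for a blur, getRegionsOfInterest typically takes args.regionOfInterest and pads it by the kernel radius. The padding logic can be sketched standalone like this (RectD stands in for OfxRectD; for simplicity this assumes a pixel aspect ratio of 1, otherwise the x padding would need scaling in canonical coordinates):

```cpp
// Minimal stand-in for OfxRectD (canonical coordinates).
struct RectD { double x1, y1, x2, y2; };

// Region needed from the input clip: the region the host asks about,
// grown by the kernel radius on every side.
RectD regionOfInterestForBlur(const RectD& regionOfInterest, double radius)
{
    return RectD{regionOfInterest.x1 - radius,
                 regionOfInterest.y1 - radius,
                 regionOfInterest.x2 + radius,
                 regionOfInterest.y2 + radius};
}

// In a plug-in based on the OFX C++ support library this result would be
// returned from the getRegionsOfInterest override, roughly (signatures shown
// as an assumption, check ofxsImageEffect.h in your support-library version):
//
//   void MyBlurPlugin::getRegionsOfInterest(const OFX::RegionsOfInterestArguments &args,
//                                           OFX::RegionOfInterestSetter &rois)
//   {
//       OfxRectD roi = args.regionOfInterest;
//       roi.x1 -= _radius; roi.y1 -= _radius;
//       roi.x2 += _radius; roi.y2 += _radius;
//       rois.setRegionOfInterest(*_srcClip, roi);
//   }
```

The render window itself stays unchanged; what grows is only the input region you may read from, so the processor must still guard against getPixelAddress returning 0 outside the actual input bounds.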