We already have a plan for implementing 3D efficiently, but we have not had time for it yet. The internal engine work needed to support 3D is actually not that big, and it follows the 2D architecture very closely.
What will require real work is the GUI for interacting with 3D elements, and the 3D operators themselves.
What we are going to work on first is extending OpenFX with a few actions to support 3D effects. Mostly, these effects will be able to read/write geometry and apply shaders as well.
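To give an idea of the scale of that change, here is a minimal sketch of what a plug-in built against such an extension could look like. Everything 3D-specific in it is hypothetical: kNatronOfxActionRenderGeometry is an invented action name and the plug-in identifier is made up; only the OpenFX boilerplate (OfxPlugin, the mainEntry dispatcher, OfxGetPlugin) is the real API.

```cpp
// Sketch only: kNatronOfxActionRenderGeometry is an invented action name,
// not part of any published OpenFX or Natron header.
#include "ofxCore.h"
#include "ofxImageEffect.h"
#include <cstring>

// Hypothetical new action a 3D-aware host could send to plug-ins.
#define kNatronOfxActionRenderGeometry "NatronOfxActionRenderGeometry"

static OfxStatus mainEntry(const char *action,
                           const void *handle,
                           OfxPropertySetHandle inArgs,
                           OfxPropertySetHandle outArgs)
{
    (void)handle; (void)inArgs; (void)outArgs;
    if (std::strcmp(action, kOfxActionLoad) == 0) {
        return kOfxStatOK; // the usual 2D life-cycle actions keep working
    }
    if (std::strcmp(action, kNatronOfxActionRenderGeometry) == 0) {
        // A 3D effect would read input geometry from inArgs here,
        // transform it or attach a shader, and write results to outArgs.
        return kOfxStatOK;
    }
    return kOfxStatReplyDefault;
}

static void setHost(OfxHost *host) { (void)host; }

static OfxPlugin examplePlugin = {
    kOfxImageEffectPluginApi, 1,        // a real 3D extension might define its own API string
    "org.example.Hypothetical3DEffect", // made-up identifier
    1, 0,
    setHost,
    mainEntry
};

OfxExport OfxPlugin *OfxGetPlugin(int nth) { return nth == 0 ? &examplePlugin : 0; }
OfxExport int OfxGetNumberOfPlugins(void) { return 1; }
```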
We have no timeline yet on when we are going to do this, because we are busy with other things that need polishing before we actually dive into the 3D stuff.
Here are the elements that need to be implemented for 3D as a first pass:
- Metadata support via OpenFX
- 3D viewer
- 3D cards (OpenFX plug-in)
- Camera support via OpenFX, using metadata (see the sketch after this list)
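As an illustration of the camera-via-metadata idea: the host could attach camera parameters to a property set through the standard OpenFX property suite. The property names below are invented for this sketch; OpenFX defines no camera properties today, and only OfxPropertySuiteV1 is the real API.

```cpp
// Sketch only: these property names are invented; OpenFX currently
// defines no camera properties.
#include "ofxProperty.h"

#define kPropCameraFocalLength "NatronPropCameraFocalLength" // millimeters
#define kPropCameraHAperture   "NatronPropCameraHAperture"   // millimeters
#define kPropCameraTransform   "NatronPropCameraTransform"   // 4x4, row-major

// How a host might publish camera metadata before calling a 3D action.
static void publishCamera(const OfxPropertySuiteV1 *props,
                          OfxPropertySetHandle camProps)
{
    props->propSetDouble(camProps, kPropCameraFocalLength, 0, 50.0);
    props->propSetDouble(camProps, kPropCameraHAperture, 0, 24.576);
    const double camToWorld[16] = { 1, 0, 0, 0,
                                    0, 1, 0, 0,
                                    0, 0, 1, 0,
                                    0, 0, 0, 1 }; // identity placeholder
    props->propSetDoubleN(camProps, kPropCameraTransform, 16, camToWorld);
}
```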
From there we can then think about projection mapping and camera tracking (using libmv, which we already use for the 2D tracker).
Then we can support reading geometry with Alembic via an OpenFX plug-in, add support for basic shaders, and add a render node based on Cycles.
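For reference, pulling meshes out of an archive with the Alembic C++ API is fairly compact; an OpenFX plug-in would do roughly the following (the file name is hypothetical) and then hand the points over to the host:

```cpp
#include <Alembic/AbcCoreFactory/All.h>
#include <Alembic/AbcGeom/All.h>
#include <cstdio>

using namespace Alembic::AbcGeom;

// Recursively look for polygon meshes in the archive hierarchy.
static void walk(const IObject &obj)
{
    for (size_t i = 0; i < obj.getNumChildren(); ++i) {
        IObject child = obj.getChild(i);
        if (IPolyMesh::matches(child.getHeader())) {
            IPolyMesh mesh(child, kWrapExisting);
            IPolyMeshSchema::Sample sample;
            mesh.getSchema().get(sample); // first sample of the mesh
            std::printf("%s: %zu points\n",
                        child.getFullName().c_str(),
                        sample.getPositions()->size());
        }
        walk(child);
    }
}

int main()
{
    Alembic::AbcCoreFactory::IFactory factory;
    IArchive archive = factory.getArchive("scene.abc"); // hypothetical file
    walk(archive.getTop());
    return 0;
}
```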
A useful addition would also be implementing Cryptomatte support in Cycles.
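For context, Cryptomatte identifies objects by hashing their names into float IDs stored in extra passes. The name-to-float rule comes from the published Cryptomatte specification: a 32-bit MurmurHash3 of the name (seed 0), with the exponent bits patched so the bit pattern stays a finite float. A self-contained sketch (the object name is hypothetical):

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// MurmurHash3 x86 32-bit (public domain, Austin Appleby);
// reads blocks little-endian, as on x86.
static uint32_t murmur3_32(const char *key, size_t len, uint32_t seed)
{
    const uint8_t *data = (const uint8_t *)key;
    uint32_t h = seed;
    size_t i = 0;
    for (; i + 4 <= len; i += 4) {           // 4-byte body blocks
        uint32_t k;
        std::memcpy(&k, data + i, 4);
        k *= 0xcc9e2d51u; k = (k << 15) | (k >> 17); k *= 0x1b873593u;
        h ^= k; h = (h << 13) | (h >> 19); h = h * 5 + 0xe6546b64u;
    }
    uint32_t k = 0;                          // tail (remaining 1-3 bytes)
    switch (len & 3) {
    case 3: k ^= (uint32_t)data[i + 2] << 16; /* fall through */
    case 2: k ^= (uint32_t)data[i + 1] << 8;  /* fall through */
    case 1: k ^= (uint32_t)data[i];
            k *= 0xcc9e2d51u; k = (k << 15) | (k >> 17); k *= 0x1b873593u;
            h ^= k;
    }
    h ^= (uint32_t)len;                      // finalization mix
    h ^= h >> 16; h *= 0x85ebca6bu;
    h ^= h >> 13; h *= 0xc2b2ae35u;
    h ^= h >> 16;
    return h;
}

// Cryptomatte name -> float ID: hash the name, then make sure the bit
// pattern is a finite float (exponent must not be all zeros or all ones).
static float cryptomatteId(const char *name)
{
    uint32_t bits = murmur3_32(name, std::strlen(name), 0);
    const uint32_t exponent = (bits >> 23) & 0xffu;
    if (exponent == 0 || exponent == 255)
        bits ^= 1u << 23;
    float id;
    std::memcpy(&id, &bits, sizeof id);
    return id;
}

int main()
{
    std::printf("%.9g\n", cryptomatteId("/obj/hero/geo")); // hypothetical name
    return 0;
}
```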
Anyway, all of this is quite tied to the Natron Engine, and unfortunately, unless you master the Natron architecture and code-base, I doubt you can implement the core Engine. But you can contribute to the external stuff and plug-ins, or even code an experimental 3D viewer that we can then integrate into Natron.
Regarding network distribution, we do not work on that because third-party tools such as CGRU Afanasy already handle it. Basically, you only need to use the Python API to support that.
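The underlying mechanism is simple: split the frame range into chunks and run one NatronRenderer command per chunk, since NatronRenderer's -w option takes a Write node name and a frame range. Below is a minimal sketch of the splitting (node and project names are hypothetical); a farm manager such as Afanasy is what would actually dispatch each command to a machine:

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    const int first = 1, last = 240, chunk = 24;  // one chunk per farm task
    for (int f = first; f <= last; f += chunk) {
        const int l = std::min(f + chunk - 1, last);
        // Each of these commands can run on a different machine.
        std::printf("NatronRenderer -w Write1 %d-%d /path/to/project.ntp\n",
                    f, l);
    }
    return 0;
}
```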
Supporting the rendering of a single frame across several machines would require a lot of work and is probably something to think about eventually, even though some operators would most likely run much faster locally than over the network (which would require insane bandwidth, by the way).