Workflow question: Natron -Export-> Kdenlive (match with audio)


So I am trying to find the most efficient way to make animations match what’s being described in an audio clip.

So far my only solution is to animate a small part, 100 frames at a time, then try to make the exported clip match the audio by slowing or speeding up parts.

I feel like there’s probably a more efficient way to do this.

The most efficient way to do this is not available in Natron, since we can’t have audio playback of any kind.

Is there another technique that’s more efficient than what I’m already doing?



Audio support is not implemented in Natron


Thank you. I did state that I knew that already.

I was just asking what the workflow should be.


It clearly depends on what you need to do, but I’d use software that enables realtime audio playback (like Blender or Shotcut) and do a proper spotting pass: note on paper what happens at frame 100, and so on, then go into Natron and match that timing as closely as possible.
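To turn those paper notes into Natron frame numbers, a tiny conversion script can help. This is just a sketch: the cue list and the 24 fps project rate are assumptions, so substitute your own notes and frame rate.

```python
# Sketch: convert audio cue notes (in seconds) to timeline frame numbers.
# The cue list and FPS below are hypothetical; replace them with your own.

FPS = 24  # assumed project frame rate

cues = [
    (0.0, "intro starts"),
    (4.2, "first word"),
    (8.5, "scene change"),
]

def seconds_to_frame(t, fps=FPS):
    """Map a timestamp in seconds to the nearest timeline frame."""
    return round(t * fps)

for t, label in cues:
    print(f"frame {seconds_to_frame(t):>5}: {label}")
```

You can then place keyframes or timeline markers in Natron at those frame numbers instead of eyeballing the sync after export.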


I’ll try to make the notes. I like that idea. Thanks.


It’s a complex question, because you’ll find quite different audio usage scenarios in practice:

  1. motion graphics, based on audio envelope curves
  2. video clip processing, monitoring/preserving the included audio tracks
  3. remote control and sync of external audio applications

Cases 1 and 2 are very common. Most applications handle this kind of work through audio subsystems that run in parallel to the OpenFX image processing, usually also using different plugin architectures (VST, LV2, etc.). The OpenFX standard has no support for this kind of multimedia handling so far.
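For case 1, a common workaround when the host has no audio subsystem is to precompute an envelope outside the application, one value per video frame, and paste those values in as keyframes. Here is a minimal sketch of that idea in plain Python, assuming you already have mono samples as floats in [-1.0, 1.0] (decoding the audio file itself is not shown):

```python
import math

def frame_envelope(samples, sample_rate, fps):
    """Compute one RMS value per video frame from mono audio samples.

    samples: sequence of floats in [-1.0, 1.0]
    Returns a list of RMS values, one per video frame, usable as keyframes.
    """
    spf = sample_rate / fps  # audio samples per video frame
    n_frames = math.ceil(len(samples) / spf)
    env = []
    for f in range(n_frames):
        start = int(f * spf)
        end = int((f + 1) * spf)
        chunk = samples[start:end]
        # RMS of the audio covered by this video frame
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk)) if chunk else 0.0
        env.append(rms)
    return env

# Example: one second of a 440 Hz sine at 48 kHz, mapped to 24 fps keyframes
sr, fps = 48000, 24
sine = [math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]
env = frame_envelope(sine, sr, fps)
```

Each value in `env` can then drive a parameter keyframe at the corresponding frame, giving audio-reactive motion graphics without any audio support in the host.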

Remote transport control and synchronization between different applications is a more unusual approach. It’s quite easy to set up this kind of linked behavior between Blender’s video sequence editor and Ardour via JACK transport control, for example: whenever you change the position in the timeline, toggle playback, etc., the same happens in both applications. This is a very powerful feature in studio environments, combining the power of specialized tools for different tasks, though in practice it takes some effort to set up and operate. It suits big mastering jobs better than just providing audible feedback while editing or creating visual effects in sync with the audio track. From a technical point of view it looks a little less demanding at first sight, because the application doesn’t have to handle the actual audio processing itself, but in the end you need just as much code to support this kind of transport control in a satisfying manner.
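One detail of that approach worth noting: JACK transport positions are expressed in audio sample frames, not video frames, so a compositor wanting to relocate the transport has to convert. A minimal sketch of the conversion, assuming a fixed frame rate and sample rate (the actual JACK client setup is not shown):

```python
def video_frame_to_transport(frame, fps=25, sample_rate=48000):
    """Map a video timeline frame to a JACK transport position (audio sample frames)."""
    return int(frame * sample_rate / fps)

def transport_to_video_frame(pos, fps=25, sample_rate=48000):
    """Map a JACK transport position back to the nearest video frame."""
    return round(pos * fps / sample_rate)

# With a client from the Python jack-client package, relocating would look
# roughly like (hypothetical usage, requires a running JACK server):
#   client.transport_locate(video_frame_to_transport(current_frame))
```

The same mapping works in the other direction for following an external transport, e.g. updating the compositor's playhead from the position Ardour reports.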


I would probably make a rough keyframed animatic in editing software that lets you preview audio at the same time. Edit together a version that has visual cues for the important timings, import it into Natron, and use it as an animation reference.