Multiple frame blending

Started by Quaternions, August 16, 2019, 11:14:02 AM

Quaternions

Hello, I would like to combine 100 sequential frames at a time of a slow-motion video of a video game into one frame to create a high quality motion blur effect.  I was surprised that I couldn't find an ffmpeg filter to do the job.  Can Avidemux be used to do this?  I spent all day yesterday and achieved this effect in this video https://www.youtube.com/watch?v=6BItW_bPSkI by exporting all the frames as images and adding them together with Mathematica, which is supposed to be math software...  I know the process, but I can't find software which can do it.  I use Avidemux to step through video frames and clip videos, but I didn't find a filter that could do this.

Thanks!

eumagga0x2a

Quote from: Quaternions on August 16, 2019, 11:14:02 AM
I would like to combine 100 sequential frames at a time of a slow-motion video of a video game into one frame to create a high quality motion blur effect.  I was surprised that I couldn't find an ffmpeg filter to do the job.  Can Avidemux be used to do this?

If someone writes the video filter plugin to accomplish this task and the PC has quite a lot of memory (about 12 MiB per frame at 2160p in 4:2:0 with 8-bit color depth, times 100, just to hold the uncompressed picture data), Avidemux should be able to do this.
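
(For reference: one 2160p frame in 4:2:0 at 8 bits per sample is 3840 * 2160 * 1.5 bytes ≈ 11.9 MiB, so 100 cached frames amount to roughly 1.2 GiB of picture data alone.)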

If C++ is not a complete stranger to you, you could implement a filter as a subclass of ADM_coreVideoFilterCached, similar to the fadeTo filter, though the largest cache currently requested by a video filter constructor in Avidemux contains just 11 pictures. The job is done by the respective implementation of the getNextFrame() method.
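
Roughly, such a subclass takes the following shape (a minimal sketch; the class name is illustrative, the header name is an assumption, and the registration/serialization boilerplate from an existing filter like fadeTo is omitted):

#include "ADM_coreVideoFilter.h" // assumed SDK header providing ADM_coreVideoFilterCached

class blendFrames : public ADM_coreVideoFilterCached
{
public:
    blendFrames(ADM_coreVideoFilter *in, CONFcouple *setup)
        : ADM_coreVideoFilterCached(3, in, setup) {} // request a cache of 3 pictures
    virtual ~blendFrames() {}

    // All the actual work happens here: fetch source pictures,
    // blend them and write the result into *image.
    virtual bool getNextFrame(uint32_t *fn, ADMImage *image);
};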

Quaternions

#2
I'm not terribly familiar with C++, but I wrote some code using yours and other plugins as reference:
https://github.com/krakow10/Avidemux-FrameBlend/blob/master/ADM_vidBlendFrames.cpp

The code is designed to use a high bit depth buffer frame and sum color data into it on each input frame, which would avoid storing all the frames in memory, but it requires a uint32_t type ADMImage which I didn't know how to go about creating.  I haven't the faintest clue how to compile it or add it to my own version.  I should probably also write a version that just fills the memory with an array of ADMImage and sums them at the end, but I don't remember how to make a custom-sized list of pointers, or how to specify that type of value in the AVDM_BlendFrames class.

Also I did not figure out what this part means:
ADM_coreVideoFilterCached(3,in,setup)

eumagga0x2a

#3
Quote from: Quaternions on August 17, 2019, 07:13:25 AM
I'm not terribly familiar with C++, but I wrote some code using yours and other plugins as reference:
https://github.com/krakow10/Avidemux-FrameBlend/blob/master/ADM_vidBlendFrames.cpp

Great start!

Quote
I haven't the faintest clue how to compile it or add it to my own version.

Do you already have a development environment for Avidemux, i.e. preferably a Linux installation (Fedora or Ubuntu, a VM would suffice), since any other platform adds a moderate (macOS) or huge (Windows 10) burden of additional complexity?

Quote
Also I did not figure out what this part means:
ADM_coreVideoFilterCached(3,in,setup)

https://github.com/mean00/avidemux2/blob/d48b5004a1e20ad653e4562de783761158c63192/avidemux_core/ADM_coreVideoFilter/include/ADM_coreVideoFilter.h#L89

= create a video filter with a cache for 3 pictures, the parent filter (where the source image comes from) pointed to by in, and the configuration pointed to by setup.

Quaternions

#4
I realized that I haven't done any PTS editing for the frame times; I'm going to try to figure that out now.  I've got an Ubuntu server that I've compiled some golang code on before, which I mainly use for Plex.  Which implementation do you think is best for Avidemux: accumulating color in a high bit depth buffer, or loading the frames into memory?  In the first case I don't know how to edit the ADMImage class to support uint32_t (or if it can do higher bit depths already), and for the latter I don't know how to create a spot to hold an arbitrary number of frames.  I think that if this were to become a polished plugin, the frame blending and time scaling should be controllable separately, and perhaps also be extended to fractional blending to widen the applications.  I just want to make the high quality motion blur effect that I imagined easier for me to apply in practice.

Edit: Perhaps I could hardcode my specific 100 frame need?  Specify ADM_coreVideoFilterCached(100,in,setup) and then use vidCache->getImage on every 100th frame (fn%100==99)?  Maybe that's what you were thinking originally and I just now understood

eumagga0x2a

I guess a simple solution without any PTS / FPS modifications, based on a filter cache to hold the pictures, would be more useful. The drawback: we have to request the full size of the cache at filter creation.

Addition of luminance and chrominance values could be done using a lookup table like in the fadeTo filter.
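
For illustration, a minimal sketch of the lookup table idea (not the fadeTo code itself): when blending N frames, the per-pixel sums fall in the range 0..255*N, so a single table can map every possible sum back to an 8-bit average:

#include <cstdint>
#include <vector>

// Build a table mapping a sum of n 8-bit samples to their rounded average,
// so the division happens once per possible sum instead of once per pixel.
std::vector<uint8_t> buildAverageLut(uint32_t n)
{
    std::vector<uint8_t> lut(255 * n + 1);
    for (uint32_t sum = 0; sum < lut.size(); sum++)
        lut[sum] = (uint8_t)((sum + n / 2) / n);
    return lut;
}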

To reduce FPS, the changeFps filter can be added to the chain. If you prefer an all-in-one solution, that filter could serve as a reference for PTS handling.

eumagga0x2a

#6
Quote from: Quaternions on August 17, 2019, 02:38:02 PM
Edit: Perhaps I could hardcode my specific 100 frame need?  Specify ADM_coreVideoFilterCached(100,in,setup) and then use vidCache->getImage on every 100th frame (fn%100==99)?  Maybe that's what you were thinking originally and I just now understood

Yes, sort of. You have to :)

As we already have that cache with all the frames in memory, let's use them. For a true high quality motion blur effect, the filter should be able to detect scene changes (no motion blur across these!), so don't aim too high right now.

edit: The scene change handling should probably be of no concern at this phase. Making the filter partializable would allow the user to specify exactly when the scene change happens (as long as there are not too many of them). But with any timing modifications in place, it would be incredibly hard or rather impossible to find out the exact range where the filter should start and where it should end.
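
To make the cached variant concrete, a rough outline of what getNextFrame() could look like (vidCache comes with ADM_coreVideoFilterCached, see vidCache->getImage above; resetAccumulator(), accumulate() and writeAverage() are hypothetical helpers, and the nextFrame bookkeeping and exact cache API should be checked against the headers):

// Sketch only: the trailing partial window, error handling and
// PTS bookkeeping are left out.
bool blendFrames::getNextFrame(uint32_t *fn, ADMImage *image)
{
    const uint32_t N = 100;          // frames blended per output picture
    resetAccumulator();              // hypothetical: zero the sum buffers
    uint32_t got = 0;
    for (uint32_t i = 0; i < N; i++)
    {
        ADMImage *src = vidCache->getImage(nextFrame + i);
        if (!src) break;             // source exhausted
        accumulate(src);             // hypothetical: add the samples to the sums
        got++;
    }
    vidCache->unlockAll();           // release the cached pictures
    if (!got) return false;
    writeAverage(image, got);        // hypothetical: sums / got -> *image
    *fn = nextFrame;
    nextFrame += got;
    return true;
}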

eumagga0x2a

#7
The last remark for now: currently you don't output any picture at all (ADMImage *image remains unset).

Quote
In the first case I don't know how to edit the ADMImage class to support uint32_t (or if it can do higher bit depths already)

I don't think this would be viable. Maybe just create a three-dimensional array of uint32_t elements, as you probably need just a buffer without all the methods ADMImage provides.
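
Something along these lines, as a sketch (the plane sizes are placeholders to be taken from the real source image geometry):

#include <cstdint>
#include <cstring>

// One uint32_t accumulator per plane (Y, U, V) instead of a
// high bit depth ADMImage.
struct BlendAccumulator
{
    uint32_t *plane[3];
    uint32_t  size[3];

    BlendAccumulator(uint32_t lumaSize, uint32_t chromaSize)
    {
        size[0] = lumaSize;
        size[1] = size[2] = chromaSize;
        for (int i = 0; i < 3; i++)
            plane[i] = new uint32_t[size[i]](); // zero-initialized
    }
    ~BlendAccumulator()
    {
        for (int i = 0; i < 3; i++)
            delete [] plane[i];
    }
    void reset(void)
    {
        for (int i = 0; i < 3; i++)
            memset(plane[i], 0, size[i] * sizeof(uint32_t));
    }
};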

eumagga0x2a

Quote from: eumagga0x2a on August 17, 2019, 03:12:51 PM
Making the filter partializable would allow the user to specify exactly when the scene change happens

My mistake, video filters using cache can't be made partial, I'm sorry.

Quaternions

#9
I wrote a uint32_t buffer, but I don't think it will get deleted nicely in the ~AVDM_BlendFrames function.  The way I've implemented it probably doesn't need to extend ADM_coreVideoFilterCached anymore, does it?  What should my first line in the getNextFrame function be?

Edit: I think I get it... pushing attempted non-cached code

This is how I think scene changes should work:

eumagga0x2a

Quote from: Quaternions on August 17, 2019, 04:51:39 PM
Edit: I think I get it... pushing attempted non-cached code

Yes, sure. I had a different approach in mind; that is why I recommended caching.

You would probably need to call previousFilter->getNextFrame() in a loop until you have accumulated N pictures or the call has returned false, in which case it might make sense to output whatever is in the buffer.

Instead of a 3D array, it seems to me now that it would be much handier to allocate just one chunk of memory, stride*height*3 in size, and then delete [] it in the dtor.
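
Sketched out (scratch, accum, accumSize and the two helpers are hypothetical names; stride handling is glossed over):

// Non-cached variant: pull pictures from the parent filter and sum them
// into one flat uint32_t chunk allocated in the ctor (new uint32_t[...])
// and delete [] -ed in the dtor.
bool blendFrames::getNextFrame(uint32_t *fn, ADMImage *image)
{
    const uint32_t N = 100;
    uint32_t got = 0;
    memset(accum, 0, accumSize * sizeof(uint32_t));
    while (got < N)
    {
        uint32_t srcFn;
        if (!previousFilter->getNextFrame(&srcFn, scratch)) // scratch: a work ADMImage
            break;                // source exhausted: flush what we have
        addToAccum(scratch);      // hypothetical: sum every sample into accum
        got++;
    }
    if (!got) return false;       // nothing left at all
    writeAverage(image, got);     // hypothetical: accum / got -> *image
    *fn = nextFrame++;
    return true;
}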

eumagga0x2a

Quote from: Quaternions on August 17, 2019, 02:38:02 PM
I've got a ubuntu server which I've compiled some golang code on before that I mainly use for Plex.

Avidemux won't run headless, so you had better install Fedora Workstation or the regular desktop Ubuntu 19.04 either on bare metal (this will greatly improve performance and comfort unless you've got some really nasty, poorly supported hardware) or in a VM. Avidemux uses CMake, so your filter should be added to https://github.com/mean00/avidemux2/blob/master/avidemux_plugins/ADM_videoFilters6/CMakeLists.txt and a CMakeLists.txt should be created in the top directory of the new filter. You should also run https://github.com/mean00/avidemux2/blob/master/cmake/admSerialization.py from within the filter directory to generate the blend.h and blend_desc.cpp included in the filter source.

Quaternions

#12
Ubuntu server, meaning an Ubuntu desktop that I use as a Plex server :P

I cloned the repo and put my code into /home/quat/Documents/avidemux2/avidemux_plugins/ADM_videoFilters6/blend and ran the python script on blend.conf.  A friend of mine helped me with the first CMake error, for which I had to install nasm, but he's not here to help me with this one: https://hastebin.com/uxogerilop.txt

I also changed the buffer to use three flat arrays.

Does returning false in getNextFrame signal that no more frames are available?  Looping would make sense if that's the case.

Also made another short clip with Mathematica and ffmpeg https://www.youtube.com/watch?v=iLiWxcv_Q3w

eumagga0x2a

CleanTalk going wild, as usual. Please try from a private browser window or from a different IP address, when possible.

eumagga0x2a

#14
Quote from: Quaternions on August 17, 2019, 09:04:14 PM
I cloned the repo and put my code into /home/quat/Documents/avidemux2/avidemux_plugins/ADM_videoFilters6/blend and ran the python script on blend.conf.  A friend of mine helped me with the first CMake error, for which I had to install nasm, but he's not here to help me with this one: https://hastebin.com/uxogerilop.txt

Have you installed all the deps by running the following command?

bash createDebFromSourceUbuntu.bash --deps-only

You must also build (just build, not install to the prefix) Avidemux prior to trying to build your filter.

Quote
Does returning false in getNextFrame signal that no more frames are available?

Yes.