[FFmpeg-devel] vf_framestep: add blend parameter for motion blur effect

Submitted by Matthias C. M. Troffaes on May 31, 2017, 12:31 p.m.

Details

Message ID 20170531123114.3526163-1-matthias.troffaes@gmail.com
State New
Headers show

Commit Message

Matthias C. M. Troffaes May 31, 2017, 12:31 p.m.
This patch adds a "blend" parameter to the framestep filter, to blend
frames together at each step, for a motion blur effect. The number of
frames that are blended (i.e. the exposure time, in frames) can be
controlled, allowing control over the strength of the motion blur. The
filter has timeline support, and supports both 8-bit and 16-bit pixel
formats.

This can be used for instance to blend down high framerate footage to
produce a high quality motion blur effect.

Note that a similar effect is already possible by repeatedly chaining
the tblend and framestep=step=2 filters; see for example:

https://video.stackexchange.com/questions/16552/4x-resample-videoframes-using-ffmpeg

But this is limited to steps that are powers of two, and this does not
allow an intermediate exposure time. It's also slower.

Tests and documentation included.
---
 Changelog                              |   1 +
 doc/filters.texi                       |   7 +
 libavfilter/vf_framestep.c             | 241 ++++++++++++++++++++++++++++++---
 tests/fate/filter-video.mak            |  12 ++
 tests/ref/fate/filter-framestep-anim-1 |  17 +++
 tests/ref/fate/filter-framestep-anim-2 |  17 +++
 tests/ref/fate/filter-framestep-gray-1 |  14 ++
 tests/ref/fate/filter-framestep-gray-2 |  13 ++
 8 files changed, 302 insertions(+), 20 deletions(-)
 create mode 100644 tests/ref/fate/filter-framestep-anim-1
 create mode 100644 tests/ref/fate/filter-framestep-anim-2
 create mode 100644 tests/ref/fate/filter-framestep-gray-1
 create mode 100644 tests/ref/fate/filter-framestep-gray-2

Comments

Matthias C. M. Troffaes May 31, 2017, 12:33 p.m.
Note that this is a resubmission of
https://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209794.html - in
particular the mips test failure reported in
https://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209820.html has
been fixed.

Kind regards,
Matthias

On Wed, May 31, 2017 at 1:31 PM, Matthias C. M. Troffaes
<matthias.troffaes@gmail.com> wrote:
> This patch adds a "blend" parameter to the framestep filter, to blend
> frames together at each step, for a motion blur effect. The number of
> frames that are blended (i.e. the exposure time, in frames) can be
> controlled, allowing control over the strength of the motion blur. The
> filter has timeline support, and supports both 8-bit and 16-bit pixel
> formats.
>
> This can be used for instance to blend down high framerate footage to
> produce a high quality motion blur effect.
>
> Note that a similar effect is already possible by repeatedly chaining
> the tblend and framestep=step=2 filters; see for example:
>
> https://video.stackexchange.com/questions/16552/4x-resample-videoframes-using-ffmpeg
>
> But this is limited to steps that are powers of two, and this does not
> allow an intermediate exposure time. It's also slower.
>
> Tests and documentation included.
> ---
>  Changelog                              |   1 +
>  doc/filters.texi                       |   7 +
>  libavfilter/vf_framestep.c             | 241 ++++++++++++++++++++++++++++++---
>  tests/fate/filter-video.mak            |  12 ++
>  tests/ref/fate/filter-framestep-anim-1 |  17 +++
>  tests/ref/fate/filter-framestep-anim-2 |  17 +++
>  tests/ref/fate/filter-framestep-gray-1 |  14 ++
>  tests/ref/fate/filter-framestep-gray-2 |  13 ++
>  8 files changed, 302 insertions(+), 20 deletions(-)
>  create mode 100644 tests/ref/fate/filter-framestep-anim-1
>  create mode 100644 tests/ref/fate/filter-framestep-anim-2
>  create mode 100644 tests/ref/fate/filter-framestep-gray-1
>  create mode 100644 tests/ref/fate/filter-framestep-gray-2
>
> diff --git a/Changelog b/Changelog
> index 1949ec7..cef06eb 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -2,6 +2,7 @@ Entries are sorted chronologically from oldest to youngest within each release,
>  releases are sorted from youngest to oldest.
>
>  version <next>:
> +- framestep filter: add blend parameter for motion blur effect
>  - deflicker video filter
>  - doubleweave video filter
>  - lumakey video filter
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 107fe61..0160954 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -8453,6 +8453,13 @@ This filter accepts the following option:
>  @item step
>  Select frame after every @code{step} frames.
>  Allowed values are positive integers higher than 0. Default value is @code{1}.
> +@item blend
> +Blend @code{blend} consequentive frames on every step,
> +to produce a motion blur effect.
> +Allowed values are positive integers between @code{1} and @code{step},
> +where @code{1} corresponds to no motion blur, and @code{step}
> +corresponds to maximal motion blur.
> +Default value is @code{1}.
>  @end table
>
>  @anchor{frei0r}
> diff --git a/libavfilter/vf_framestep.c b/libavfilter/vf_framestep.c
> index 8102e7c..33a380f 100644
> --- a/libavfilter/vf_framestep.c
> +++ b/libavfilter/vf_framestep.c
> @@ -1,5 +1,6 @@
>  /*
>   * Copyright (c) 2012 Stefano Sabatini
> + * Copyright (c) 2017 Matthias C. M. Troffaes
>   *
>   * This file is part of FFmpeg.
>   *
> @@ -24,13 +25,24 @@
>   */
>
>  #include "libavutil/opt.h"
> +#include "libavutil/pixdesc.h"
>  #include "avfilter.h"
>  #include "internal.h"
>  #include "video.h"
>
>  typedef struct NullContext {
>      const AVClass *class;
> -    int frame_step;
> +    int frame_step;     ///< step size in frames
> +    int frame_blend;    ///< how many frames to blend on each step
> +    int nb_planes;      ///< number of planes in the pixel format
> +    int planewidth[4];  ///< width of each plane (after subsampling)
> +    int planeheight[4]; ///< height of each plane (after subsampling)
> +    uint32_t *data[4];  ///< buffer for blending input frames
> +
> +    void (*blend_set)(AVFilterContext *ctx, AVFrame *in, int plane);
> +    void (*blend_add)(AVFilterContext *ctx, AVFrame *in, int plane);
> +    void (*blend_div)(AVFilterContext *ctx, AVFrame *in, int plane);
> +    int (*filter_frame)(AVFilterLink *inlink, AVFrame *in);
>  } FrameStepContext;
>
>  #define OFFSET(x) offsetof(FrameStepContext, x)
> @@ -38,43 +50,229 @@ typedef struct NullContext {
>
>  static const AVOption framestep_options[] = {
>      { "step", "set frame step",  OFFSET(frame_step), AV_OPT_TYPE_INT, {.i64=1}, 1, INT_MAX, FLAGS},
> +    { "blend", "number of frames to blend per step",  OFFSET(frame_blend), AV_OPT_TYPE_INT, {.i64=1}, 1, 65535, FLAGS},
>      { NULL },
>  };
>
>  AVFILTER_DEFINE_CLASS(framestep);
>
> +#define DEFINE_BLEND(NAME, TYPE, DECL, EXPR)                                   \
> +static void blend_##NAME##_##TYPE(AVFilterContext *ctx, AVFrame *in, int plane)\
> +{                                                                              \
> +    FrameStepContext *s = ctx->priv;                                           \
> +    DECL                                                                       \
> +    const int height = s->planeheight[plane];                                  \
> +    const int width  = s->planewidth[plane];                                   \
> +    const int stride = in->linesize[plane] / sizeof(TYPE);                     \
> +    TYPE *src = (TYPE *)in->data[plane];                                       \
> +    uint32_t *dst = s->data[plane];                                            \
> +    int y, x;                                                                  \
> +                                                                               \
> +    for (y = 0; y < height; y++) {                                             \
> +        for (x = 0; x < width; x++) {                                          \
> +            EXPR;                                                              \
> +        }                                                                      \
> +        src += stride;                                                         \
> +    }                                                                          \
> +}
> +
> +#define SET_DECL
> +#define SET_EXPR *dst++ = src[x]
> +#define ADD_DECL
> +#define ADD_EXPR *dst++ += src[x]
> +#define DIV_DECL const int frame_blend = s->frame_blend;
> +#define DIV_EXPR src[x] = *dst++ / frame_blend
> +
> +DEFINE_BLEND(set, uint8_t,  SET_DECL, SET_EXPR)
> +DEFINE_BLEND(set, uint16_t, SET_DECL, SET_EXPR)
> +DEFINE_BLEND(add, uint8_t,  ADD_DECL, ADD_EXPR)
> +DEFINE_BLEND(add, uint16_t, ADD_DECL, ADD_EXPR)
> +DEFINE_BLEND(div, uint8_t,  DIV_DECL, DIV_EXPR)
> +DEFINE_BLEND(div, uint16_t, DIV_DECL, DIV_EXPR)
> +
> +#undef SET_DECL
> +#undef SET_EXPR
> +#undef ADD_DECL
> +#undef ADD_EXPR
> +#undef DIV_DECL
> +#undef DIV_EXPR
> +#undef DEFINE_BLEND
> +
> +static int filter_frame_generic(AVFilterLink *inlink, AVFrame *in)
> +{
> +    AVFilterContext *ctx = inlink->dst;
> +    FrameStepContext *s = ctx->priv;
> +    AVFilterLink *outlink = ctx->outputs[0];
> +    AVFrame *out = NULL;
> +    int64_t frame_pos = inlink->frame_count_out % s->frame_step;
> +    int direct = 0;
> +
> +    /* update destination frame buffer; we need to do this even if filter is
> +       disabled because buffer might be used for later frames when filter is
> +       re-enabled */
> +    if (!frame_pos) {
> +        /* copy first frame to destination frame buffer */
> +        for (int plane = 0; plane < s->nb_planes; plane++)
> +            s->blend_set(ctx, in, plane);
> +    } else if (frame_pos < s->frame_blend) {
> +        /* add current frame to destination frame buffer */
> +        for (int plane = 0; plane < s->nb_planes; plane++)
> +            s->blend_add(ctx, in, plane);
> +    }
> +
> +    /* write frame */
> +    if (ctx->is_disabled) {
> +        /* filter is disabled, so pass input frame as is */
> +        return ff_filter_frame(outlink, in);
> +    } else if ((frame_pos + 1) == s->frame_blend) {
> +        /* filter is enabled, so write when all frames are blended */
> +        /* create a writable frame */
> +        if (av_frame_is_writable(in)) {
> +            direct = 1;
> +            out = in;
> +        } else {
> +            out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
> +            if (!out) {
> +                av_frame_free(&in);
> +                return AVERROR(ENOMEM);
> +            }
> +            av_frame_copy_props(out, in);
> +        }
> +        /* finalize destination frame */
> +        for (int plane = 0; plane < s->nb_planes; plane++)
> +            s->blend_div(ctx, out, plane);
> +        /* free extra frame if created, and pass on output frame */
> +        if (!direct)
> +            av_frame_free(&in);
> +        return ff_filter_frame(outlink, out);
> +    } else {
> +        av_frame_free(&in);
> +        return 0;
> +    }
> +}
> +
> +/* special case of filter_frame when frame_blend is 1 */
> +static int filter_frame_single(AVFilterLink *inlink, AVFrame *in)
> +{
> +    AVFilterContext *ctx = inlink->dst;
> +    FrameStepContext *s = ctx->priv;
> +
> +    if (!(inlink->frame_count_out % s->frame_step) || ctx->is_disabled) {
> +        return ff_filter_frame(ctx->outputs[0], in);
> +    } else {
> +        av_frame_free(&in);
> +        return 0;
> +    }
> +}
> +
> +static int query_formats(AVFilterContext *ctx)
> +{
> +    static const enum AVPixelFormat pix_fmts[] = {
> +        AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
> +        AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
> +        AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV444P,
> +        AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P,
> +        AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ420P,
> +        AV_PIX_FMT_YUVJ411P,
> +        AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA444P,
> +        AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP, AV_PIX_FMT_GRAY8,
> +        AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16,
> +        AV_PIX_FMT_YUVA420P16, AV_PIX_FMT_YUVA422P16, AV_PIX_FMT_YUVA444P16,
> +        AV_PIX_FMT_GBRP16, AV_PIX_FMT_GRAY16,
> +        AV_PIX_FMT_NV12, AV_PIX_FMT_NV21,
> +        AV_PIX_FMT_NONE
> +    };
> +    FrameStepContext *s = ctx->priv;
> +    AVFilterFormats *fmts_list = NULL;
> +
> +    if (s->frame_blend == 1) {
> +        fmts_list = ff_all_formats(AVMEDIA_TYPE_VIDEO);
> +    } else {
> +        fmts_list = ff_make_format_list(pix_fmts);
> +    }
> +    if (!fmts_list)
> +        return AVERROR(ENOMEM);
> +    return ff_set_common_formats(ctx, fmts_list);
> +}
> +
> +static int config_input_props(AVFilterLink *inlink)
> +{
> +    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
> +    const AVFilterContext *ctx = inlink->dst;
> +    FrameStepContext *s = ctx->priv;
> +
> +    s->planewidth[0] = s->planewidth[3] = inlink->w;
> +    s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
> +    s->planeheight[0] = s->planeheight[3] = inlink->h;
> +    s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
> +    s->nb_planes = av_pix_fmt_count_planes(inlink->format);
> +    for (int plane = 0; plane < s->nb_planes; plane++) {
> +        const int planesize = s->planewidth[plane] * s->planeheight[plane];
> +        s->data[plane] = av_mallocz_array(planesize, sizeof(uint32_t));
> +        if (!s->data[plane])
> +            return AVERROR(ENOMEM);
> +    }
> +    if (s->frame_blend == 1) {
> +        s->filter_frame = filter_frame_single;
> +    } else {
> +        s->filter_frame = filter_frame_generic;
> +        if (desc->comp[0].depth == 8) {
> +            s->blend_set = blend_set_uint8_t;
> +            s->blend_add = blend_add_uint8_t;
> +            s->blend_div = blend_div_uint8_t;
> +        } else if (desc->comp[0].depth == 16) {
> +            s->blend_set = blend_set_uint16_t;
> +            s->blend_add = blend_add_uint16_t;
> +            s->blend_div = blend_div_uint16_t;
> +        } else {
> +            return AVERROR(AVERROR_BUG);
> +        }
> +    }
> +    return 0;
> +}
> +
>  static int config_output_props(AVFilterLink *outlink)
>  {
>      AVFilterContext *ctx = outlink->src;
> -    FrameStepContext *framestep = ctx->priv;
> -    AVFilterLink *inlink = ctx->inputs[0];
> +    const FrameStepContext *s = ctx->priv;
> +    const AVFilterLink *inlink = ctx->inputs[0];
>
>      outlink->frame_rate =
> -        av_div_q(inlink->frame_rate, (AVRational){framestep->frame_step, 1});
> +        av_div_q(inlink->frame_rate, (AVRational){s->frame_step, 1});
>
>      av_log(ctx, AV_LOG_VERBOSE, "step:%d frame_rate:%d/%d(%f) -> frame_rate:%d/%d(%f)\n",
> -           framestep->frame_step,
> +           s->frame_step,
>             inlink->frame_rate.num, inlink->frame_rate.den, av_q2d(inlink->frame_rate),
>             outlink->frame_rate.num, outlink->frame_rate.den, av_q2d(outlink->frame_rate));
> +
>      return 0;
>  }
>
> -static int filter_frame(AVFilterLink *inlink, AVFrame *ref)
> +static av_cold int init(AVFilterContext *ctx)
>  {
> -    FrameStepContext *framestep = inlink->dst->priv;
> +    FrameStepContext *s = ctx->priv;
> +    s->frame_blend = FFMIN(s->frame_blend, s->frame_step);
> +    return 0;
> +}
>
> -    if (!(inlink->frame_count_out % framestep->frame_step)) {
> -        return ff_filter_frame(inlink->dst->outputs[0], ref);
> -    } else {
> -        av_frame_free(&ref);
> -        return 0;
> -    }
> +static av_cold void uninit(AVFilterContext *ctx)
> +{
> +    FrameStepContext *s = ctx->priv;
> +    for (int plane = 0; plane < s->nb_planes; plane++)
> +        av_freep(&s->data[plane]);
> +}
> +
> +static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> +{
> +    FrameStepContext *s = inlink->dst->priv;
> +    return s->filter_frame(inlink, in);
>  }
>
>  static const AVFilterPad framestep_inputs[] = {
>      {
>          .name         = "default",
>          .type         = AVMEDIA_TYPE_VIDEO,
> +        .config_props = config_input_props,
>          .filter_frame = filter_frame,
>      },
>      { NULL }
> @@ -90,11 +288,14 @@ static const AVFilterPad framestep_outputs[] = {
>  };
>
>  AVFilter ff_vf_framestep = {
> -    .name        = "framestep",
> -    .description = NULL_IF_CONFIG_SMALL("Select one frame every N frames."),
> -    .priv_size   = sizeof(FrameStepContext),
> -    .priv_class  = &framestep_class,
> -    .inputs      = framestep_inputs,
> -    .outputs     = framestep_outputs,
> -    .flags       = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> +    .name          = "framestep",
> +    .description   = NULL_IF_CONFIG_SMALL("Select one frame every N frames."),
> +    .priv_size     = sizeof(FrameStepContext),
> +    .priv_class    = &framestep_class,
> +    .init          = init,
> +    .uninit        = uninit,
> +    .query_formats = query_formats,
> +    .inputs        = framestep_inputs,
> +    .outputs       = framestep_outputs,
> +    .flags         = AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL,
>  };
> diff --git a/tests/fate/filter-video.mak b/tests/fate/filter-video.mak
> index 1ca29e8..5738b18 100644
> --- a/tests/fate/filter-video.mak
> +++ b/tests/fate/filter-video.mak
> @@ -320,6 +320,14 @@ FATE_FILTER_VSYNTH-$(CONFIG_SWAPRECT_FILTER) += $(FATE_SWAPRECT)
>  FATE_FILTER_VSYNTH-$(CONFIG_TBLEND_FILTER) += fate-filter-tblend
>  fate-filter-tblend: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf tblend=all_mode=difference128
>
> +FATE_FRAMESTEP += fate-filter-framestep-gray-1
> +fate-filter-framestep-gray-1: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf framestep=step=6
> +
> +FATE_FRAMESTEP += fate-filter-framestep-gray-2
> +fate-filter-framestep-gray-2: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf framestep=step=6:blend=3
> +
> +FATE_FILTER_VSYNTH-$(CONFIG_FRAMESTEP_FILTER) += $(FATE_FRAMESTEP)
> +
>  FATE_FILTER_VSYNTH-$(CONFIG_TELECINE_FILTER) += fate-filter-telecine
>  fate-filter-telecine: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf telecine
>
> @@ -384,6 +392,10 @@ fate-filter-fps-cfr: CMD = framecrc -i $(TARGET_SAMPLES)/qtrle/apple-animation-v
>  fate-filter-fps-r:   CMD = framecrc -i $(TARGET_SAMPLES)/qtrle/apple-animation-variable-fps-bug.mov -r 30 -vf fps -pix_fmt yuv420p
>  fate-filter-fps:     CMD = framecrc -i $(TARGET_SAMPLES)/qtrle/apple-animation-variable-fps-bug.mov -vf fps=30 -pix_fmt yuv420p
>
> +FATE_FILTER_SAMPLES-$(call ALLYES, FRAMESTEP_FILTER) += fate-filter-framestep-anim-1 fate-filter-framestep-anim-2
> +fate-filter-framestep-anim-1: CMD = framecrc -i $(TARGET_SAMPLES)/filter/anim.mkv -vf framestep=step=6
> +fate-filter-framestep-anim-2: CMD = framecrc -i $(TARGET_SAMPLES)/filter/anim.mkv -vf framestep=step=6:blend=3
> +
>  FATE_FILTER_VSYNTH-$(call ALLYES, FORMAT_FILTER SPLIT_FILTER ALPHAEXTRACT_FILTER ALPHAMERGE_FILTER) += fate-filter-alphaextract_alphamerge_rgb
>  fate-filter-alphaextract_alphamerge_rgb: tests/data/filtergraphs/alphamerge_alphaextract_rgb
>  fate-filter-alphaextract_alphamerge_rgb: CMD = framecrc -c:v pgmyuv -i $(SRC) -filter_complex_script $(TARGET_PATH)/tests/data/filtergraphs/alphamerge_alphaextract_rgb
> diff --git a/tests/ref/fate/filter-framestep-anim-1 b/tests/ref/fate/filter-framestep-anim-1
> new file mode 100644
> index 0000000..0a6dd19
> --- /dev/null
> +++ b/tests/ref/fate/filter-framestep-anim-1
> @@ -0,0 +1,17 @@
> +#tb 0: 1001/4000
> +#media_type 0: video
> +#codec_id 0: rawvideo
> +#dimensions 0: 320x180
> +#sar 0: 1/1
> +0,          0,          0,        1,   172800, 0x5adff92c
> +0,          1,          1,        1,   172800, 0x37b7f659
> +0,          2,          2,        1,   172800, 0xb4a6f1d1
> +0,          3,          3,        1,   172800, 0xd596f9c6
> +0,          4,          4,        1,   172800, 0xff5a015b
> +0,          5,          5,        1,   172800, 0x65477f11
> +0,          6,          6,        1,   172800, 0x41569400
> +0,          7,          7,        1,   172800, 0xcff9ddf9
> +0,          8,          8,        1,   172800, 0xd6daba1e
> +0,          9,          9,        1,   172800, 0xad83bda1
> +0,         10,         10,        1,   172800, 0x1518bdb3
> +0,         11,         11,        1,   172800, 0xfdd1c7ca
> diff --git a/tests/ref/fate/filter-framestep-anim-2 b/tests/ref/fate/filter-framestep-anim-2
> new file mode 100644
> index 0000000..3984f6d
> --- /dev/null
> +++ b/tests/ref/fate/filter-framestep-anim-2
> @@ -0,0 +1,17 @@
> +#tb 0: 1001/4000
> +#media_type 0: video
> +#codec_id 0: rawvideo
> +#dimensions 0: 320x180
> +#sar 0: 1/1
> +0,          0,          0,        1,   172800, 0xa99bf27e
> +0,          1,          1,        1,   172800, 0x6e2fdfe3
> +0,          2,          2,        1,   172800, 0x96e7ea35
> +0,          3,          3,        1,   172800, 0x832ff6cf
> +0,          4,          4,        1,   172800, 0x7a2dffb9
> +0,          5,          5,        1,   172800, 0xa0a05854
> +0,          6,          6,        1,   172800, 0x91d93f46
> +0,          7,          7,        1,   172800, 0x3051d27b
> +0,          8,          8,        1,   172800, 0x0c13b87c
> +0,          9,          9,        1,   172800, 0xa80eba4c
> +0,         10,         10,        1,   172800, 0xd206ba19
> +0,         11,         11,        1,   172800, 0x6608b61c
> diff --git a/tests/ref/fate/filter-framestep-gray-1 b/tests/ref/fate/filter-framestep-gray-1
> new file mode 100644
> index 0000000..dd209db
> --- /dev/null
> +++ b/tests/ref/fate/filter-framestep-gray-1
> @@ -0,0 +1,14 @@
> +#tb 0: 6/25
> +#media_type 0: video
> +#codec_id 0: rawvideo
> +#dimensions 0: 352x288
> +#sar 0: 0/1
> +0,          0,          0,        1,   152064, 0x05b789ef
> +0,          1,          1,        1,   152064, 0xe20f7c23
> +0,          2,          2,        1,   152064, 0xc711ad61
> +0,          3,          3,        1,   152064, 0xf25f6acc
> +0,          4,          4,        1,   152064, 0xce09f9d6
> +0,          5,          5,        1,   152064, 0x0f9d6aca
> +0,          6,          6,        1,   152064, 0x4c9737ab
> +0,          7,          7,        1,   152064, 0x20cebfa9
> +0,          8,          8,        1,   152064, 0xbb87b483
> diff --git a/tests/ref/fate/filter-framestep-gray-2 b/tests/ref/fate/filter-framestep-gray-2
> new file mode 100644
> index 0000000..c2dfc9b
> --- /dev/null
> +++ b/tests/ref/fate/filter-framestep-gray-2
> @@ -0,0 +1,13 @@
> +#tb 0: 6/25
> +#media_type 0: video
> +#codec_id 0: rawvideo
> +#dimensions 0: 352x288
> +#sar 0: 0/1
> +0,          0,          0,        1,   152064, 0x999eb961
> +0,          1,          1,        1,   152064, 0x0e947017
> +0,          2,          2,        1,   152064, 0x2f338ae6
> +0,          3,          3,        1,   152064, 0x4ead448b
> +0,          4,          4,        1,   152064, 0x6aef2c67
> +0,          5,          5,        1,   152064, 0x809df637
> +0,          6,          6,        1,   152064, 0x57b0bd89
> +0,          7,          7,        1,   152064, 0x3649efd8
> --
> 2.7.4
>
Moritz Barsnick May 31, 2017, 1:17 p.m.
On Wed, May 31, 2017 at 13:31:14 +0100, Matthias C. M. Troffaes wrote:
> @@ -2,6 +2,7 @@ Entries are sorted chronologically from oldest to youngest within each release,
>  releases are sorted from youngest to oldest.
>  
>  version <next>:
> +- framestep filter: add blend parameter for motion blur effect
>  - deflicker video filter

Did you read the text up there? You need to add your change to the
bottom of the "<next>:" list.

> +    int frame_blend;    ///< how many frames to blend on each step
[...]
>  static const AVOption framestep_options[] = {
>      { "step", "set frame step",  OFFSET(frame_step), AV_OPT_TYPE_INT, {.i64=1}, 1, INT_MAX, FLAGS},
> +    { "blend", "number of frames to blend per step",  OFFSET(frame_blend), AV_OPT_TYPE_INT, {.i64=1}, 1, 65535, FLAGS},

Your maximum is too high to be assigned to an int, unless you properly
checked that any overflow still results in correct behaviour. The upper
limit probably needs to be "INT_MAX" (which would be 32767, but you can
use the constant).


>      AVFilterContext *ctx = outlink->src;
> -    FrameStepContext *framestep = ctx->priv;
> -    AVFilterLink *inlink = ctx->inputs[0];
> +    const FrameStepContext *s = ctx->priv;
> +    const AVFilterLink *inlink = ctx->inputs[0];
>  
>      outlink->frame_rate =
> -        av_div_q(inlink->frame_rate, (AVRational){framestep->frame_step, 1});
> +        av_div_q(inlink->frame_rate, (AVRational){s->frame_step, 1});
>  
>      av_log(ctx, AV_LOG_VERBOSE, "step:%d frame_rate:%d/%d(%f) -> frame_rate:%d/%d(%f)\n",
> -           framestep->frame_step,
> +           s->frame_step,
>             inlink->frame_rate.num, inlink->frame_rate.den, av_q2d(inlink->frame_rate),
>             outlink->frame_rate.num, outlink->frame_rate.den, av_q2d(outlink->frame_rate));
> +
>      return 0;
>  }

Isn't this just a rename and const-ification? This probably doesn't
belong into this patch (perhaps a separate one, if at all.) And it adds
an empty line, which certainly doesn't belong into a functional patch.

Moritz
Matthias C. M. Troffaes May 31, 2017, 6:46 p.m.
Dear Moritz,

On Wed, May 31, 2017 at 2:17 PM, Moritz Barsnick <barsnick@gmx.net> wrote:
> On Wed, May 31, 2017 at 13:31:14 +0100, Matthias C. M. Troffaes wrote:
>> @@ -2,6 +2,7 @@ Entries are sorted chronologically from oldest to youngest within each release,
>>  releases are sorted from youngest to oldest.
>>
>>  version <next>:
>> +- framestep filter: add blend parameter for motion blur effect
>>  - deflicker video filter
>
> Did you read the text up there? You need to add your change to the
> bottom of the "<next>:" list.

Oh dear, that's embarrassing. I must have misread the instruction.
Thanks for pointing out - I'll fix this.

>> +    int frame_blend;    ///< how many frames to blend on each step
> [...]
>>  static const AVOption framestep_options[] = {
>>      { "step", "set frame step",  OFFSET(frame_step), AV_OPT_TYPE_INT, {.i64=1}, 1, INT_MAX, FLAGS},
>> +    { "blend", "number of frames to blend per step",  OFFSET(frame_blend), AV_OPT_TYPE_INT, {.i64=1}, 1, 65535, FLAGS},
>
> Your maximum is too high to be assigned to an int, unless you properly
> checked that any overflow still results in correct behaviour. The upper
> limit probably needs to be "INT_MAX" (which would be 32767, but you can
> use the constant).

On various platforms (such as a standard 64-bit Fedora install),
INT_MAX is 2147483647 (for example see
https://www.tutorialspoint.com/c_standard_library/limits_h.htm) and
that value *will* potentially overflow the code as the internal buffer
requires all values across the blend added up to fit into a uint32_t.
To avoid overflows, the maximum value for blend therefore needs to
satisfy the following inequality:

blend_max * pix_value_max <= 2^32 - 1

and since the filter supports up to 16 bit formats, this gives the constraint:

blend_max <= (2^32 - 1) / (2^16 - 1)

which is satisfied by 65535. Technically 65537 would work as well.

Shall I change AV_OPT_TYPE_INT into AV_OPT_TYPE_INT64 for both
parameters, use INT64_MAX for the maximum value of "step", and
UINT16_MAX for the maximum value of "blend"? That should eliminate a
possible overflow on those platforms where INT_MAX is only 32767.

>>      AVFilterContext *ctx = outlink->src;
>> -    FrameStepContext *framestep = ctx->priv;
>> -    AVFilterLink *inlink = ctx->inputs[0];
>> +    const FrameStepContext *s = ctx->priv;
>> +    const AVFilterLink *inlink = ctx->inputs[0];
>>
>>      outlink->frame_rate =
>> -        av_div_q(inlink->frame_rate, (AVRational){framestep->frame_step, 1});
>> +        av_div_q(inlink->frame_rate, (AVRational){s->frame_step, 1});
>>
>>      av_log(ctx, AV_LOG_VERBOSE, "step:%d frame_rate:%d/%d(%f) -> frame_rate:%d/%d(%f)\n",
>> -           framestep->frame_step,
>> +           s->frame_step,
>>             inlink->frame_rate.num, inlink->frame_rate.den, av_q2d(inlink->frame_rate),
>>             outlink->frame_rate.num, outlink->frame_rate.den, av_q2d(outlink->frame_rate));
>> +
>>      return 0;
>>  }
>
> Isn't this just a rename and const-ification? This probably doesn't
> belong into this patch (perhaps a separate one, if at all.) And it adds
> an empty line, which certainly doesn't belong into a functional patch.

Sure, I'll split it off into a separate patch. The use of "framestep"
for the context is very confusing with frame_step already being used
for the actual frame step value. Other filters use "s" for the context
and this simply brings the code in line with naming conventions
elsewhere. It seemed like a good thing to include, but I agree it's
not functional, just code cleanup.

Thank you for the prompt feedback!

Kind regards,
Matthias
Paul B Mahol May 31, 2017, 6:59 p.m.
On 5/31/17, Matthias Troffaes <matthias.troffaes@gmail.com> wrote:
> Dear Moritz,
>
> On Wed, May 31, 2017 at 2:17 PM, Moritz Barsnick <barsnick@gmx.net> wrote:
>> On Wed, May 31, 2017 at 13:31:14 +0100, Matthias C. M. Troffaes wrote:
>>> @@ -2,6 +2,7 @@ Entries are sorted chronologically from oldest to
>>> youngest within each release,
>>>  releases are sorted from youngest to oldest.
>>>
>>>  version <next>:
>>> +- framestep filter: add blend parameter for motion blur effect
>>>  - deflicker video filter
>>
>> Did you read the text up there? You need to add your change to the
>> bottom of the "<next>:" list.
>
> Oh dear, that's embarrassing. I must have misread the instruction.
> Thanks for pointing out - I'll fix this.
>
>>> +    int frame_blend;    ///< how many frames to blend on each step
>> [...]
>>>  static const AVOption framestep_options[] = {
>>>      { "step", "set frame step",  OFFSET(frame_step), AV_OPT_TYPE_INT,
>>> {.i64=1}, 1, INT_MAX, FLAGS},
>>> +    { "blend", "number of frames to blend per step",
>>> OFFSET(frame_blend), AV_OPT_TYPE_INT, {.i64=1}, 1, 65535, FLAGS},
>>
>> Your maximum is too high to be assigned to an int, unless you properly
>> checked that any overflow still results in correct behaviour. The upper
>> limit probably needs to be "INT_MAX" (which would be 32767, but you can
>> use the constant).
>
> On various platforms (such as a standard 64-bit Fedora install),
> INT_MAX is 2147483647 (for example see
> https://www.tutorialspoint.com/c_standard_library/limits_h.htm) and
> that value *will* potentially overflow the code as the internal buffer
> requires all values across the blend added up to fit into a uint32_t.
> To avoid overflows, the maximum value for blend therefore needs to
> satisfy the following inequality:
>
> blend_max * pix_value_max <= 2^32 - 1
>
> and since the filter supports up to 16 bit formats, this gives the
> constraint:
>
> blend_max <= (2^32 - 1) / (2^16 - 1)
>
> which is satisfied by 65535. Technically 65537 would work as well.
>
> Shall I change AV_OPT_TYPE_INT into AV_OPT_TYPE_INT64 for both
> parameters, use INT64_MAX for the maximum value of "step", and
> UINT16_MAX for the maximum value of "blend"? That should eliminate a
> possible overflow on those platforms where INT_MAX is only 32767.
>
>>>      AVFilterContext *ctx = outlink->src;
>>> -    FrameStepContext *framestep = ctx->priv;
>>> -    AVFilterLink *inlink = ctx->inputs[0];
>>> +    const FrameStepContext *s = ctx->priv;
>>> +    const AVFilterLink *inlink = ctx->inputs[0];
>>>
>>>      outlink->frame_rate =
>>> -        av_div_q(inlink->frame_rate, (AVRational){framestep->frame_step,
>>> 1});
>>> +        av_div_q(inlink->frame_rate, (AVRational){s->frame_step, 1});
>>>
>>>      av_log(ctx, AV_LOG_VERBOSE, "step:%d frame_rate:%d/%d(%f) ->
>>> frame_rate:%d/%d(%f)\n",
>>> -           framestep->frame_step,
>>> +           s->frame_step,
>>>             inlink->frame_rate.num, inlink->frame_rate.den,
>>> av_q2d(inlink->frame_rate),
>>>             outlink->frame_rate.num, outlink->frame_rate.den,
>>> av_q2d(outlink->frame_rate));
>>> +
>>>      return 0;
>>>  }
>>
>> Isn't this just a rename and const-ification? This probably doesn't
>> belong into this patch (perhaps a separate one, if at all.) And it adds
>> an empty line, which certainly doesn't belong into a functional patch.
>
> Sure, I'll split it off into a separate patch. The use of "framestep"
> for the context is very confusing with frame_step already being used
> for the actual frame step value. Other filters use "s" for the context
> and this simply brings the code in line with naming conventions
> elsewhere. It seemed like a good thing to include, but I agree it's
> not functional, just code cleanup.
>
> Thank you for the prompt feedback!

This code does not belong in this filter.

Make new filter instead.
Matthias C. M. Troffaes May 31, 2017, 7:26 p.m.
Dear Paul,

On Wed, May 31, 2017 at 7:59 PM, Paul B Mahol <onemda@gmail.com> wrote:
> This code does not belong in this filter.
>
> Make new filter instead.

May I kindly ask why you think so? I considered this as well but then
decided against it, as this new filter would behave like framestep as
a common special case, and would also share a reasonable amount of
code with framestep. I don't feel very strongly either way though.

I can split this into a new filter if that's deemed better by the
majority here. How should it be called? It generalises both
tblend=average (step=2, blend=2) and framestep (any step, blend=1) -
so maybe "frameblend"? Should blend default to step (so it behaves
like tblend by default but supporting more than 2 frames), or to 1 (so
it behaves like framestep by default, as in the current patch)?

Kind regards,
Matthias
wm4 May 31, 2017, 7:31 p.m.
On Wed, 31 May 2017 20:59:07 +0200
Paul B Mahol <onemda@gmail.com> wrote:

> On 5/31/17, Matthias Troffaes <matthias.troffaes@gmail.com> wrote:
> > Dear Moritz,
> >
> > On Wed, May 31, 2017 at 2:17 PM, Moritz Barsnick <barsnick@gmx.net> wrote:  
> >> On Wed, May 31, 2017 at 13:31:14 +0100, Matthias C. M. Troffaes wrote:  
> >>> @@ -2,6 +2,7 @@ Entries are sorted chronologically from oldest to
> >>> youngest within each release,
> >>>  releases are sorted from youngest to oldest.
> >>>
> >>>  version <next>:
> >>> +- framestep filter: add blend parameter for motion blur effect
> >>>  - deflicker video filter  
> >>
> >> Did you read the text up there? You need to add your change to the
> >> bottom of the "<next>:" list.  
> >
> > Oh dear, that's embarrassing. I must have misread the instruction.
> > Thanks for pointing out - I'll fix this.
> >  
> >>> +    int frame_blend;    ///< how many frames to blend on each step  
> >> [...]  
> >>>  static const AVOption framestep_options[] = {
> >>>      { "step", "set frame step",  OFFSET(frame_step), AV_OPT_TYPE_INT,
> >>> {.i64=1}, 1, INT_MAX, FLAGS},
> >>> +    { "blend", "number of frames to blend per step",
> >>> OFFSET(frame_blend), AV_OPT_TYPE_INT, {.i64=1}, 1, 65535, FLAGS},  
> >>
> >> Your maximum is too high to be assigned to an int, unless you properly
> >> checked that any overflow still results in correct behaviour. The upper
> >> limit probably needs to be "INT_MAX" (which would be 32767, but you can
> >> use the constant).  
> >
> > On various platforms (such as a standard 64-bit Fedora install),
> > INT_MAX is 2147483647 (for example see
> > https://www.tutorialspoint.com/c_standard_library/limits_h.htm) and
> > that value *will* potentially overflow the code as the internal buffer
> > requires all values across the blend added up to fit into a uint32_t.
> > To avoid overflows, the maximum value for blend therefore needs to
> > satisfy the following inequality:
> >
> > blend_max * pix_value_max <= 2^32 - 1
> >
> > and since the filter supports up to 16 bit formats, this gives the
> > constraint:
> >
> > blend_max <= (2^32 - 1) / (2^16 - 1)
> >
> > which is satisfied by 65535. Technically 65537 would work as well.
> >
> > Shall I change AV_OPT_TYPE_INT into AV_OPT_TYPE_INT64 for both
> > parameters, use INT64_MAX for the maximum value of "step", and
> > UINT16_MAX for the maximum value of "blend"? That should eliminate a
> > possible overflow on those platforms where INT_MAX is only 32767.
> >  
> >>>      AVFilterContext *ctx = outlink->src;
> >>> -    FrameStepContext *framestep = ctx->priv;
> >>> -    AVFilterLink *inlink = ctx->inputs[0];
> >>> +    const FrameStepContext *s = ctx->priv;
> >>> +    const AVFilterLink *inlink = ctx->inputs[0];
> >>>
> >>>      outlink->frame_rate =
> >>> -        av_div_q(inlink->frame_rate, (AVRational){framestep->frame_step,
> >>> 1});
> >>> +        av_div_q(inlink->frame_rate, (AVRational){s->frame_step, 1});
> >>>
> >>>      av_log(ctx, AV_LOG_VERBOSE, "step:%d frame_rate:%d/%d(%f) ->
> >>> frame_rate:%d/%d(%f)\n",
> >>> -           framestep->frame_step,
> >>> +           s->frame_step,
> >>>             inlink->frame_rate.num, inlink->frame_rate.den,
> >>> av_q2d(inlink->frame_rate),
> >>>             outlink->frame_rate.num, outlink->frame_rate.den,
> >>> av_q2d(outlink->frame_rate));
> >>> +
> >>>      return 0;
> >>>  }  
> >>
> >> Isn't this just a rename and const-ification? This probably doesn't
> >> belong into this patch (perhaps a separate one, if at all.) And it adds
> >> an empty line, which certainly doesn't belong into a functional patch.  
> >
> > Sure, I'll split it off into a separate patch. The use of "framestep"
> > for the context is very confusing with frame_step already being used
> > for the actual frame step value. Other filters use "s" for the context
> > and this simply brings the code in line with naming conventions
> > elsewhere. It seemed like a good thing to include, but I agree it's
> > not functional, just code cleanup.
> >
> > Thank you for the prompt feedback!  
> 
> This code does not belong in this filter.
> 
> Make new filter instead.

I kind of agree. vf_framestep is not even supposed to be a framerate
conversion filter. It merely skips frames by a fixed pattern or
depending on frame type. It also can work on any format because it does
not touch the actual image data. This patch changes most of these things
(even if it's "optional").

Also, I wonder if we don't already have other filters for framerate
conversion. Though each filter with fundamentally different, you know,
filtering, should probably be separate.
Paul B Mahol May 31, 2017, 7:32 p.m.
On 5/31/17, Matthias Troffaes <matthias.troffaes@gmail.com> wrote:
> Dear Paul,
>
> On Wed, May 31, 2017 at 7:59 PM, Paul B Mahol <onemda@gmail.com> wrote:
>> This code does not belong in this filter.
>>
>> Make new filter instead.
>
> May I kindly ask why you think so? I considered this as well but then
> decided against it, as this new filter would behave like framestep as
> a common special case, and would also share a reasonable amount of
> code with framestep. I don't feel very strongly either way though.

Framestep filter does have very little code.
And should be kept as is.

>
> I can split this into a new filter if that's deemed better by the
> majority here. How should it be called? It generalises both
> tblend=average (step=2, blend=2) and framestep (any step, blend=1) -
> so maybe "frameblend"? Should blend default to step (so it behaves
> like tblend by default but supporting more than 2 frames), or to 1 (so
> it behaves like framestep by default, as in the current patch)?

fskipblend or something like that

Patch hide | download patch | download mbox

diff --git a/Changelog b/Changelog
index 1949ec7..cef06eb 100644
--- a/Changelog
+++ b/Changelog
@@ -2,6 +2,7 @@  Entries are sorted chronologically from oldest to youngest within each release,
 releases are sorted from youngest to oldest.
 
 version <next>:
+- framestep filter: add blend parameter for motion blur effect
 - deflicker video filter
 - doubleweave video filter
 - lumakey video filter
diff --git a/doc/filters.texi b/doc/filters.texi
index 107fe61..0160954 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -8453,6 +8453,13 @@  This filter accepts the following option:
 @item step
 Select frame after every @code{step} frames.
 Allowed values are positive integers higher than 0. Default value is @code{1}.
+@item blend
+Blend @code{blend} consequentive frames on every step,
+to produce a motion blur effect.
+Allowed values are positive integers between @code{1} and @code{step},
+where @code{1} corresponds to no motion blur, and @code{step}
+corresponds to maximal motion blur.
+Default value is @code{1}.
 @end table
 
 @anchor{frei0r}
diff --git a/libavfilter/vf_framestep.c b/libavfilter/vf_framestep.c
index 8102e7c..33a380f 100644
--- a/libavfilter/vf_framestep.c
+++ b/libavfilter/vf_framestep.c
@@ -1,5 +1,6 @@ 
 /*
  * Copyright (c) 2012 Stefano Sabatini
+ * Copyright (c) 2017 Matthias C. M. Troffaes
  *
  * This file is part of FFmpeg.
  *
@@ -24,13 +25,24 @@ 
  */
 
 #include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
 #include "avfilter.h"
 #include "internal.h"
 #include "video.h"
 
 typedef struct NullContext {
     const AVClass *class;
-    int frame_step;
+    int frame_step;     ///< step size in frames
+    int frame_blend;    ///< how many frames to blend on each step
+    int nb_planes;      ///< number of planes in the pixel format
+    int planewidth[4];  ///< width of each plane (after subsampling)
+    int planeheight[4]; ///< height of each plane (after subsampling)
+    uint32_t *data[4];  ///< buffer for blending input frames
+
+    void (*blend_set)(AVFilterContext *ctx, AVFrame *in, int plane);
+    void (*blend_add)(AVFilterContext *ctx, AVFrame *in, int plane);
+    void (*blend_div)(AVFilterContext *ctx, AVFrame *in, int plane);
+    int (*filter_frame)(AVFilterLink *inlink, AVFrame *in);
 } FrameStepContext;
 
 #define OFFSET(x) offsetof(FrameStepContext, x)
@@ -38,43 +50,229 @@  typedef struct NullContext {
 
 static const AVOption framestep_options[] = {
     { "step", "set frame step",  OFFSET(frame_step), AV_OPT_TYPE_INT, {.i64=1}, 1, INT_MAX, FLAGS},
+    { "blend", "number of frames to blend per step",  OFFSET(frame_blend), AV_OPT_TYPE_INT, {.i64=1}, 1, 65535, FLAGS},
     { NULL },
 };
 
 AVFILTER_DEFINE_CLASS(framestep);
 
+#define DEFINE_BLEND(NAME, TYPE, DECL, EXPR)                                   \
+static void blend_##NAME##_##TYPE(AVFilterContext *ctx, AVFrame *in, int plane)\
+{                                                                              \
+    FrameStepContext *s = ctx->priv;                                           \
+    DECL                                                                       \
+    const int height = s->planeheight[plane];                                  \
+    const int width  = s->planewidth[plane];                                   \
+    const int stride = in->linesize[plane] / sizeof(TYPE);                     \
+    TYPE *src = (TYPE *)in->data[plane];                                       \
+    uint32_t *dst = s->data[plane];                                            \
+    int y, x;                                                                  \
+                                                                               \
+    for (y = 0; y < height; y++) {                                             \
+        for (x = 0; x < width; x++) {                                          \
+            EXPR;                                                              \
+        }                                                                      \
+        src += stride;                                                         \
+    }                                                                          \
+}
+
+#define SET_DECL
+#define SET_EXPR *dst++ = src[x]
+#define ADD_DECL
+#define ADD_EXPR *dst++ += src[x]
+#define DIV_DECL const int frame_blend = s->frame_blend;
+#define DIV_EXPR src[x] = *dst++ / frame_blend
+
+DEFINE_BLEND(set, uint8_t,  SET_DECL, SET_EXPR)
+DEFINE_BLEND(set, uint16_t, SET_DECL, SET_EXPR)
+DEFINE_BLEND(add, uint8_t,  ADD_DECL, ADD_EXPR)
+DEFINE_BLEND(add, uint16_t, ADD_DECL, ADD_EXPR)
+DEFINE_BLEND(div, uint8_t,  DIV_DECL, DIV_EXPR)
+DEFINE_BLEND(div, uint16_t, DIV_DECL, DIV_EXPR)
+
+#undef SET_DECL
+#undef SET_EXPR
+#undef ADD_DECL
+#undef ADD_EXPR
+#undef DIV_DECL
+#undef DIV_EXPR
+#undef DEFINE_BLEND
+
+static int filter_frame_generic(AVFilterLink *inlink, AVFrame *in)
+{
+    AVFilterContext *ctx = inlink->dst;
+    FrameStepContext *s = ctx->priv;
+    AVFilterLink *outlink = ctx->outputs[0];
+    AVFrame *out = NULL;
+    int64_t frame_pos = inlink->frame_count_out % s->frame_step;
+    int direct = 0;
+
+    /* update destination frame buffer; we need to do this even if filter is
+       disabled because buffer might be used for later frames when filter is
+       re-enabled */
+    if (!frame_pos) {
+        /* copy first frame to destination frame buffer */
+        for (int plane = 0; plane < s->nb_planes; plane++)
+            s->blend_set(ctx, in, plane);
+    } else if (frame_pos < s->frame_blend) {
+        /* add current frame to destination frame buffer */
+        for (int plane = 0; plane < s->nb_planes; plane++)
+            s->blend_add(ctx, in, plane);
+    }
+
+    /* write frame */
+    if (ctx->is_disabled) {
+        /* filter is disabled, so pass input frame as is */
+        return ff_filter_frame(outlink, in);
+    } else if ((frame_pos + 1) == s->frame_blend) {
+        /* filter is enabled, so write when all frames are blended */
+        /* create a writable frame */
+        if (av_frame_is_writable(in)) {
+            direct = 1;
+            out = in;
+        } else {
+            out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+            if (!out) {
+                av_frame_free(&in);
+                return AVERROR(ENOMEM);
+            }
+            av_frame_copy_props(out, in);
+        }
+        /* finalize destination frame */
+        for (int plane = 0; plane < s->nb_planes; plane++)
+            s->blend_div(ctx, out, plane);
+        /* free extra frame if created, and pass on output frame */
+        if (!direct)
+            av_frame_free(&in);
+        return ff_filter_frame(outlink, out);
+    } else {
+        av_frame_free(&in);
+        return 0;
+    }
+}
+
+/* special case of filter_frame when frame_blend is 1 */
+static int filter_frame_single(AVFilterLink *inlink, AVFrame *in)
+{
+    AVFilterContext *ctx = inlink->dst;
+    FrameStepContext *s = ctx->priv;
+
+    if (!(inlink->frame_count_out % s->frame_step) || ctx->is_disabled) {
+        return ff_filter_frame(ctx->outputs[0], in);
+    } else {
+        av_frame_free(&in);
+        return 0;
+    }
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+    static const enum AVPixelFormat pix_fmts[] = {
+        AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
+        AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
+        AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV444P,
+        AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P,
+        AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ420P,
+        AV_PIX_FMT_YUVJ411P,
+        AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA444P,
+        AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP, AV_PIX_FMT_GRAY8,
+        AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16,
+        AV_PIX_FMT_YUVA420P16, AV_PIX_FMT_YUVA422P16, AV_PIX_FMT_YUVA444P16,
+        AV_PIX_FMT_GBRP16, AV_PIX_FMT_GRAY16,
+        AV_PIX_FMT_NV12, AV_PIX_FMT_NV21,
+        AV_PIX_FMT_NONE
+    };
+    FrameStepContext *s = ctx->priv;
+    AVFilterFormats *fmts_list = NULL;
+
+    if (s->frame_blend == 1) {
+        fmts_list = ff_all_formats(AVMEDIA_TYPE_VIDEO);
+    } else {
+        fmts_list = ff_make_format_list(pix_fmts);
+    }
+    if (!fmts_list)
+        return AVERROR(ENOMEM);
+    return ff_set_common_formats(ctx, fmts_list);
+}
+
+static int config_input_props(AVFilterLink *inlink)
+{
+    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
+    const AVFilterContext *ctx = inlink->dst;
+    FrameStepContext *s = ctx->priv;
+
+    s->planewidth[0] = s->planewidth[3] = inlink->w;
+    s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
+    s->planeheight[0] = s->planeheight[3] = inlink->h;
+    s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
+    s->nb_planes = av_pix_fmt_count_planes(inlink->format);
+    for (int plane = 0; plane < s->nb_planes; plane++) {
+        const int planesize = s->planewidth[plane] * s->planeheight[plane];
+        s->data[plane] = av_mallocz_array(planesize, sizeof(uint32_t));
+        if (!s->data[plane])
+            return AVERROR(ENOMEM);
+    }
+    if (s->frame_blend == 1) {
+        s->filter_frame = filter_frame_single;
+    } else {
+        s->filter_frame = filter_frame_generic;
+        if (desc->comp[0].depth == 8) {
+            s->blend_set = blend_set_uint8_t;
+            s->blend_add = blend_add_uint8_t;
+            s->blend_div = blend_div_uint8_t;
+        } else if (desc->comp[0].depth == 16) {
+            s->blend_set = blend_set_uint16_t;
+            s->blend_add = blend_add_uint16_t;
+            s->blend_div = blend_div_uint16_t;
+        } else {
+            return AVERROR(AVERROR_BUG);
+        }
+    }
+    return 0;
+}
+
 static int config_output_props(AVFilterLink *outlink)
 {
     AVFilterContext *ctx = outlink->src;
-    FrameStepContext *framestep = ctx->priv;
-    AVFilterLink *inlink = ctx->inputs[0];
+    const FrameStepContext *s = ctx->priv;
+    const AVFilterLink *inlink = ctx->inputs[0];
 
     outlink->frame_rate =
-        av_div_q(inlink->frame_rate, (AVRational){framestep->frame_step, 1});
+        av_div_q(inlink->frame_rate, (AVRational){s->frame_step, 1});
 
     av_log(ctx, AV_LOG_VERBOSE, "step:%d frame_rate:%d/%d(%f) -> frame_rate:%d/%d(%f)\n",
-           framestep->frame_step,
+           s->frame_step,
            inlink->frame_rate.num, inlink->frame_rate.den, av_q2d(inlink->frame_rate),
            outlink->frame_rate.num, outlink->frame_rate.den, av_q2d(outlink->frame_rate));
+
     return 0;
 }
 
-static int filter_frame(AVFilterLink *inlink, AVFrame *ref)
+static av_cold int init(AVFilterContext *ctx)
 {
-    FrameStepContext *framestep = inlink->dst->priv;
+    FrameStepContext *s = ctx->priv;
+    s->frame_blend = FFMIN(s->frame_blend, s->frame_step);
+    return 0;
+}
 
-    if (!(inlink->frame_count_out % framestep->frame_step)) {
-        return ff_filter_frame(inlink->dst->outputs[0], ref);
-    } else {
-        av_frame_free(&ref);
-        return 0;
-    }
+static av_cold void uninit(AVFilterContext *ctx)
+{
+    FrameStepContext *s = ctx->priv;
+    for (int plane = 0; plane < s->nb_planes; plane++)
+        av_freep(&s->data[plane]);
+}
+
+static int filter_frame(AVFilterLink *inlink, AVFrame *in)
+{
+    FrameStepContext *s = inlink->dst->priv;
+    return s->filter_frame(inlink, in);
 }
 
 static const AVFilterPad framestep_inputs[] = {
     {
         .name         = "default",
         .type         = AVMEDIA_TYPE_VIDEO,
+        .config_props = config_input_props,
         .filter_frame = filter_frame,
     },
     { NULL }
@@ -90,11 +288,14 @@  static const AVFilterPad framestep_outputs[] = {
 };
 
 AVFilter ff_vf_framestep = {
-    .name        = "framestep",
-    .description = NULL_IF_CONFIG_SMALL("Select one frame every N frames."),
-    .priv_size   = sizeof(FrameStepContext),
-    .priv_class  = &framestep_class,
-    .inputs      = framestep_inputs,
-    .outputs     = framestep_outputs,
-    .flags       = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
+    .name          = "framestep",
+    .description   = NULL_IF_CONFIG_SMALL("Select one frame every N frames."),
+    .priv_size     = sizeof(FrameStepContext),
+    .priv_class    = &framestep_class,
+    .init          = init,
+    .uninit        = uninit,
+    .query_formats = query_formats,
+    .inputs        = framestep_inputs,
+    .outputs       = framestep_outputs,
+    .flags         = AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL,
 };
diff --git a/tests/fate/filter-video.mak b/tests/fate/filter-video.mak
index 1ca29e8..5738b18 100644
--- a/tests/fate/filter-video.mak
+++ b/tests/fate/filter-video.mak
@@ -320,6 +320,14 @@  FATE_FILTER_VSYNTH-$(CONFIG_SWAPRECT_FILTER) += $(FATE_SWAPRECT)
 FATE_FILTER_VSYNTH-$(CONFIG_TBLEND_FILTER) += fate-filter-tblend
 fate-filter-tblend: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf tblend=all_mode=difference128
 
+FATE_FRAMESTEP += fate-filter-framestep-gray-1
+fate-filter-framestep-gray-1: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf framestep=step=6
+
+FATE_FRAMESTEP += fate-filter-framestep-gray-2
+fate-filter-framestep-gray-2: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf framestep=step=6:blend=3
+
+FATE_FILTER_VSYNTH-$(CONFIG_FRAMESTEP_FILTER) += $(FATE_FRAMESTEP)
+
 FATE_FILTER_VSYNTH-$(CONFIG_TELECINE_FILTER) += fate-filter-telecine
 fate-filter-telecine: CMD = framecrc -c:v pgmyuv -i $(SRC) -vf telecine
 
@@ -384,6 +392,10 @@  fate-filter-fps-cfr: CMD = framecrc -i $(TARGET_SAMPLES)/qtrle/apple-animation-v
 fate-filter-fps-r:   CMD = framecrc -i $(TARGET_SAMPLES)/qtrle/apple-animation-variable-fps-bug.mov -r 30 -vf fps -pix_fmt yuv420p
 fate-filter-fps:     CMD = framecrc -i $(TARGET_SAMPLES)/qtrle/apple-animation-variable-fps-bug.mov -vf fps=30 -pix_fmt yuv420p
 
+FATE_FILTER_SAMPLES-$(call ALLYES, FRAMESTEP_FILTER) += fate-filter-framestep-anim-1 fate-filter-framestep-anim-2
+fate-filter-framestep-anim-1: CMD = framecrc -i $(TARGET_SAMPLES)/filter/anim.mkv -vf framestep=step=6
+fate-filter-framestep-anim-2: CMD = framecrc -i $(TARGET_SAMPLES)/filter/anim.mkv -vf framestep=step=6:blend=3
+
 FATE_FILTER_VSYNTH-$(call ALLYES, FORMAT_FILTER SPLIT_FILTER ALPHAEXTRACT_FILTER ALPHAMERGE_FILTER) += fate-filter-alphaextract_alphamerge_rgb
 fate-filter-alphaextract_alphamerge_rgb: tests/data/filtergraphs/alphamerge_alphaextract_rgb
 fate-filter-alphaextract_alphamerge_rgb: CMD = framecrc -c:v pgmyuv -i $(SRC) -filter_complex_script $(TARGET_PATH)/tests/data/filtergraphs/alphamerge_alphaextract_rgb
diff --git a/tests/ref/fate/filter-framestep-anim-1 b/tests/ref/fate/filter-framestep-anim-1
new file mode 100644
index 0000000..0a6dd19
--- /dev/null
+++ b/tests/ref/fate/filter-framestep-anim-1
@@ -0,0 +1,17 @@ 
+#tb 0: 1001/4000
+#media_type 0: video
+#codec_id 0: rawvideo
+#dimensions 0: 320x180
+#sar 0: 1/1
+0,          0,          0,        1,   172800, 0x5adff92c
+0,          1,          1,        1,   172800, 0x37b7f659
+0,          2,          2,        1,   172800, 0xb4a6f1d1
+0,          3,          3,        1,   172800, 0xd596f9c6
+0,          4,          4,        1,   172800, 0xff5a015b
+0,          5,          5,        1,   172800, 0x65477f11
+0,          6,          6,        1,   172800, 0x41569400
+0,          7,          7,        1,   172800, 0xcff9ddf9
+0,          8,          8,        1,   172800, 0xd6daba1e
+0,          9,          9,        1,   172800, 0xad83bda1
+0,         10,         10,        1,   172800, 0x1518bdb3
+0,         11,         11,        1,   172800, 0xfdd1c7ca
diff --git a/tests/ref/fate/filter-framestep-anim-2 b/tests/ref/fate/filter-framestep-anim-2
new file mode 100644
index 0000000..3984f6d
--- /dev/null
+++ b/tests/ref/fate/filter-framestep-anim-2
@@ -0,0 +1,17 @@ 
+#tb 0: 1001/4000
+#media_type 0: video
+#codec_id 0: rawvideo
+#dimensions 0: 320x180
+#sar 0: 1/1
+0,          0,          0,        1,   172800, 0xa99bf27e
+0,          1,          1,        1,   172800, 0x6e2fdfe3
+0,          2,          2,        1,   172800, 0x96e7ea35
+0,          3,          3,        1,   172800, 0x832ff6cf
+0,          4,          4,        1,   172800, 0x7a2dffb9
+0,          5,          5,        1,   172800, 0xa0a05854
+0,          6,          6,        1,   172800, 0x91d93f46
+0,          7,          7,        1,   172800, 0x3051d27b
+0,          8,          8,        1,   172800, 0x0c13b87c
+0,          9,          9,        1,   172800, 0xa80eba4c
+0,         10,         10,        1,   172800, 0xd206ba19
+0,         11,         11,        1,   172800, 0x6608b61c
diff --git a/tests/ref/fate/filter-framestep-gray-1 b/tests/ref/fate/filter-framestep-gray-1
new file mode 100644
index 0000000..dd209db
--- /dev/null
+++ b/tests/ref/fate/filter-framestep-gray-1
@@ -0,0 +1,14 @@ 
+#tb 0: 6/25
+#media_type 0: video
+#codec_id 0: rawvideo
+#dimensions 0: 352x288
+#sar 0: 0/1
+0,          0,          0,        1,   152064, 0x05b789ef
+0,          1,          1,        1,   152064, 0xe20f7c23
+0,          2,          2,        1,   152064, 0xc711ad61
+0,          3,          3,        1,   152064, 0xf25f6acc
+0,          4,          4,        1,   152064, 0xce09f9d6
+0,          5,          5,        1,   152064, 0x0f9d6aca
+0,          6,          6,        1,   152064, 0x4c9737ab
+0,          7,          7,        1,   152064, 0x20cebfa9
+0,          8,          8,        1,   152064, 0xbb87b483
diff --git a/tests/ref/fate/filter-framestep-gray-2 b/tests/ref/fate/filter-framestep-gray-2
new file mode 100644
index 0000000..c2dfc9b
--- /dev/null
+++ b/tests/ref/fate/filter-framestep-gray-2
@@ -0,0 +1,13 @@ 
+#tb 0: 6/25
+#media_type 0: video
+#codec_id 0: rawvideo
+#dimensions 0: 352x288
+#sar 0: 0/1
+0,          0,          0,        1,   152064, 0x999eb961
+0,          1,          1,        1,   152064, 0x0e947017
+0,          2,          2,        1,   152064, 0x2f338ae6
+0,          3,          3,        1,   152064, 0x4ead448b
+0,          4,          4,        1,   152064, 0x6aef2c67
+0,          5,          5,        1,   152064, 0x809df637
+0,          6,          6,        1,   152064, 0x57b0bd89
+0,          7,          7,        1,   152064, 0x3649efd8