[FFmpeg-devel] avfilter: add eval video filter

Submitted by Paul B Mahol on Nov. 1, 2019, 12:09 p.m.

Details

Message ID 20191101120924.17853-1-onemda@gmail.com
State New

Commit Message

Paul B Mahol Nov. 1, 2019, 12:09 p.m.
Signed-off-by: Paul B Mahol <onemda@gmail.com>
---
 doc/filters.texi         |  78 +++++
 libavfilter/Makefile     |   1 +
 libavfilter/allfilters.c |   1 +
 libavfilter/vf_eval.c    | 687 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 767 insertions(+)
 create mode 100644 libavfilter/vf_eval.c

Comments

Michael Niedermayer Nov. 1, 2019, 2:06 p.m.
On Fri, Nov 01, 2019 at 01:09:24PM +0100, Paul B Mahol wrote:
> Signed-off-by: Paul B Mahol <onemda@gmail.com>
> ---
>  doc/filters.texi         |  78 +++++
>  libavfilter/Makefile     |   1 +
>  libavfilter/allfilters.c |   1 +
>  libavfilter/vf_eval.c    | 687 +++++++++++++++++++++++++++++++++++++++
>  4 files changed, 767 insertions(+)
>  create mode 100644 libavfilter/vf_eval.c

This looks like it duplicates code from vf_geq.c;
also, this should be integrated with or into vf_geq.c.

IIUC this was written because geq is GPL, but I had already agreed to
relicense the code I wrote in it to LGPL, so I am not sure why you wrote this.

Who refused to relicense their contribution to geq, making this extra
work necessary?
Or did no one refuse? Then I do not understand why you wrote a separate
filter.

Thanks

[...]
Paul B Mahol Nov. 1, 2019, 3:50 p.m.
On 11/1/19, Michael Niedermayer <michael@niedermayer.cc> wrote:
> On Fri, Nov 01, 2019 at 01:09:24PM +0100, Paul B Mahol wrote:
>> Signed-off-by: Paul B Mahol <onemda@gmail.com>
>> ---
>>  doc/filters.texi         |  78 +++++
>>  libavfilter/Makefile     |   1 +
>>  libavfilter/allfilters.c |   1 +
>>  libavfilter/vf_eval.c    | 687 +++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 767 insertions(+)
>>  create mode 100644 libavfilter/vf_eval.c
>
> This looks like it duplicates code from vf_geq.c;
> also, this should be integrated with or into vf_geq.c.
>
> IIUC this was written because geq is GPL, but I had already agreed to
> relicense the code I wrote in it to LGPL, so I am not sure why you wrote
> this.
>
> Who refused to relicense their contribution to geq, making this extra
> work necessary?
> Or did no one refuse? Then I do not understand why you wrote a separate
> filter

Do I need to contact everyone on the blame list of vf_geq? Or also the
mplayer devs who touched geq in mplayer?

>
> Thanks
>
> [...]
> --
> Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Everything should be made as simple as possible, but not simpler.
> -- Albert Einstein
>
Michael Niedermayer Nov. 1, 2019, 6:29 p.m.
On Fri, Nov 01, 2019 at 04:50:38PM +0100, Paul B Mahol wrote:
> On 11/1/19, Michael Niedermayer <michael@niedermayer.cc> wrote:
> > On Fri, Nov 01, 2019 at 01:09:24PM +0100, Paul B Mahol wrote:
> >> Signed-off-by: Paul B Mahol <onemda@gmail.com>
> >> ---
> >>  doc/filters.texi         |  78 +++++
> >>  libavfilter/Makefile     |   1 +
> >>  libavfilter/allfilters.c |   1 +
> >>  libavfilter/vf_eval.c    | 687 +++++++++++++++++++++++++++++++++++++++
> >>  4 files changed, 767 insertions(+)
> >>  create mode 100644 libavfilter/vf_eval.c
> >
> > This looks like it duplicates code from vf_geq.c;
> > also, this should be integrated with or into vf_geq.c.
> >
> > IIUC this was written because geq is GPL, but I had already agreed to
> > relicense the code I wrote in it to LGPL, so I am not sure why you wrote
> > this.
> >
> > Who refused to relicense their contribution to geq, making this extra
> > work necessary?
> > Or did no one refuse? Then I do not understand why you wrote a separate
> > filter
> 
> Do I need to contact everyone on the blame list of vf_geq? Or also the
> mplayer devs who touched geq in mplayer?

all who touched it MINUS anyone whose change is entirely absent from
the current version (there are several developers in mplayer who
made such changes).
Also, in the case of mplayer, you may need to look at the commit messages:
some developers applied patches from others.

I took a quick look and these are the people I saw:
carl, ubitux, ivan, ods15, Alexis Ballier, derek, Marc-Antoine Arnaud, Nicolas George, Stefano and Paul;
reimar, zuxy and diego.

For the last 3 I could not find a change that persists in the current
code.

Either way, please check the list of names; I may have made a mistake!

Thanks

[...]
Paul B Mahol Nov. 1, 2019, 8:38 p.m.
On 11/1/19, Michael Niedermayer <michael@niedermayer.cc> wrote:
> On Fri, Nov 01, 2019 at 04:50:38PM +0100, Paul B Mahol wrote:
>> On 11/1/19, Michael Niedermayer <michael@niedermayer.cc> wrote:
>> > On Fri, Nov 01, 2019 at 01:09:24PM +0100, Paul B Mahol wrote:
>> >> Signed-off-by: Paul B Mahol <onemda@gmail.com>
>> >> ---
>> >>  doc/filters.texi         |  78 +++++
>> >>  libavfilter/Makefile     |   1 +
>> >>  libavfilter/allfilters.c |   1 +
>> >>  libavfilter/vf_eval.c    | 687
>> >> +++++++++++++++++++++++++++++++++++++++
>> >>  4 files changed, 767 insertions(+)
>> >>  create mode 100644 libavfilter/vf_eval.c
>> >
>> > This looks like it duplicates code from vf_geq.c;
>> > also, this should be integrated with or into vf_geq.c.
>> >
>> > IIUC this was written because geq is GPL, but I had already agreed to
>> > relicense the code I wrote in it to LGPL, so I am not sure why you wrote
>> > this.
>> >
>> > Who refused to relicense their contribution to geq, making this extra
>> > work necessary?
>> > Or did no one refuse? Then I do not understand why you wrote a
>> > separate
>> > filter
>>
>> Do I need to contact everyone on the blame list of vf_geq? Or also the
>> mplayer devs who touched geq in mplayer?
>
> all who touched it MINUS anyone whose change is entirely absent from
> the current version (there are several developers in mplayer who
> made such changes).
> Also, in the case of mplayer, you may need to look at the commit messages:
> some developers applied patches from others.
>
> I took a quick look and these are the people I saw:
> carl, ubitux, ivan, ods15, Alexis Ballier, derek, Marc-Antoine Arnaud,
> Nicolas George, Stefano and Paul;
> reimar, zuxy and diego.

I do not think I can contact ods15.

>
> For the last 3 I could not find a change that persists in the current
> code.
>
> Either way, please check the list of names; I may have made a mistake!
>
> Thanks
>
> [...]
> --
> Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Opposition brings concord. Out of discord comes the fairest harmony.
> -- Heraclitus
>


diff --git a/doc/filters.texi b/doc/filters.texi
index 7e9d50782a..e26ee53c1e 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9977,6 +9977,84 @@  Flags to local 3x3 coordinates maps like this:
     6 7 8
 @end table
 
+@section eval
+Filter video frames according to specified expressions.
+
+The filter accepts the following options:
+
+@table @option
+@item inputs
+Set the number of inputs. All inputs must be of the same pixel format.
+Note that input widths and heights may differ.
+
+@item size
+Set output video size. Default is hd720.
+
+@item sar
+Set output sample aspect ratio.
+
+@item expr0
+@item expr1
+@item expr2
+@item expr3
+Set the expression for each component. Components whose expression is unset
+or invalid are filled with zeroes in the output.
+@end table
+
+Each expression can contain the following constants and functions:
+
+@table @option
+@item C
+Number of color components in the currently used format.
+
+@item DEPTH
+Bit depth of the currently used format.
+
+@item W
+Width of the current plane of the output frame.
+
+@item H
+Height of the current plane of the output frame.
+
+@item SW
+@item SH
+Width and height scale for the plane being filtered. This is the ratio
+between the dimensions of the current plane and the dimensions of the
+first plane of the output frame.
+@item X
+@item Y
+The coordinates of the current output pixel component.
+
+@item CC
+The current color component.
+
+@item N
+Current output frame number, starting from 0.
+
+@item T
+Time of the current frame, expressed in seconds.
+
+@item sw(c)
+Return width scale for component @var{c}.
+
+@item sh(c)
+Return height scale for component @var{c}.
+
+@item iw(n)
+Return frame width of input @var{n}.
+
+@item ih(n)
+Return frame height of input @var{n}.
+
+@item fNc0(x, y)
+@item fNc1(x, y)
+@item fNc2(x, y)
+@item fNc3(x, y)
+Return the value of the pixel component at location (@var{x}, @var{y}) of the
+N-th input frame, for components @var{0} to @var{3}. N is replaced with the input
+number; the minimum allowed N is @var{0} and the maximum is @var{inputs} minus one.
+@end table
+
 @section extractplanes
 
 Extract color channel components from input video stream into
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 2080eed559..0fcc273e10 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -235,6 +235,7 @@  OBJS-$(CONFIG_EQ_FILTER)                     += vf_eq.o
 OBJS-$(CONFIG_EROSION_FILTER)                += vf_neighbor.o
 OBJS-$(CONFIG_EROSION_OPENCL_FILTER)         += vf_neighbor_opencl.o opencl.o \
                                                 opencl/neighbor.o
+OBJS-$(CONFIG_EVAL_FILTER)                   += vf_eval.o
 OBJS-$(CONFIG_EXTRACTPLANES_FILTER)          += vf_extractplanes.o
 OBJS-$(CONFIG_FADE_FILTER)                   += vf_fade.o
 OBJS-$(CONFIG_FFTDNOIZ_FILTER)               += vf_fftdnoiz.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index cb609067b6..ed9e8c2d4f 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -220,6 +220,7 @@  extern AVFilter ff_vf_entropy;
 extern AVFilter ff_vf_eq;
 extern AVFilter ff_vf_erosion;
 extern AVFilter ff_vf_erosion_opencl;
+extern AVFilter ff_vf_eval;
 extern AVFilter ff_vf_extractplanes;
 extern AVFilter ff_vf_fade;
 extern AVFilter ff_vf_fftdnoiz;
diff --git a/libavfilter/vf_eval.c b/libavfilter/vf_eval.c
new file mode 100644
index 0000000000..6cb5f9d030
--- /dev/null
+++ b/libavfilter/vf_eval.c
@@ -0,0 +1,687 @@ 
+/*
+ * Copyright (c) 2019 Paul B Mahol
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/avstring.h"
+#include "libavutil/eval.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+#include "filters.h"
+#include "framesync.h"
+#include "video.h"
+
+enum Variables {
+    VAR_C,
+    VAR_DEPTH,
+    VAR_W,
+    VAR_H,
+    VAR_SW,
+    VAR_SH,
+    VAR_X,
+    VAR_Y,
+    VAR_CC,
+    VAR_N,
+    VAR_T,
+    VAR_NB
+};
+
+typedef struct EvalContext {
+    const AVClass *class;
+
+    int nb_inputs;
+    int w, h;
+    AVRational sar;
+    int nb_threads;
+    char *expr_str[4];
+
+    int depth;
+    int max;
+    int nb_planes;
+    int linesize[4];
+    int *planewidth[4];
+    int *planeheight[4];
+    int outplanewidth[4];
+    int outplaneheight[4];
+    int process[4];
+
+    const char **var_names;
+    const char **func1_names;
+    const char **func2_names;
+    double (**func1)(void *, double);
+    double (**func2)(void *, double, double);
+    double **var_values;
+
+    int (*eval_slice)(AVFilterContext *ctx, void *arg,
+                      int jobnr, int nb_jobs);
+
+    AVExpr *expr[4];
+    AVFrame **frames;
+    FFFrameSync fs;
+} EvalContext;
+
+static int query_formats(AVFilterContext *ctx)
+{
+    static const enum AVPixelFormat pix_fmts[] = {
+        AV_PIX_FMT_YUVA444P,   AV_PIX_FMT_YUVA444P9,
+        AV_PIX_FMT_YUVA444P10, AV_PIX_FMT_YUVA444P12,
+        AV_PIX_FMT_YUVA444P16,
+        AV_PIX_FMT_YUVA422P,   AV_PIX_FMT_YUVA422P9,
+        AV_PIX_FMT_YUVA422P10, AV_PIX_FMT_YUVA422P12,
+        AV_PIX_FMT_YUVA422P16,
+        AV_PIX_FMT_YUVA420P,   AV_PIX_FMT_YUVA420P9,
+        AV_PIX_FMT_YUVA420P10, AV_PIX_FMT_YUVA420P16,
+        AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P,
+        AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ420P,
+        AV_PIX_FMT_YUVJ411P,
+        AV_PIX_FMT_YUV444P,   AV_PIX_FMT_YUV444P9,
+        AV_PIX_FMT_YUV444P10, AV_PIX_FMT_YUV444P12,
+        AV_PIX_FMT_YUV444P14, AV_PIX_FMT_YUV444P16,
+        AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV440P10,
+        AV_PIX_FMT_YUV440P12,
+        AV_PIX_FMT_YUV422P,   AV_PIX_FMT_YUV422P9,
+        AV_PIX_FMT_YUV422P10, AV_PIX_FMT_YUV422P12,
+        AV_PIX_FMT_YUV422P14, AV_PIX_FMT_YUV422P16,
+        AV_PIX_FMT_YUV420P,   AV_PIX_FMT_YUV420P9,
+        AV_PIX_FMT_YUV420P10, AV_PIX_FMT_YUV420P12,
+        AV_PIX_FMT_YUV420P14, AV_PIX_FMT_YUV420P16,
+        AV_PIX_FMT_YUV411P,
+        AV_PIX_FMT_YUV410P,
+        AV_PIX_FMT_GBRP,   AV_PIX_FMT_GBRP9,
+        AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
+        AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16,
+        AV_PIX_FMT_GBRAP,   AV_PIX_FMT_GBRAP10,
+        AV_PIX_FMT_GBRAP12, AV_PIX_FMT_GBRAP16,
+        AV_PIX_FMT_GRAY8,  AV_PIX_FMT_GRAY9,
+        AV_PIX_FMT_GRAY10, AV_PIX_FMT_GRAY12,
+        AV_PIX_FMT_GRAY14, AV_PIX_FMT_GRAY16,
+        AV_PIX_FMT_NONE
+    };
+
+    AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
+    if (!fmts_list)
+        return AVERROR(ENOMEM);
+    return ff_set_common_formats(ctx, fmts_list);
+}
+
+static av_cold int init(AVFilterContext *ctx)
+{
+    EvalContext *s = ctx->priv;
+    int ret;
+
+    s->frames = av_calloc(s->nb_inputs, sizeof(*s->frames));
+    if (!s->frames)
+        return AVERROR(ENOMEM);
+
+    s->var_names = av_calloc(VAR_NB + 1, sizeof(*s->var_names));
+    if (!s->var_names)
+        return AVERROR(ENOMEM);
+
+    for (int i = 0; i < s->nb_inputs; i++) {
+        AVFilterPad pad = { 0 };
+
+        pad.type = AVMEDIA_TYPE_VIDEO;
+        pad.name = av_asprintf("input%d", i);
+        if (!pad.name)
+            return AVERROR(ENOMEM);
+
+        if ((ret = ff_insert_inpad(ctx, i, &pad)) < 0) {
+            av_freep(&pad.name);
+            return ret;
+        }
+    }
+
+    return 0;
+}
+
+#define TS2T(ts, tb) ((ts) == AV_NOPTS_VALUE ? NAN : (double)(ts)*av_q2d(tb))
+
+static int eval_slice8(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    EvalContext *s = ctx->priv;
+    AVFrame **in = s->frames;
+    AVFrame *out = arg;
+    double *var_values = s->var_values[jobnr];
+
+    var_values[VAR_DEPTH] = s->depth;
+    var_values[VAR_C]     = s->nb_planes;
+    var_values[VAR_N]     = ctx->outputs[0]->frame_count_out;
+    var_values[VAR_T]     = TS2T(in[0]->pts, ctx->inputs[0]->time_base);
+
+    for (int p = 0; p < s->nb_planes; p++) {
+        const int slice_start = (s->outplaneheight[p] * jobnr) / nb_jobs;
+        const int slice_end = (s->outplaneheight[p] * (jobnr+1)) / nb_jobs;
+        uint8_t *dst = out->data[p] + slice_start * out->linesize[p];
+
+        if (!s->process[p]) {
+            continue;
+        }
+
+        var_values[VAR_CC] = p;
+        var_values[VAR_W]  = s->outplanewidth[p];
+        var_values[VAR_H]  = s->outplaneheight[p];
+        var_values[VAR_SW] = s->outplanewidth[p] / (double)s->outplanewidth[0];
+        var_values[VAR_SH] = s->outplaneheight[p] / (double)s->outplaneheight[0];
+
+        for (int y = slice_start; y < slice_end; y++) {
+            var_values[VAR_Y] = y;
+            for (int x = 0; x < s->outplanewidth[p]; x++) {
+                var_values[VAR_X] = x;
+                dst[x] = av_clip_uint8(av_expr_eval(s->expr[p], var_values, s));
+            }
+
+            dst += out->linesize[p];
+        }
+    }
+
+    return 0;
+}
+
+static av_always_inline int eval_slice_16_depth(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs, int depth)
+{
+    EvalContext *s = ctx->priv;
+    AVFrame **in = s->frames;
+    AVFrame *out = arg;
+    double *var_values = s->var_values[jobnr];
+
+    var_values[VAR_DEPTH] = s->depth;
+    var_values[VAR_C]     = s->nb_planes;
+    var_values[VAR_N]     = ctx->outputs[0]->frame_count_out;
+    var_values[VAR_T]     = TS2T(in[0]->pts, ctx->inputs[0]->time_base);
+
+    for (int p = 0; p < s->nb_planes; p++) {
+        const int slice_start = (s->outplaneheight[p] * jobnr) / nb_jobs;
+        const int slice_end = (s->outplaneheight[p] * (jobnr+1)) / nb_jobs;
+        uint16_t *dst = (uint16_t *)out->data[p] + slice_start * out->linesize[p] / 2;
+
+        if (!s->process[p]) {
+            continue;
+        }
+
+        var_values[VAR_CC] = p;
+        var_values[VAR_W]  = s->outplanewidth[p];
+        var_values[VAR_H]  = s->outplaneheight[p];
+        var_values[VAR_SW] = s->outplanewidth[p] / (double)s->outplanewidth[0];
+        var_values[VAR_SH] = s->outplaneheight[p] / (double)s->outplaneheight[0];
+
+        for (int y = slice_start; y < slice_end; y++) {
+            var_values[VAR_Y] = y;
+            for (int x = 0; x < s->outplanewidth[p]; x++) {
+                var_values[VAR_X] = x;
+                dst[x] = av_clip_uintp2_c(av_expr_eval(s->expr[p], var_values, s), depth);
+            }
+
+            dst += out->linesize[p] / 2;
+        }
+    }
+
+    return 0;
+}
+
+static int eval_slice9(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    return eval_slice_16_depth(ctx, arg, jobnr, nb_jobs, 9);
+}
+
+static int eval_slice10(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    return eval_slice_16_depth(ctx, arg, jobnr, nb_jobs, 10);
+}
+
+static int eval_slice12(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    return eval_slice_16_depth(ctx, arg, jobnr, nb_jobs, 12);
+}
+
+static int eval_slice14(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    return eval_slice_16_depth(ctx, arg, jobnr, nb_jobs, 14);
+}
+
+static int eval_slice16(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    return eval_slice_16_depth(ctx, arg, jobnr, nb_jobs, 16);
+}
+
+static int process_frame(FFFrameSync *fs)
+{
+    AVFilterContext *ctx = fs->parent;
+    AVFilterLink *outlink = ctx->outputs[0];
+    EvalContext *s = fs->opaque;
+    AVFrame *out;
+    int ret;
+
+    for (int i = 0; i < s->nb_inputs; i++) {
+        if ((ret = ff_framesync_get_frame(&s->fs, i, &s->frames[i], 0)) < 0)
+            return ret;
+    }
+
+    out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+    if (!out)
+        return AVERROR(ENOMEM);
+    out->pts = av_rescale_q(s->fs.pts, s->fs.time_base, outlink->time_base);
+
+    ctx->internal->execute(ctx, s->eval_slice, out, NULL, s->nb_threads);
+
+    return ff_filter_frame(outlink, out);
+}
+
+static inline double get_pixel(void *priv, double dx, double dy, int n, int p)
+{
+    EvalContext *s = priv;
+    AVFrame **in = s->frames;
+    int w = s->planewidth[p][n];
+    int h = s->planeheight[p][n];
+    int x = av_clip(dx, 0, w-1);
+    int y = av_clip(dy, 0, h-1);
+
+    if (s->depth == 8)
+        return in[n]->data[p][in[n]->linesize[p] * y + x];
+    else
+        return AV_RN16(&in[n]->data[p][in[n]->linesize[p] * y + x * 2]);
+}
+
+#define FUNCTIONS(N) \
+static double f##N##c0(void *priv, double x, double y) { return get_pixel(priv, x, y, N, 0); } \
+static double f##N##c1(void *priv, double x, double y) { return get_pixel(priv, x, y, N, 1); } \
+static double f##N##c2(void *priv, double x, double y) { return get_pixel(priv, x, y, N, 2); } \
+static double f##N##c3(void *priv, double x, double y) { return get_pixel(priv, x, y, N, 3); }
+
+FUNCTIONS(0)  FUNCTIONS(1)  FUNCTIONS(2)  FUNCTIONS(3)  FUNCTIONS(4)  FUNCTIONS(5)  FUNCTIONS(6)  FUNCTIONS(7)  FUNCTIONS(8)  FUNCTIONS(9)
+FUNCTIONS(10) FUNCTIONS(11) FUNCTIONS(12) FUNCTIONS(13) FUNCTIONS(14) FUNCTIONS(15) FUNCTIONS(16) FUNCTIONS(17) FUNCTIONS(18) FUNCTIONS(19)
+FUNCTIONS(20) FUNCTIONS(21) FUNCTIONS(22) FUNCTIONS(23) FUNCTIONS(24) FUNCTIONS(25) FUNCTIONS(26) FUNCTIONS(27) FUNCTIONS(28) FUNCTIONS(29)
+FUNCTIONS(30) FUNCTIONS(31) FUNCTIONS(32)
+
+static inline double get_sw(void *priv, double component)
+{
+    EvalContext *s = priv;
+    int p = av_clip(component, 0, s->nb_planes - 1);
+    return s->planewidth[p][0] / (double)s->planewidth[0][0];
+}
+
+static inline double get_sh(void *priv, double component)
+{
+    EvalContext *s = priv;
+    int p = av_clip(component, 0, s->nb_planes - 1);
+    return s->planeheight[p][0] / (double)s->planeheight[0][0];
+}
+
+static inline double get_iw(void *priv, double input)
+{
+    EvalContext *s = priv;
+    int n = av_clip(input, 0, s->nb_inputs - 1);
+    return s->planewidth[0][n];
+}
+
+static inline double get_ih(void *priv, double input)
+{
+    EvalContext *s = priv;
+    int n = av_clip(input, 0, s->nb_inputs - 1);
+    return s->planeheight[0][n];
+}
+
+static double (*functions1[])(void *, double) =
+{
+    get_sw, get_sh, get_iw, get_ih,
+    NULL
+};
+
+static double (*functions2[])(void *, double, double) =
+{
+    f0c0,  f0c1,  f0c2,  f0c3,  f1c0,  f1c1,  f1c2,  f1c3,
+    f2c0,  f2c1,  f2c2,  f2c3,  f3c0,  f3c1,  f3c2,  f3c3,
+    f4c0,  f4c1,  f4c2,  f4c3,  f5c0,  f5c1,  f5c2,  f5c3,
+    f6c0,  f6c1,  f6c2,  f6c3,  f7c0,  f7c1,  f7c2,  f7c3,
+    f8c0,  f8c1,  f8c2,  f8c3,  f9c0,  f9c1,  f9c2,  f9c3,
+    f10c0, f10c1, f10c2, f10c3, f11c0, f11c1, f11c2, f11c3,
+    f12c0, f12c1, f12c2, f12c3, f13c0, f13c1, f13c2, f13c3,
+    f14c0, f14c1, f14c2, f14c3, f15c0, f15c1, f15c2, f15c3,
+    f16c0, f16c1, f16c2, f16c3, f17c0, f17c1, f17c2, f17c3,
+    f18c0, f18c1, f18c2, f18c3, f19c0, f19c1, f19c2, f19c3,
+    f20c0, f20c1, f20c2, f20c3, f21c0, f21c1, f21c2, f21c3,
+    f22c0, f22c1, f22c2, f22c3, f23c0, f23c1, f23c2, f23c3,
+    f24c0, f24c1, f24c2, f24c3, f25c0, f25c1, f25c2, f25c3,
+    f26c0, f26c1, f26c2, f26c3, f27c0, f27c1, f27c2, f27c3,
+    f28c0, f28c1, f28c2, f28c3, f29c0, f29c1, f29c2, f29c3,
+    f30c0, f30c1, f30c2, f30c3, f31c0, f31c1, f31c2, f31c3,
+    f32c0, f32c1, f32c2, f32c3,
+    NULL
+};
+
+static int config_output(AVFilterLink *outlink)
+{
+    AVFilterContext *ctx = outlink->src;
+    EvalContext *s = ctx->priv;
+    AVRational frame_rate = ctx->inputs[0]->frame_rate;
+    AVFilterLink *inlink = ctx->inputs[0];
+    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
+    FFFrameSyncIn *in;
+    int ret;
+
+    for (int i = 1; i < s->nb_inputs; i++) {
+        if (ctx->inputs[i]->format != inlink->format) {
+            av_log(ctx, AV_LOG_ERROR, "Input %d format does not match input %d format.\n", i, 0);
+            return AVERROR(EINVAL);
+        }
+    }
+
+    outlink->format = inlink->format;
+    desc = av_pix_fmt_desc_get(outlink->format);
+    if (!desc)
+        return AVERROR_BUG;
+    s->nb_planes = av_pix_fmt_count_planes(outlink->format);
+    s->depth = desc->comp[0].depth;
+    s->max = (1 << s->depth) - 1;
+
+    switch (s->depth) {
+    case  8: s->eval_slice = eval_slice8;  break;
+    case  9: s->eval_slice = eval_slice9;  break;
+    case 10: s->eval_slice = eval_slice10; break;
+    case 12: s->eval_slice = eval_slice12; break;
+    case 14: s->eval_slice = eval_slice14; break;
+    case 16: s->eval_slice = eval_slice16; break;
+    }
+
+    if ((ret = av_image_fill_linesizes(s->linesize, inlink->format, s->w)) < 0)
+        return ret;
+
+    for (int i = 0; i < 4; i++) {
+        s->planeheight[i] = av_calloc(s->nb_inputs, sizeof(*s->planeheight[0]));
+        if (!s->planeheight[i])
+            return AVERROR(ENOMEM);
+        s->planewidth[i]  = av_calloc(s->nb_inputs, sizeof(*s->planewidth[0]));
+        if (!s->planewidth[i])
+            return AVERROR(ENOMEM);
+    }
+
+    for (int n = 0; n < s->nb_inputs; n++) {
+        s->planeheight[1][n] = s->planeheight[2][n] = AV_CEIL_RSHIFT(ctx->inputs[n]->h, desc->log2_chroma_h);
+        s->planeheight[0][n] = s->planeheight[3][n] = ctx->inputs[n]->h;
+        s->planewidth[1][n]  = s->planewidth[2][n]  = AV_CEIL_RSHIFT(ctx->inputs[n]->w, desc->log2_chroma_w);
+        s->planewidth[0][n]  = s->planewidth[3][n]  = ctx->inputs[n]->w;
+    }
+
+    s->outplaneheight[1] = s->outplaneheight[2] = AV_CEIL_RSHIFT(s->h, desc->log2_chroma_h);
+    s->outplaneheight[0] = s->outplaneheight[3] = s->h;
+    s->outplanewidth[1]  = s->outplanewidth[2]  = AV_CEIL_RSHIFT(s->w, desc->log2_chroma_w);
+    s->outplanewidth[0]  = s->outplanewidth[3]  = s->w;
+
+    outlink->w          = s->w;
+    outlink->h          = s->h;
+    outlink->frame_rate = frame_rate;
+    outlink->sample_aspect_ratio = s->sar;
+    outlink->time_base  = inlink->time_base;
+
+    s->var_names[VAR_DEPTH]  = av_asprintf("DEPTH");
+    if (!s->var_names[VAR_DEPTH])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_W]  = av_asprintf("W");
+    if (!s->var_names[VAR_W])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_H]  = av_asprintf("H");
+    if (!s->var_names[VAR_H])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_SW] = av_asprintf("SW");
+    if (!s->var_names[VAR_SW])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_SH] = av_asprintf("SH");
+    if (!s->var_names[VAR_SH])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_X]  = av_asprintf("X");
+    if (!s->var_names[VAR_X])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_Y]  = av_asprintf("Y");
+    if (!s->var_names[VAR_Y])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_CC]  = av_asprintf("CC");
+    if (!s->var_names[VAR_CC])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_C]  = av_asprintf("C");
+    if (!s->var_names[VAR_C])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_N]  = av_asprintf("N");
+    if (!s->var_names[VAR_N])
+        return AVERROR(ENOMEM);
+    s->var_names[VAR_T]  = av_asprintf("T");
+    if (!s->var_names[VAR_T])
+        return AVERROR(ENOMEM);
+
+    s->func1_names = av_calloc(4 + 1, sizeof(*s->func1_names));
+    if (!s->func1_names)
+        return AVERROR(ENOMEM);
+
+    s->func2_names = av_calloc(s->nb_planes * s->nb_inputs + 1, sizeof(*s->func2_names));
+    if (!s->func2_names)
+        return AVERROR(ENOMEM);
+
+    s->func1 = av_calloc(4 + 1, sizeof(*s->func1));
+    if (!s->func1)
+        return AVERROR(ENOMEM);
+
+    s->func2 = av_calloc(s->nb_planes * s->nb_inputs + 1, sizeof(*s->func2));
+    if (!s->func2)
+        return AVERROR(ENOMEM);
+
+    s->func1_names[0] = av_asprintf("sw");
+    if (!s->func1_names[0])
+        return AVERROR(ENOMEM);
+    s->func1_names[1] = av_asprintf("sh");
+    if (!s->func1_names[1])
+        return AVERROR(ENOMEM);
+    s->func1_names[2] = av_asprintf("iw");
+    if (!s->func1_names[2])
+        return AVERROR(ENOMEM);
+    s->func1_names[3] = av_asprintf("ih");
+    if (!s->func1_names[3])
+        return AVERROR(ENOMEM);
+
+    for (int i = 0; i < 4; i++) {
+        s->func1[i] = functions1[i];
+    }
+
+    for (int i = 0; i < s->nb_inputs; i++) {
+        for (int j = 0; j < s->nb_planes; j++) {
+            s->func2_names[i * s->nb_planes + j] = av_asprintf("f%dc%d", i, j);
+            if (!s->func2_names[i * s->nb_planes + j])
+                return AVERROR(ENOMEM);
+        }
+    }
+
+    for (int i = 0; i < s->nb_inputs; i++) {
+        for (int j = 0; j < s->nb_planes; j++) {
+            s->func2[i * s->nb_planes + j] = functions2[i * 4 + j];
+        }
+    }
+
+    for (int i = 0; i < 4; i++) {
+        if (!s->expr_str[i])
+            continue;
+        ret = av_expr_parse(&s->expr[i], s->expr_str[i], s->var_names,
+                            s->func1_names, s->func1, s->func2_names, s->func2, 0, ctx);
+        if (ret < 0)
+            continue;
+        s->process[i] = 1;
+    }
+
+    s->nb_threads = FFMIN(ff_filter_get_nb_threads(ctx), s->outplaneheight[1]);
+
+    s->var_values = av_calloc(s->nb_threads, sizeof(*s->var_values));
+    if (!s->var_values)
+        return AVERROR(ENOMEM);
+
+    for (int i = 0; i < s->nb_threads; i++) {
+        s->var_values[i] = av_calloc(VAR_NB, sizeof(double));
+        if (!s->var_values[i])
+            return AVERROR(ENOMEM);
+    }
+
+    if (s->nb_inputs == 1)
+        return 0;
+
+    if ((ret = ff_framesync_init(&s->fs, ctx, s->nb_inputs)) < 0)
+        return ret;
+
+    in = s->fs.in;
+    s->fs.opaque = s;
+    s->fs.on_event = process_frame;
+
+    for (int i = 0; i < s->nb_inputs; i++) {
+        AVFilterLink *inlink = ctx->inputs[i];
+
+        in[i].time_base = inlink->time_base;
+        in[i].sync   = 1;
+        in[i].before = EXT_STOP;
+        in[i].after  = EXT_STOP;
+    }
+
+    ret = ff_framesync_configure(&s->fs);
+    outlink->time_base = s->fs.time_base;
+
+    return ret;
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+    EvalContext *s = ctx->priv;
+
+    av_freep(&s->frames);
+    if (s->var_names) {
+        for (int i = 0; i < VAR_NB; i++)
+            av_freep(&s->var_names[i]);
+    }
+    av_freep(&s->var_names);
+
+    if (s->func1_names) {
+        for (int i = 0; i < 4; i++)
+            av_freep(&s->func1_names[i]);
+    }
+    av_freep(&s->func1_names);
+    av_freep(&s->func1);
+
+    if (s->func2_names) {
+        for (int i = 0; i < s->nb_inputs * s->nb_planes; i++)
+            av_freep(&s->func2_names[i]);
+    }
+    av_freep(&s->func2_names);
+    av_freep(&s->func2);
+
+    ff_framesync_uninit(&s->fs);
+
+    if (s->var_values) {
+        for (int i = 0; i < s->nb_threads; i++)
+            av_freep(&s->var_values[i]);
+    }
+    av_freep(&s->var_values);
+
+    for (int i = 0; i < 4; i++) {
+        av_expr_free(s->expr[i]);
+        s->expr[i] = NULL;
+    }
+
+    for (int i = 0; i < 4; i++) {
+        av_freep(&s->planewidth[i]);
+        av_freep(&s->planeheight[i]);
+    }
+
+    for (int i = 0; i < ctx->nb_inputs; i++)
+        av_freep(&ctx->input_pads[i].name);
+}
+
+static int activate(AVFilterContext *ctx)
+{
+    EvalContext *s = ctx->priv;
+
+    if (s->nb_inputs == 1) {
+        AVFilterLink *outlink = ctx->outputs[0];
+        AVFilterLink *inlink = ctx->inputs[0];
+        AVFrame *out = NULL;
+        int ret = 0, status;
+        int64_t pts;
+
+        FF_FILTER_FORWARD_STATUS_BACK_ALL(outlink, ctx);
+
+        if ((ret = ff_inlink_consume_frame(inlink, &s->frames[0])) > 0) {
+            out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+            if (!out) {
+                av_frame_free(&s->frames[0]);
+                return AVERROR(ENOMEM);
+            }
+
+            ctx->internal->execute(ctx, s->eval_slice, out, NULL, s->nb_threads);
+            out->pts = s->frames[0]->pts;
+
+            av_frame_free(&s->frames[0]);
+            ret = ff_filter_frame(outlink, out);
+        }
+
+        if (ret < 0) {
+            return ret;
+        } else if (ff_inlink_acknowledge_status(inlink, &status, &pts)) {
+            ff_outlink_set_status(outlink, status, pts);
+            return 0;
+        } else {
+            if (ff_outlink_frame_wanted(outlink))
+                ff_inlink_request_frame(inlink);
+            return 0;
+        }
+        return ret;
+    }
+    return ff_framesync_activate(&s->fs);
+}
+
+#define OFFSET(x) offsetof(EvalContext, x)
+#define FLAGS AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM
+
+static const AVOption eval_options[] = {
+    { "inputs", "set number of inputs",                OFFSET(nb_inputs),   AV_OPT_TYPE_INT,    {.i64=1}, 1,   33, .flags = FLAGS },
+    { "size",   "set output video size",               OFFSET(w),           AV_OPT_TYPE_IMAGE_SIZE, {.str="hd720"}, 0, 0, FLAGS },
+    { "sar",    "set video sample aspect ratio",       OFFSET(sar),         AV_OPT_TYPE_RATIONAL, {.dbl=1},  0, INT_MAX,   FLAGS },
+    { "expr0",  "set expression for first component",  OFFSET(expr_str[0]), AV_OPT_TYPE_STRING, {.str=0}, 0,    0, .flags = FLAGS },
+    { "expr1",  "set expression for second component", OFFSET(expr_str[1]), AV_OPT_TYPE_STRING, {.str=0}, 0,    0, .flags = FLAGS },
+    { "expr2",  "set expression for third component",  OFFSET(expr_str[2]), AV_OPT_TYPE_STRING, {.str=0}, 0,    0, .flags = FLAGS },
+    { "expr3",  "set expression for fourth component", OFFSET(expr_str[3]), AV_OPT_TYPE_STRING, {.str=0}, 0,    0, .flags = FLAGS },
+    { NULL },
+};
+
+static const AVFilterPad outputs[] = {
+    {
+        .name          = "default",
+        .type          = AVMEDIA_TYPE_VIDEO,
+        .config_props  = config_output,
+    },
+    { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(eval);
+
+AVFilter ff_vf_eval = {
+    .name          = "eval",
+    .description   = NULL_IF_CONFIG_SMALL("Filter video frames according to a specified expression."),
+    .priv_size     = sizeof(EvalContext),
+    .priv_class    = &eval_class,
+    .query_formats = query_formats,
+    .outputs       = outputs,
+    .init          = init,
+    .uninit        = uninit,
+    .activate      = activate,
+    .flags         = AVFILTER_FLAG_DYNAMIC_INPUTS | AVFILTER_FLAG_SLICE_THREADS,
+};