diff mbox series

[FFmpeg-devel,v2] avfilter: add libdewobble_opencl filter

Message ID 20210823141822.807649-1-daniel.playfair.cal@gmail.com
State New
Headers show
Series [FFmpeg-devel,v2] avfilter: add libdewobble_opencl filter | expand

Checks

Context Check Description
andriy/make_x86 success Make finished
andriy/make_fate_x86 success Make fate finished
andriy/make_ppc success Make finished
andriy/make_fate_ppc success Make fate finished

Commit Message

Daniel Playfair Cal Aug. 23, 2021, 2:18 p.m. UTC
All of the existing filters for video stabilization use an affine model
(or a limited version of it) to represent the movement of the camera. When
used with cameras with a very wide field of view and/or where the camera
shaking is severe, the corrections result in significant geometric
distortion ("wobbling").

Dewobble (https://git.sr.ht/~hedgepigdaniel/dewobble) is a library built
to solve this problem. It requires knowledge of the projection used by
the input camera, and it performs stabilization using a homography
model, which is limited to include only changes in camera orientation.
Additionally, it can perform projection change by specifying a different
projection for the output camera. This is more efficient and results in
less loss of information than using separate filters to perform
stabilization and projection change.
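For intuition, the projections differ in how a ray's angle from the optical axis maps to a distance from the focal point, which is also what ties the dfov options to an effective focal length. A rough sketch using the standard rectilinear and equidistant-fisheye formulas (a hypothetical helper for illustration; libdewobble's internal conventions may differ):

```python
import math

def focal_length_px(projection, diag_fov_deg, width, height):
    """Focal length in pixels implied by a diagonal field of view.

    Standard formulas: rectilinear maps r = f * tan(theta); equidistant
    fisheye maps r = f * theta. Illustration only -- libdewobble derives
    this internally from its projection/dfov parameters.
    """
    half_diag = math.hypot(width, height) / 2.0  # half image diagonal, px
    half_fov = math.radians(diag_fov_deg) / 2.0  # half diagonal FOV, rad
    if projection == "rect":
        return half_diag / math.tan(half_fov)
    if projection == "fish":
        return half_diag / half_fov
    raise ValueError(f"unknown projection: {projection}")

# A wide FOV implies a much shorter focal length under the fisheye model
# than under a rectilinear one, which is why stabilizing with the wrong
# model geometrically distorts ("wobbles") the output.
print(focal_length_px("fish", 145.8, 1920, 1080))
print(focal_length_px("rect", 145.8, 1920, 1080))
```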

The libdewobble_opencl filter is a wrapper for Dewobble. Dewobble supports
input and output in OpenCL buffers containing NV12 frames. Hence, the
filter is named libdewobble_opencl and has the same limitations. Currently
all of the options of Dewobble are supported. Of the two types of filter
available in Dewobble (FilterSync and FilterThreaded), FilterThreaded is
used. The API is synchronous, but the transformations are done in a
separate thread. The purpose of this is to isolate the global per-thread
OpenCL context used by OpenCV, which Dewobble uses internally. This
prevents libdewobble_opencl from interfering with any other usage of OpenCV
from within FFmpeg.
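Because the filter consumes and produces OpenCL hardware frames, a full command line needs explicit device setup plus upload/download steps around it. A hypothetical invocation (device index, option values and file names are placeholders; assumes a build configured with --enable-opencl, --enable-libdewobble, --enable-gpl and --enable-version3):

```shell
ffmpeg -init_hw_device opencl=ocl:0.0 -filter_hw_device ocl -i input.mp4 \
       -vf 'format=nv12,hwupload,libdewobble_opencl=in_p=fish:in_dfov=145.8:out_p=rect:out_dfov=100:stab=sg,hwdownload,format=nv12' \
       -c:v libx264 output.mp4
```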

Signed-off-by: Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
---

Changelog v2:
 - style improvements
 - rename from "dewobble_opencl" to "libdewobble_opencl"

I'm still confused as to why this filter should be prefixed with lib but
not others that wrap external libraries like lensfun, vidstabtransform,
vidstabdetect, etc. In any case, I've renamed as you requested.

I've addressed all the comments about style as well as doing a general
cleanup. Function arguments are arranged in a different way which
doesn't result in so many new lines.

---
 Changelog                           |    1 +
 LICENSE.md                          |    2 +-
 configure                           |    4 +
 doc/filters.texi                    |  149 ++++
 libavfilter/Makefile                |    1 +
 libavfilter/allfilters.c            |    1 +
 libavfilter/version.h               |    2 +-
 libavfilter/vf_libdewobble_opencl.c | 1273 +++++++++++++++++++++++++++
 8 files changed, 1431 insertions(+), 2 deletions(-)
 create mode 100644 libavfilter/vf_libdewobble_opencl.c

Comments

Paul B Mahol Aug. 23, 2021, 5:09 p.m. UTC | #1
On Mon, Aug 23, 2021 at 4:18 PM Daniel Playfair Cal <daniel.playfair.cal@gmail.com> wrote:

> All of the existing filters for video stabilization use an affine model
> (or a limited version of it) to represent the movement of the camera. When
> used with cameras with a very wide field of view and/or where the camera
> shaking is severe, the corrections result in significant geometric
> distortion ("wobbling").
>
> Dewobble (https://git.sr.ht/~hedgepigdaniel/dewobble) is a library built
> to solve this problem. It requires knowledge of the projection used by
> the input camera, and it performs stabilization using a homography
> model, which is limited to include only changes in camera orientation.
> Additionally, it can perform projection change by specifying a different
> projection for the output camera. This is more efficient and results in
> less loss of information than using separate filters to perform
> stabilization and projection change.
>
> The dewobble_opencl filter is a wrapper for Dewobble. Dewobble supports
> input and output in OpenCL buffers containing NV12 frames. Hence, the
> filter is named dewobble_opencl and has the same limitations. Currently
> all of the options of Dewobble are supported. Of the two types of filter
> available in Dewobble (FilterSync and FilterThreaded), FilterThreaded is
> used. The API is synchronous, but the transformations are done in a
> separate thread. The purpose of this is to isolate the global per thread
> OpenCL context used by OpenCV, which Dewobble uses internally. This
> prevents dewobble_opencl from interfering with any other usage of OpenCV
> from within FFmpeg.
>
> Signed-off-by: Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
> ---
>
> Changelog v2:
>  - style improvements
>  - rename from "dewobble_opencl" to "libdewobble_opencl"
>

library is named dewobble, thus filter should be libdewobble.



>
> I'm still confused as to why this filter should be prefixed with lib but
> not others that wrap external libraries like lensfun, vidstabtransform,
> vidstabdetect, etc. In any case, I've renamed as you requested.
>
> I've addressed all the comments about style as well as doing a general
> cleanup. Function arguments are arranged in a different way which
> doesn't result in so many new lines.
>
> ---
>  Changelog                           |    1 +
>  LICENSE.md                          |    2 +-
>  configure                           |    4 +
>  doc/filters.texi                    |  149 ++++
>  libavfilter/Makefile                |    1 +
>  libavfilter/allfilters.c            |    1 +
>  libavfilter/version.h               |    2 +-
>  libavfilter/vf_libdewobble_opencl.c | 1273 +++++++++++++++++++++++++++
>  8 files changed, 1431 insertions(+), 2 deletions(-)
>  create mode 100644 libavfilter/vf_libdewobble_opencl.c
>
> diff --git a/Changelog b/Changelog
> index 5a5b50eb66..a8d71ab4ee 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -11,6 +11,7 @@ version <next>:
>  - afwtdn audio filter
>  - audio and video segment filters
>  - Apple Graphics (SMC) encoder
> +- Dewobble filter
>

no, libdewobble video filter


>
>
>  version 4.4:
> diff --git a/LICENSE.md b/LICENSE.md
> index 613070e1b6..dfdf010d8e 100644
> --- a/LICENSE.md
> +++ b/LICENSE.md
> @@ -112,7 +112,7 @@ The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache
>  version 3 of those licenses. So to combine these libraries with FFmpeg, the
>  license version needs to be upgraded by passing `--enable-version3` to configure.
>
> -The smbclient library is under the GPL v3, to combine it with FFmpeg,
> +The dewobble and smbclient libraries are under the GPL v3, to combine them with FFmpeg,
>  the options `--enable-gpl` and `--enable-version3` have to be passed to
>  configure to upgrade FFmpeg to the GPL v3.
>
> diff --git a/configure b/configure
> index 9249254b70..60b3d3dbea 100755
> --- a/configure
> +++ b/configure
> @@ -230,6 +230,7 @@ External library support:
>    --enable-libdavs2        enable AVS2 decoding via libdavs2 [no]
>    --enable-libdc1394       enable IIDC-1394 grabbing using libdc1394
>                             and libraw1394 [no]
> +  --enable-libdewobble     enable video stabilization via libdewobble [no]
>    --enable-libfdk-aac      enable AAC de/encoding via libfdk-aac [no]
>    --enable-libflite        enable flite (voice synthesis) support via libflite [no]
>    --enable-libfontconfig   enable libfontconfig, useful for drawtext filter [no]
> @@ -1781,6 +1782,7 @@ EXTERNAL_LIBRARY_VERSION3_LIST="
>  "
>
>  EXTERNAL_LIBRARY_GPLV3_LIST="
> +    libdewobble
>      libsmbclient
>  "
>
> @@ -3606,6 +3608,7 @@ interlace_filter_deps="gpl"
>  kerndeint_filter_deps="gpl"
>  ladspa_filter_deps="ladspa libdl"
>  lensfun_filter_deps="liblensfun version3"
> +libdewobble_opencl_filter_deps="libdewobble opencl"
>  lv2_filter_deps="lv2"
>  mcdeint_filter_deps="avcodec gpl"
>  metadata_filter_deps="avformat"
> @@ -6406,6 +6409,7 @@ enabled libcodec2         && require libcodec2 codec2/codec2.h codec2_create -lc
> enabled libdav1d          && require_pkg_config libdav1d "dav1d >= 0.5.0" "dav1d/dav1d.h" dav1d_version
> enabled libdavs2          && require_pkg_config libdavs2 "davs2 >= 1.6.0" davs2.h davs2_decoder_open
> enabled libdc1394         && require_pkg_config libdc1394 libdc1394-2 dc1394/dc1394.h dc1394_new
> +enabled libdewobble       && require_pkg_config libdewobble dewobble dewobble/filter.h dewobble_filter_create_threaded
> enabled libdrm            && require_pkg_config libdrm libdrm xf86drm.h drmGetVersion
> enabled libfdk_aac        && { check_pkg_config libfdk_aac fdk-aac "fdk-aac/aacenc_lib.h" aacEncOpen ||
>                                { require libfdk_aac fdk-aac/aacenc_lib.h aacEncOpen -lfdk-aac &&
> diff --git a/doc/filters.texi b/doc/filters.texi
> index c84202cf85..f8f6528479 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -14129,6 +14129,155 @@ ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Ca
>
>  @end itemize
>
> +@section libdewobble_opencl
> +
> +Apply motion stabilization with awareness of lens projection and/or lens projection change using libdewobble (@url{https://git.sr.ht/~hedgepigdaniel/dewobble}).
> +
> +To enable compilation of this filter you need to configure FFmpeg with
> +@code{--enable-libdewobble}.
> +
> +This filter accepts the following options:
> +
> +@table @option
> +@item in_p
> +@item out_p
> +Set the lens projection model for the input and output.
> +
> +Available values are:
> +@table @samp
> +@item rect
> +Rectilinear projection.
> +
> +@item fish
> +Equidistant fisheye projection.
> +
> +@end table
> +
> +@item in_dfov
> +@item out_dfov
> +Diagonal field of view in degrees for the input and output.
> +
> +@item in_fx
> +@item in_fy
> +@item out_fx
> +@item out_fy
> +Location of the focal point in the input and output image.
> +Default value is the image centre in both cases.
> +
> +@item out_w
> +@item out_h
> +Dimensions of the output image.
> +Default value is the same as the input image.
> +
> +@item stab
> +Motion stabilization algorithm.
> +
> +Available values are:
> +@table @samp
> +@item fixed
> +Fix the camera orientation after the first frame.
> +
> +@item none
> +Do not apply stabilization.
> +
> +@item sg
> +Smooth the camera motion using a Savitzky-Golay filter.
> +
> +@end table
> +
> +Default value is @samp{sg}.
> +
> +@item stab_r
> +For Savitzky-Golay smoothing: the number of frames to look ahead and behind.
> +Higher values result in a smoother output camera path.
> +
> +Default value is 15.
> +
> +Higher values increase (OpenCL) memory usage.
> +
> +@item stab_h
> +For stabilization: the number of frames to look ahead to interpolate input camera rotation in frames where it cannot be detected.
> +
> +Default value is 30.
> +
> +Higher values increase (OpenCL) memory usage.
> +
> +@item interp
> +Pixel interpolation algorithm.
> +
> +Available values are:
> +@table @samp
> +@item nearest
> +Nearest neighbour interpolation (fast OpenCL implementation).
> +
> +@item linear
> +Bilinear interpolation (fast OpenCL implementation).
> +
> +@item cubic
> +Bicubic interpolation (CPU implementation).
> +
> +@item lanczos
> +Lanczos4 interpolation in an 8x8 neighbourhood (CPU implementation).
> +
> +@end table
> +
> +Default value is @samp{linear}.
> +
> +@item border
> +Border extrapolation algorithm (determines how to color pixels in the output that do not map to the input).
> +
> +Available values are:
> +@table @samp
> +@item constant
> +Constant color.
> +
> +@item reflect
> +Reflection of the input horizontally or vertically about the edge.
> +
> +@item reflect101
> +Reflection of the input horizontally or vertically about the point half a pixel from the edge.
> +
> +@item replicate
> +Replicate the pixel on the edge in a vertical or horizontal direction.
> +
> +@item wrap
> +Wrap around to the opposite side of the source image.
> +
> +@end table
> +
> +Default value is @samp{constant}.
> +
> +@item border_r
> +@item border_g
> +@item border_b
> +For @samp{constant} border, the color to fill with (red, green, blue components).
> +
> +Default value is black.
> +
> +@item debug
> +Include a suite of debugging information in the output.
> +
> +Default value is disabled.
> +
> +@end table
> +
> +@subsection Examples
> +
> +@itemize
> +@item
> +Apply motion stabilization to video from a popular action cam in a certain capture mode:
> +@example
> +ffmpeg -i INPUT -vf libdewobble_opencl=in_p=fish:in_dfov=145.8:out_p=fish:out_dfov=145.8:stab=sg OUTPUT
> +@end example
> +
> +@item
> +Apply stabilization and lens projection change:
> +@example
> +ffmpeg -i INPUT -vf libdewobble_opencl=in_p=fish:in_dfov=145.8:out_p=rect:out_dfov=145.8:stab=sg OUTPUT
> +@end example
> +
> +@end itemize
> +
>  @section libvmaf
>
>  Obtain the VMAF (Video Multi-Method Assessment Fusion)
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 102ce7beff..c9399f8f68 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -313,6 +313,7 @@ OBJS-$(CONFIG_KIRSCH_FILTER)                 += vf_convolution.o
>  OBJS-$(CONFIG_LAGFUN_FILTER)                 += vf_lagfun.o
>  OBJS-$(CONFIG_LENSCORRECTION_FILTER)         += vf_lenscorrection.o
>  OBJS-$(CONFIG_LENSFUN_FILTER)                += vf_lensfun.o
> +OBJS-$(CONFIG_LIBDEWOBBLE_OPENCL_FILTER)     += vf_libdewobble_opencl.o opencl.o
>  OBJS-$(CONFIG_LIBVMAF_FILTER)                += vf_libvmaf.o framesync.o
>  OBJS-$(CONFIG_LIMITER_FILTER)                += vf_limiter.o
>  OBJS-$(CONFIG_LOOP_FILTER)                   += f_loop.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 73040d2824..95be7cb568 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -298,6 +298,7 @@ extern const AVFilter ff_vf_kirsch;
>  extern const AVFilter ff_vf_lagfun;
>  extern const AVFilter ff_vf_lenscorrection;
>  extern const AVFilter ff_vf_lensfun;
> +extern const AVFilter ff_vf_libdewobble_opencl;
>  extern const AVFilter ff_vf_libvmaf;
>  extern const AVFilter ff_vf_limiter;
>  extern const AVFilter ff_vf_loop;
> diff --git a/libavfilter/version.h b/libavfilter/version.h
> index bcd27aa6e8..e9a76c5ac3 100644
> --- a/libavfilter/version.h
> +++ b/libavfilter/version.h
> @@ -30,7 +30,7 @@
>  #include "libavutil/version.h"
>
>  #define LIBAVFILTER_VERSION_MAJOR   8
> -#define LIBAVFILTER_VERSION_MINOR   3
> +#define LIBAVFILTER_VERSION_MINOR   4
>  #define LIBAVFILTER_VERSION_MICRO 100
>
>
> diff --git a/libavfilter/vf_libdewobble_opencl.c
> b/libavfilter/vf_libdewobble_opencl.c
> new file mode 100644
> index 0000000000..74c2940877
> --- /dev/null
> +++ b/libavfilter/vf_libdewobble_opencl.c
> @@ -0,0 +1,1273 @@
> +/*
> + * Copyright (c) 2021 Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
> + *
> + * This file is part of FFmpeg.
> + *
> + * This program is free software: you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation, either version 3 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <https://www.gnu.org/licenses/>.
> + */
> +#include <dewobble/camera.h>
> +#include <dewobble/filter.h>
> +#include <dewobble/stabilizer.h>
> +#include <float.h>
> +#include <pthread.h>
> +#include <signal.h>
> +
> +#include "libavutil/avassert.h"
> +#include "libavutil/common.h"
> +#include "libavutil/imgutils.h"
> +#include "libavutil/mem.h"
> +#include "libavutil/opt.h"
> +#include "libavutil/pixdesc.h"
> +#include "libavutil/thread.h"
> +
> +#include "avfilter.h"
> +#include "filters.h"
> +#include "internal.h"
> +#include "opencl.h"
> +#include "opencl_source.h"
> +#include "transpose.h"
> +#include "video.h"
> +
> +/**
> + * @file
> + * Apply motion stabilization with awareness of lens projection and/or change
> + * camera projection.
> + *
> + * This filter is essentially a wrapper around dewobble
> + * (https://git.sr.ht/~hedgepigdaniel/dewobble).
> + *
> + * @par Queued frames
> + *
> + * libdewobble requires a queue of frames before it can provide output because
> + * it looks ahead to calculate a smooth camera path and to interpolate camera
> + * positions from frames where it fails to detect motion. The number of queued
> + * frames required is determined by libdewobble.
> + *
> + * @par Hardware frame allocation
> + *
> + * Input OpenCL hardware frames contain `cl_image`s but these must be converted
> + * to `cl_buffer`s for libdewobble. Although the filter keeps a reference to
> + * the input frame until the output frame is sent, it unreferences the original
> + * hardware buffers immediately after copying them to a `cl_buffer` in
> + * `consume_input_frame`. This avoids OOM issues for example when using input
> + * frames mapped from VA-API hardware frames where there is a low limit for how
> + * many can be allocated at once. The filter only owns a single input/output
> + * hardware frame buffer at any time, although internally it allocates OpenCL
> + * buffers to store the contents of a queue of frames.
> + */
> +
> +/**
> + * Camera properties, mirroring those present in libdewobble's camera object.
> + */
> +typedef struct Camera {
> +    /**
> +     * Camera projection model, e.g. `DEWOBBLE_PROJECTION_RECTILINEAR`
> +     */
> +    int model;
> +
> +    /**
> +     * Camera diagonal field of view in degrees
> +     */
> +    double diagonal_fov;
> +
> +    /**
> +     * Width in pixels
> +     */
> +    int width;
> +
> +    /**
> +     * Height in pixels
> +     */
> +    int height;
> +
> +    /**
> +     * Horizontal coordinate of focal point in pixels
> +     */
> +    double focal_point_x;
> +
> +    /**
> +     * Vertical coordinate of focal point in pixels
> +     */
> +    double focal_point_y;
> +} Camera;
> +
> +/**
> + * Motion stabilization algorithm, mirroring those available in libdewobble.
> + */
> +typedef enum StabilizationAlgorithm {
> +
> +    /**
> +     * Do not apply stabilization
> +     */
> +    STABILIZATION_ALGORITHM_ORIGINAL,
> +
> +    /**
> +     * Keep the camera orientation fixed at its orientation in the first frame
> +     */
> +    STABILIZATION_ALGORITHM_FIXED,
> +
> +    /**
> +     * Smooth camera orientation with a Savitzky-Golay filter
> +     */
> +    STABILIZATION_ALGORITHM_SMOOTH,
> +
> +    /**
> +     * Number of stabilization algorithms
> +     */
> +    NB_STABILIZATION_ALGORITHMS,
> +
> +} StabilizationAlgorithm;
> +
>

Huh? Why are this and the similar stuff below not part of the library?


> +/**
> + * libdewobble_opencl filter context
> + */
> +typedef struct LibDewobbleOpenCLContext {
> +
> +    /**
> +     * Generic OpenCL filter context
> +     */
> +    OpenCLFilterContext ocf;
> +
> +    /**
> +     * OpenCL command queue
> +     */
> +    cl_command_queue command_queue;
> +
> +    /**
> +     * Input camera (projection, focal length, etc)
> +     */
> +    Camera input_camera;
> +
> +    /**
> +     * Output camera (projection, focal length, etc)
> +     */
> +    Camera output_camera;
> +
> +    /**
> +     * Stabilization algorithm applied by the filter
> +     * (@ref StabilizationAlgorithm)
> +     */
> +    int stabilization_algorithm;
> +
> +    /**
> +     * The number of frames to look ahead and behind for the purpose of
> +     * stabilizing each frame
> +     */
> +    int stabilization_radius;
> +
> +    /**
> +     * The number of frames to look ahead for the purpose of interpolating
> +     * frame rotation for frames where detection fails
> +     */
> +    int stabilization_horizon;
> +
> +    /**
> +     * The algorithm to interpolate the value between source image pixels
> +     * (e.g.\ `DEWOBBLE_INTERPOLATION_LINEAR`)
> +     */
> +    int interpolation_algorithm;
> +
> +    /**
> +     * The algorithm used to fill in unmapped areas of the output (e.g.\
> +     * `DEWOBBLE_BORDER_CONSTANT`)
> +     */
> +    int border_type;
> +
> +    /**
> +     * The color used to fill unmapped areas of the output when
> +     * @ref border_type is `DEWOBBLE_BORDER_CONSTANT`
> +     */
> +    double border_color[4];
> +
> +    /**
> +     * Whether to include debugging information in the output
> +     */
> +    int debug;
> +
> +    /**
> +     * Whether the filter has been initialized
> +     */
> +    int initialized;
> +
> +    /**
> +     * The status of the input link
> +     */
> +    int input_status;
> +
> +    /**
> +     * The time that the input status was reached
> +     */
> +    int64_t input_status_pts;
> +
> +    /**
> +     * Number of frame jobs currently in progress (read from inlink but not
> +     * yet sent to outlink)
> +     */
> +    int nb_frames_in_progress;
> +
> +    /**
> +     * Number of frames consumed so far
> +     */
> +    long nb_frames_consumed;
> +
> +    /**
> +     * The instance of libdewobble's filter
> +     */
> +    DewobbleFilter dewobble_filter;
> +
> +} LibDewobbleOpenCLContext;
> +
> +/**
> + * Convert degrees to radians.
> + * @param degrees the number of degrees
> + * @return the equivalent number of radians
> + */
> +static double degrees_to_radians(double degrees)
> +{
> +    return degrees * M_PI / 180;
> +}
> +
> +/**
> + * Initialize the libdewobble filter instance.
> + * @param avctx the filter context
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int init_libdewobble_filter(AVFilterContext *avctx)
> +{
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    DewobbleStabilizer stabilizer = NULL;
> +    DewobbleCamera input_camera = NULL, output_camera = NULL;
> +    DewobbleFilterConfig config = NULL;
> +
> +    input_camera = dewobble_camera_create(
> +        ctx->input_camera.model, degrees_to_radians(ctx->input_camera.diagonal_fov),
> +        ctx->input_camera.width, ctx->input_camera.height,
> +        ctx->input_camera.focal_point_x, ctx->input_camera.focal_point_y);
> +
> +    if (input_camera == NULL)
> +        goto fail;
> +
> +    output_camera = dewobble_camera_create(
> +        ctx->output_camera.model,
> +        degrees_to_radians(ctx->output_camera.diagonal_fov),
> +        ctx->output_camera.width, ctx->output_camera.height,
> +        ctx->output_camera.focal_point_x, ctx->output_camera.focal_point_y);
> +
> +    if (output_camera == NULL)
> +        goto fail;
> +
> +    switch (ctx->stabilization_algorithm) {
> +    case STABILIZATION_ALGORITHM_ORIGINAL:
> +        stabilizer = dewobble_stabilizer_create_none();
> +        break;
> +    case STABILIZATION_ALGORITHM_FIXED:
> +        stabilizer = dewobble_stabilizer_create_fixed(input_camera,
> +                                                      ctx->stabilization_horizon);
> +
> +        break;
> +    case STABILIZATION_ALGORITHM_SMOOTH:
> +        stabilizer = dewobble_stabilizer_create_savitzky_golay(
> +            input_camera, ctx->stabilization_radius, ctx->stabilization_horizon);
> +
> +        break;
> +    }
> +
> +    if (stabilizer == NULL)
> +        goto fail;
> +
> +    config = dewobble_filter_config_create(input_camera, output_camera, stabilizer);
> +
> +    dewobble_filter_config_set_opencl_context(config, ctx->ocf.hwctx->context);
> +    dewobble_filter_config_set_opencl_device(config, ctx->ocf.hwctx->device_id);
> +    dewobble_filter_config_set_interpolation(config, ctx->interpolation_algorithm);
> +    dewobble_filter_config_set_border_type(config, ctx->border_type);
> +    dewobble_filter_config_set_border_color(config, ctx->border_color);
> +    dewobble_filter_config_set_debug(config, ctx->debug);
> +
> +    ctx->dewobble_filter = dewobble_filter_create_threaded(config);
> +
> +    dewobble_filter_config_destroy(&config);
> +
> +    if (ctx->dewobble_filter == NULL)
> +        goto fail;
> +
> +    dewobble_stabilizer_destroy(&stabilizer);
> +
> +    return 0;
> +
> +fail:
> +    dewobble_stabilizer_destroy(&stabilizer);
> +    dewobble_camera_destroy(&input_camera);
> +    dewobble_camera_destroy(&output_camera);
> +
> +    return AVERROR(ENOMEM);
> +}
> +
> +/**
> + * Initialize the filter based on the options
> + * @param avctx the filter context
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int libdewobble_opencl_init(AVFilterContext *avctx)
> +{
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +
> +    av_log(avctx, AV_LOG_VERBOSE, "Init\n");
> +
> +    if (ctx->input_camera.model == DEWOBBLE_NB_PROJECTIONS
> +        || ctx->output_camera.model == DEWOBBLE_NB_PROJECTIONS) {
> +
> +        av_log(avctx, AV_LOG_ERROR, "both in_p and out_p must be set\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (ctx->input_camera.diagonal_fov == 0 || ctx->output_camera.diagonal_fov == 0) {
> +        av_log(avctx, AV_LOG_ERROR, "both in_dfov and out_dfov must be set\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (ctx->stabilization_algorithm == STABILIZATION_ALGORITHM_ORIGINAL)
> +        ctx->stabilization_horizon = 0;
> +
> +    return ff_opencl_filter_init(avctx);
> +}
> +
> +/**
> + * Clean up the filter on destruction.
> + * @param avctx the filter context
> + */
> +static void libdewobble_opencl_uninit(AVFilterContext *avctx)
> +{
> +    cl_int cle;
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +
> +    av_log(avctx, AV_LOG_VERBOSE, "Uninit\n");
> +
> +    if (ctx->command_queue) {
> +        cle = clReleaseCommandQueue(ctx->command_queue);
> +
> +        if (cle != CL_SUCCESS)
> +            av_log(avctx, AV_LOG_ERROR,
> +                   "Failed to release command queue: %d.\n", cle);
> +    }
> +
> +    dewobble_filter_destroy(&ctx->dewobble_filter);
> +    ff_opencl_filter_uninit(avctx);
> +}
> +
> +/**
> + * Perform further initialization of the filter when the first input
> frame is
> + * available.
> + * @param avctx the filter context
> + * @param first_frame the first input frame
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int libdewobble_opencl_frames_init(AVFilterContext *avctx, AVFrame *first_frame)
> +{
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    AVFilterLink *inlink = avctx->inputs[0];
> +    cl_int cle;
> +    int err;
> +
> +    if (first_frame->crop_top % 2 == 1 || first_frame->crop_bottom % 2 == 1
> +        || first_frame->crop_left % 2 == 1 || first_frame->crop_right % 2 == 1) {
> +
> +        av_log(avctx, AV_LOG_ERROR,
> +               "Cropping by an odd number of pixels is not supported!\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if ((first_frame->crop_top || first_frame->crop_bottom)
> +        && (ctx->output_camera.height == 0
> +            || ctx->output_camera.focal_point_y == DBL_MAX))
> +        av_log(avctx, AV_LOG_WARNING,
> +               "Input is vertically cropped, but output height or vertical "
> +               "focal point is not set. The default values are based on the "
> +               "uncropped input!\n");
> +
> +    if ((first_frame->crop_left || first_frame->crop_right)
> +        && (ctx->output_camera.width == 0
> +            || ctx->output_camera.focal_point_x == DBL_MAX))
> +        av_log(avctx, AV_LOG_WARNING,
> +               "Input is horizontally cropped, but output width or horizontal "
> +               "focal point is not set. The default values are based on the "
> +               "uncropped input!\n");
> +
> +    ctx->input_camera.width
> +        = inlink->w - first_frame->crop_left - first_frame->crop_right;
> +    ctx->input_camera.height
> +        = inlink->h - first_frame->crop_top - first_frame->crop_bottom;
> +
> +    /* Output camera dimensions must match the filter output */
> +    ctx->output_camera.width = ctx->ocf.output_width;
> +    ctx->output_camera.height = ctx->ocf.output_height;
> +
> +    /* Focal points default to the image center (disregarding cropping) */
> +    if (ctx->input_camera.focal_point_x == DBL_MAX)
> +        ctx->input_camera.focal_point_x
> +            = (inlink->w - 1) / 2.0 - first_frame->crop_left;
> +
> +    if (ctx->input_camera.focal_point_y == DBL_MAX)
> +        ctx->input_camera.focal_point_y
> +            = (inlink->h - 1) / 2.0 - first_frame->crop_top;
> +
> +    if (ctx->output_camera.focal_point_x == DBL_MAX)
> +        ctx->output_camera.focal_point_x = (ctx->output_camera.width - 1) / 2.0;
> +
> +    if (ctx->output_camera.focal_point_y == DBL_MAX)
> +        ctx->output_camera.focal_point_y = (ctx->output_camera.height - 1) / 2.0;
> +
> +    ctx->command_queue = clCreateCommandQueue(ctx->ocf.hwctx->context,
> +                                              ctx->ocf.hwctx->device_id, 0, &cle);
> +
> +    if (cle) {
> +        av_log(avctx, AV_LOG_ERROR,
> +               "Failed to create OpenCL command queue %d.\n", cle);
> +        return AVERROR(EIO);
> +    }
> +
> +    err = init_libdewobble_filter(avctx);
> +    if (err) {
> +        av_log(avctx, AV_LOG_ERROR,
> +               "Failed to initialise libdewobble filter %d.\n", err);
> +        return AVERROR(EIO);
> +    }
> +
> +    ctx->initialized = 1;
> +
> +    return 0;
> +}
> +
> +/**
> + * Perform initialization based on the input filter link.
> + * @param inlink the input filter link
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int libdewobble_opencl_config_input(AVFilterLink *inlink)
> +{
> +    AVFilterContext *avctx = inlink->dst;
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    int ret;
> +
> +    ret = ff_opencl_filter_config_input(inlink);
> +
> +    if (ret < 0)
> +        return ret;
> +
> +    if (ctx->ocf.output_format != AV_PIX_FMT_NV12) {
> +        av_log(avctx, AV_LOG_ERROR, "Only NV12 input is supported!\n");
> +        return AVERROR(ENOSYS);
> +    }
> +
> +    if (inlink->w % 2 == 1 || inlink->h % 2 == 1) {
> +        av_log(avctx, AV_LOG_ERROR, "Input with odd dimensions is not supported!\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (ctx->output_camera.width % 2 == 1 || ctx->output_camera.height % 2 == 1) {
> +        av_log(avctx, AV_LOG_ERROR, "Output camera must have even dimensions!\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    /* Output dimensions default to the input dimensions (disregarding cropping) */
> +    ctx->ocf.output_width
> +        = ctx->output_camera.width ? ctx->output_camera.width : inlink->w;
> +
> +    ctx->ocf.output_height
> +        = ctx->output_camera.height ? ctx->output_camera.height : inlink->h;
> +
> +    return 0;
> +}
> +
> +/**
> + * Copy the contents of an input frame to an OpenCL buffer.
> + * @param avctx the filter context
> + * @param context the OpenCL context to use
> + * @param command_queue the OpenCL command queue to use
> + * @param frame the input @ref AVFrame
> + * @param input_buffer the OpenCL buffer to copy the frame into
> + * @return 0 on success, otherwise a negative error code
> + */
> +static cl_int copy_frame_to_buffer(AVFilterContext *avctx, cl_context context,
> +                                   cl_command_queue command_queue,
> +                                   AVFrame *frame, cl_mem input_buffer)
> +{
> +    int err;
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    cl_mem luma = (cl_mem)frame->data[0];
> +    cl_mem chroma = (cl_mem)frame->data[1];
> +    cl_int cle = 0;
> +    size_t src_luma_origin[3] = { frame->crop_left, frame->crop_top, 0 };
> +
> +    size_t src_chroma_origin[3] = {
> +        frame->crop_left / 2,
> +        frame->crop_top / 2,
> +        0,
> +    };
> +
> +    size_t luma_region[3] = {
> +        ctx->input_camera.width,
> +        ctx->input_camera.height,
> +        1,
> +    };
> +
> +    size_t chroma_region[3] = {
> +        ctx->input_camera.width / 2,
> +        ctx->input_camera.height / 2,
> +        1,
> +    };
> +
> +    cl_event copy_finished[2];
> +
> +    cle = clEnqueueCopyImageToBuffer(command_queue, luma, input_buffer,
> +                                     src_luma_origin, luma_region, 0, 0, NULL,
> +                                     &copy_finished[0]);
> +
> +    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
> +                     "Failed to enqueue copy luma image to buffer: %d\n", cle);
> +
> +    cle = clEnqueueCopyImageToBuffer(
> +        command_queue, chroma, input_buffer, src_chroma_origin, chroma_region,
> +        ctx->input_camera.width * ctx->input_camera.height * 1, 0, NULL,
> +        &copy_finished[1]);
> +
> +    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
> +                     "Failed to enqueue copy chroma image to buffer: %d\n", cle);
> +
> +    cle = clWaitForEvents(2, copy_finished);
> +
> +    CL_FAIL_ON_ERROR(AVERROR(EINVAL), "Failed to copy images to buffer: %d\n", cle);
> +
> +    return 0;
> +
> +fail:
> +    return err;
> +}
> +
> +/**
> + * Copy the contents of an OpenCL buffer to an output frame.
> + * @param avctx the filter context
> + * @param buffer the OpenCL buffer
> + * @param output_frame the output frame
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int copy_buffer_to_frame(AVFilterContext *avctx, cl_mem buffer,
> +                                AVFrame *output_frame)
> +{
> +    int err;
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    cl_mem luma = (cl_mem)output_frame->data[0];
> +    cl_mem chroma = (cl_mem)output_frame->data[1];
> +    cl_int cle = 0;
> +    size_t dst_origin[3] = { 0, 0, 0 };
> +    size_t luma_region[3] = { output_frame->width, output_frame->height, 1 };
> +
> +    size_t chroma_region[3] = {
> +        output_frame->width / 2,
> +        output_frame->height / 2,
> +        1,
> +    };
> +
> +    cl_event copy_finished[2];
> +
> +    cle = clEnqueueCopyBufferToImage(ctx->command_queue, buffer, luma, 0,
> +                                     dst_origin, luma_region, 0, NULL,
> +                                     &copy_finished[0]);
> +
> +    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
> +                     "Failed to enqueue copy buffer to luma image: %d\n", cle);
> +
> +    cle = clEnqueueCopyBufferToImage(ctx->command_queue, buffer, chroma,
> +                                     output_frame->width * output_frame->height * 1,
> +                                     dst_origin, chroma_region, 0, NULL,
> +                                     &copy_finished[1]);
> +
> +    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
> +                     "Failed to enqueue copy buffer to chroma image: %d\n", cle);
> +
> +    cle = clWaitForEvents(2, copy_finished);
> +    CL_FAIL_ON_ERROR(AVERROR(EINVAL), "Failed to copy buffer to images: %d\n", cle);
> +
> +    return 0;
> +
> +fail:
> +    return err;
> +}
> +
> +/**
> + * Consume an input frame and push it to the libdewobble filter.
> + * @param avctx the filter context
> + * @param input_frame the input frame
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int consume_input_frame(AVFilterContext *avctx, AVFrame *input_frame)
> +{
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    cl_mem input_buffer;
> +    int err = 0;
> +    cl_int cle;
> +
> +    if (!input_frame->hw_frames_ctx)
> +        return AVERROR(EINVAL);
> +
> +    if (!ctx->initialized) {
> +        av_log(avctx, AV_LOG_VERBOSE, "Initializing\n");
> +        err = libdewobble_opencl_frames_init(avctx, input_frame);
> +
> +        if (err < 0)
> +            return err;
> +    }
> +
> +    input_buffer
> +        = dewobble_filter_get_input_frame_buffer(ctx->dewobble_filter, &cle);
> +
> +    CL_FAIL_ON_ERROR(AVERROR(ENOMEM), "Failed to create buffer: %d\n", cle);
> +
> +    err = copy_frame_to_buffer(avctx, ctx->ocf.hwctx->context,
> +                               ctx->command_queue, input_frame, input_buffer);
> +
> +    if (err)
> +        goto fail;
> +
> +    /* Free original input frame buffers */
> +    for (int i = 0; input_frame->buf[i] != NULL; i++)
> +        av_buffer_unref(&input_frame->buf[i]);
> +
> +    dewobble_filter_push_frame(ctx->dewobble_filter, input_buffer,
> +                               (void *)input_frame);
> +
> +    ctx->nb_frames_in_progress += 1;
> +    ctx->nb_frames_consumed += 1;
> +
> +    return 0;
> +
> +fail:
> +    return err;
> +}
> +
> +/**
> + * Create and send an output frame using an output buffer pulled from the
> + * libdewobble filter.
> + * @param avctx the filter context
> + * @return 0 on success, otherwise a negative error code.
> + */
> +static int send_output_frame(AVFilterContext *avctx)
> +{
> +    AVFilterLink *inlink = avctx->inputs[0];
> +    AVFilterLink *outlink = avctx->outputs[0];
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    AVFrame *input_frame;
> +    AVFrame *output_frame = NULL;
> +    cl_mem output_buffer = NULL, input_buffer;
> +    int err;
> +
> +    dewobble_filter_pull_frame(ctx->dewobble_filter, &output_buffer,
> +                               &input_buffer, (void **)&input_frame);
> +
> +    dewobble_filter_release_input_frame_buffer(ctx->dewobble_filter, &input_buffer);
> +
> +    output_frame = ff_get_video_buffer(outlink, outlink->w, outlink->h);
> +    if (output_frame == NULL) {
> +        err = AVERROR(ENOMEM);
> +        goto fail;
> +    }
> +
> +    err = av_frame_copy_props(output_frame, input_frame);
> +    if (err)
> +        goto fail;
> +
> +    output_frame->crop_top = 0;
> +    output_frame->crop_bottom = 0;
> +    output_frame->crop_left = 0;
> +    output_frame->crop_right = 0;
> +
> +    err = copy_buffer_to_frame(avctx, output_buffer, output_frame);
> +    if (err)
> +        goto fail;
> +
> +    dewobble_filter_release_output_frame_buffer(ctx->dewobble_filter, &output_buffer);
> +
> +    av_log(avctx, AV_LOG_VERBOSE, "Sending output frame %ld (%d in progress)\n",
> +           ctx->nb_frames_consumed - ctx->nb_frames_in_progress,
> +           ctx->nb_frames_in_progress);
> +
> +    ctx->nb_frames_in_progress -= 1;
> +
> +    err = ff_filter_frame(outlink, output_frame);
> +    output_frame = NULL; /* ff_filter_frame() takes ownership, even on error */
> +    if (err < 0)
> +        goto fail;
> +
> +    if (!dewobble_filter_frame_ready(ctx->dewobble_filter))
> +        ff_inlink_request_frame(inlink);
> +
> +    if (ctx->input_status && ctx->nb_frames_in_progress == 0) {
> +        av_log(avctx, AV_LOG_VERBOSE, "Output reached EOF\n");
> +        ff_outlink_set_status(outlink, ctx->input_status, ctx->input_status_pts);
> +    }
> +
> +    av_frame_free(&input_frame);
> +
> +    return 0;
> +
> +fail:
> +    av_frame_free(&input_frame);
> +    av_log(avctx, AV_LOG_ERROR, "Failed to send output frame: %d\n", err);
> +    av_frame_free(&output_frame);
> +
> +    return err;
> +}
> +
> +/**
> + * Attempt to consume an input frame, and push it to the libdewobble filter
> + * if one is available.
> + * @param avctx the filter context
> + * @return 0 on success, otherwise a negative error code
> + */
> +static int try_consume_input_frame(AVFilterContext *avctx)
> +{
> +    AVFilterLink *inlink = avctx->inputs[0];
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    int err = 0;
> +    AVFrame *input_frame;
> +
> +    /* If necessary, attempt to consume a frame from the input */
> +    if (!ctx->initialized || !dewobble_filter_frame_ready(ctx->dewobble_filter)) {
> +        err = ff_inlink_consume_frame(inlink, &input_frame);
> +        if (err < 0) {
> +            av_log(avctx, AV_LOG_ERROR, "Failed to read input frame\n");
> +
> +            return err;
> +        } else if (err > 0) {
> +            av_log(avctx, AV_LOG_VERBOSE,
> +                   "Consuming input frame %ld (%d in progress)\n",
> +                   ctx->nb_frames_consumed, ctx->nb_frames_in_progress);
> +
> +            err = consume_input_frame(avctx, input_frame);
> +            if (err) {
> +                av_log(avctx, AV_LOG_ERROR,
> +                       "Failed to consume input frame: %d\n", err);
> +
> +                return err;
> +            }
> +        }
> +    }
> +
> +    return err;
> +}
> +
> +/**
> + * Read the input status and update the filter state and output status as
> + * appropriate.
> + * @param avctx the filter context
> + */
> +static void check_input_status(AVFilterContext *avctx)
> +{
> +    AVFilterLink *inlink = avctx->inputs[0];
> +    AVFilterLink *outlink = avctx->outputs[0];
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +
> +    /* Check for end of input */
> +    if (!ctx->input_status
> +        && ff_inlink_acknowledge_status(inlink, &ctx->input_status,
> +                                        &ctx->input_status_pts)) {
> +
> +        if (ctx->input_status == AVERROR_EOF) {
> +            av_log(avctx, AV_LOG_VERBOSE, "Reached input EOF\n");
> +
> +            dewobble_filter_end_input(ctx->dewobble_filter);
> +        } else
> +            av_log(avctx, AV_LOG_ERROR, "Input status: %d\n", ctx->input_status);
> +
> +        if (ctx->nb_frames_in_progress == 0) {
> +            av_log(avctx, AV_LOG_VERBOSE, "Sending output EOF\n");
> +
> +            ff_outlink_set_status(outlink, ctx->input_status, ctx->input_status_pts);
> +        }
> +    }
> +}
> +
> +/**
> + * Perform some work to advance the filtering process
> + * @param avctx the filter context
> + * @return 0 if progress was made, otherwise a negative error code
> + */
> +static int activate(AVFilterContext *avctx)
> +{
> +    LibDewobbleOpenCLContext *ctx = avctx->priv;
> +    AVFilterLink *inlink = avctx->inputs[0];
> +    AVFilterLink *outlink = avctx->outputs[0];
> +    int err = 0;
> +
> +    /* Forward any output status to input */
> +    err = ff_outlink_get_status(outlink);
> +    if (err) {
> +        av_log(avctx, AV_LOG_VERBOSE, "forwarding status to inlink: %d\n", err);
> +        ff_inlink_set_status(inlink, err);
> +        return 0;
> +    }
> +
> +    /* Consume an input frame if possible */
> +    err = try_consume_input_frame(avctx);
> +    if (err) {
> +        av_log(avctx, AV_LOG_ERROR, "try_consume_input_frame failed: %d\n", err);
> +        return 0;
> +    }
> +
> +    /* Check input status, including detecting EOF */
> +    check_input_status(avctx);
> +
> +    /* If possible, send an output frame */
> +    if (dewobble_filter_frame_ready(ctx->dewobble_filter)) {
> +        err = send_output_frame(avctx);
> +        if (err < 0) {
> +            av_log(avctx, AV_LOG_ERROR, "send_output_frame failed: %d\n", err);
> +            goto fail;
> +        }
> +    }
> +
> +    /* Schedule the next activation */
> +    if (ff_inlink_check_available_frame(inlink))
> +        /* Immediately, if input frames are still queued */
> +        ff_filter_set_ready(avctx, 1);
> +    else if (dewobble_filter_frame_ready(ctx->dewobble_filter))
> +        /* Immediately, if output frames are ready */
> +        ff_filter_set_ready(avctx, 1);
> +    else
> +        /* Otherwise when more input frames are ready */
> +        ff_inlink_request_frame(inlink);
> +
> +    return FFERROR_NOT_READY;
> +
> +fail:
> +    ff_outlink_set_status(outlink, AVERROR_UNKNOWN, 0);
> +    return err;
> +}
> +
> +/**
> + * Get the offset of a member in @ref LibDewobbleOpenCLContext
> + */
> +#define OFFSET(x) offsetof(LibDewobbleOpenCLContext, x)
> +
> +/**
> + * Get the offset of a member in @ref Camera
> + */
> +#define OFFSET_CAMERA(x) offsetof(Camera, x)
> +
> +#define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)
> +
> +static const AVOption libdewobble_opencl_options[] = {
> +    /* Input camera options */
> +    {
> +        "in_p",
> +        "input camera projection model",
> +        OFFSET(input_camera) + OFFSET_CAMERA(model),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = DEWOBBLE_PROJECTION_EQUIDISTANT_FISHEYE },
> +        0,
> +        DEWOBBLE_NB_PROJECTIONS - 1,
> +        FLAGS,
> +        "model",
> +    },
> +    {
> +        "in_dfov",
> +        "input camera diagonal field of view in degrees",
> +        OFFSET(input_camera) + OFFSET_CAMERA(diagonal_fov),
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = 0 },
> +        0,
> +        DBL_MAX,
> +        .flags = FLAGS,
> +    },
> +    {
> +        "in_fx",
> +        "horizontal coordinate of focal point in input camera (default: "
> +        "center)",
> +        OFFSET(input_camera) + OFFSET_CAMERA(focal_point_x),
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = DBL_MAX },
> +        -DBL_MAX,
> +        DBL_MAX,
> +        .flags = FLAGS,
> +    },
> +    {
> +        "in_fy",
> +        "vertical coordinate of focal point in input camera (default: center)",
> +        OFFSET(input_camera) + OFFSET_CAMERA(focal_point_y),
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = DBL_MAX },
> +        -DBL_MAX,
> +        DBL_MAX,
> +        .flags = FLAGS,
> +    },
> +
> +    /* Output camera options */
> +    {
> +        "out_p",
> +        "output camera projection model",
> +        OFFSET(output_camera) + OFFSET_CAMERA(model),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = DEWOBBLE_PROJECTION_RECTILINEAR },
> +        0,
> +        DEWOBBLE_NB_PROJECTIONS - 1,
> +        FLAGS,
> +        "model",
> +    },
> +    {
> +        "out_dfov",
> +        "output camera diagonal field of view in degrees",
> +        OFFSET(output_camera) + OFFSET_CAMERA(diagonal_fov),
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = 0 },
> +        0,
> +        DBL_MAX,
> +        .flags = FLAGS,
> +    },
> +    {
> +        "out_w",
> +        "output camera width in pixels (default: same as input)",
> +        OFFSET(output_camera) + OFFSET_CAMERA(width),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = 0 },
> +        0,
> +        SHRT_MAX,
> +        .flags = FLAGS,
> +    },
> +    {
> +        "out_h",
> +        "output camera height in pixels (default: same as input)",
> +        OFFSET(output_camera) + OFFSET_CAMERA(height),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = 0 },
> +        0,
> +        SHRT_MAX,
> +        .flags = FLAGS,
> +    },
> +    {
> +        "out_fx",
> +        "horizontal coordinate of focal point in output camera "
> +        "(default: center)",
> +        OFFSET(output_camera) + OFFSET_CAMERA(focal_point_x),
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = DBL_MAX },
> +        -DBL_MAX,
> +        DBL_MAX,
> +        .flags = FLAGS,
> +    },
> +    {
> +        "out_fy",
> +        "vertical coordinate of focal point in output camera "
> +        "(default: center)",
> +        OFFSET(output_camera) + OFFSET_CAMERA(focal_point_y),
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = DBL_MAX },
> +        -DBL_MAX,
> +        DBL_MAX,
> +        .flags = FLAGS,
> +    },
> +
> +    /* Stabilization options */
> +    {
> +        "stab",
> +        "camera orientation stabilization algorithm",
> +        OFFSET(stabilization_algorithm),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = STABILIZATION_ALGORITHM_SMOOTH },
> +        0,
> +        NB_STABILIZATION_ALGORITHMS - 1,
> +        FLAGS,
> +        "stab",
> +    },
> +    {
> +        "stab_r",
> +        "for Savitzky-Golay smoothing: the number of frames "
> +        "to look ahead and behind",
> +        OFFSET(stabilization_radius),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = 15 },
> +        1,
> +        INT_MAX,
> +        FLAGS,
> +    },
> +    {
> +        "stab_h",
> +        "for stabilization: the number of frames to look "
> +        "ahead to interpolate rotation in frames where it cannot be detected",
> +        OFFSET(stabilization_horizon),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = 30 },
> +        0,
> +        INT_MAX,
> +        FLAGS,
> +    },
> +
> +    /* General options */
> +    {
> +        "interp",
> +        "interpolation algorithm",
> +        OFFSET(interpolation_algorithm),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = DEWOBBLE_INTERPOLATION_LINEAR },
> +        0,
> +        DEWOBBLE_NB_INTERPOLATIONS - 1,
> +        FLAGS,
> +        "interpolation",
> +    },
> +    {
> +        "border",
> +        "border fill mode",
> +        OFFSET(border_type),
> +        AV_OPT_TYPE_INT,
> +        { .i64 = DEWOBBLE_BORDER_CONSTANT },
> +        0,
> +        DEWOBBLE_NB_BORDER_TYPES - 1,
> +        FLAGS,
> +        "border_type",
> +    },
> +    {
> +        "border_r",
> +        "border fill color (red component)",
> +        OFFSET(border_color) + sizeof(double) * 2,
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = 0 },
> +        0,
> +        255,
> +        FLAGS,
> +    },
> +    {
> +        "border_g",
> +        "border fill color (green component)",
> +        OFFSET(border_color) + sizeof(double) * 1,
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = 0 },
> +        0,
> +        255,
> +        FLAGS,
> +    },
> +    {
> +        "border_b",
> +        "border fill color (blue component)",
> +        OFFSET(border_color) + sizeof(double) * 0,
> +        AV_OPT_TYPE_DOUBLE,
> +        { .dbl = 0 },
> +        0,
> +        255,
> +        FLAGS,
> +    },
> +    {
> +        "debug",
> +        "whether to include debugging information in the output",
> +        OFFSET(debug),
> +        AV_OPT_TYPE_BOOL,
> +        { .i64 = 0 },
> +        0,
> +        1,
> +        FLAGS,
> +    },
> +
> +    /* Camera models */
> +    {
> +        "rect",
> +        "rectilinear projection",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_PROJECTION_RECTILINEAR },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "model",
> +    },
> +    {
> +        "fish",
> +        "equidistant fisheye projection",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_PROJECTION_EQUIDISTANT_FISHEYE },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "model",
> +    },
> +
> +    /* Stabilization algorithms */
> +    {
> +        "fixed",
> +        "fix the camera orientation after the first frame",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = STABILIZATION_ALGORITHM_FIXED },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "stab",
> +    },
> +    {
> +        "none",
> +        "do not apply stabilization",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = STABILIZATION_ALGORITHM_ORIGINAL },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "stab",
> +    },
> +    {
> +        "sg",
> +        "smooth the camera orientation using a Savitzky-Golay filter",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = STABILIZATION_ALGORITHM_SMOOTH },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "stab",
> +    },
> +
> +    /* Interpolation algorithms */
> +    {
> +        "nearest",
> +        "nearest neighbour interpolation (fast)",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_INTERPOLATION_NEAREST },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "interpolation",
> +    },
> +    {
> +        "linear",
> +        "bilinear interpolation (fast)",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_INTERPOLATION_LINEAR },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "interpolation",
> +    },
> +    {
> +        "cubic",
> +        "bicubic interpolation (medium)",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_INTERPOLATION_CUBIC },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "interpolation",
> +    },
> +    {
> +        "lanczos",
> +        "Lanczos4, in an 8x8 neighbourhood (slow)",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_INTERPOLATION_LANCZOS4 },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "interpolation",
> +    },
> +
> +    /* Border fill algorithms */
> +    {
> +        "constant",
> +        "constant color (default black)",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_BORDER_CONSTANT },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "border_type",
> +    },
> +    {
> +        "reflect",
> +        "reflection of the input about the edge",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_BORDER_REFLECT },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "border_type",
> +    },
> +    {
> +        "reflect101",
> +        "reflection of the input about the middle of the pixel on the edge",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_BORDER_REFLECT_101 },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "border_type",
> +    },
> +    {
> +        "replicate",
> +        "replicate the pixel on the edge",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_BORDER_REPLICATE },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "border_type",
> +    },
> +    {
> +        "wrap",
> +        "wrap around to the opposite side of the source image",
> +        0,
> +        AV_OPT_TYPE_CONST,
> +        { .i64 = DEWOBBLE_BORDER_WRAP },
> +        INT_MIN,
> +        INT_MAX,
> +        FLAGS,
> +        "border_type",
> +    },
> +    { NULL },
> +};
> +
> +AVFILTER_DEFINE_CLASS(libdewobble_opencl);
> +
> +static const AVFilterPad inputs[] = {
> +    {
> +        .name = "default",
> +        .type = AVMEDIA_TYPE_VIDEO,
> +        .config_props = &libdewobble_opencl_config_input,
> +    },
> +};
> +
> +static const AVFilterPad outputs[] = {
> +    {
> +        .name = "default",
> +        .type = AVMEDIA_TYPE_VIDEO,
> +        .config_props = &ff_opencl_filter_config_output,
> +    },
> +};
> +
> +const AVFilter ff_vf_libdewobble_opencl = {
> +    .name = "libdewobble_opencl",
> +    .description = NULL_IF_CONFIG_SMALL(
> +        "apply motion stabilization with awareness of camera projection "
>

Apply ....

> +        "and/or change camera projection"),
> +    .priv_size = sizeof(LibDewobbleOpenCLContext),
> +    .priv_class = &libdewobble_opencl_class,
> +    .init = &libdewobble_opencl_init,
> +    .uninit = &libdewobble_opencl_uninit,
> +    .query_formats = &ff_opencl_filter_query_formats,
> +    FILTER_INPUTS(inputs),
> +    FILTER_OUTPUTS(outputs),
> +    .activate = activate,
> +    .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
> +};
>


I really dislike GPL3 and use of opencv.
I would prefer a pure C solution.


> --
> 2.33.0
>
>
Daniel Playfair Cal Aug. 24, 2021, 2:43 a.m. UTC | #2
On Tue, Aug 24, 2021 at 3:09 AM Paul B Mahol <onemda@gmail.com> wrote:
> library is named dewobble, thus filter should be libdewobble.

Lynne suggested "libdewobble_opencl". I can rename it to "libdewobble"
if you prefer, but will everyone be happy with that?

Perhaps it will be easier if you explain the convention and your
reasoning for the name. All the other filters which work with OpenCL
hardware frames are postfixed with "_opencl". All the other filters
which wrap external libraries are not prefixed with "lib" (with the
exception of "libvmaf"). Why is this filter different in both senses?

> no, libdewobble video filter

OK, I will update this to match the name of the filter, whatever that is.

>> +
>> +/**
>> + * Camera properties, mirroring those present in libdewobble's camera object.
>> + */
>> +typedef struct Camera {
>> +    /**
>> +     * Camera projection model, e.g. `DEWOBBLE_PROJECTION_RECTILINEAR`
>> +     */
>> +    int model;
>> +
>> +    /**
>> +     * Camera diagonal field of view in degrees
>> +     */
>> +    double diagonal_fov;
>> +
>> +    /**
>> +     * Width in pixels
>> +     */
>> +    int width;
>> +
>> +    /**
>> +     * Height in pixels
>> +     */
>> +    int height;
>> +
>> +    /**
>> +     * Horizontal coordinate of focal point in pixels
>> +     */
>> +    double focal_point_x;
>> +
>> +    /**
>> +     * Vertical coordinate of focal point in pixels
>> +     */
>> +    double focal_point_y;
>> +} Camera;
>> +
>> +/**
>> + * Motion stabilization algorithm, mirroring those available in libdewobble.
>> + */
>> +typedef enum StabilizationAlgorithm {
>> +
>> +    /**
>> +     * Do not apply stabilization
>> +     */
>> +    STABILIZATION_ALGORITHM_ORIGINAL,
>> +
>> +    /**
>> +     * Keep the camera orientation fixed at its orientation in the first frame
>> +     */
>> +    STABILIZATION_ALGORITHM_FIXED,
>> +
>> +    /**
>> +     * Smooth camera orientation with a Savitzky-Golay filter
>> +     */
>> +    STABILIZATION_ALGORITHM_SMOOTH,
>> +
>> +    /**
>> +     * Number of stabilization algorithms
>> +     */
>> +    NB_STABILIZATION_ALGORITHMS,
>> +
>> +} StabilizationAlgorithm;
>> +
>
>
> Huh? Why this and below similar stuff are not part of library?


Because the FFmpeg filter options work by writing values to offsets in
memory, but this is not supported in libdewobble. So there are
intermediate structs/enums in the filter context where options are
written to, which are later passed to the libdewobble API in the
appropriate way. e.g. see the use of dewobble_filter_config_create()
and stabilization_algorithm.

> Apply ....

Ok, will fix.

> I really dislike GPL3 and use of opencv.
> I would prefer pure C solution.

You are welcome to use the same ideas and write a native C filter with
LGPL license if you want. But there will be a lot more code to write.
OpenCV provides sparse optical flow, RANSAC, warping with 5
interpolation algorithms, text/shape drawing utilities, etc. And most
of these with OpenCL acceleration or optimized multithreaded CPU
implementations.

Use of OpenCV is an implementation detail of libdewobble. OpenCV is
not used directly in the filter, and no OpenCV objects are passed to
or received from libdewobble.
Mapul Bhola Aug. 24, 2021, 1:42 p.m. UTC | #3
August 23, 2021 10:43 PM, "Daniel Playfair Cal" <daniel.playfair.cal@gmail.com> wrote:

> On Tue, Aug 24, 2021 at 3:09 AM Paul B Mahol <onemda@gmail.com> wrote:
> 
>> library is named dewobble, thus filter should be libdewobble.
> 
> Lynne suggested "libdewobble_opencl". I can rename it to "libdewobble"
> if you prefer, but will everyone be happy with that?
> 
> Perhaps it will be easier if you explain the convention and your
> reasoning for the name. All the other filters which work with OpenCL
> hardware frames are postfixed with "_opencl". All the other filters
> which wrap external libraries are not prefixed with "lib" (with the
> exception of "libvmaf"). Why is this filter different in both senses?
> 
>> no, libdewobble video filter
> 
> OK, I will update this to match the name of the filter, whatever that is.
> 
>>> +
>>> +/**
>>> + * Camera properties, mirroring those present in libdewobble's camera object.
>>> + */
>>> +typedef struct Camera {
>>> + /**
>>> + * Camera projection model, e.g. `DEWOBBLE_PROJECTION_RECTILINEAR`
>>> + */
>>> + int model;
>>> +
>>> + /**
>>> + * Camera diagonal field of view in degrees
>>> + */
>>> + double diagonal_fov;
>>> +
>>> + /**
>>> + * Width in pixels
>>> + */
>>> + int width;
>>> +
>>> + /**
>>> + * Height in pixels
>>> + */
>>> + int height;
>>> +
>>> + /**
>>> + * Horizontal coordinate of focal point in pixels
>>> + */
>>> + double focal_point_x;
>>> +
>>> + /**
>>> + * Vertical coordinate of focal point in pixels
>>> + */
>>> + double focal_point_y;
>>> +} Camera;
>>> +
>>> +/**
>>> + * Motion stabilization algorithm, mirroring those available in libdewobble.
>>> + */
>>> +typedef enum StabilizationAlgorithm {
>>> +
>>> + /**
>>> + * Do not apply stabilization
>>> + */
>>> + STABILIZATION_ALGORITHM_ORIGINAL,
>>> +
>>> + /**
>>> + * Keep the camera orientation fixed at its orientation in the first frame
>>> + */
>>> + STABILIZATION_ALGORITHM_FIXED,
>>> +
>>> + /**
>>> + * Smooth camera orientation with a Savitzky-Golay filter
>>> + */
>>> + STABILIZATION_ALGORITHM_SMOOTH,
>>> +
>>> + /**
>>> + * Number of stabilization algorithms
>>> + */
>>> + NB_STABILIZATION_ALGORITHMS,
>>> +
>>> +} StabilizationAlgorithm;
>>> +
>> 
>> Huh? Why this and below similar stuff are not part of library?
> 
> Because the FFmpeg filter options work by writing values to offsets in
> memory, but this is not supported in libdewobble. So there are
> intermediate structs/enums in the filter context where options are
> written to, which are later passed to the libdewobble API in the
> appropriate way. e.g. see the use of dewobble_filter_config_create()
> and stabilization_algorithm.
> 
>> Apply ....
> 
> Ok, will fix.
> 
>> I really dislike GPL3 and use of opencv.
>> I would prefer pure C solution.
> 
> You are welcome to use the same ideas and write a native C filter with
> LGPL license if you want. But there will be a lot more code to write.
> OpenCV provides sparse optical flow, RANSAC, warping with 5
> interpolation algorithms, text/shape drawing utilities, etc. And most
> of these with OpenCL acceleration or optimized multithreaded CPU
> implementations.
> 
> Use of OpenCV is an implementation detail of libdewobble. OpenCV is
> not used directly in the filter, and no OpenCV objects are passed to
> or received from libdewobble.

I agree with Mahol here. It's good to make sure all the code in FFmpeg meets a certain quality.
I thought there were OpenCV filters in ffmpeg already?

And if you are the writer of this plugin as well, you should consider relicensing it to LGPL for its usage in ffmpeg.
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
Daniel Playfair Cal Aug. 26, 2021, 1:33 p.m. UTC | #4
On Tue, Aug 24, 2021 at 11:42 PM Mapul Bhola
<ffmpegandmahanstreamer@e.email> wrote:

> I agree with Mahol here. It's good to make sure all the code in FFmpeg meets a certain quality.
> I thought there were OpenCV filters in ffmpeg already?


I'm more than happy to address any code quality issues in this filter
before it goes into FFmpeg - please let me know if you see something
specific that I haven't addressed.

I don't think using OpenCV is a quality issue. OpenCV is an
established and well maintained library, and the specific algorithms
used from it in Dewobble are well used and tested. Within Dewobble
there are some functionalities that are implemented natively and
others that use OpenCV implementations. My choices have depended on
how much specialisation is needed and the relative difficulty of
implementing the algorithms. For example, I wrote my own OpenCL
kernels to build the final map for warping and for colour conversion.
But I didn't write my own implementation of warping/interpolation,
Shi-Tomasi corner detection, Lucas-Kanade optical flow, RANSAC, etc.
There is no point IMO making such a huge effort to recreate what is
already there in OpenCV, unless there is a good reason to think the
result would be better. There are also alternative algorithms for
video stabilization, many of which are also implemented in OpenCV. So
it's easy to experiment with different methods without having to
implement complex computer vision algorithms each time.

For those parts that don't change so much, which are customised more,
or which could be done better than by using OpenCV, I will probably
slowly implement algorithms in Dewobble (and any help is welcome).

And yes, there is an existing filter "ocv" (vf_libopencv.c) which
wraps a very specific set of functionalities in OpenCV, from its image
filtering category. These functionalities are unrelated to this filter
or to Dewobble. There is also a filter "deshake_opencl" which doesn't
depend on OpenCV but contains code copied from some of its OpenCL
kernels.

> And if you are the writer of this plugin as well, you should consider relicensing it to LGPL for its usage in ffmpeg.

I considered this, but I've decided to license Dewobble as GPL. I
realise this prevents it from being used in closed-source or
permissively licensed software that links to it, but it remains
usable for end users of FFmpeg and within other GPL software. FFmpeg
already has build infrastructure to support this, as well as multiple
existing filters which are also licensed under GPL.
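Putting the configure and LICENSE.md changes from the patch together, a build of FFmpeg with this filter enabled would look roughly like the following (a sketch; extra flags such as install prefixes depend on your setup):

```shell
# libdewobble is GPLv3, so FFmpeg itself must be upgraded to GPLv3;
# the filter also requires OpenCL support.
./configure \
    --enable-gpl \
    --enable-version3 \
    --enable-opencl \
    --enable-libdewobble
make -j"$(nproc)"
```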
diff mbox series

Patch

diff --git a/Changelog b/Changelog
index 5a5b50eb66..a8d71ab4ee 100644
--- a/Changelog
+++ b/Changelog
@@ -11,6 +11,7 @@  version <next>:
 - afwtdn audio filter
 - audio and video segment filters
 - Apple Graphics (SMC) encoder
+- Dewobble filter
 
 
 version 4.4:
diff --git a/LICENSE.md b/LICENSE.md
index 613070e1b6..dfdf010d8e 100644
--- a/LICENSE.md
+++ b/LICENSE.md
@@ -112,7 +112,7 @@  The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache
 version 3 of those licenses. So to combine these libraries with FFmpeg, the
 license version needs to be upgraded by passing `--enable-version3` to configure.
 
-The smbclient library is under the GPL v3, to combine it with FFmpeg,
+The dewobble and smbclient libraries are under the GPL v3, to combine them with FFmpeg,
 the options `--enable-gpl` and `--enable-version3` have to be passed to
 configure to upgrade FFmpeg to the GPL v3.
 
diff --git a/configure b/configure
index 9249254b70..60b3d3dbea 100755
--- a/configure
+++ b/configure
@@ -230,6 +230,7 @@  External library support:
   --enable-libdavs2        enable AVS2 decoding via libdavs2 [no]
   --enable-libdc1394       enable IIDC-1394 grabbing using libdc1394
                            and libraw1394 [no]
+  --enable-libdewobble     enable video stabilization via libdewobble [no]
   --enable-libfdk-aac      enable AAC de/encoding via libfdk-aac [no]
   --enable-libflite        enable flite (voice synthesis) support via libflite [no]
   --enable-libfontconfig   enable libfontconfig, useful for drawtext filter [no]
@@ -1781,6 +1782,7 @@  EXTERNAL_LIBRARY_VERSION3_LIST="
 "
 
 EXTERNAL_LIBRARY_GPLV3_LIST="
+    libdewobble
     libsmbclient
 "
 
@@ -3606,6 +3608,7 @@  interlace_filter_deps="gpl"
 kerndeint_filter_deps="gpl"
 ladspa_filter_deps="ladspa libdl"
 lensfun_filter_deps="liblensfun version3"
+libdewobble_opencl_filter_deps="libdewobble opencl"
 lv2_filter_deps="lv2"
 mcdeint_filter_deps="avcodec gpl"
 metadata_filter_deps="avformat"
@@ -6406,6 +6409,7 @@  enabled libcodec2         && require libcodec2 codec2/codec2.h codec2_create -lc
 enabled libdav1d          && require_pkg_config libdav1d "dav1d >= 0.5.0" "dav1d/dav1d.h" dav1d_version
 enabled libdavs2          && require_pkg_config libdavs2 "davs2 >= 1.6.0" davs2.h davs2_decoder_open
 enabled libdc1394         && require_pkg_config libdc1394 libdc1394-2 dc1394/dc1394.h dc1394_new
+enabled libdewobble       && require_pkg_config libdewobble dewobble dewobble/filter.h dewobble_filter_create_threaded
 enabled libdrm            && require_pkg_config libdrm libdrm xf86drm.h drmGetVersion
 enabled libfdk_aac        && { check_pkg_config libfdk_aac fdk-aac "fdk-aac/aacenc_lib.h" aacEncOpen ||
                                { require libfdk_aac fdk-aac/aacenc_lib.h aacEncOpen -lfdk-aac &&
diff --git a/doc/filters.texi b/doc/filters.texi
index c84202cf85..f8f6528479 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -14129,6 +14129,155 @@  ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Ca
 
 @end itemize
 
+@section libdewobble_opencl
+
+Apply motion stabilization with awareness of lens projection and/or lens projection change using libdewobble (@url{https://git.sr.ht/~hedgepigdaniel/dewobble}).
+
+To enable compilation of this filter you need to configure FFmpeg with
+@code{--enable-libdewobble}.
+
+This filter accepts the following options:
+
+@table @option
+@item in_p
+@item out_p
+Set the lens projection model for the input and output.
+
+Available values are:
+@table @samp
+@item rect
+Rectilinear projection.
+
+@item fish
+Equidistant fisheye projection.
+
+@end table
+
+@item in_dfov
+@item out_dfov
+Diagonal field of view in degrees for the input and output.
+
+@item in_fx
+@item in_fy
+@item out_fx
+@item out_fy
+Location of the focal point in the input and output image.
+Default value is the image centre in both cases.
+
+@item out_w
+@item out_h
+Dimensions of the output image.
+Default value is the same as the input image.
+
+@item stab
+Motion stabilization algorithm.
+
+Available values are:
+@table @samp
+@item fixed
+Fix the camera orientation after the first frame.
+
+@item none
+Do not apply stabilization.
+
+@item sg
+Smooth the camera motion using a Savitzky-Golay filter.
+
+@end table
+
+Default value is @samp{sg}.
+
+@item stab_r
+For Savitzky-Golay smoothing: the number of frames to look ahead and behind.
+Higher values result in a smoother output camera path.
+
+Default value is 15.
+
+Higher values increase (OpenCL) memory usage.
+
+@item stab_h
+For stabilization: the number of frames to look ahead to interpolate input camera rotation in frames where it cannot be detected.
+
+Default value is 30.
+
+Higher values increase (OpenCL) memory usage.
+
+@item interp
+Pixel interpolation algorithm.
+
+Available values are:
+@table @samp
+@item nearest
+Nearest neighbour interpolation (fast OpenCL implementation).
+
+@item linear
+Bilinear interpolation (fast OpenCL implementation).
+
+@item cubic
+Bicubic interpolation (CPU implementation).
+
+@item lanczos
+Lanczos4 interpolation in an 8x8 neighbourhood (CPU implementation).
+
+@end table
+
+Default value is @samp{linear}.
+
+@item border
+Border extrapolation algorithm (determines how to color pixels in the output that do not map to the input).
+
+Available values are:
+@table @samp
+@item constant
+Constant color.
+
+@item reflect
+Reflection of the input horizontally or vertically about the edge.
+
+@item reflect101
+Reflection of the input horizontally or vertically about the point half a pixel from the edge.
+
+@item replicate
+Replicate the pixel on the edge in a vertical or horizontal direction.
+
+@item wrap
+Wrap around to the opposite side of the source image.
+
+@end table
+
+Default value is @samp{constant}.
+
+@item border_r
+@item border_g
+@item border_b
+For @samp{constant} border, the color to fill with (red, green, blue components).
+
+Default value is black.
+
+@item debug
+Include a suite of debugging information in the output.
+
+Default value is disabled.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply motion stabilization to video from a popular action cam in a certain capture mode:
+@example
+ffmpeg -i INPUT -vf libdewobble_opencl=in_p=fish:in_dfov=145.8:out_p=fish:out_dfov=145.8:stab=sg OUTPUT
+@end example
+
+@item
+Apply stabilization and lens projection change:
+@example
+ffmpeg -i INPUT -vf libdewobble_opencl=in_p=fish:in_dfov=145.8:out_p=rect:out_dfov=145.8:stab=sg OUTPUT
+@end example
+
+@end itemize
+
 @section libvmaf
 
 Obtain the VMAF (Video Multi-Method Assessment Fusion)
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 102ce7beff..c9399f8f68 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -313,6 +313,7 @@  OBJS-$(CONFIG_KIRSCH_FILTER)                 += vf_convolution.o
 OBJS-$(CONFIG_LAGFUN_FILTER)                 += vf_lagfun.o
 OBJS-$(CONFIG_LENSCORRECTION_FILTER)         += vf_lenscorrection.o
 OBJS-$(CONFIG_LENSFUN_FILTER)                += vf_lensfun.o
+OBJS-$(CONFIG_LIBDEWOBBLE_OPENCL_FILTER)     += vf_libdewobble_opencl.o opencl.o
 OBJS-$(CONFIG_LIBVMAF_FILTER)                += vf_libvmaf.o framesync.o
 OBJS-$(CONFIG_LIMITER_FILTER)                += vf_limiter.o
 OBJS-$(CONFIG_LOOP_FILTER)                   += f_loop.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 73040d2824..95be7cb568 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -298,6 +298,7 @@  extern const AVFilter ff_vf_kirsch;
 extern const AVFilter ff_vf_lagfun;
 extern const AVFilter ff_vf_lenscorrection;
 extern const AVFilter ff_vf_lensfun;
+extern const AVFilter ff_vf_libdewobble_opencl;
 extern const AVFilter ff_vf_libvmaf;
 extern const AVFilter ff_vf_limiter;
 extern const AVFilter ff_vf_loop;
diff --git a/libavfilter/version.h b/libavfilter/version.h
index bcd27aa6e8..e9a76c5ac3 100644
--- a/libavfilter/version.h
+++ b/libavfilter/version.h
@@ -30,7 +30,7 @@ 
 #include "libavutil/version.h"
 
 #define LIBAVFILTER_VERSION_MAJOR   8
-#define LIBAVFILTER_VERSION_MINOR   3
+#define LIBAVFILTER_VERSION_MINOR   4
 #define LIBAVFILTER_VERSION_MICRO 100
 
 
diff --git a/libavfilter/vf_libdewobble_opencl.c b/libavfilter/vf_libdewobble_opencl.c
new file mode 100644
index 0000000000..74c2940877
--- /dev/null
+++ b/libavfilter/vf_libdewobble_opencl.c
@@ -0,0 +1,1273 @@ 
+/*
+ * Copyright (c) 2021 Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
+ *
+ * This file is part of FFmpeg.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <https://www.gnu.org/licenses/>.
+ */
+#include <dewobble/camera.h>
+#include <dewobble/filter.h>
+#include <dewobble/stabilizer.h>
+#include <float.h>
+#include <pthread.h>
+#include <signal.h>
+
+#include "libavutil/avassert.h"
+#include "libavutil/common.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/mem.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/thread.h"
+
+#include "avfilter.h"
+#include "filters.h"
+#include "internal.h"
+#include "opencl.h"
+#include "opencl_source.h"
+#include "transpose.h"
+#include "video.h"
+
+/**
+ * @file
+ * Apply motion stabilization with awareness of lens projection and/or
+ * change the camera projection.
+ *
+ * This filter is essentially a wrapper around dewobble
+ * (https://git.sr.ht/~hedgepigdaniel/dewobble).
+ *
+ * @par Queued frames
+ *
+ * libdewobble requires a queue of frames before it can provide output because
+ * it looks ahead to calculate a smooth camera path and to interpolate camera
+ * positions from frames where it fails to detect motion. The number of queued
+ * frames required is determined by libdewobble.
+ *
+ * @par Hardware frame allocation
+ *
+ * Input OpenCL hardware frames contain `cl_image`s but these must be converted
+ * to `cl_buffer`s for libdewobble. Although the filter keeps a reference to
+ * the input frame until the output frame is sent, it unreferences the original
+ * hardware buffers immediately after copying them to a `cl_buffer` in
+ * `consume_input_frame`. This avoids OOM issues for example when using input
+ * frames mapped from VA-API hardware frames where there is a low limit for how
+ * many can be allocated at once. The filter only owns a single input/output
+ * hardware frame buffer at any time, although internally it allocates OpenCL
+ * buffers to store the contents of a queue of frames.
+ */
+
+/**
+ * Camera properties, mirroring those present in libdewobble's camera object.
+ */
+typedef struct Camera {
+    /**
+     * Camera projection model, e.g. `DEWOBBLE_PROJECTION_RECTILINEAR`
+     */
+    int model;
+
+    /**
+     * Camera diagonal field of view in degrees
+     */
+    double diagonal_fov;
+
+    /**
+     * Width in pixels
+     */
+    int width;
+
+    /**
+     * Height in pixels
+     */
+    int height;
+
+    /**
+     * Horizontal coordinate of focal point in pixels
+     */
+    double focal_point_x;
+
+    /**
+     * Vertical coordinate of focal point in pixels
+     */
+    double focal_point_y;
+} Camera;
+
+/**
+ * Motion stabilization algorithm, mirroring those available in libdewobble.
+ */
+typedef enum StabilizationAlgorithm {
+
+    /**
+     * Do not apply stabilization
+     */
+    STABILIZATION_ALGORITHM_ORIGINAL,
+
+    /**
+     * Keep the camera orientation fixed at its orientation in the first frame
+     */
+    STABILIZATION_ALGORITHM_FIXED,
+
+    /**
+     * Smooth camera orientation with a Savitzky-Golay filter
+     */
+    STABILIZATION_ALGORITHM_SMOOTH,
+
+    /**
+     * Number of stabilization algorithms
+     */
+    NB_STABILIZATION_ALGORITHMS,
+
+} StabilizationAlgorithm;
+
+/**
+ * libdewobble_opencl filter context
+ */
+typedef struct LibDewobbleOpenCLContext {
+
+    /**
+     * Generic OpenCL filter context
+     */
+    OpenCLFilterContext ocf;
+
+    /**
+     * OpenCL command queue
+     */
+    cl_command_queue command_queue;
+
+    /**
+     * Input camera (projection, focal length, etc)
+     */
+    Camera input_camera;
+
+    /**
+     * Output camera (projection, focal length, etc)
+     */
+    Camera output_camera;
+
+    /**
+     * Stabilization algorithm applied by the filter
+     * (@ref StabilizationAlgorithm)
+     */
+    int stabilization_algorithm;
+
+    /**
+     * The number of frames to look ahead and behind for the purpose of
+     * stabilizing each frame
+     */
+    int stabilization_radius;
+
+    /**
+     * The number of frames to look ahead for the purpose of interpolating
+     * frame rotation for frames where detection fails
+     */
+    int stabilization_horizon;
+
+    /**
+     * The algorithm to interpolate the value between source image pixels
+     * (e.g.\ `DEWOBBLE_INTERPOLATION_LINEAR`)
+     */
+    int interpolation_algorithm;
+
+    /**
+     * The algorithm used to fill in unmapped areas of the output (e.g.\
+     * `DEWOBBLE_BORDER_CONSTANT`)
+     */
+    int border_type;
+
+    /**
+     * The color used to fill unmapped areas of the output when
+     * @ref border_type is `DEWOBBLE_BORDER_CONSTANT`
+     */
+    double border_color[4];
+
+    /**
+     * Whether to include debugging information in the output
+     */
+    int debug;
+
+    /**
+     * Whether the filter has been initialized
+     */
+    int initialized;
+
+    /**
+     * The status of the input link
+     */
+    int input_status;
+
+    /**
+     * The time that the input status was reached
+     */
+    int64_t input_status_pts;
+
+    /**
+     * Number of frame jobs currently in progress (read from inlink but not
+     * yet sent to outlink)
+     */
+    int nb_frames_in_progress;
+
+    /**
+     * Number of frames consumed so far
+     */
+    long nb_frames_consumed;
+
+    /**
+     * The instance of libdewobble's filter
+     */
+    DewobbleFilter dewobble_filter;
+
+} LibDewobbleOpenCLContext;
+
+/**
+ * Convert degrees to radians.
+ * @param degrees the number of degrees
+ * @return the equivalent number of radians
+ */
+static double degrees_to_radians(double degrees)
+{
+    return degrees * M_PI / 180;
+}
+
+/**
+ * Initialize the libdewobble filter instance.
+ * @param avctx the filter context
+ * @return 0 on success, otherwise a negative error code
+ */
+static int init_libdewobble_filter(AVFilterContext *avctx)
+{
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    DewobbleStabilizer stabilizer = NULL;
+    DewobbleCamera input_camera = NULL, output_camera = NULL;
+    DewobbleFilterConfig config = NULL;
+
+    input_camera = dewobble_camera_create(
+        ctx->input_camera.model, degrees_to_radians(ctx->input_camera.diagonal_fov),
+        ctx->input_camera.width, ctx->input_camera.height,
+        ctx->input_camera.focal_point_x, ctx->input_camera.focal_point_y);
+
+    if (input_camera == NULL)
+        goto fail;
+
+    output_camera = dewobble_camera_create(
+        ctx->output_camera.model,
+        degrees_to_radians(ctx->output_camera.diagonal_fov),
+        ctx->output_camera.width, ctx->output_camera.height,
+        ctx->output_camera.focal_point_x, ctx->output_camera.focal_point_y);
+
+    if (output_camera == NULL)
+        goto fail;
+
+    switch (ctx->stabilization_algorithm) {
+    case STABILIZATION_ALGORITHM_ORIGINAL:
+        stabilizer = dewobble_stabilizer_create_none();
+        break;
+    case STABILIZATION_ALGORITHM_FIXED:
+        stabilizer = dewobble_stabilizer_create_fixed(input_camera,
+                                                      ctx->stabilization_horizon);
+
+        break;
+    case STABILIZATION_ALGORITHM_SMOOTH:
+        stabilizer = dewobble_stabilizer_create_savitzky_golay(
+            input_camera, ctx->stabilization_radius, ctx->stabilization_horizon);
+
+        break;
+    }
+
+    if (stabilizer == NULL)
+        goto fail;
+
+    config = dewobble_filter_config_create(input_camera, output_camera, stabilizer);
+
+    dewobble_filter_config_set_opencl_context(config, ctx->ocf.hwctx->context);
+    dewobble_filter_config_set_opencl_device(config, ctx->ocf.hwctx->device_id);
+    dewobble_filter_config_set_interpolation(config, ctx->interpolation_algorithm);
+    dewobble_filter_config_set_border_type(config, ctx->border_type);
+    dewobble_filter_config_set_border_color(config, ctx->border_color);
+    dewobble_filter_config_set_debug(config, ctx->debug);
+
+    ctx->dewobble_filter = dewobble_filter_create_threaded(config);
+
+    dewobble_filter_config_destroy(&config);
+
+    if (ctx->dewobble_filter == NULL)
+        goto fail;
+
+    dewobble_stabilizer_destroy(&stabilizer);
+
+    return 0;
+
+fail:
+    dewobble_stabilizer_destroy(&stabilizer);
+    dewobble_camera_destroy(&input_camera);
+    dewobble_camera_destroy(&output_camera);
+
+    return AVERROR(ENOMEM);
+}
+
+/**
+ * Initialize the filter based on the options
+ * @param avctx the filter context
+ * @return 0 on success, otherwise a negative error code
+ */
+static int libdewobble_opencl_init(AVFilterContext *avctx)
+{
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+
+    av_log(avctx, AV_LOG_VERBOSE, "Init\n");
+
+    if (ctx->input_camera.model == DEWOBBLE_NB_PROJECTIONS
+        || ctx->output_camera.model == DEWOBBLE_NB_PROJECTIONS) {
+
+        av_log(avctx, AV_LOG_ERROR, "both in_p and out_p must be set\n");
+        return AVERROR(EINVAL);
+    }
+
+    if (ctx->input_camera.diagonal_fov == 0 || ctx->output_camera.diagonal_fov == 0) {
+        av_log(avctx, AV_LOG_ERROR, "both in_dfov and out_dfov must be set\n");
+        return AVERROR(EINVAL);
+    }
+
+    if (ctx->stabilization_algorithm == STABILIZATION_ALGORITHM_ORIGINAL)
+        ctx->stabilization_horizon = 0;
+
+    return ff_opencl_filter_init(avctx);
+}
+
+/**
+ * Clean up the filter on destruction.
+ * @param avctx the filter context
+ */
+static void libdewobble_opencl_uninit(AVFilterContext *avctx)
+{
+    cl_int cle;
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+
+    av_log(avctx, AV_LOG_VERBOSE, "Uninit\n");
+
+    if (ctx->command_queue) {
+        cle = clReleaseCommandQueue(ctx->command_queue);
+
+        if (cle != CL_SUCCESS)
+            av_log(avctx, AV_LOG_ERROR,
+                   "Failed to release command queue: %d.\n", cle);
+    }
+
+    dewobble_filter_destroy(&ctx->dewobble_filter);
+    ff_opencl_filter_uninit(avctx);
+}
+
+/**
+ * Perform further initialization of the filter when the first input frame is
+ * available.
+ * @param avctx the filter context
+ * @param first_frame the first input frame
+ * @return 0 on success, otherwise a negative error code
+ */
+static int libdewobble_opencl_frames_init(AVFilterContext *avctx, AVFrame *first_frame)
+{
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    AVFilterLink *inlink = avctx->inputs[0];
+    cl_int cle;
+    int err;
+
+    if (first_frame->crop_top % 2 == 1 || first_frame->crop_bottom % 2 == 1
+        || first_frame->crop_left % 2 == 1 || first_frame->crop_right % 2 == 1) {
+
+        av_log(avctx, AV_LOG_ERROR,
+               "Cropping by an odd number of pixels is not supported!\n");
+        return AVERROR(EINVAL);
+    }
+
+    if ((first_frame->crop_top || first_frame->crop_bottom)
+        && (ctx->output_camera.height == 0
+            || ctx->output_camera.focal_point_y == DBL_MAX))
+        av_log(avctx, AV_LOG_WARNING,
+               "Input is vertically cropped, but output height or vertical "
+               "focal point is not set. The default values are based on the "
+               "uncropped input!\n");
+
+    if ((first_frame->crop_left || first_frame->crop_right)
+        && (ctx->output_camera.width == 0
+            || ctx->output_camera.focal_point_x == DBL_MAX))
+        av_log(avctx, AV_LOG_WARNING,
+               "Input is horizontally cropped, but output width or horizontal "
+               "focal point is not set. The default values are based on the "
+               "uncropped input!\n");
+
+    ctx->input_camera.width
+        = inlink->w - first_frame->crop_left - first_frame->crop_right;
+    ctx->input_camera.height
+        = inlink->h - first_frame->crop_top - first_frame->crop_bottom;
+
+    /* Output camera dimensions must match the filter output */
+    ctx->output_camera.width = ctx->ocf.output_width;
+    ctx->output_camera.height = ctx->ocf.output_height;
+
+    /* Focal points default to the image center (disregarding cropping) */
+    if (ctx->input_camera.focal_point_x == DBL_MAX)
+        ctx->input_camera.focal_point_x
+            = (inlink->w - 1) / 2.0 - first_frame->crop_left;
+
+    if (ctx->input_camera.focal_point_y == DBL_MAX)
+        ctx->input_camera.focal_point_y
+            = (inlink->h - 1) / 2.0 - first_frame->crop_top;
+
+    if (ctx->output_camera.focal_point_x == DBL_MAX)
+        ctx->output_camera.focal_point_x = (ctx->output_camera.width - 1) / 2.0;
+
+    if (ctx->output_camera.focal_point_y == DBL_MAX)
+        ctx->output_camera.focal_point_y = (ctx->output_camera.height - 1) / 2.0;
+
+    ctx->command_queue = clCreateCommandQueue(ctx->ocf.hwctx->context,
+                                              ctx->ocf.hwctx->device_id, 0, &cle);
+
+    if (cle) {
+        av_log(avctx, AV_LOG_ERROR,
+               "Failed to create OpenCL command queue %d.\n", cle);
+        return AVERROR(EIO);
+    }
+
+    err = init_libdewobble_filter(avctx);
+    if (err) {
+        av_log(avctx, AV_LOG_ERROR,
+               "Failed to initialise libdewobble filter %d.\n", err);
+        return AVERROR(EIO);
+    }
+
+    ctx->initialized = 1;
+
+    return 0;
+}
+
+/**
+ * Perform initialization based on the input filter link.
+ * @param inlink the input filter link
+ * @return 0 on success, otherwise a negative error code
+ */
+static int libdewobble_opencl_config_input(AVFilterLink *inlink)
+{
+    AVFilterContext *avctx = inlink->dst;
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    int ret;
+
+    ret = ff_opencl_filter_config_input(inlink);
+
+    if (ret < 0)
+        return ret;
+
+    if (ctx->ocf.output_format != AV_PIX_FMT_NV12) {
+        av_log(avctx, AV_LOG_ERROR, "Only NV12 input is supported!\n");
+        return AVERROR(ENOSYS);
+    }
+
+    if (inlink->w % 2 == 1 || inlink->h % 2 == 1) {
+        av_log(avctx, AV_LOG_ERROR, "Input with odd dimensions is not supported!\n");
+        return AVERROR(EINVAL);
+    }
+
+    if (ctx->output_camera.width % 2 == 1 || ctx->output_camera.height % 2 == 1) {
+        av_log(avctx, AV_LOG_ERROR, "Output camera must have even dimensions!\n");
+        return AVERROR(EINVAL);
+    }
+
+    /* Output dimensions default to the input dimensions (disregarding cropping) */
+    ctx->ocf.output_width
+        = ctx->output_camera.width ? ctx->output_camera.width : inlink->w;
+
+    ctx->ocf.output_height
+        = ctx->output_camera.height ? ctx->output_camera.height : inlink->h;
+
+    return 0;
+}
+
+/**
+ * Copy the contents of an input frame to an OpenCL buffer.
+ * @param avctx the filter context
+ * @param context the OpenCL context to use
+ * @param command_queue the OpenCL command queue to use
+ * @param frame the input @ref AVFrame
+ * @param input_buffer the OpenCL buffer to copy the frame into
+ * @return 0 on success, otherwise a negative error code
+ */
+static cl_int copy_frame_to_buffer(AVFilterContext *avctx, cl_context context,
+                                   cl_command_queue command_queue,
+                                   AVFrame *frame, cl_mem input_buffer)
+{
+    int err;
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    cl_mem luma = (cl_mem)frame->data[0];
+    cl_mem chroma = (cl_mem)frame->data[1];
+    cl_int cle = 0;
+    size_t src_luma_origin[3] = { frame->crop_left, frame->crop_top, 0 };
+
+    size_t src_chroma_origin[3] = {
+        frame->crop_left / 2,
+        frame->crop_top / 2,
+        0,
+    };
+
+    size_t luma_region[3] = {
+        ctx->input_camera.width,
+        ctx->input_camera.height,
+        1,
+    };
+
+    size_t chroma_region[3] = {
+        ctx->input_camera.width / 2,
+        ctx->input_camera.height / 2,
+        1,
+    };
+
+    cl_event copy_finished[2];
+
+    cle = clEnqueueCopyImageToBuffer(command_queue, luma, input_buffer,
+                                     src_luma_origin, luma_region, 0, 0, NULL,
+                                     &copy_finished[0]);
+
+    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
+                     "Failed to enqueue copy luma image to buffer: %d\n", cle);
+
+    cle = clEnqueueCopyImageToBuffer(
+        command_queue, chroma, input_buffer, src_chroma_origin, chroma_region,
+        ctx->input_camera.width * ctx->input_camera.height * 1, 0, NULL,
+        &copy_finished[1]);
+
+    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
+                     "Failed to enqueue copy chroma image to buffer: %d\n", cle);
+
+    cle = clWaitForEvents(2, copy_finished);
+
+    CL_FAIL_ON_ERROR(AVERROR(EINVAL), "Failed to copy images to buffer: %d\n", cle);
+
+    return 0;
+
+fail:
+    return err;
+}
+
+/**
+ * Copy the contents of an OpenCL buffer to an output frame.
+ * @param avctx the filter context
+ * @param buffer the OpenCL buffer
+ * @param output_frame the output frame
+ * @return 0 on success, otherwise a negative error code
+ */
+static int copy_buffer_to_frame(AVFilterContext *avctx, cl_mem buffer,
+                                AVFrame *output_frame)
+{
+    int err;
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    cl_mem luma = (cl_mem)output_frame->data[0];
+    cl_mem chroma = (cl_mem)output_frame->data[1];
+    cl_int cle = 0;
+    size_t dst_origin[3] = { 0, 0, 0 };
+    size_t luma_region[3] = { output_frame->width, output_frame->height, 1 };
+
+    size_t chroma_region[3] = {
+        output_frame->width / 2,
+        output_frame->height / 2,
+        1,
+    };
+
+    cl_event copy_finished[2];
+
+    cle = clEnqueueCopyBufferToImage(ctx->command_queue, buffer, luma, 0, dst_origin,
+                                     luma_region, 0, NULL, &copy_finished[0]);
+
+    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
+                     "Failed to enqueue copy buffer to luma image: %d\n", cle);
+
+    cle = clEnqueueCopyBufferToImage(ctx->command_queue, buffer, chroma,
+                                     output_frame->width * output_frame->height * 1,
+                                     dst_origin, chroma_region, 0, NULL,
+                                     &copy_finished[1]);
+
+    CL_FAIL_ON_ERROR(AVERROR(EINVAL),
+                     "Failed to enqueue copy buffer to chroma image: %d\n", cle);
+
+    cle = clWaitForEvents(2, copy_finished);
+    CL_FAIL_ON_ERROR(AVERROR(EINVAL), "Failed to copy buffer to images: %d\n", cle);
+
+    return 0;
+
+fail:
+    return err;
+}
+
+/**
+ * Consume an input frame and push it to the libdewobble filter.
+ * @param avctx the filter context
+ * @param input_frame the input frame
+ * @return 0 on success, otherwise a negative error code
+ */
+static int consume_input_frame(AVFilterContext *avctx, AVFrame *input_frame)
+{
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    cl_mem input_buffer;
+    int err = 0;
+    cl_int cle;
+
+    if (!input_frame->hw_frames_ctx)
+        return AVERROR(EINVAL);
+
+    if (!ctx->initialized) {
+        av_log(avctx, AV_LOG_VERBOSE, "Initializing\n");
+        err = libdewobble_opencl_frames_init(avctx, input_frame);
+
+        if (err < 0)
+            return err;
+    }
+
+    input_buffer
+        = dewobble_filter_get_input_frame_buffer(ctx->dewobble_filter, &cle);
+
+    CL_FAIL_ON_ERROR(AVERROR(ENOMEM), "Failed to create buffer: %d\n", cle);
+
+    err = copy_frame_to_buffer(avctx, ctx->ocf.hwctx->context,
+                               ctx->command_queue, input_frame, input_buffer);
+
+    if (err)
+        goto fail;
+
+    /* Free original input frame buffers */
+    for (int i = 0; input_frame->buf[i] != NULL; i++)
+        av_buffer_unref(&input_frame->buf[i]);
+
+    dewobble_filter_push_frame(ctx->dewobble_filter, input_buffer,
+                               (void *)input_frame);
+
+    ctx->nb_frames_in_progress += 1;
+    ctx->nb_frames_consumed += 1;
+
+    return 0;
+
+fail:
+    return err;
+}
+
+/**
+ * Create and send an output frame using an output buffer pulled from the
+ * libdewobble filter.
+ * @param avctx the filter context
+ * @return 0 on success, otherwise a negative error code.
+ */
+static int send_output_frame(AVFilterContext *avctx)
+{
+    AVFilterLink *inlink = avctx->inputs[0];
+    AVFilterLink *outlink = avctx->outputs[0];
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    AVFrame *input_frame;
+    AVFrame *output_frame = NULL;
+    cl_mem output_buffer = NULL, input_buffer;
+    int err;
+
+    dewobble_filter_pull_frame(ctx->dewobble_filter, &output_buffer,
+                               &input_buffer, (void **)&input_frame);
+
+    dewobble_filter_release_input_frame_buffer(ctx->dewobble_filter, &input_buffer);
+
+    output_frame = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+    if (output_frame == NULL) {
+        err = AVERROR(ENOMEM);
+        goto fail;
+    }
+
+    err = av_frame_copy_props(output_frame, input_frame);
+    if (err)
+        goto fail;
+
+    output_frame->crop_top = 0;
+    output_frame->crop_bottom = 0;
+    output_frame->crop_left = 0;
+    output_frame->crop_right = 0;
+
+    err = copy_buffer_to_frame(avctx, output_buffer, output_frame);
+    if (err)
+        goto fail;
+
+    dewobble_filter_release_output_frame_buffer(ctx->dewobble_filter, &output_buffer);
+
+    av_log(avctx, AV_LOG_VERBOSE, "Sending output frame %ld (%d in progress)\n",
+           ctx->nb_frames_consumed - ctx->nb_frames_in_progress,
+           ctx->nb_frames_in_progress);
+
+    ctx->nb_frames_in_progress -= 1;
+
+    err = ff_filter_frame(outlink, output_frame);
+    if (err < 0)
+        goto fail;
+
+    if (!dewobble_filter_frame_ready(ctx->dewobble_filter))
+        ff_inlink_request_frame(inlink);
+
+    if (ctx->input_status && ctx->nb_frames_in_progress == 0) {
+        av_log(avctx, AV_LOG_VERBOSE, "Output reached EOF\n");
+        ff_outlink_set_status(outlink, ctx->input_status, ctx->input_status_pts);
+    }
+
+    av_frame_free(&input_frame);
+
+    return 0;
+
+fail:
+    av_frame_free(&input_frame);
+    av_log(avctx, AV_LOG_ERROR, "Failed to send output frame: %d\n", err);
+    av_frame_free(&output_frame);
+
+    return err;
+}
+
+/**
+ * Attempt to consume an input frame, and push it to the libdewobble filter
+ * if one is available.
+ * @param avctx the filter context
+ * @return 0 on success, otherwise a negative error code
+ */
+static int try_consume_input_frame(AVFilterContext *avctx)
+{
+    AVFilterLink *inlink = avctx->inputs[0];
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    int err = 0;
+    AVFrame *input_frame;
+
+    /* If necessary, attempt to consume a frame from the input */
+    if (!ctx->initialized || !dewobble_filter_frame_ready(ctx->dewobble_filter)) {
+        err = ff_inlink_consume_frame(inlink, &input_frame);
+        if (err < 0) {
+            av_log(avctx, AV_LOG_ERROR, "Failed to read input frame\n");
+
+            return err;
+        } else if (err > 0) {
+            av_log(avctx, AV_LOG_VERBOSE,
+                   "Consuming input frame %"PRId64" (%d in progress)\n",
+                   ctx->nb_frames_consumed, ctx->nb_frames_in_progress);
+
+            err = consume_input_frame(avctx, input_frame);
+            if (err) {
+                av_log(avctx, AV_LOG_ERROR,
+                       "Failed to consume input frame: %d\n", err);
+
+                return err;
+            }
+        }
+    }
+
+    return err;
+}
+
+/**
+ * Read the input status and update the filter state and output status as
+ * appropriate.
+ * @param avctx the filter context
+ */
+static void check_input_status(AVFilterContext *avctx)
+{
+    AVFilterLink *inlink = avctx->inputs[0];
+    AVFilterLink *outlink = avctx->outputs[0];
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+
+    /* Check for end of input */
+    if (!ctx->input_status
+        && ff_inlink_acknowledge_status(inlink, &ctx->input_status,
+                                        &ctx->input_status_pts)) {
+
+        if (ctx->input_status == AVERROR_EOF) {
+            av_log(avctx, AV_LOG_VERBOSE, "Reached input EOF\n");
+
+            dewobble_filter_end_input(ctx->dewobble_filter);
+        } else {
+            av_log(avctx, AV_LOG_ERROR, "Input status: %s\n",
+                   av_err2str(ctx->input_status));
+        }
+
+        if (ctx->nb_frames_in_progress == 0) {
+            av_log(avctx, AV_LOG_VERBOSE, "Sending output EOF\n");
+
+            ff_outlink_set_status(outlink, ctx->input_status, ctx->input_status_pts);
+        }
+    }
+}
+
+/**
+ * Perform some work to advance the filtering process.
+ * @param avctx the filter context
+ * @return 0 or FFERROR_NOT_READY on success, otherwise a negative error code
+ */
+static int activate(AVFilterContext *avctx)
+{
+    LibDewobbleOpenCLContext *ctx = avctx->priv;
+    AVFilterLink *inlink = avctx->inputs[0];
+    AVFilterLink *outlink = avctx->outputs[0];
+    int err = 0;
+
+    /* Forward any output status to input */
+    err = ff_outlink_get_status(outlink);
+    if (err) {
+        av_log(avctx, AV_LOG_VERBOSE, "forwarding status to inlink: %d\n", err);
+        ff_inlink_set_status(inlink, err);
+        return 0;
+    }
+
+    /* Consume an input frame if possible */
+    err = try_consume_input_frame(avctx);
+    if (err) {
+        av_log(avctx, AV_LOG_ERROR, "try_consume_input_frame failed: %d\n", err);
+        return err;
+    }
+
+    /* Check input status, including detecting EOF */
+    check_input_status(avctx);
+
+    /* If possible, send an output frame */
+    if (dewobble_filter_frame_ready(ctx->dewobble_filter)) {
+        err = send_output_frame(avctx);
+        if (err < 0) {
+            av_log(avctx, AV_LOG_ERROR, "send_output_frame failed: %d\n", err);
+            goto fail;
+        }
+    }
+
+    /* Schedule the next activation */
+    if (ff_inlink_check_available_frame(inlink) ||
+        dewobble_filter_frame_ready(ctx->dewobble_filter))
+        /* Immediately, if input frames are queued or output frames are ready */
+        ff_filter_set_ready(avctx, 1);
+    else
+        /* Otherwise when more input frames arrive */
+        ff_inlink_request_frame(inlink);
+
+    return FFERROR_NOT_READY;
+
+fail:
+    ff_outlink_set_status(outlink, AVERROR_UNKNOWN, 0);
+    return err;
+}
+
+/**
+ * Get the offset of a member in @ref LibDewobbleOpenCLContext
+ */
+#define OFFSET(x) offsetof(LibDewobbleOpenCLContext, x)
+
+/**
+ * Get the offset of a member in @ref Camera
+ */
+#define OFFSET_CAMERA(x) offsetof(Camera, x)
+
+#define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)
+
+static const AVOption libdewobble_opencl_options[] = {
+    /* Input camera options */
+    {
+        "in_p",
+        "input camera projection model",
+        OFFSET(input_camera) + OFFSET_CAMERA(model),
+        AV_OPT_TYPE_INT,
+        { .i64 = DEWOBBLE_PROJECTION_EQUIDISTANT_FISHEYE },
+        0,
+        DEWOBBLE_NB_PROJECTIONS - 1,
+        FLAGS,
+        "model",
+    },
+    {
+        "in_dfov",
+        "input camera diagonal field of view in degrees",
+        OFFSET(input_camera) + OFFSET_CAMERA(diagonal_fov),
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = 0 },
+        0,
+        DBL_MAX,
+        .flags = FLAGS,
+    },
+    {
+        "in_fx",
+        "horizontal coordinate of focal point in input camera (default: "
+        "center)",
+        OFFSET(input_camera) + OFFSET_CAMERA(focal_point_x),
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = DBL_MAX },
+        -DBL_MAX,
+        DBL_MAX,
+        .flags = FLAGS,
+    },
+    {
+        "in_fy",
+        "vertical coordinate of focal point in input camera (default: center)",
+        OFFSET(input_camera) + OFFSET_CAMERA(focal_point_y),
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = DBL_MAX },
+        -DBL_MAX,
+        DBL_MAX,
+        .flags = FLAGS,
+    },
+
+    /* Output camera options */
+    {
+        "out_p",
+        "output camera projection model",
+        OFFSET(output_camera) + OFFSET_CAMERA(model),
+        AV_OPT_TYPE_INT,
+        { .i64 = DEWOBBLE_PROJECTION_RECTILINEAR },
+        0,
+        DEWOBBLE_NB_PROJECTIONS - 1,
+        FLAGS,
+        "model",
+    },
+    {
+        "out_dfov",
+        "output camera diagonal field of view in degrees",
+        OFFSET(output_camera) + OFFSET_CAMERA(diagonal_fov),
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = 0 },
+        0,
+        DBL_MAX,
+        .flags = FLAGS,
+    },
+    {
+        "out_w",
+        "output camera width in pixels (default: same as input)",
+        OFFSET(output_camera) + OFFSET_CAMERA(width),
+        AV_OPT_TYPE_INT,
+        { .i64 = 0 },
+        0,
+        SHRT_MAX,
+        .flags = FLAGS,
+    },
+    {
+        "out_h",
+        "output camera height in pixels (default: same as input)",
+        OFFSET(output_camera) + OFFSET_CAMERA(height),
+        AV_OPT_TYPE_INT,
+        { .i64 = 0 },
+        0,
+        SHRT_MAX,
+        .flags = FLAGS,
+    },
+    {
+        "out_fx",
+        "horizontal coordinate of focal point in output camera "
+        "(default: center)",
+        OFFSET(output_camera) + OFFSET_CAMERA(focal_point_x),
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = DBL_MAX },
+        -DBL_MAX,
+        DBL_MAX,
+        .flags = FLAGS,
+    },
+    {
+        "out_fy",
+        "vertical coordinate of focal point in output camera "
+        "(default: center)",
+        OFFSET(output_camera) + OFFSET_CAMERA(focal_point_y),
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = DBL_MAX },
+        -DBL_MAX,
+        DBL_MAX,
+        .flags = FLAGS,
+    },
+
+    /* Stabilization options */
+    {
+        "stab",
+        "camera orientation stabilization algorithm",
+        OFFSET(stabilization_algorithm),
+        AV_OPT_TYPE_INT,
+        { .i64 = STABILIZATION_ALGORITHM_SMOOTH },
+        0,
+        NB_STABILIZATION_ALGORITHMS - 1,
+        FLAGS,
+        "stab",
+    },
+    {
+        "stab_r",
+        "for Savitzky-Golay smoothing: the number of frames "
+        "to look ahead and behind",
+        OFFSET(stabilization_radius),
+        AV_OPT_TYPE_INT,
+        { .i64 = 15 },
+        1,
+        INT_MAX,
+        FLAGS,
+    },
+    {
+        "stab_h",
+        "for stabilization: the number of frames to look "
+        "ahead to interpolate rotation in frames where it cannot be detected",
+        OFFSET(stabilization_horizon),
+        AV_OPT_TYPE_INT,
+        { .i64 = 30 },
+        0,
+        INT_MAX,
+        FLAGS,
+    },
+
+    /* General options */
+    {
+        "interp",
+        "interpolation algorithm",
+        OFFSET(interpolation_algorithm),
+        AV_OPT_TYPE_INT,
+        { .i64 = DEWOBBLE_INTERPOLATION_LINEAR },
+        0,
+        DEWOBBLE_NB_INTERPOLATIONS - 1,
+        FLAGS,
+        "interpolation",
+    },
+    {
+        "border",
+        "border fill mode",
+        OFFSET(border_type),
+        AV_OPT_TYPE_INT,
+        { .i64 = DEWOBBLE_BORDER_CONSTANT },
+        0,
+        DEWOBBLE_NB_BORDER_TYPES - 1,
+        FLAGS,
+        "border_type",
+    },
+    {
+        "border_r",
+        "border fill color (red component)",
+        OFFSET(border_color) + sizeof(double) * 2,
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = 0 },
+        0,
+        255,
+        FLAGS,
+    },
+    {
+        "border_g",
+        "border fill color (green component)",
+        OFFSET(border_color) + sizeof(double) * 1,
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = 0 },
+        0,
+        255,
+        FLAGS,
+    },
+    {
+        "border_b",
+        "border fill color (blue component)",
+        OFFSET(border_color) + sizeof(double) * 0,
+        AV_OPT_TYPE_DOUBLE,
+        { .dbl = 0 },
+        0,
+        255,
+        FLAGS,
+    },
+    {
+        "debug",
+        "whether to include debugging information in the output",
+        OFFSET(debug),
+        AV_OPT_TYPE_BOOL,
+        { .i64 = 0 },
+        0,
+        1,
+        FLAGS,
+    },
+
+    /* Camera models */
+    {
+        "rect",
+        "rectilinear projection",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_PROJECTION_RECTILINEAR },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "model",
+    },
+    {
+        "fish",
+        "equidistant fisheye projection",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_PROJECTION_EQUIDISTANT_FISHEYE },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "model",
+    },
+
+    /* Stabilization algorithms */
+    {
+        "fixed",
+        "fix the camera orientation after the first frame",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = STABILIZATION_ALGORITHM_FIXED },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "stab",
+    },
+    {
+        "none",
+        "do not apply stabilization",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = STABILIZATION_ALGORITHM_ORIGINAL },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "stab",
+    },
+    {
+        "sg",
+        "smooth the camera orientation using a Savitzky-Golay filter",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = STABILIZATION_ALGORITHM_SMOOTH },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "stab",
+    },
+
+    /* Interpolation algorithms */
+    {
+        "nearest",
+        "nearest neighbour interpolation (fast)",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_INTERPOLATION_NEAREST },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "interpolation",
+    },
+    {
+        "linear",
+        "bilinear interpolation (fast)",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_INTERPOLATION_LINEAR },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "interpolation",
+    },
+    {
+        "cubic",
+        "bicubic interpolation (medium)",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_INTERPOLATION_CUBIC },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "interpolation",
+    },
+    {
+        "lanczos",
+        "Lanczos4, in an 8x8 neighbourhood (slow)",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_INTERPOLATION_LANCZOS4 },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "interpolation",
+    },
+
+    /* Border fill algorithms */
+    {
+        "constant",
+        "constant color (default black)",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_BORDER_CONSTANT },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "border_type",
+    },
+    {
+        "reflect",
+        "reflection of the input about the edge",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_BORDER_REFLECT },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "border_type",
+    },
+    {
+        "reflect101",
+        "reflection of the input about the middle of the pixel on the edge",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_BORDER_REFLECT_101 },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "border_type",
+    },
+    {
+        "replicate",
+        "replicate the pixel on the edge",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_BORDER_REPLICATE },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "border_type",
+    },
+    {
+        "wrap",
+        "wrap around to the opposite side of the source image",
+        0,
+        AV_OPT_TYPE_CONST,
+        { .i64 = DEWOBBLE_BORDER_WRAP },
+        INT_MIN,
+        INT_MAX,
+        FLAGS,
+        "border_type",
+    },
+    { NULL },
+};
+
+AVFILTER_DEFINE_CLASS(libdewobble_opencl);
+
+static const AVFilterPad inputs[] = {
+    {
+        .name = "default",
+        .type = AVMEDIA_TYPE_VIDEO,
+        .config_props = &libdewobble_opencl_config_input,
+    },
+};
+
+static const AVFilterPad outputs[] = {
+    {
+        .name = "default",
+        .type = AVMEDIA_TYPE_VIDEO,
+        .config_props = &ff_opencl_filter_config_output,
+    },
+};
+
+const AVFilter ff_vf_libdewobble_opencl = {
+    .name = "libdewobble_opencl",
+    .description = NULL_IF_CONFIG_SMALL(
+        "Apply motion stabilization with awareness of camera projection "
+        "and/or change camera projection."),
+    .priv_size = sizeof(LibDewobbleOpenCLContext),
+    .priv_class = &libdewobble_opencl_class,
+    .init = &libdewobble_opencl_init,
+    .uninit = &libdewobble_opencl_uninit,
+    .query_formats = &ff_opencl_filter_query_formats,
+    FILTER_INPUTS(inputs),
+    FILTER_OUTPUTS(outputs),
+    .activate = activate,
+    .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
+};
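
For reviewers trying the patch, a hypothetical invocation might look like the following. The option names (`in_p`, `in_dfov`, `out_p`, `out_dfov`, `stab`, `stab_r`) come from the option table above; the device index, input file, and numeric values are purely illustrative, and this assumes a build configured with libdewobble and OpenCL support:

```shell
# Stabilize fisheye footage and reproject it to rectilinear in one pass.
# The filter requires NV12 frames in OpenCL memory, so the frames are
# uploaded to the OpenCL device before the filter and downloaded after.
ffmpeg -init_hw_device opencl=ocl:0.0 -filter_hw_device ocl \
    -i input.mp4 \
    -vf 'format=nv12,hwupload,
         libdewobble_opencl=in_p=fish:in_dfov=145.8:
                            out_p=rect:out_dfov=120:
                            stab=sg:stab_r=30,
         hwdownload,format=nv12' \
    output.mp4
```

Doing the stabilization and projection change in a single filter avoids an intermediate resampling step, which is the efficiency argument made in the commit message.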