From patchwork Tue Oct 25 09:13:34 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Aman Karmani
X-Patchwork-Id: 38987
Delivered-To: ffmpegpatchwork2@gmail.com
From: softworkz
Message-Id: <2dce3f8101492dfa4fccab78e384895e80d99476.1666689226.git.ffmpegagent@gmail.com>
Date: Tue, 25 Oct 2022 09:13:34 +0000
To: ffmpeg-devel@ffmpeg.org
Cc: softworkz
Subject: [FFmpeg-devel] [PATCH v9 13/25] avfilter/overlaygraphicsubs: Add overlaygraphicsubs and graphicsub2video filters

From: softworkz

- overlaygraphicsubs (VS -> V)
  Overlays graphic subtitles onto a video stream

- graphicsub2video (S -> V)
  Converts graphic subtitles to video frames (with alpha)
  Gets auto-inserted to retain compatibility with sub2video
  command lines

Signed-off-by: softworkz
---
 doc/filters.texi                    | 118 +++++
 libavfilter/Makefile                |   2 +
 libavfilter/allfilters.c            |   2 +
 libavfilter/vf_overlaygraphicsubs.c | 765 ++++++++++++++++++++++++++++
 4 files changed, 887 insertions(+)
 create mode 100644 libavfilter/vf_overlaygraphicsubs.c

diff --git a/doc/filters.texi b/doc/filters.texi
index bcd19cf931..70a371ee0e 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -27364,6 +27364,124 @@ tools.
 
 @c man end VIDEO SINKS
 
+@chapter Subtitle Filters
+@c man begin SUBTITLE FILTERS
+
+When you configure your FFmpeg build, you can disable any of the
+existing filters using @code{--disable-filters}.
+
+Below is a description of the currently available subtitle filters.
+
+@section graphicsub2video
+
+Renders graphic subtitles as video frames.
+
+This filter replaces the previous "sub2video" hack, which did the
+conversion implicitly and up-front, as subtitle filtering wasn't
+possible at that time. To retain compatibility with earlier sub2video
+command lines, this filter is auto-inserted in those cases.
+
+For overlaying graphical subtitles, it is recommended to use the
+'overlaygraphicsubs' filter, which is more efficient and requires
+fewer processing resources.
+
+This filter is still useful for preparing the overlay frames in cases
+where the overlay itself is done with hardware acceleration
+(e.g. overlay_qsv, overlay_vaapi, overlay_cuda), as shown in the
+example below.
+
+Inputs:
+@itemize
+@item 0: Subtitles [BITMAP]
+@end itemize
+
+Outputs:
+@itemize
+@item 0: Video [RGB32]
+@end itemize
+
+It accepts the following parameters:
+
+@table @option
+@item size, s
+Set the size of the output video frame.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Overlay PGS subtitles
+(not recommended - better use overlaygraphicsubs)
+@example
+ffmpeg -i "https://streams.videolan.org/samples/sub/PGS/Girl_With_The_Dragon_Tattoo_2%3A23%3A56.mkv" -filter_complex "[0:1]graphicsub2video[subs];[0:0][subs]overlay" output.mp4
+@end example
+
+@item
+Overlay PGS subtitles implicitly.
+The graphicsub2video filter is inserted automatically for compatibility
+with legacy command lines.
+@example
+ffmpeg -i "https://streams.videolan.org/samples/sub/PGS/Girl_With_The_Dragon_Tattoo_2%3A23%3A56.mkv" -filter_complex "[0:0][0:1]overlay" output.mp4
+@end example
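+
+@item
+Prepare overlay frames for a hardware-accelerated overlay filter.
+This is only a sketch; the exact device setup, upload and pixel
+format steps depend on the hardware pipeline in use.
+@example
+ffmpeg -init_hw_device vaapi=va -filter_hw_device va -i INPUT -filter_complex "[0:1]graphicsub2video[subs];[0:0]hwupload[main];[subs]hwupload[over];[main][over]overlay_vaapi" -c:v h264_vaapi OUTPUT.mp4
+@end example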
+@example +ffmpeg -i "https://streams.videolan.org/samples/sub/PGS/Girl_With_The_Dragon_Tattoo_2%3A23%3A56.mkv" -filter_complex "[0:0][0:1]overlay" output.mp4 +@end example +@end itemize + +@section overlaygraphicsubs + +Overlay graphic subtitles onto a video stream. + +This filter can blend graphical subtitles on a video stream directly, i.e. without creating full-size alpha images first. +The blending operation is limited to the area of the subtitle rectangles, which also means that no processing is done at times where no subtitles are to be displayed. + +Inputs: +@itemize +@item 0: Video [YUV420P, YUV422P, YUV444P, ARGB, RGBA, ABGR, BGRA, RGB24, BGR24] +@item 1: Subtitles [BITMAP] +@end itemize + +Outputs: +@itemize +@item 0: Video (same as input) +@end itemize + +It accepts the following parameters: + +@table @option +@item x +@item y +Set the expression for the x and y coordinates of the overlaid video +on the main video. Default value is "0" for both expressions. In case +the expression is invalid, it is set to a huge value (meaning that the +overlay will not be displayed within the output visible area). + +@item eof_action +See @ref{framesync}. + +@item eval +Set when the expressions for @option{x}, and @option{y} are evaluated. + +It accepts the following values: +@table @samp +@item init +only evaluate expressions once during the filter initialization or +when a command is processed + +@item frame +evaluate expressions for each incoming frame +@end table + +Default value is @samp{frame}. + +@item shortest +See @ref{framesync}. + +@end table + +@subsection Examples + +@itemize +@item +Overlay PGS subtitles +@example +ffmpeg -i "https://streams.videolan.org/samples/sub/PGS/Girl_With_The_Dragon_Tattoo_2%3A23%3A56.mkv" -filter_complex "[0:0][0:1]overlaygraphicsubs" output.mp4 +@end example +@end itemize +@c man end SUBTITLE FILTERS + @chapter Multimedia Filters @c man begin MULTIMEDIA FILTERS diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 6db90028e0..0dae913b16 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -314,6 +314,7 @@ OBJS-$(CONFIG_GBLUR_FILTER) += vf_gblur.o OBJS-$(CONFIG_GBLUR_VULKAN_FILTER) += vf_gblur_vulkan.o vulkan.o vulkan_filter.o OBJS-$(CONFIG_GEQ_FILTER) += vf_geq.o OBJS-$(CONFIG_GRADFUN_FILTER) += vf_gradfun.o +OBJS-$(CONFIG_GRAPHICSUB2VIDEO_FILTER) += vf_overlaygraphicsubs.o framesync.o OBJS-$(CONFIG_GRAPHMONITOR_FILTER) += f_graphmonitor.o OBJS-$(CONFIG_GRAYWORLD_FILTER) += vf_grayworld.o OBJS-$(CONFIG_GREYEDGE_FILTER) += vf_colorconstancy.o @@ -398,6 +399,7 @@ OBJS-$(CONFIG_OVERLAY_OPENCL_FILTER) += vf_overlay_opencl.o opencl.o \ OBJS-$(CONFIG_OVERLAY_QSV_FILTER) += vf_overlay_qsv.o framesync.o OBJS-$(CONFIG_OVERLAY_VAAPI_FILTER) += vf_overlay_vaapi.o framesync.o vaapi_vpp.o OBJS-$(CONFIG_OVERLAY_VULKAN_FILTER) += vf_overlay_vulkan.o vulkan.o vulkan_filter.o +OBJS-$(CONFIG_OVERLAYGRAPHICSUBS_FILTER) += vf_overlaygraphicsubs.o framesync.o OBJS-$(CONFIG_OWDENOISE_FILTER) += vf_owdenoise.o OBJS-$(CONFIG_PAD_FILTER) += vf_pad.o OBJS-$(CONFIG_PAD_OPENCL_FILTER) += vf_pad_opencl.o opencl.o opencl/pad.o diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 173d85b982..71896ce904 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -374,6 +374,7 @@ extern const AVFilter ff_vf_overlay_qsv; extern const AVFilter ff_vf_overlay_vaapi; extern const AVFilter ff_vf_overlay_vulkan; extern const AVFilter ff_vf_overlay_cuda; +extern const AVFilter ff_vf_overlaygraphicsubs; extern const AVFilter ff_vf_owdenoise; 
+@end itemize
+@c man end SUBTITLE FILTERS
+
 @chapter Multimedia Filters
 @c man begin MULTIMEDIA FILTERS

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 6db90028e0..0dae913b16 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -314,6 +314,7 @@ OBJS-$(CONFIG_GBLUR_FILTER)                  += vf_gblur.o
 OBJS-$(CONFIG_GBLUR_VULKAN_FILTER)           += vf_gblur_vulkan.o vulkan.o vulkan_filter.o
 OBJS-$(CONFIG_GEQ_FILTER)                    += vf_geq.o
 OBJS-$(CONFIG_GRADFUN_FILTER)                += vf_gradfun.o
+OBJS-$(CONFIG_GRAPHICSUB2VIDEO_FILTER)       += vf_overlaygraphicsubs.o framesync.o
 OBJS-$(CONFIG_GRAPHMONITOR_FILTER)           += f_graphmonitor.o
 OBJS-$(CONFIG_GRAYWORLD_FILTER)              += vf_grayworld.o
 OBJS-$(CONFIG_GREYEDGE_FILTER)               += vf_colorconstancy.o
@@ -398,6 +399,7 @@ OBJS-$(CONFIG_OVERLAY_OPENCL_FILTER)         += vf_overlay_opencl.o opencl.o \
 OBJS-$(CONFIG_OVERLAY_QSV_FILTER)            += vf_overlay_qsv.o framesync.o
 OBJS-$(CONFIG_OVERLAY_VAAPI_FILTER)          += vf_overlay_vaapi.o framesync.o vaapi_vpp.o
 OBJS-$(CONFIG_OVERLAY_VULKAN_FILTER)         += vf_overlay_vulkan.o vulkan.o vulkan_filter.o
+OBJS-$(CONFIG_OVERLAYGRAPHICSUBS_FILTER)     += vf_overlaygraphicsubs.o framesync.o
 OBJS-$(CONFIG_OWDENOISE_FILTER)              += vf_owdenoise.o
 OBJS-$(CONFIG_PAD_FILTER)                    += vf_pad.o
 OBJS-$(CONFIG_PAD_OPENCL_FILTER)             += vf_pad_opencl.o opencl.o opencl/pad.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 173d85b982..71896ce904 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -374,6 +374,7 @@ extern const AVFilter ff_vf_overlay_qsv;
 extern const AVFilter ff_vf_overlay_vaapi;
 extern const AVFilter ff_vf_overlay_vulkan;
 extern const AVFilter ff_vf_overlay_cuda;
+extern const AVFilter ff_vf_overlaygraphicsubs;
 extern const AVFilter ff_vf_owdenoise;
 extern const AVFilter ff_vf_pad;
 extern const AVFilter ff_vf_pad_opencl;
@@ -564,6 +565,7 @@ extern const AVFilter ff_avf_showspectrumpic;
 extern const AVFilter ff_avf_showvolume;
 extern const AVFilter ff_avf_showwaves;
 extern const AVFilter ff_avf_showwavespic;
+extern const AVFilter ff_svf_graphicsub2video;
 extern const AVFilter ff_vaf_spectrumsynth;
 
 /* multimedia sources */
diff --git a/libavfilter/vf_overlaygraphicsubs.c b/libavfilter/vf_overlaygraphicsubs.c
new file mode 100644
index 0000000000..7b26d8ef37
--- /dev/null
+++ b/libavfilter/vf_overlaygraphicsubs.c
@@ -0,0 +1,765 @@
+/*
+ * Copyright (c) 2021 softworkz (derived from vf_overlay)
+ * Copyright (c) 2010 Stefano Sabatini
+ * Copyright (c) 2010 Baptiste Coudurier
+ * Copyright (c) 2007 Bobby Bingham
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * overlay graphical subtitles on top of a video frame
+ */
+
+#include "libavutil/eval.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+#include "internal.h"
+#include "drawutils.h"
+#include "framesync.h"
+
+enum var_name {
+    VAR_MAIN_W,    VAR_MW,
+    VAR_MAIN_H,    VAR_MH,
+    VAR_OVERLAY_W, VAR_OW,
+    VAR_OVERLAY_H, VAR_OH,
+    VAR_HSUB,
+    VAR_VSUB,
+    VAR_X,
+    VAR_Y,
+    VAR_N,
+    VAR_POS,
+    VAR_T,
+    VAR_VARS_NB
+};
+
+typedef struct OverlaySubsContext {
+    const AVClass *class;
+    int x, y;                   ///< position of overlaid picture
+    int w, h;
+    AVFrame *outpicref;
+
+    int main_is_packed_rgb;
+    uint8_t main_rgba_map[4];
+    int main_has_alpha;
+    uint8_t overlay_rgba_map[4];
+    int eval_mode;              ///< EvalMode
+    int use_caching;
+    AVFrame *cache_frame;
+
+    FFFrameSync fs;
+
+    int main_pix_step[4];       ///< steps per pixel for each plane of the main output
+    int hsub, vsub;             ///< chroma subsampling values
+    const AVPixFmtDescriptor *main_desc; ///< format descriptor for main input
+
+    double var_values[VAR_VARS_NB];
+    char *x_expr, *y_expr;
+
+    AVExpr *x_pexpr, *y_pexpr;
+
+    int pic_counter;
+} OverlaySubsContext;
+
+static const char *const var_names[] = {
+    "main_w",    "W", ///< width of the main video
+    "main_h",    "H", ///< height of the main video
+    "overlay_w", "w", ///< width of the overlay video
+    "overlay_h", "h", ///< height of the overlay video
+    "hsub",
+    "vsub",
+    "x",
+    "y",
+    "n",              ///< number of frame
+    "pos",            ///< position in the file
+    "t",              ///< timestamp expressed in seconds
+    NULL
+};
+
+#define MAIN    0
+#define OVERLAY 1
+
+#define R 0
+#define G 1
+#define B 2
+#define A 3
+
+#define Y 0
+#define U 1
+#define V 2
+
+enum EvalMode {
+    EVAL_MODE_INIT,
+    EVAL_MODE_FRAME,
+    EVAL_MODE_NB
+};
+
+static av_cold void overlay_graphicsubs_uninit(AVFilterContext *ctx)
+{
+    OverlaySubsContext *s = ctx->priv;
+
+    av_frame_free(&s->cache_frame);
+    ff_framesync_uninit(&s->fs);
+    av_expr_free(s->x_pexpr); s->x_pexpr = NULL;
+    av_expr_free(s->y_pexpr); s->y_pexpr = NULL;
+}
+
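+/* Descriptive note: normalize_xy() aligns a blend coordinate to the chroma
+ * grid by masking the low chroma_sub bits, so the per-plane shifts in the
+ * blend functions stay exact; NaN (an invalid/unevaluated expression) maps
+ * to INT_MAX, which keeps the overlay outside the visible output area. */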
+static inline int normalize_xy(double d, int chroma_sub)
+{
+    if (isnan(d))
+        return INT_MAX;
+    return (int)d & ~((1 << chroma_sub) - 1);
+}
+
+static void eval_expr(AVFilterContext *ctx)
+{
+    OverlaySubsContext *s = ctx->priv;
+
+    s->var_values[VAR_X] = av_expr_eval(s->x_pexpr, s->var_values, NULL);
+    s->var_values[VAR_Y] = av_expr_eval(s->y_pexpr, s->var_values, NULL);
+    /* It is necessary if x is expressed from y */
+    s->var_values[VAR_X] = av_expr_eval(s->x_pexpr, s->var_values, NULL);
+    s->x = normalize_xy(s->var_values[VAR_X], s->hsub);
+    s->y = normalize_xy(s->var_values[VAR_Y], s->vsub);
+}
+
+static int set_expr(AVExpr **pexpr, const char *expr, const char *option, void *log_ctx)
+{
+    int ret;
+    AVExpr *old = NULL;
+
+    if (*pexpr)
+        old = *pexpr;
+    ret = av_expr_parse(pexpr, expr, var_names,
+                        NULL, NULL, NULL, NULL, 0, log_ctx);
+    if (ret < 0) {
+        av_log(log_ctx, AV_LOG_ERROR,
+               "Error when evaluating the expression '%s' for %s\n",
+               expr, option);
+        *pexpr = old;
+        return ret;
+    }
+
+    av_expr_free(old);
+    return 0;
+}
+
+static int overlay_graphicsubs_query_formats(AVFilterContext *ctx)
+{
+    AVFilterFormats *formats;
+    AVFilterLink *inlink0 = ctx->inputs[0];
+    AVFilterLink *inlink1 = ctx->inputs[1];
+    AVFilterLink *outlink = ctx->outputs[0];
+    int ret;
+    static const enum AVSubtitleType subtitle_fmts[] = { AV_SUBTITLE_FMT_BITMAP, AV_SUBTITLE_FMT_NONE };
+    static const enum AVPixelFormat supported_pix_fmts[] = {
+        AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV444P,
+        AV_PIX_FMT_ARGB,  AV_PIX_FMT_RGBA,
+        AV_PIX_FMT_ABGR,  AV_PIX_FMT_BGRA,
+        AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
+        AV_PIX_FMT_NONE
+    };
+
+    /* set input0 video formats */
+    formats = ff_make_format_list(supported_pix_fmts);
+    if ((ret = ff_formats_ref(formats, &inlink0->outcfg.formats)) < 0)
+        return ret;
+
+    /* set output0 video formats */
+    if ((ret = ff_formats_ref(formats, &outlink->incfg.formats)) < 0)
+        return ret;
+
+    /* set input1 subtitle formats */
+    formats = ff_make_format_list(subtitle_fmts);
+    if ((ret = ff_formats_ref(formats, &inlink1->outcfg.formats)) < 0)
+        return ret;
+
+    return 0;
+}
+
+static int config_output(AVFilterLink *outlink)
+{
+    AVFilterContext *ctx = outlink->src;
+    OverlaySubsContext *s = ctx->priv;
+    int ret;
+
+    if ((ret = ff_framesync_init_dualinput(&s->fs, ctx)) < 0)
+        return ret;
+
+    outlink->w = ctx->inputs[MAIN]->w;
+    outlink->h = ctx->inputs[MAIN]->h;
+    outlink->time_base = ctx->inputs[MAIN]->time_base;
+    outlink->frame_rate = ctx->inputs[MAIN]->frame_rate;
+
+    return ff_framesync_configure(&s->fs);
+}
+
+// divide by 255 and round to nearest
+// apply a fast variant: (X+127)/255 = ((X+127)*257+257)>>16 = ((X+128)*257)>>16
+#define FAST_DIV255(x) ((((x) + 128) * 257) >> 16)
+
+// calculate the non-pre-multiplied alpha, applying the general equation:
+// alpha = alpha_overlay / ( (alpha_main + alpha_overlay) - (alpha_main * alpha_overlay) )
+// (((x) << 16) - ((x) << 9) + (x)) is a faster version of: 255 * 255 * x
+// ((((x) + (y)) << 8) - ((x) + (y)) - (y) * (x)) is a faster version of: 255 * (x + y)
+#define UNPREMULTIPLY_ALPHA(x, y) ((((x) << 16) - ((x) << 9) + (x)) / ((((x) + (y)) << 8) - ((x) + (y)) - (y) * (x)))
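+
+// Illustrative check (not part of the filter): FAST_DIV255 matches a
+// rounded division by 255 for the full range of blend products
+// (0..255*255), which an exhaustive loop can verify:
+//
+//     for (int x = 0; x <= 255 * 255; x++)
+//         av_assert0(FAST_DIV255(x) == (x + 127) / 255);
+//
+// i.e. multiplying by 257 and shifting right by 16 approximates
+// division by 65536/257 = 255.0039..., which is exact within this range.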
+
+/**
+ * Blend image in src to destination buffer dst at position (x, y).
+ */
+static av_always_inline void blend_packed_rgb(const AVFilterContext *ctx,
+                                              const AVFrame *dst, const AVSubtitleArea *src,
+                                              int x, int y,
+                                              int is_straight)
+{
+    OverlaySubsContext *s = ctx->priv;
+    int i, imax, j, jmax;
+    const int src_w = src->w;
+    const int src_h = src->h;
+    const int dst_w = dst->width;
+    const int dst_h = dst->height;
+    uint8_t alpha;          ///< the amount of overlay to blend onto main
+    const int dr = s->main_rgba_map[R];
+    const int dg = s->main_rgba_map[G];
+    const int db = s->main_rgba_map[B];
+    const int da = s->main_rgba_map[A];
+    const int dstep = s->main_pix_step[0];
+    const int sr = s->overlay_rgba_map[R];
+    const int sg = s->overlay_rgba_map[G];
+    const int sb = s->overlay_rgba_map[B];
+    const int sa = s->overlay_rgba_map[A];
+    int slice_start, slice_end;
+    uint8_t *S, *sp, *d, *dp;
+
+    i = FFMAX(-y, 0);
+    imax = FFMIN3(-y + dst_h, FFMIN(src_h, dst_h), y + src_h);
+
+    slice_start = i;
+    slice_end = i + imax;
+
+    sp = src->buf[0]->data + slice_start * src->linesize[0];
+    dp = dst->data[0] + (slice_start + y) * dst->linesize[0];
+
+    for (i = slice_start; i < slice_end; i++) {
+        j = FFMAX(-x, 0);
+        S = sp + j;
+        d = dp + ((x + j) * dstep);
+
+        for (jmax = FFMIN(-x + dst_w, src_w); j < jmax; j++) {
+            uint32_t val = src->pal[*S];
+            const uint8_t *sval = (uint8_t *)&val;
+            alpha = sval[sa];
+
+            // if the main channel has an alpha channel, alpha has to be calculated
+            // to create an un-premultiplied (straight) alpha value
+            if (s->main_has_alpha && alpha != 0 && alpha != 255) {
+                const uint8_t alpha_d = d[da];
+                alpha = UNPREMULTIPLY_ALPHA(alpha, alpha_d);
+            }
+
+            switch (alpha) {
+            case 0:
+                break;
+            case 255:
+                d[dr] = sval[sr];
+                d[dg] = sval[sg];
+                d[db] = sval[sb];
+                break;
+            default:
+                // main_value = main_value * (1 - alpha) + overlay_value * alpha
+                // since alpha is in the range 0-255, the result must be divided by 255
+                d[dr] = is_straight ? FAST_DIV255(d[dr] * (255 - alpha) + sval[sr] * alpha) :
+                        FFMIN(FAST_DIV255(d[dr] * (255 - alpha)) + sval[sr], 255);
+                d[dg] = is_straight ? FAST_DIV255(d[dg] * (255 - alpha) + sval[sg] * alpha) :
+                        FFMIN(FAST_DIV255(d[dg] * (255 - alpha)) + sval[sg], 255);
+                d[db] = is_straight ? FAST_DIV255(d[db] * (255 - alpha) + sval[sb] * alpha) :
+                        FFMIN(FAST_DIV255(d[db] * (255 - alpha)) + sval[sb], 255);
+            }
+
+            if (s->main_has_alpha) {
+                switch (alpha) {
+                case 0:
+                    break;
+                case 255:
+                    d[da] = sval[sa];
+                    break;
+                default:
+                    // apply alpha compositing: main_alpha += (1-main_alpha) * overlay_alpha
+                    d[da] += FAST_DIV255((255 - d[da]) * sval[sa]);
+                }
+            }
+            d += dstep;
+            S += 1;
+        }
+        dp += dst->linesize[0];
+        sp += src->linesize[0];
+    }
+}
+
+static av_always_inline void blend_plane_8_8bits(const AVFilterContext *ctx, const AVFrame *dst, const AVSubtitleArea *area,
+                                                 const uint32_t *yuv_pal, int src_w, int src_h, int dst_w, int dst_h, int plane, int hsub, int vsub,
+                                                 int x, int y, int dst_plane, int dst_offset, int dst_step)
+{
+    const int src_wp = AV_CEIL_RSHIFT(src_w, hsub);
+    const int src_hp = AV_CEIL_RSHIFT(src_h, vsub);
+    const int dst_wp = AV_CEIL_RSHIFT(dst_w, hsub);
+    const int dst_hp = AV_CEIL_RSHIFT(dst_h, vsub);
+    const int yp = y >> vsub;
+    const int xp = x >> hsub;
+    uint8_t *s, *sp, *d, *dp, *dap;
+    int imax, i, j, jmax;
+    int slice_start, slice_end;
+
+    i = FFMAX(-yp, 0);
+    imax = FFMIN3(-yp + dst_hp, FFMIN(src_hp, dst_hp), yp + src_hp);
+
+    slice_start = i;
+    slice_end = i + imax;
+
+    sp = area->buf[0]->data + (slice_start << vsub) * area->linesize[0];
+    dp = dst->data[dst_plane] + (yp + slice_start) * dst->linesize[dst_plane] + dst_offset;
+
+    dap = dst->data[3] + ((yp + slice_start) << vsub) * dst->linesize[3];
+
+    for (i = slice_start; i < slice_end; i++) {
+        j = FFMAX(-xp, 0);
+        d = dp + (xp + j) * dst_step;
+        s = sp + (j << hsub);
+        jmax = FFMIN(-xp + dst_wp, src_wp);
+
+        for (; j < jmax; j++) {
+            uint32_t val = yuv_pal[*s];
+            const uint8_t *sval = (uint8_t *)&val;
+            const int alpha = sval[3];
+            const int max = 255, mid = 128;
+            const int d_int = *d;
+            const int sval_int = sval[plane];
+
+            switch (alpha) {
+            case 0:
+                break;
+            case 255:
+                *d = sval[plane];
+                break;
+            default:
+                if (plane > 0)
+                    *d = av_clip(FAST_DIV255((d_int - mid) * (max - alpha) + (sval_int - mid) * alpha), -mid, mid) + mid;
+                else
+                    *d = FAST_DIV255(d_int * (max - alpha) + sval_int * alpha);
+                break;
+            }
+
+            d += dst_step;
+            s += 1 << hsub;
+        }
+        dp += dst->linesize[dst_plane];
+        sp += (1 << vsub) * area->linesize[0];
+        dap += (1 << vsub) * dst->linesize[3];
+    }
+}
+
+#define RGB2Y(r, g, b) (uint8_t)((( 66 * (r) + 129 * (g) +  25 * (b) + 128) >> 8) +  16)
+#define RGB2U(r, g, b) (uint8_t)(((-38 * (r) -  74 * (g) + 112 * (b) + 128) >> 8) + 128)
+#define RGB2V(r, g, b) (uint8_t)(((112 * (r) -  94 * (g) -  18 * (b) + 128) >> 8) + 128)
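+
+// Descriptive note: these are the common integer approximation of the
+// BT.601 limited-range ("studio swing") RGB-to-YCbCr transform, e.g.
+// Y = 16 + (65.738*R + 129.057*G + 25.064*B) / 256 with coefficients
+// rounded to integers; the +128 provides round-to-nearest for the >> 8.
+// Full-range or BT.709 content would require different coefficients.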
+/* Converts R8 G8 B8 color to YUV. */
+static av_always_inline void rgb_2_yuv(uint8_t r, uint8_t g, uint8_t b, uint8_t *y, uint8_t *u, uint8_t *v)
+{
+    *y = RGB2Y((int)r, (int)g, (int)b);
+    *u = RGB2U((int)r, (int)g, (int)b);
+    *v = RGB2V((int)r, (int)g, (int)b);
+}
+
+static av_always_inline void blend_yuv_8_8bits(AVFilterContext *ctx, AVFrame *dst, const AVSubtitleArea *area, int hsub, int vsub, int x, int y)
+{
+    OverlaySubsContext *s = ctx->priv;
+    const int src_w = area->w;
+    const int src_h = area->h;
+    const int dst_w = dst->width;
+    const int dst_h = dst->height;
+    const int sr = s->overlay_rgba_map[R];
+    const int sg = s->overlay_rgba_map[G];
+    const int sb = s->overlay_rgba_map[B];
+    const int sa = s->overlay_rgba_map[A];
+    uint32_t yuvpal[256];
+
+    for (int i = 0; i < 256; ++i) {
+        const uint8_t *rgba = (const uint8_t *)&area->pal[i];
+        uint8_t *yuva = (uint8_t *)&yuvpal[i];
+        rgb_2_yuv(rgba[sr], rgba[sg], rgba[sb], &yuva[Y], &yuva[U], &yuva[V]);
+        yuva[3] = rgba[sa];
+    }
+
+    blend_plane_8_8bits(ctx, dst, area, yuvpal, src_w, src_h, dst_w, dst_h, Y, 0,    0,    x, y, s->main_desc->comp[Y].plane, s->main_desc->comp[Y].offset, s->main_desc->comp[Y].step);
+    blend_plane_8_8bits(ctx, dst, area, yuvpal, src_w, src_h, dst_w, dst_h, U, hsub, vsub, x, y, s->main_desc->comp[U].plane, s->main_desc->comp[U].offset, s->main_desc->comp[U].step);
+    blend_plane_8_8bits(ctx, dst, area, yuvpal, src_w, src_h, dst_w, dst_h, V, hsub, vsub, x, y, s->main_desc->comp[V].plane, s->main_desc->comp[V].offset, s->main_desc->comp[V].step);
+}
+
+static int config_input_main(AVFilterLink *inlink)
+{
+    int ret;
+    AVFilterContext *ctx = inlink->dst;
+    OverlaySubsContext *s = inlink->dst->priv;
+    const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(inlink->format);
+
+    av_image_fill_max_pixsteps(s->main_pix_step, NULL, pix_desc);
+    ff_fill_rgba_map(s->overlay_rgba_map, AV_PIX_FMT_RGB32); // it's actually AV_PIX_FMT_PAL8
+
+    s->hsub = pix_desc->log2_chroma_w;
+    s->vsub = pix_desc->log2_chroma_h;
+
+    s->main_desc = pix_desc;
+
+    s->main_is_packed_rgb = ff_fill_rgba_map(s->main_rgba_map, inlink->format) >= 0;
+    s->main_has_alpha = !!(pix_desc->flags & AV_PIX_FMT_FLAG_ALPHA);
+
+    /* Finish the configuration by evaluating the expressions
+       now when both inputs are configured. */
+    s->var_values[VAR_MAIN_W   ] = s->var_values[VAR_MW] = ctx->inputs[MAIN   ]->w;
+    s->var_values[VAR_MAIN_H   ] = s->var_values[VAR_MH] = ctx->inputs[MAIN   ]->h;
+    s->var_values[VAR_OVERLAY_W] = s->var_values[VAR_OW] = ctx->inputs[OVERLAY]->w;
+    s->var_values[VAR_OVERLAY_H] = s->var_values[VAR_OH] = ctx->inputs[OVERLAY]->h;
+    s->var_values[VAR_HSUB] = 1 << pix_desc->log2_chroma_w;
+    s->var_values[VAR_VSUB] = 1 << pix_desc->log2_chroma_h;
+    s->var_values[VAR_X]    = NAN;
+    s->var_values[VAR_Y]    = NAN;
+    s->var_values[VAR_N]    = 0;
+    s->var_values[VAR_T]    = NAN;
+    s->var_values[VAR_POS]  = NAN;
+
+    if ((ret = set_expr(&s->x_pexpr, s->x_expr, "x", ctx)) < 0 ||
+        (ret = set_expr(&s->y_pexpr, s->y_expr, "y", ctx)) < 0)
+        return ret;
+
+    if (s->eval_mode == EVAL_MODE_INIT) {
+        eval_expr(ctx);
+        av_log(ctx, AV_LOG_VERBOSE, "x:%f xi:%d y:%f yi:%d\n",
+               s->var_values[VAR_X], s->x,
+               s->var_values[VAR_Y], s->y);
+    }
+
+    av_log(ctx, AV_LOG_VERBOSE,
+           "main w:%d h:%d fmt:%s overlay w:%d h:%d fmt:%s\n",
+           ctx->inputs[MAIN]->w, ctx->inputs[MAIN]->h,
+           av_get_pix_fmt_name(ctx->inputs[MAIN]->format),
+           ctx->inputs[OVERLAY]->w, ctx->inputs[OVERLAY]->h,
+           av_get_pix_fmt_name(ctx->inputs[OVERLAY]->format));
+    return 0;
+}
+
+static int do_blend(FFFrameSync *fs)
+{
+    AVFilterContext *ctx = fs->parent;
+    AVFrame *mainpic, *second;
+    OverlaySubsContext *s = ctx->priv;
+    AVFilterLink *inlink = ctx->inputs[0];
+    unsigned i;
+    int ret;
+
+    ret = ff_framesync_dualinput_get_writable(fs, &mainpic, &second);
+    if (ret < 0)
+        return ret;
+    if (!second)
+        return ff_filter_frame(ctx->outputs[0], mainpic);
+
+    if (s->eval_mode == EVAL_MODE_FRAME) {
+        int64_t pos = mainpic->pkt_pos;
+
+        s->var_values[VAR_N] = (double)inlink->frame_count_out;
+        s->var_values[VAR_T] = mainpic->pts == AV_NOPTS_VALUE ?
+            NAN : (double)mainpic->pts * av_q2d(inlink->time_base);
+        s->var_values[VAR_POS] = pos == -1 ? NAN : (double)pos;
+
+        s->var_values[VAR_OVERLAY_W] = s->var_values[VAR_OW] = second->width;
+        s->var_values[VAR_OVERLAY_H] = s->var_values[VAR_OH] = second->height;
+        s->var_values[VAR_MAIN_W   ] = s->var_values[VAR_MW] = mainpic->width;
+        s->var_values[VAR_MAIN_H   ] = s->var_values[VAR_MH] = mainpic->height;
+
+        eval_expr(ctx);
+        av_log(ctx, AV_LOG_DEBUG, "n:%f t:%f pos:%f x:%f xi:%d y:%f yi:%d\n",
+               s->var_values[VAR_N], s->var_values[VAR_T], s->var_values[VAR_POS],
+               s->var_values[VAR_X], s->x,
+               s->var_values[VAR_Y], s->y);
+    }
+
+    for (i = 0; i < second->num_subtitle_areas; i++) {
+        const AVSubtitleArea *sub_area = second->subtitle_areas[i];
+
+        if (sub_area->type != AV_SUBTITLE_FMT_BITMAP) {
+            av_log(NULL, AV_LOG_WARNING, "overlay_graphicsubs: non-bitmap subtitle\n");
+            return AVERROR_INVALIDDATA;
+        }
+
+        switch (inlink->format) {
+        case AV_PIX_FMT_YUV420P:
+            blend_yuv_8_8bits(ctx, mainpic, sub_area, 1, 1, sub_area->x + s->x, sub_area->y + s->y);
+            break;
+        case AV_PIX_FMT_YUV422P:
+            blend_yuv_8_8bits(ctx, mainpic, sub_area, 1, 0, sub_area->x + s->x, sub_area->y + s->y);
+            break;
+        case AV_PIX_FMT_YUV444P:
+            blend_yuv_8_8bits(ctx, mainpic, sub_area, 0, 0, sub_area->x + s->x, sub_area->y + s->y);
+            break;
+        case AV_PIX_FMT_RGB24:
+        case AV_PIX_FMT_BGR24:
+        case AV_PIX_FMT_ARGB:
+        case AV_PIX_FMT_RGBA:
+        case AV_PIX_FMT_BGRA:
+        case AV_PIX_FMT_ABGR:
+            blend_packed_rgb(ctx, mainpic, sub_area, sub_area->x + s->x, sub_area->y + s->y, 1);
+            break;
+        default:
+            av_log(NULL, AV_LOG_ERROR, "Unsupported input pix fmt: %d\n", inlink->format);
+            return AVERROR(EINVAL);
+        }
+    }
+
+    return ff_filter_frame(ctx->outputs[0], mainpic);
+}
+
+static av_cold int overlay_graphicsubs_init(AVFilterContext *ctx)
+{
+    OverlaySubsContext *s = ctx->priv;
+
+    s->fs.on_event = do_blend;
+    return 0;
+}
+
+static int overlay_graphicsubs_activate(AVFilterContext *ctx)
+{
+    OverlaySubsContext *s = ctx->priv;
+    return ff_framesync_activate(&s->fs);
+}
+
+static int graphicsub2video_query_formats(AVFilterContext *ctx)
+{
+    AVFilterFormats *formats;
+    AVFilterLink *inlink = ctx->inputs[0];
+    AVFilterLink *outlink = ctx->outputs[0];
+    static const enum AVSubtitleType subtitle_fmts[] = { AV_SUBTITLE_FMT_BITMAP, AV_SUBTITLE_FMT_NONE };
+    static const enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32, AV_PIX_FMT_NONE };
+    int ret;
+
+    /* set input subtitle formats */
+    formats = ff_make_format_list(subtitle_fmts);
+    if ((ret = ff_formats_ref(formats, &inlink->outcfg.formats)) < 0)
+        return ret;
+
+    /* set output video formats */
+    formats = ff_make_format_list(pix_fmts);
+    if ((ret = ff_formats_ref(formats, &outlink->incfg.formats)) < 0)
+        return ret;
+
+    return 0;
+}
+
+static int graphicsub2video_config_input(AVFilterLink *inlink)
+{
+    AVFilterContext *ctx = inlink->dst;
+    OverlaySubsContext *s = ctx->priv;
+
+    if (s->w <= 0 || s->h <= 0) {
+        s->w = inlink->w;
+        s->h = inlink->h;
+    }
+    return 0;
+}
+
+static int graphicsub2video_config_output(AVFilterLink *outlink)
+{
+    const AVFilterContext *ctx = outlink->src;
+    OverlaySubsContext *s = ctx->priv;
+    const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(outlink->format);
+
+    outlink->w = s->w;
+    outlink->h = s->h;
+
+    if (outlink->w == 0 && outlink->h == 0) {
+        outlink->w = 1;
+        outlink->h = 1;
+    }
+    outlink->sample_aspect_ratio = (AVRational){1,1};
+    outlink->time_base = ctx->inputs[0]->time_base;
+    outlink->frame_rate = ctx->inputs[0]->frame_rate;
+
+    av_image_fill_max_pixsteps(s->main_pix_step, NULL, pix_desc);
+    ff_fill_rgba_map(s->overlay_rgba_map, AV_PIX_FMT_RGB32);
+
+    s->hsub = pix_desc->log2_chroma_w;
+    s->vsub = pix_desc->log2_chroma_h;
+
+    s->main_desc = pix_desc;
+
+    s->main_is_packed_rgb = ff_fill_rgba_map(s->main_rgba_map, outlink->format) >= 0;
+    s->main_has_alpha = !!(pix_desc->flags & AV_PIX_FMT_FLAG_ALPHA);
+
+    return 0;
+}
+
+static int graphicsub2video_filter_frame(AVFilterLink *inlink, AVFrame *frame)
+{
+    AVFilterLink *outlink = inlink->dst->outputs[0];
+    const AVFilterContext *ctx = outlink->src;
+    OverlaySubsContext *s = ctx->priv;
+    AVFrame *out;
+    const unsigned num_rects = frame->num_subtitle_areas;
+    unsigned int i;
+    int ret;
+
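+    // Frame caching: graphic subtitles typically stay unchanged across many
+    // video frames, and the same subtitle frame may arrive repeatedly
+    // (repeat_sub). When the start timestamp matches the cached frame, the
+    // blend below can be skipped and a clone of the previously rendered
+    // output is returned instead.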
+    if (s->use_caching && s->cache_frame && frame->repeat_sub
+        && s->cache_frame->subtitle_timing.start_pts == frame->subtitle_timing.start_pts) {
+        out = av_frame_clone(s->cache_frame);
+        if (!out)
+            return AVERROR(ENOMEM);
+
+        ret = av_frame_copy_props(out, frame);
+        if (ret < 0)
+            return ret;
+
+        av_log(inlink->dst, AV_LOG_DEBUG, "graphicsub2video CACHED - size %dx%d pts: %"PRId64" areas: %d\n", frame->width, frame->height, frame->subtitle_timing.start_pts, frame->num_subtitle_areas);
+        av_frame_free(&frame);
+        return ff_filter_frame(outlink, out);
+    }
+
+    out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+    if (!out) {
+        av_frame_free(&frame);
+        return AVERROR(ENOMEM);
+    }
+
+    memset(out->data[0], 0, (size_t)out->linesize[0] * out->height);
+
+    out->pts = out->pkt_dts = out->best_effort_timestamp = frame->pts;
+    out->coded_picture_number = out->display_picture_number = s->pic_counter++;
+
+    for (i = 0; i < num_rects; i++) {
+        const AVSubtitleArea *sub_rect = frame->subtitle_areas[i];
+
+        if (sub_rect->type != AV_SUBTITLE_FMT_BITMAP) {
+            av_log(NULL, AV_LOG_WARNING, "graphicsub2video: non-bitmap subtitle\n");
+            av_frame_free(&frame);
+            return AVERROR_INVALIDDATA;
+        }
+
+        blend_packed_rgb(inlink->dst, out, sub_rect, sub_rect->x, sub_rect->y, 1);
+    }
+
+    av_log(inlink->dst, AV_LOG_DEBUG, "graphicsub2video output - size %dx%d pts: %"PRId64" areas: %d\n", out->width, out->height, out->pts, frame->num_subtitle_areas);
+
+    if (s->use_caching) {
+        av_frame_free(&s->cache_frame);
+        s->cache_frame = av_frame_clone(out);
+    }
+
+    av_frame_free(&frame);
+    return ff_filter_frame(outlink, out);
+}
+
+#define OFFSET(x) offsetof(OverlaySubsContext, x)
+#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM)
+
+static const AVOption overlaygraphicsubs_options[] = {
+    { "x", "set the x expression", OFFSET(x_expr), AV_OPT_TYPE_STRING, {.str = "0"}, 0, 0, .flags = FLAGS },
+    { "y", "set the y expression", OFFSET(y_expr), AV_OPT_TYPE_STRING, {.str = "0"}, 0, 0, .flags = FLAGS },
+    { "eof_action", "Action to take when encountering EOF from secondary input",
+        OFFSET(fs.opt_eof_action), AV_OPT_TYPE_INT, { .i64 = EOF_ACTION_REPEAT },
+        EOF_ACTION_REPEAT, EOF_ACTION_PASS, .flags = FLAGS, "eof_action" },
+        { "repeat", "Repeat the previous frame.",   0, AV_OPT_TYPE_CONST, { .i64 = EOF_ACTION_REPEAT }, .flags = FLAGS, "eof_action" },
+        { "endall", "End both streams.",            0, AV_OPT_TYPE_CONST, { .i64 = EOF_ACTION_ENDALL }, .flags = FLAGS, "eof_action" },
+        { "pass",   "Pass through the main input.", 0, AV_OPT_TYPE_CONST, { .i64 = EOF_ACTION_PASS },   .flags = FLAGS, "eof_action" },
+    { "eval", "specify when to evaluate expressions", OFFSET(eval_mode), AV_OPT_TYPE_INT, {.i64 = EVAL_MODE_FRAME}, 0, EVAL_MODE_NB-1, FLAGS, "eval" },
+        { "init",  "eval expressions once during initialization", 0, AV_OPT_TYPE_CONST, {.i64=EVAL_MODE_INIT},  .flags = FLAGS, .unit = "eval" },
+        { "frame", "eval expressions per-frame",                  0, AV_OPT_TYPE_CONST, {.i64=EVAL_MODE_FRAME}, .flags = FLAGS, .unit = "eval" },
per-frame", 0, AV_OPT_TYPE_CONST, {.i64=EVAL_MODE_FRAME}, .flags = FLAGS, .unit = "eval" }, + { "shortest", "force termination when the shortest input terminates", OFFSET(fs.opt_shortest), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, .flags = FLAGS }, + { "repeatlast", "repeat overlay of the last overlay frame", OFFSET(fs.opt_repeatlast), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1, .flags = FLAGS }, + { NULL } +}; + +static const AVOption graphicsub2video_options[] = { + { "size", "set output frame size", OFFSET(w), AV_OPT_TYPE_IMAGE_SIZE, {.str = NULL}, 0, 0, .flags = FLAGS }, + { "s", "set output frame size", OFFSET(w), AV_OPT_TYPE_IMAGE_SIZE, {.str = NULL}, 0, 0, .flags = FLAGS }, + { "use_caching", "Cache output frames", OFFSET(use_caching), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, .flags = FLAGS }, + { NULL } +}; + +FRAMESYNC_DEFINE_CLASS(overlaygraphicsubs, OverlaySubsContext, fs); + +static const AVFilterPad overlaygraphicsubs_inputs[] = { + { + .name = "main", + .type = AVMEDIA_TYPE_VIDEO, + .config_props = config_input_main, + .flags = AVFILTERPAD_FLAG_NEEDS_WRITABLE, + }, + { + .name = "overlay", + .type = AVMEDIA_TYPE_SUBTITLE, + }, +}; + +static const AVFilterPad overlaygraphicsubs_outputs[] = { + { + .name = "default", + .type = AVMEDIA_TYPE_VIDEO, + .config_props = config_output, + }, +}; + +const AVFilter ff_vf_overlaygraphicsubs = { + .name = "overlaygraphicsubs", + .description = NULL_IF_CONFIG_SMALL("Overlay graphical subtitles on top of the input."), + .preinit = overlaygraphicsubs_framesync_preinit, + .init = overlay_graphicsubs_init, + .uninit = overlay_graphicsubs_uninit, + .priv_size = sizeof(OverlaySubsContext), + .priv_class = &overlaygraphicsubs_class, + .activate = overlay_graphicsubs_activate, + FILTER_INPUTS(overlaygraphicsubs_inputs), + FILTER_OUTPUTS(overlaygraphicsubs_outputs), + FILTER_QUERY_FUNC(overlay_graphicsubs_query_formats), +}; + +AVFILTER_DEFINE_CLASS(graphicsub2video); + +static const AVFilterPad graphicsub2video_inputs[] = { + { + .name = "default", + .type = AVMEDIA_TYPE_SUBTITLE, + .filter_frame = graphicsub2video_filter_frame, + .config_props = graphicsub2video_config_input, + }, +}; + +static const AVFilterPad graphicsub2video_outputs[] = { + { + .name = "default", + .type = AVMEDIA_TYPE_VIDEO, + .config_props = graphicsub2video_config_output, + }, +}; + +const AVFilter ff_svf_graphicsub2video = { + .name = "graphicsub2video", + .description = NULL_IF_CONFIG_SMALL("Convert graphical subtitles to video"), + .priv_size = sizeof(OverlaySubsContext), + .priv_class = &graphicsub2video_class, + FILTER_INPUTS(graphicsub2video_inputs), + FILTER_OUTPUTS(graphicsub2video_outputs), + FILTER_QUERY_FUNC(graphicsub2video_query_formats), +};