From patchwork Sun Jan 26 13:46:56 2020
X-Patchwork-Submitter: Paul B Mahol
X-Patchwork-Id: 17558
From: Paul B Mahol
To: ffmpeg-devel@ffmpeg.org
Date: Sun, 26 Jan 2020 14:46:56 +0100
Message-Id: <20200126134656.24112-1-onemda@gmail.com>
Subject: [FFmpeg-devel] [PATCH] avfilter: add xfade filter

Signed-off-by: Paul B Mahol
---
 doc/filters.texi         |  31 ++
 libavfilter/Makefile     |   1 +
 libavfilter/allfilters.c |   1 +
 libavfilter/vf_xfade.c   | 613 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 646 insertions(+)
 create mode 100644 libavfilter/vf_xfade.c

diff --git a/doc/filters.texi b/doc/filters.texi
index 3f40af8439..12906c15be 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -20033,6 +20033,37 @@ If number of inputs is even number, than result will be mean value between two m
 Set which planes to filter. Default value is @code{15}, by which all planes are processed.
 @end table
 
+@section xfade
+
+Apply cross fade from one input video stream to another input video stream.
+The cross fade is applied for the specified duration.
+
+The filter accepts the following options:
+
+@table @option
+@item transition
+Set one of the available transition effects:
+
+@table @samp
+@item fade
+@item wipeleft
+@item wiperight
+@item wipeup
+@item wipedown
+@item slideleft
+@item slideright
+@item slideup
+@item slidedown
+@end table
+The default transition effect is @samp{fade}.
+
+@item duration
+Set the cross fade duration in seconds.
+
+@item offset
+Set the cross fade start time relative to the first input stream, in seconds.
+@end table
+
 @section xstack
 
 Stack video inputs into custom layout.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 58b3077dec..72804323d5 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -441,6 +441,7 @@ OBJS-$(CONFIG_W3FDIF_FILTER)                 += vf_w3fdif.o
 OBJS-$(CONFIG_WAVEFORM_FILTER)               += vf_waveform.o
 OBJS-$(CONFIG_WEAVE_FILTER)                  += vf_weave.o
 OBJS-$(CONFIG_XBR_FILTER)                    += vf_xbr.o
+OBJS-$(CONFIG_XFADE_FILTER)                  += vf_xfade.o
 OBJS-$(CONFIG_XMEDIAN_FILTER)                += vf_xmedian.o framesync.o
 OBJS-$(CONFIG_XSTACK_FILTER)                 += vf_stack.o framesync.o
 OBJS-$(CONFIG_YADIF_FILTER)                  += vf_yadif.o yadif_common.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 6270c18ae2..f7ab2def92 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -420,6 +420,7 @@ extern AVFilter ff_vf_w3fdif;
 extern AVFilter ff_vf_waveform;
 extern AVFilter ff_vf_weave;
 extern AVFilter ff_vf_xbr;
+extern AVFilter ff_vf_xfade;
 extern AVFilter ff_vf_xmedian;
 extern AVFilter ff_vf_xstack;
 extern AVFilter ff_vf_yadif;
diff --git a/libavfilter/vf_xfade.c b/libavfilter/vf_xfade.c
new file mode 100644
index 0000000000..957b33593b
--- /dev/null
+++ b/libavfilter/vf_xfade.c
@@ -0,0 +1,613 @@
+/*
+ * Copyright (c) 2020 Paul B Mahol
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/imgutils.h"
+#include "libavutil/eval.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixfmt.h"
+#include "avfilter.h"
+#include "formats.h"
+#include "framesync.h"
+#include "internal.h"
+#include "filters.h"
+#include "video.h"
+
+enum XFadeTransitions {
+    CUSTOM = -1,
+    FADE,
+    WIPELEFT,
+    WIPERIGHT,
+    WIPEUP,
+    WIPEDOWN,
+    SLIDELEFT,
+    SLIDERIGHT,
+    SLIDEUP,
+    SLIDEDOWN,
+    NB_TRANSITIONS,
+};
+
+typedef struct XFadeContext {
+    const AVClass *class;
+
+    int     transition;
+    int64_t duration;
+    int64_t offset;
+
+    int nb_planes;
+    int depth;
+
+    int64_t duration_pts;
+    int64_t offset_pts;
+    int64_t first_pts;
+    int64_t last_pts;
+    int64_t pts;
+    int xfade_is_over;
+    int need_second;
+    int eof[2];
+    AVFrame *xf[2];
+
+    void (*transitionf)(AVFilterContext *ctx, const AVFrame *a, const AVFrame *b, AVFrame *out, float progress,
+                        int slice_start, int slice_end);
+} XFadeContext;
+
+static const char *const var_names[] = { "X", "Y", "W", "H", "A", "B", "P", NULL };
+enum { VAR_X, VAR_Y, VAR_W, VAR_H, VAR_A, VAR_B, VAR_PROGRESS, VAR_VARS_NB };
+
+typedef struct ThreadData {
+    const AVFrame *xf[2];
+    AVFrame *out;
+    float progress;
+} ThreadData;
+
+static int query_formats(AVFilterContext *ctx)
+{
+    static const enum AVPixelFormat pix_fmts[] = {
+        AV_PIX_FMT_YUVA444P,
+        AV_PIX_FMT_YUVJ444P,
+        AV_PIX_FMT_YUV444P,
+        AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP, AV_PIX_FMT_GRAY8,
+        AV_PIX_FMT_YUVA444P9, AV_PIX_FMT_GBRP9,
+        AV_PIX_FMT_YUV444P10,
+        AV_PIX_FMT_YUVA444P10,
+        AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRAP10, AV_PIX_FMT_GRAY10,
+        AV_PIX_FMT_YUV444P12,
+        AV_PIX_FMT_YUVA444P12,
+        AV_PIX_FMT_GBRP12, AV_PIX_FMT_GBRAP12, AV_PIX_FMT_GRAY12,
+        AV_PIX_FMT_YUV444P16,
+        AV_PIX_FMT_YUVA444P16,
+        AV_PIX_FMT_GBRP16, AV_PIX_FMT_GBRAP16, AV_PIX_FMT_GRAY16,
+        AV_PIX_FMT_NONE
+    };
+
+    AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
+    if (!fmts_list)
+        return AVERROR(ENOMEM);
+    return ff_set_common_formats(ctx, fmts_list);
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+}
+
+#define OFFSET(x) offsetof(XFadeContext, x)
+#define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)
+
+static const AVOption xfade_options[] = {
+    { "transition", "set cross fade transition", OFFSET(transition), AV_OPT_TYPE_INT, {.i64=FADE}, 0, NB_TRANSITIONS-1, FLAGS, "transition" },
+    {   "fade",       "fade transition",        0, AV_OPT_TYPE_CONST, {.i64=FADE},       0, 0, FLAGS, "transition" },
+    {   "wipeleft",   "wipe left transition",   0, AV_OPT_TYPE_CONST, {.i64=WIPELEFT},   0, 0, FLAGS, "transition" },
+    {   "wiperight",  "wipe right transition",  0, AV_OPT_TYPE_CONST, {.i64=WIPERIGHT},  0, 0, FLAGS, "transition" },
+    {   "wipeup",     "wipe up transition",     0, AV_OPT_TYPE_CONST, {.i64=WIPEUP},     0, 0, FLAGS, "transition" },
+    {   "wipedown",   "wipe down transition",   0, AV_OPT_TYPE_CONST, {.i64=WIPEDOWN},   0, 0, FLAGS, "transition" },
+    {   "slideleft",  "slide left transition",  0, AV_OPT_TYPE_CONST, {.i64=SLIDELEFT},  0, 0, FLAGS, "transition" },
+    {   "slideright", "slide right transition", 0, AV_OPT_TYPE_CONST, {.i64=SLIDERIGHT}, 0, 0, FLAGS, "transition" },
+    {   "slideup",    "slide up transition",    0, AV_OPT_TYPE_CONST, {.i64=SLIDEUP},    0, 0, FLAGS, "transition" },
+    {   "slidedown",  "slide down transition",  0,
+        AV_OPT_TYPE_CONST, {.i64=SLIDEDOWN},  0, 0, FLAGS, "transition" },
+    { "duration", "set cross fade duration", OFFSET(duration), AV_OPT_TYPE_DURATION, {.i64=1000000}, 0, 60000000, FLAGS },
+    { "offset",   "set cross fade start relative to first input stream", OFFSET(offset), AV_OPT_TYPE_DURATION, {.i64=0}, 0, 60000000, FLAGS },
+    { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(xfade);
+
+#define FADE_TRANSITION(name, type, div) \
+static void fade##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = progress * xf0[x] + (1.f - progress) * xf1[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+FADE_TRANSITION(8, uint8_t, 1)
+FADE_TRANSITION(16, uint16_t, 2)
+
+#define WIPELEFT_TRANSITION(name, type, div) \
+static void wipeleft##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    const int z = out->width * progress; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = x > z ? xf1[x] : xf0[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+WIPELEFT_TRANSITION(8, uint8_t, 1)
+WIPELEFT_TRANSITION(16, uint16_t, 2)
+
+#define WIPERIGHT_TRANSITION(name, type, div) \
+static void wiperight##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    const int z = out->width * (1.f - progress); \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = x > z ? xf0[x] : xf1[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+WIPERIGHT_TRANSITION(8, uint8_t, 1)
+WIPERIGHT_TRANSITION(16, uint16_t, 2)
+
+#define WIPEUP_TRANSITION(name, type, div) \
+static void wipeup##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    const int z = out->height * progress; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = slice_start + y > z ? xf1[x] : xf0[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+WIPEUP_TRANSITION(8, uint8_t, 1)
+WIPEUP_TRANSITION(16, uint16_t, 2)
+
+#define WIPEDOWN_TRANSITION(name, type, div) \
+static void wipedown##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    const int z = out->height * (1.f - progress); \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = slice_start + y > z ? xf0[x] : xf1[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+WIPEDOWN_TRANSITION(8, uint8_t, 1)
+WIPEDOWN_TRANSITION(16, uint16_t, 2)
+
+#define SLIDELEFT_TRANSITION(name, type, div) \
+static void slideleft##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    const int width = out->width; \
+    const int z = -progress * width; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < width; x++) { \
+                const int zx = z + x; \
+                const int zz = zx % width + width * (zx < 0); \
+                dst[x] = (zx > 0) * (zx < width) ? xf1[zz] : xf0[zz]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+SLIDELEFT_TRANSITION(8, uint8_t, 1)
+SLIDELEFT_TRANSITION(16, uint16_t, 2)
+
+#define SLIDERIGHT_TRANSITION(name, type, div) \
+static void slideright##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = slice_end - slice_start; \
+    const int width = out->width; \
+    const int z = progress * width; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        const type *xf0 = (const type *)(a->data[p] + slice_start * a->linesize[p]); \
+        const type *xf1 = (const type *)(b->data[p] + slice_start * b->linesize[p]); \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = 0; y < height; y++) { \
+            for (int x = 0; x < out->width; x++) { \
+                const int zx = z + x; \
+                const int zz = zx % width + width * (zx < 0); \
+                dst[x] = (zx > 0) * (zx < width) ? xf1[zz] : xf0[zz]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+            xf0 += a->linesize[p] / div; \
+            xf1 += b->linesize[p] / div; \
+        } \
+    } \
+}
+
+SLIDERIGHT_TRANSITION(8, uint8_t, 1)
+SLIDERIGHT_TRANSITION(16, uint16_t, 2)
+
+#define SLIDEUP_TRANSITION(name, type, div) \
+static void slideup##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = out->height; \
+    const int z = -progress * height; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = slice_start; y < slice_end; y++) { \
+            const int zy = z + y; \
+            const int zz = zy % height + height * (zy < 0); \
+            const type *xf0 = (const type *)(a->data[p] + zz * a->linesize[p]); \
+            const type *xf1 = (const type *)(b->data[p] + zz * b->linesize[p]); \
+            \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = (zy > 0) * (zy < height) ? xf1[x] : xf0[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+        } \
+    } \
+}
+
+SLIDEUP_TRANSITION(8, uint8_t, 1)
+SLIDEUP_TRANSITION(16, uint16_t, 2)
+
+#define SLIDEDOWN_TRANSITION(name, type, div) \
+static void slidedown##name##_transition(AVFilterContext *ctx, \
+                const AVFrame *a, const AVFrame *b, AVFrame *out, \
+                float progress, \
+                int slice_start, int slice_end) \
+{ \
+    XFadeContext *s = ctx->priv; \
+    const int height = out->height; \
+    const int z = progress * height; \
+    \
+    for (int p = 0; p < s->nb_planes; p++) { \
+        type *dst = (type *)(out->data[p] + slice_start * out->linesize[p]); \
+        \
+        for (int y = slice_start; y < slice_end; y++) { \
+            const int zy = z + y; \
+            const int zz = zy % height + height * (zy < 0); \
+            const type *xf0 = (const type *)(a->data[p] + zz * a->linesize[p]); \
+            const type *xf1 = (const type *)(b->data[p] + zz * b->linesize[p]); \
+            \
+            for (int x = 0; x < out->width; x++) { \
+                dst[x] = (zy > 0) * (zy < height) ? xf1[x] : xf0[x]; \
+            } \
+            \
+            dst += out->linesize[p] / div; \
+        } \
+    } \
+}
+
+SLIDEDOWN_TRANSITION(8, uint8_t, 1)
+SLIDEDOWN_TRANSITION(16, uint16_t, 2)
+
+static int config_output(AVFilterLink *outlink)
+{
+    AVFilterContext *ctx = outlink->src;
+    AVFilterLink *toplink = ctx->inputs[0];
+    AVFilterLink *bottomlink = ctx->inputs[1];
+    XFadeContext *s = ctx->priv;
+    const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(toplink->format);
+
+    if (toplink->format != bottomlink->format) {
+        av_log(ctx, AV_LOG_ERROR, "inputs must be of same pixel format\n");
+        return AVERROR(EINVAL);
+    }
+    if (toplink->w != bottomlink->w || toplink->h != bottomlink->h) {
+        av_log(ctx, AV_LOG_ERROR, "First input link %s parameters "
+               "(size %dx%d) do not match the corresponding "
+               "second input link %s parameters (size %dx%d)\n",
+               ctx->input_pads[0].name, toplink->w, toplink->h,
+               ctx->input_pads[1].name, bottomlink->w, bottomlink->h);
+        return AVERROR(EINVAL);
+    }
+
+    outlink->w = toplink->w;
+    outlink->h = toplink->h;
+    outlink->time_base = toplink->time_base;
+    outlink->sample_aspect_ratio = toplink->sample_aspect_ratio;
+    outlink->frame_rate = toplink->frame_rate;
+
+    s->depth = pix_desc->comp[0].depth;
+    s->nb_planes = av_pix_fmt_count_planes(toplink->format);
+
+    s->first_pts = s->last_pts = s->pts = AV_NOPTS_VALUE;
+
+    if (s->duration)
+        s->duration_pts = av_rescale_q(s->duration, AV_TIME_BASE_Q, outlink->time_base);
+    if (s->offset)
+        s->offset_pts = av_rescale_q(s->offset, AV_TIME_BASE_Q, outlink->time_base);
+
+    switch (s->transition) {
+    case FADE:       s->transitionf = s->depth <= 8 ? fade8_transition       : fade16_transition;       break;
+    case WIPELEFT:   s->transitionf = s->depth <= 8 ? wipeleft8_transition   : wipeleft16_transition;   break;
+    case WIPERIGHT:  s->transitionf = s->depth <= 8 ? wiperight8_transition  : wiperight16_transition;  break;
+    case WIPEUP:     s->transitionf = s->depth <= 8 ? wipeup8_transition     : wipeup16_transition;     break;
+    case WIPEDOWN:   s->transitionf = s->depth <= 8 ? wipedown8_transition   : wipedown16_transition;   break;
+    case SLIDELEFT:  s->transitionf = s->depth <= 8 ? slideleft8_transition  : slideleft16_transition;  break;
+    case SLIDERIGHT: s->transitionf = s->depth <= 8 ? slideright8_transition : slideright16_transition; break;
+    case SLIDEUP:    s->transitionf = s->depth <= 8 ? slideup8_transition    : slideup16_transition;    break;
+    case SLIDEDOWN:  s->transitionf = s->depth <= 8 ? slidedown8_transition  : slidedown16_transition;  break;
+    }
+
+    return 0;
+}
+
+static int xfade_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+    XFadeContext *s = ctx->priv;
+    AVFilterLink *outlink = ctx->outputs[0];
+    ThreadData *td = arg;
+    int slice_start = (outlink->h *  jobnr   ) / nb_jobs;
+    int slice_end   = (outlink->h * (jobnr+1)) / nb_jobs;
+
+    s->transitionf(ctx, td->xf[0], td->xf[1], td->out, td->progress, slice_start, slice_end);
+
+    return 0;
+}
+
+static int xfade_frame(AVFilterContext *ctx, AVFrame *a, AVFrame *b)
+{
+    XFadeContext *s = ctx->priv;
+    AVFilterLink *outlink = ctx->outputs[0];
+    float progress = av_clipf(1.f - ((float)(s->pts - s->first_pts - s->offset_pts) / s->duration_pts), 0.f, 1.f);
+    ThreadData td;
+    AVFrame *out;
+
+    out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+    if (!out)
+        return AVERROR(ENOMEM);
+
+    td.xf[0] = a, td.xf[1] = b, td.out = out, td.progress = progress;
+    ctx->internal->execute(ctx, xfade_slice, &td, NULL, FFMIN(outlink->h, ff_filter_get_nb_threads(ctx)));
+
+    out->pts = s->pts;
+
+    return ff_filter_frame(outlink, out);
+}
+
+static int xfade_activate(AVFilterContext *ctx)
+{
+    XFadeContext *s = ctx->priv;
+    AVFilterLink *outlink = ctx->outputs[0];
+    AVFrame *in = NULL;
+    int ret = 0, status;
+    int64_t pts;
+
+    FF_FILTER_FORWARD_STATUS_BACK_ALL(outlink, ctx);
+
+    if (s->xfade_is_over) {
+        ret = ff_inlink_consume_frame(ctx->inputs[1], &in);
+        if (ret < 0) {
+            return ret;
+        } else if (ff_inlink_acknowledge_status(ctx->inputs[1], &status, &pts)) {
+            ff_outlink_set_status(outlink, status, s->pts);
+            return 0;
+        } else if (!ret) {
+            if (ff_outlink_frame_wanted(outlink)) {
+                ff_inlink_request_frame(ctx->inputs[1]);
+                return 0;
+            }
+        } else {
+            in->pts = (in->pts - s->last_pts) + s->pts;
+            return ff_filter_frame(outlink, in);
+        }
+    }
+
+    if (ff_inlink_queued_frames(ctx->inputs[0]) > 0) {
+        s->xf[0] = ff_inlink_peek_frame(ctx->inputs[0], 0);
+        if (s->xf[0]) {
+            if (s->first_pts == AV_NOPTS_VALUE) {
+                s->first_pts = s->xf[0]->pts;
+            }
+            s->pts = s->xf[0]->pts;
+            if (s->first_pts + s->offset_pts > s->xf[0]->pts) {
+                s->xf[0] = NULL;
+                s->need_second = 0;
+                ff_inlink_consume_frame(ctx->inputs[0], &in);
+                return ff_filter_frame(outlink, in);
+            }
+
+            s->need_second = 1;
+        }
+    }
+
+    if (s->xf[0] && ff_inlink_queued_frames(ctx->inputs[1]) > 0) {
+        ff_inlink_consume_frame(ctx->inputs[0], &s->xf[0]);
+        ff_inlink_consume_frame(ctx->inputs[1], &s->xf[1]);
+
+        s->last_pts = s->xf[1]->pts;
+        s->pts = s->xf[0]->pts;
+        if (s->xf[0]->pts - (s->first_pts + s->offset_pts) > s->duration_pts)
+            s->xfade_is_over = 1;
+        ret = xfade_frame(ctx, s->xf[0], s->xf[1]);
+        av_frame_free(&s->xf[0]);
+        av_frame_free(&s->xf[1]);
+        return ret;
+    }
+
+    if (ff_inlink_queued_frames(ctx->inputs[0]) > 0 &&
+        ff_inlink_queued_frames(ctx->inputs[1]) > 0) {
+        ff_filter_set_ready(ctx, 100);
+        return 0;
+    }
+
+    if (ff_outlink_frame_wanted(outlink)) {
+        if (!s->eof[0] && ff_outlink_get_status(ctx->inputs[0])) {
+            s->eof[0] = 1;
+            s->xfade_is_over = 1;
+        }
+        if (!s->eof[1] && ff_outlink_get_status(ctx->inputs[1])) {
+            s->eof[1] = 1;
+        }
+        if (!s->eof[0] && !s->xf[0])
+            ff_inlink_request_frame(ctx->inputs[0]);
+        if (!s->eof[1] && (s->need_second || s->eof[0]))
+            ff_inlink_request_frame(ctx->inputs[1]);
+        if (s->eof[0] && s->eof[1] && (
+            ff_inlink_queued_frames(ctx->inputs[0]) <= 0 ||
+            ff_inlink_queued_frames(ctx->inputs[1]) <= 0))
+            ff_outlink_set_status(outlink, AVERROR_EOF, AV_NOPTS_VALUE);
+        return 0;
+    }
+
+    return FFERROR_NOT_READY;
+}
+
+static const AVFilterPad xfade_inputs[] = {
+    {
+        .name = "main",
+        .type = AVMEDIA_TYPE_VIDEO,
+    },
+    {
+        .name = "xfade",
+        .type = AVMEDIA_TYPE_VIDEO,
+    },
+    { NULL }
+};
+
+static const AVFilterPad xfade_outputs[] = {
+    {
+        .name          = "default",
+        .type          = AVMEDIA_TYPE_VIDEO,
+        .config_props  = config_output,
+    },
+    { NULL }
+};
+
+AVFilter ff_vf_xfade = {
+    .name          = "xfade",
+    .description   = NULL_IF_CONFIG_SMALL("Cross fade one video with another video."),
+    .priv_size     = sizeof(XFadeContext),
+    .priv_class    = &xfade_class,
+    .query_formats = query_formats,
+    .activate      = xfade_activate,
+    .uninit        = uninit,
+    .inputs        = xfade_inputs,
+    .outputs       = xfade_outputs,
+    .flags         = AVFILTER_FLAG_SLICE_THREADS,
+};