From patchwork Thu Jan 23 18:42:33 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul B Mahol
X-Patchwork-Id: 17496
From: Paul B Mahol
To: ffmpeg-devel@ffmpeg.org
Date: Thu, 23 Jan 2020 19:42:33 +0100
Message-Id: <20200123184233.30590-1-onemda@gmail.com>
X-Mailer: git-send-email 2.17.1
Subject: [FFmpeg-devel] [PATCH] avfilter: add xfade opencl filter

Signed-off-by: Paul B Mahol
---
 doc/filters.texi              |  76 +++++++
 libavfilter/Makefile          |   1 +
 libavfilter/allfilters.c      |   1 +
 libavfilter/vf_xfade_opencl.c | 378 ++++++++++++++++++++++++++++++++++
 4 files changed, 456 insertions(+)
 create mode 100644 libavfilter/vf_xfade_opencl.c

diff --git a/doc/filters.texi b/doc/filters.texi
index a9ae75f0c0..e9275ca351 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -21341,6 +21341,82 @@ Apply a strong blur of both luma and chroma parameters:
 @end example
 @end itemize
 
+@section xfade_opencl
+
+Cross fade two videos with a custom transition effect using OpenCL.
+
+It accepts the following options:
+
+@table @option
+@item source
+OpenCL program source file.
+
+@item kernel
+Set the name of the kernel to use for the transition from the program source file.
+
+@item duration
+Set the duration of the video transition.
+
+@item offset
+Set the start time of the transition relative to the first video.
+@end table
+
+The program source file must contain a kernel function with the given name,
+which will be run once for each plane of the output. Each run on a plane
+gets enqueued as a separate 2D global NDRange with one work-item for each
+pixel to be generated. The global ID offset for each work-item is therefore
+the coordinates of a pixel in the destination image.
+
+The kernel function needs to take the following arguments:
+@itemize
+@item
+Destination image, @var{__write_only image2d_t}.
+
+This image will become the output; the kernel should write all of it.
+
+@item
+First source image, @var{__read_only image2d_t}.
+Second source image, @var{__read_only image2d_t}.
+
+These are the most recent images on each input. The kernel may read from
+them to generate the output, but they may not be written to.
+
+@item
+Transition progress, @var{float}. This value is always between 0 and 1 inclusive.
+@end itemize
+
+Example programs:
+
+@itemize
+@item
+Apply dots curtain transition effect:
+@verbatim
+__kernel void blend_images(__write_only image2d_t dst,
+                           __read_only  image2d_t src1,
+                           __read_only  image2d_t src2,
+                           float progress)
+{
+    const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+                               CLK_FILTER_LINEAR);
+    int2 p = (int2)(get_global_id(0), get_global_id(1));
+    float2 rp = (float2)(get_global_id(0), get_global_id(1));
+    float2 dim = (float2)(get_image_dim(src1).x, get_image_dim(src1).y);
+    rp = rp / dim;
+
+    float2 dots = (float2)(20.0, 20.0);
+    float2 center = (float2)(0,0);
+    float2 unused;
+
+    float4 val1 = read_imagef(src1, sampler, p);
+    float4 val2 = read_imagef(src2, sampler, p);
+    bool next = distance(fract(rp * dots, &unused), (float2)(0.5, 0.5)) < (progress / distance(rp, center));
+
+    write_imagef(dst, p, next ? val1 : val2);
+}
+@end verbatim
+
+@end itemize
+
 @c man end OPENCL VIDEO FILTERS
 
 @chapter VAAPI Video Filters

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 58b3077dec..7b8dfbd6da 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -441,6 +441,7 @@ OBJS-$(CONFIG_W3FDIF_FILTER)                 += vf_w3fdif.o
 OBJS-$(CONFIG_WAVEFORM_FILTER)               += vf_waveform.o
 OBJS-$(CONFIG_WEAVE_FILTER)                  += vf_weave.o
 OBJS-$(CONFIG_XBR_FILTER)                    += vf_xbr.o
+OBJS-$(CONFIG_XFADE_OPENCL_FILTER)           += vf_xfade_opencl.o opencl.o
 OBJS-$(CONFIG_XMEDIAN_FILTER)                += vf_xmedian.o framesync.o
 OBJS-$(CONFIG_XSTACK_FILTER)                 += vf_stack.o framesync.o
 OBJS-$(CONFIG_YADIF_FILTER)                  += vf_yadif.o yadif_common.o

diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 6270c18ae2..8a7eac3757 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -420,6 +420,7 @@ extern AVFilter ff_vf_w3fdif;
 extern AVFilter ff_vf_waveform;
 extern AVFilter ff_vf_weave;
 extern AVFilter ff_vf_xbr;
+extern AVFilter ff_vf_xfade_opencl;
 extern AVFilter ff_vf_xmedian;
 extern AVFilter ff_vf_xstack;
 extern AVFilter ff_vf_yadif;

diff --git a/libavfilter/vf_xfade_opencl.c b/libavfilter/vf_xfade_opencl.c
new file mode 100644
index 0000000000..4b43fe70fa
--- /dev/null
+++ b/libavfilter/vf_xfade_opencl.c
@@ -0,0 +1,378 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/log.h"
+#include "libavutil/mem.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+
+#include "avfilter.h"
+#include "filters.h"
+#include "framesync.h"
+#include "internal.h"
+#include "opencl.h"
+#include "opencl_source.h"
+#include "video.h"
+
+typedef struct XFadeOpenCLContext {
+    OpenCLFilterContext ocf;
+
+    const char *source_file;
+    const char *kernel_name;
+
+    int initialised;
+    cl_kernel kernel;
+    cl_command_queue command_queue;
+
+    int nb_planes;
+
+    int64_t duration;
+    int64_t offset;
+    int64_t duration_pts;
+    int64_t offset_pts;
+    int64_t first_pts;
+    int64_t pts;
+    int xfade_is_over;
+    int need_second;
+    int eof[2];
+    AVFrame *xf[2];
+} XFadeOpenCLContext;
+
+static int xfade_opencl_load(AVFilterContext *avctx,
+                             enum AVPixelFormat main_format,
+                             enum AVPixelFormat xfade_format)
+{
+    XFadeOpenCLContext *ctx = avctx->priv;
+    cl_int cle;
+    const AVPixFmtDescriptor *main_desc, *xfade_desc;
+    int err, i, main_planes, xfade_planes;
+
+    ctx->ocf.output_width  = avctx->inputs[0]->w;
+    ctx->ocf.output_height = avctx->inputs[0]->h;
+    ctx->ocf.output_format = avctx->inputs[0]->format;
+
+    main_desc  = av_pix_fmt_desc_get(main_format);
+    xfade_desc = av_pix_fmt_desc_get(xfade_format);
+
+    main_planes = xfade_planes = 0;
+    for (i = 0; i < main_desc->nb_components; i++)
+        main_planes = FFMAX(main_planes,
+                            main_desc->comp[i].plane + 1);
+    for (i = 0; i < xfade_desc->nb_components; i++)
+        xfade_planes = FFMAX(xfade_planes,
+                             xfade_desc->comp[i].plane + 1);
+
+    ctx->nb_planes = main_planes;
+
+    err = ff_opencl_filter_load_program_from_file(avctx, ctx->source_file);
+    if (err < 0)
+        return err;
+
+    ctx->command_queue = clCreateCommandQueue(ctx->ocf.hwctx->context,
+                                              ctx->ocf.hwctx->device_id,
+                                              0, &cle);
+    CL_FAIL_ON_ERROR(AVERROR(EIO), "Failed to create OpenCL "
+                     "command queue %d.\n", cle);
+
+    ctx->kernel = clCreateKernel(ctx->ocf.program, ctx->kernel_name, &cle);
+    CL_FAIL_ON_ERROR(AVERROR(EIO), "Failed to create kernel %d.\n", cle);
+
+    ctx->initialised = 1;
+
+    return 0;
+
+fail:
+    if (ctx->command_queue)
+        clReleaseCommandQueue(ctx->command_queue);
+    if (ctx->kernel)
+        clReleaseKernel(ctx->kernel);
+    return err;
+}
+
+static int xfade_frame(AVFilterContext *avctx, AVFrame *a, AVFrame *b)
+{
+    AVFilterLink *outlink = avctx->outputs[0];
+    XFadeOpenCLContext *ctx = avctx->priv;
+    AVFrame *output;
+    cl_int cle;
+    cl_float progress = 1.f - ((cl_float)(ctx->pts - ctx->first_pts - ctx->offset_pts) / ctx->duration_pts);
+    size_t global_work[2];
+    int kernel_arg = 0;
+    int err, plane;
+
+    if (!ctx->initialised) {
+        AVHWFramesContext *main_fc =
+            (AVHWFramesContext*)a->hw_frames_ctx->data;
+        AVHWFramesContext *xfade_fc =
+            (AVHWFramesContext*)b->hw_frames_ctx->data;
+
+        err = xfade_opencl_load(avctx, main_fc->sw_format,
+                                xfade_fc->sw_format);
+        if (err < 0)
+            return err;
+    }
+
+    output = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+    if (!output) {
+        err = AVERROR(ENOMEM);
+        goto fail;
+    }
+
+    for (plane = 0; plane < ctx->nb_planes; plane++) {
+        cl_mem mem;
+        kernel_arg = 0;
+
+        mem = (cl_mem)output->data[plane];
+        CL_SET_KERNEL_ARG(ctx->kernel, kernel_arg, cl_mem, &mem);
+        kernel_arg++;
+
+        mem = (cl_mem)ctx->xf[0]->data[plane];
+        CL_SET_KERNEL_ARG(ctx->kernel, kernel_arg, cl_mem, &mem);
+        kernel_arg++;
+
+        mem = (cl_mem)ctx->xf[1]->data[plane];
+        CL_SET_KERNEL_ARG(ctx->kernel, kernel_arg, cl_mem, &mem);
+        kernel_arg++;
+
+        CL_SET_KERNEL_ARG(ctx->kernel, kernel_arg, cl_float, &progress);
+        kernel_arg++;
+
+        err = ff_opencl_filter_work_size_from_image(avctx, global_work,
+                                                    output, plane, 0);
+        if (err < 0)
+            goto fail;
+
+        cle = clEnqueueNDRangeKernel(ctx->command_queue, ctx->kernel, 2, NULL,
+                                     global_work, NULL, 0, NULL, NULL);
+        CL_FAIL_ON_ERROR(AVERROR(EIO), "Failed to enqueue xfade kernel "
+                         "for plane %d: %d.\n", plane, cle);
+    }
+
+    cle = clFinish(ctx->command_queue);
+    CL_FAIL_ON_ERROR(AVERROR(EIO), "Failed to finish command queue: %d.\n", cle);
+
+    err = av_frame_copy_props(output, ctx->xf[0]);
+    if (err < 0)
+        goto fail;
+
+    output->pts = ctx->pts;
+
+    return ff_filter_frame(outlink, output);
+
+fail:
+    av_frame_free(&output);
+    return err;
+}
+
+static int xfade_opencl_config_output(AVFilterLink *outlink)
+{
+    AVFilterContext *avctx = outlink->src;
+    XFadeOpenCLContext *ctx = avctx->priv;
+    AVFilterLink *mainlink = avctx->inputs[0];
+    int err;
+
+    err = ff_opencl_filter_config_output(outlink);
+    if (err < 0)
+        return err;
+
+    ctx->first_pts = ctx->pts = AV_NOPTS_VALUE;
+
+    outlink->w = mainlink->w;
+    outlink->h = mainlink->h;
+    outlink->time_base = mainlink->time_base;
+    outlink->sample_aspect_ratio = mainlink->sample_aspect_ratio;
+    outlink->frame_rate = mainlink->frame_rate;
+
+    if (ctx->duration)
+        ctx->duration_pts = av_rescale_q(ctx->duration, AV_TIME_BASE_Q, outlink->time_base);
+    if (ctx->offset)
+        ctx->offset_pts = av_rescale_q(ctx->offset, AV_TIME_BASE_Q, outlink->time_base);
+
+    return 0;
+}
+
+static int xfade_opencl_activate(AVFilterContext *avctx)
+{
+    XFadeOpenCLContext *ctx = avctx->priv;
+    AVFilterLink *outlink = avctx->outputs[0];
+    AVFrame *in = NULL;
+    int ret = 0, status;
+    int64_t pts;
+
+    FF_FILTER_FORWARD_STATUS_BACK_ALL(outlink, avctx);
+
+    if (ctx->xfade_is_over) {
+        ret = ff_inlink_consume_frame(avctx->inputs[1], &in);
+        if (ret < 0) {
+            return ret;
+        } else if (ff_inlink_acknowledge_status(avctx->inputs[1], &status, &pts)) {
+            ff_outlink_set_status(outlink, status, ctx->pts);
+            return 0;
+        } else if (!ret) {
+            if (ff_outlink_frame_wanted(outlink)) {
+                ff_inlink_request_frame(avctx->inputs[1]);
+                return 0;
+            }
+        } else {
+            in->pts = ctx->pts;
+            ctx->pts += av_rescale_q(1, av_inv_q(outlink->frame_rate), outlink->time_base);
+            return ff_filter_frame(outlink, in);
+        }
+    }
+
+    if (ff_inlink_queued_frames(avctx->inputs[0]) > 0) {
+        ctx->xf[0] = ff_inlink_peek_frame(avctx->inputs[0], 0);
+        if (ctx->xf[0]) {
+            if (ctx->first_pts == AV_NOPTS_VALUE) {
+                ctx->first_pts = ctx->xf[0]->pts;
+            }
+            ctx->pts = ctx->xf[0]->pts;
+            if (ctx->first_pts + ctx->offset_pts > ctx->xf[0]->pts) {
+                ctx->xf[0] = NULL;
+                ctx->need_second = 0;
+                ff_inlink_consume_frame(avctx->inputs[0], &in);
+                return ff_filter_frame(outlink, in);
+            }
+
+            ctx->need_second = 1;
+        }
+    }
+
+    if (ctx->xf[0] && ff_inlink_queued_frames(avctx->inputs[1]) > 0) {
+        ff_inlink_consume_frame(avctx->inputs[0], &ctx->xf[0]);
+        ff_inlink_consume_frame(avctx->inputs[1], &ctx->xf[1]);
+
+        ctx->pts = ctx->xf[0]->pts;
+        if (ctx->xf[0]->pts - (ctx->first_pts + ctx->offset_pts) > ctx->duration_pts)
+            ctx->xfade_is_over = 1;
+        ret = xfade_frame(avctx, ctx->xf[0], ctx->xf[1]);
+        av_frame_free(&ctx->xf[0]);
+        av_frame_free(&ctx->xf[1]);
+        return ret;
+    }
+
+    if (ff_inlink_queued_frames(avctx->inputs[0]) > 0 &&
+        ff_inlink_queued_frames(avctx->inputs[1]) > 0) {
+        ff_filter_set_ready(avctx, 100);
+        return 0;
+    }
+
+    if (ff_outlink_frame_wanted(outlink)) {
+        if (!ctx->eof[0] && ff_outlink_get_status(avctx->inputs[0])) {
+            ctx->eof[0] = 1;
+            ctx->xfade_is_over = 1;
+        }
+        if (!ctx->eof[1] && ff_outlink_get_status(avctx->inputs[1])) {
+            ctx->eof[1] = 1;
+        }
+        if (!ctx->eof[0] && !ctx->xf[0])
+            ff_inlink_request_frame(avctx->inputs[0]);
+        if (!ctx->eof[1] && (ctx->need_second || ctx->eof[0]))
+            ff_inlink_request_frame(avctx->inputs[1]);
+        if (ctx->eof[0] && ctx->eof[1] && (
+            ff_inlink_queued_frames(avctx->inputs[0]) <= 0 ||
+            ff_inlink_queued_frames(avctx->inputs[1]) <= 0))
+            ff_outlink_set_status(outlink, AVERROR_EOF, AV_NOPTS_VALUE);
+        return 0;
+    }
+
+    return FFERROR_NOT_READY;
+}
+
+static av_cold void xfade_opencl_uninit(AVFilterContext *avctx)
+{
+    XFadeOpenCLContext *ctx = avctx->priv;
+    cl_int cle;
+
+    if (ctx->kernel) {
+        cle = clReleaseKernel(ctx->kernel);
+        if (cle != CL_SUCCESS)
+            av_log(avctx, AV_LOG_ERROR, "Failed to release "
+                   "kernel: %d.\n", cle);
+    }
+
+    if (ctx->command_queue) {
+        cle = clReleaseCommandQueue(ctx->command_queue);
+        if (cle != CL_SUCCESS)
+            av_log(avctx, AV_LOG_ERROR, "Failed to release "
+                   "command queue: %d.\n", cle);
+    }
+
+    ff_opencl_filter_uninit(avctx);
+}
+
+static AVFrame *get_video_buffer(AVFilterLink *inlink, int w, int h)
+{
+    XFadeOpenCLContext *s = inlink->dst->priv;
+
+    return s->xfade_is_over || !s->need_second ?
+           ff_null_get_video_buffer   (inlink, w, h) :
+           ff_default_get_video_buffer(inlink, w, h);
+}
+
+#define OFFSET(x) offsetof(XFadeOpenCLContext, x)
+#define FLAGS (AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM)
+
+static const AVOption xfade_opencl_options[] = {
+    { "source", "set OpenCL program source file", OFFSET(source_file), AV_OPT_TYPE_STRING, {.str = NULL}, .flags = FLAGS },
+    { "kernel", "set kernel name in program", OFFSET(kernel_name), AV_OPT_TYPE_STRING, {.str = NULL}, .flags = FLAGS },
+    { "duration", "set cross fade duration", OFFSET(duration), AV_OPT_TYPE_DURATION, {.i64=1000000}, 0, 60000000, FLAGS },
+    { "offset", "set cross fade start relative to first input stream", OFFSET(offset), AV_OPT_TYPE_DURATION, {.i64=0}, 0, 60000000, FLAGS },
+    { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(xfade_opencl);
+
+static const AVFilterPad xfade_opencl_inputs[] = {
+    {
+        .name             = "main",
+        .type             = AVMEDIA_TYPE_VIDEO,
+        .get_video_buffer = get_video_buffer,
+        .config_props     = &ff_opencl_filter_config_input,
+    },
+    {
+        .name             = "xfade",
+        .type             = AVMEDIA_TYPE_VIDEO,
+        .get_video_buffer = get_video_buffer,
+        .config_props     = &ff_opencl_filter_config_input,
+    },
+    { NULL }
+};
+
+static const AVFilterPad xfade_opencl_outputs[] = {
+    {
+        .name          = "default",
+        .type          = AVMEDIA_TYPE_VIDEO,
+        .config_props  = &xfade_opencl_config_output,
+    },
+    { NULL }
+};
+
+AVFilter ff_vf_xfade_opencl = {
+    .name           = "xfade_opencl",
+    .description    = NULL_IF_CONFIG_SMALL("Cross fade one video with another video."),
+    .priv_size      = sizeof(XFadeOpenCLContext),
+    .priv_class     = &xfade_opencl_class,
+    .init           = &ff_opencl_filter_init,
+    .uninit         = &xfade_opencl_uninit,
+    .query_formats  = &ff_opencl_filter_query_formats,
+    .activate       = &xfade_opencl_activate,
+    .inputs         = xfade_opencl_inputs,
+    .outputs        = xfade_opencl_outputs,
+    .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
+};
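
For reference (not part of the patch): the way xfade_frame() derives the kernel's progress argument from timestamps, and the per-pixel decision made by the example dots-curtain kernel, can be sketched in plain Python. The helper names below are illustrative only; the arithmetic mirrors the C and OpenCL code above.

```python
import math

def progress_for_pts(pts, first_pts, offset_pts, duration_pts):
    # Mirrors xfade_frame(): progress runs from 1.0 at the start of the
    # transition window down to 0.0 at its end (the patch does not clamp it).
    return 1.0 - (pts - first_pts - offset_pts) / duration_pts

def dots_curtain_pixel(x, y, w, h, progress, dots=20.0):
    # Mirrors the example kernel: normalize the pixel coordinates, take the
    # fractional position within a dots x dots grid, and compare its distance
    # from the dot centre (0.5, 0.5) against a radius that grows with
    # progress and shrinks with distance from the top-left corner (center).
    rx, ry = x / w, y / h
    fx = math.modf(rx * dots)[0]          # fract() for non-negative values
    fy = math.modf(ry * dots)[0]
    d_dot = math.hypot(fx - 0.5, fy - 0.5)
    d_center = math.hypot(rx, ry)
    # At (0, 0) the kernel divides by zero, yielding +inf in OpenCL; the
    # comparison is then always true, so src1 is selected.
    use_first = True if d_center == 0.0 else d_dot < (progress / d_center)
    return "src1" if use_first else "src2"
```

At progress 1.0 the threshold radius exceeds any in-dot distance for most pixels, so the first input dominates; as progress falls toward 0.0 the dots shrink and the second input shows through.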