From patchwork Sun Sep  3 10:59:47 2017
X-Patchwork-Submitter: Paul B Mahol
X-Patchwork-Id: 4965
From: Paul B Mahol <onemda@gmail.com>
To: ffmpeg-devel@ffmpeg.org
Date: Sun, 3 Sep 2017 12:59:47 +0200
Message-Id: <20170903105947.12926-1-onemda@gmail.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170902193217.27002-1-onemda@gmail.com>
References: <20170902193217.27002-1-onemda@gmail.com>
Subject: [FFmpeg-devel] [PATCH] avfilter: add generic FFT video convolve filter

Signed-off-by: Paul B Mahol <onemda@gmail.com>
---
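For each selected plane the filter zero-pads both inputs to a common
power-of-two FFT size, transforms them, multiplies the two spectra
pointwise (the impulse is first normalized by the sum of its samples),
and inverse-transforms the product, i.e. roughly:

    out = IFFT2( FFT2(main) . FFT2(impulse / sum(impulse)) )

A possible test invocation (file names are placeholders; both inputs must
have the same dimensions and pixel format) could look like:

    ffmpeg -i main.mkv -i kernel.mkv \
        -filter_complex "[0:v][1:v]convolve=planes=7:impulse=first" out.mkv
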
 doc/filters.texi          |  14 ++
 libavfilter/Makefile      |   1 +
 libavfilter/allfilters.c  |   1 +
 libavfilter/vf_convolve.c | 513 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 529 insertions(+)
 create mode 100644 libavfilter/vf_convolve.c

diff --git a/doc/filters.texi b/doc/filters.texi
index afcb99d..0e595a6 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -5867,6 +5867,20 @@ For example to convert the input to SMPTE-240M, use the command:
 colorspace=smpte240m
 @end example
 
+@section convolve
+
+Apply 2D convolution of video stream in frequency domain using second stream as impulse.
+
+The filter accepts the following options:
+
+@table @option
+@item planes
+Set which planes to process.
+
+@item impulse
+Set which impulse will be processed, can be @var{first} or @var{all}. Default is @var{all}.
+@end table
+
 @section convolution
 
 Apply convolution 3x3 or 5x5 filter.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index ee840b0..d76d57c 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -149,6 +149,7 @@ OBJS-$(CONFIG_COLORLEVELS_FILTER)            += vf_colorlevels.o
 OBJS-$(CONFIG_COLORMATRIX_FILTER)            += vf_colormatrix.o
 OBJS-$(CONFIG_COLORSPACE_FILTER)             += vf_colorspace.o colorspacedsp.o
 OBJS-$(CONFIG_CONVOLUTION_FILTER)            += vf_convolution.o
+OBJS-$(CONFIG_CONVOLVE_FILTER)               += vf_convolve.o
 OBJS-$(CONFIG_COPY_FILTER)                   += vf_copy.o
 OBJS-$(CONFIG_COREIMAGE_FILTER)              += vf_coreimage.o
 OBJS-$(CONFIG_COVER_RECT_FILTER)             += vf_cover_rect.o lavfutils.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 8b9b9a4..890a7b8 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -161,6 +161,7 @@ static void register_all(void)
     REGISTER_FILTER(COLORMATRIX,    colormatrix,    vf);
     REGISTER_FILTER(COLORSPACE,     colorspace,     vf);
     REGISTER_FILTER(CONVOLUTION,    convolution,    vf);
+    REGISTER_FILTER(CONVOLVE,       convolve,       vf);
     REGISTER_FILTER(COPY,           copy,           vf);
     REGISTER_FILTER(COREIMAGE,      coreimage,      vf);
     REGISTER_FILTER(COVER_RECT,     cover_rect,     vf);
diff --git a/libavfilter/vf_convolve.c b/libavfilter/vf_convolve.c
new file mode 100644
index 0000000..2f3441e
--- /dev/null
+++ b/libavfilter/vf_convolve.c
@@ -0,0 +1,513 @@
+/*
+ * Copyright (c) 2017 Paul B Mahol
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+#include "libavcodec/avfft.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "framesync2.h"
+#include "internal.h"
+#include "video.h"
+
+typedef struct ConvolveContext {
+    const AVClass *class;
+    FFFrameSync fs;
+
+    FFTContext *fft[4];
+    FFTContext *ifft[4];
+
+    int fft_bits[4];
+    int fft_len[4];
+    int planewidth[4];
+    int planeheight[4];
+
+    FFTComplex *fft_hdata[4];
+    FFTComplex *fft_vdata[4];
+    FFTComplex *fft_hdata_impulse[4];
+    FFTComplex *fft_vdata_impulse[4];
+
+    int direct;
+    int depth;
+    int planes;
+    int impulse;
+    int nb_planes;
+    int got_impulse[4];
+} ConvolveContext;
+
+#define OFFSET(x) offsetof(ConvolveContext, x)
+#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
+
+static const AVOption convolve_options[] = {
+    { "planes",  "set planes to convolve",                  OFFSET(planes),  AV_OPT_TYPE_INT,   {.i64=7}, 0, 15, FLAGS },
+    { "impulse", "when to process impulses",                OFFSET(impulse), AV_OPT_TYPE_INT,   {.i64=1}, 0,  1, FLAGS, "impulse" },
+    { "first",   "process only first impulse, ignore rest", 0,               AV_OPT_TYPE_CONST, {.i64=0}, 0,  0, FLAGS, "impulse" },
+    { "all",     "process all impulses",                    0,               AV_OPT_TYPE_CONST, {.i64=1}, 0,  0, FLAGS, "impulse" },
+    { NULL },
+};
+
+FRAMESYNC_DEFINE_CLASS(convolve, ConvolveContext, fs);
+
+static int query_formats(AVFilterContext *ctx)
+{
+    static const enum AVPixelFormat pixel_fmts_fftfilt[] = {
+        AV_PIX_FMT_YUVA444P, AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV440P,
+        AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P,
+        AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUV420P,
+        AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ420P,
+        AV_PIX_FMT_YUVJ411P, AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P,
+        AV_PIX_FMT_YUV420P9, AV_PIX_FMT_YUV422P9, AV_PIX_FMT_YUV444P9,
+        AV_PIX_FMT_YUV420P10, AV_PIX_FMT_YUV422P10, AV_PIX_FMT_YUV444P10,
+        AV_PIX_FMT_YUV420P12, AV_PIX_FMT_YUV422P12, AV_PIX_FMT_YUV444P12, AV_PIX_FMT_YUV440P12,
+        AV_PIX_FMT_YUV420P14, AV_PIX_FMT_YUV422P14, AV_PIX_FMT_YUV444P14,
+        AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16,
+        AV_PIX_FMT_YUVA420P9, AV_PIX_FMT_YUVA422P9, AV_PIX_FMT_YUVA444P9,
+        AV_PIX_FMT_YUVA420P10, AV_PIX_FMT_YUVA422P10, AV_PIX_FMT_YUVA444P10,
+        AV_PIX_FMT_YUVA420P16, AV_PIX_FMT_YUVA422P16, AV_PIX_FMT_YUVA444P16,
+        AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10,
+        AV_PIX_FMT_GBRP12, AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16,
+        AV_PIX_FMT_GBRAP, AV_PIX_FMT_GBRAP10, AV_PIX_FMT_GBRAP12, AV_PIX_FMT_GBRAP16,
+        AV_PIX_FMT_GRAY8, AV_PIX_FMT_GRAY9, AV_PIX_FMT_GRAY10, AV_PIX_FMT_GRAY12, AV_PIX_FMT_GRAY16,
+        AV_PIX_FMT_NONE
+    };
+
+    AVFilterFormats *fmts_list = ff_make_format_list(pixel_fmts_fftfilt);
+    if (!fmts_list)
+        return AVERROR(ENOMEM);
+    return ff_set_common_formats(ctx, fmts_list);
+}
+
+static int config_input_main(AVFilterLink *inlink)
+{
+    ConvolveContext *s = inlink->dst->priv;
+    int fft_bits, i;
+    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
+
+    s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
+    s->planewidth[0] = s->planewidth[3] = inlink->w;
+    s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
+    s->planeheight[0] = s->planeheight[3] = inlink->h;
+
+    s->nb_planes = desc->nb_components;
+    s->depth = desc->comp[0].depth;
+
+    for (i = 0; i < s->nb_planes; i++) {
+        int w = s->planewidth[i];
+        int h = s->planeheight[i];
+        int n = FFMAX(w, h);
+
+        for (fft_bits = 1; 1 << fft_bits < n; fft_bits++);
+
+        s->fft_bits[i] = fft_bits;
+        s->fft_len[i] = 1 << s->fft_bits[i];
+
+        if (!(s->fft_hdata[i] = av_calloc(s->fft_len[i], s->fft_len[i] * sizeof(FFTComplex))))
+            return AVERROR(ENOMEM);
+
+        if (!(s->fft_vdata[i] = av_calloc(s->fft_len[i], s->fft_len[i] * sizeof(FFTComplex))))
+            return AVERROR(ENOMEM);
+
+        if (!(s->fft_hdata_impulse[i] = av_calloc(s->fft_len[i], s->fft_len[i] * sizeof(FFTComplex))))
+            return AVERROR(ENOMEM);
+
+        if (!(s->fft_vdata_impulse[i] = av_calloc(s->fft_len[i], s->fft_len[i] * sizeof(FFTComplex))))
+            return AVERROR(ENOMEM);
+    }
+
+    s->direct = !(s->planewidth[0] == s->planeheight[0] && s->fft_len[0] == s->planeheight[0]);
+
+    return 0;
+}
+
+static int config_input_impulse(AVFilterLink *inlink)
+{
+    AVFilterContext *ctx = inlink->dst;
+
+    if (ctx->inputs[0]->w != ctx->inputs[1]->w ||
+        ctx->inputs[0]->h != ctx->inputs[1]->h) {
+        av_log(ctx, AV_LOG_ERROR, "Width and height of input videos must be same.\n");
+        return AVERROR(EINVAL);
+    }
+    if (ctx->inputs[0]->format != ctx->inputs[1]->format) {
+        av_log(ctx, AV_LOG_ERROR, "Inputs must be of same pixel format.\n");
+        return AVERROR(EINVAL);
+    }
+
+    return 0;
+}
+
+static void fft_horizontal(ConvolveContext *s, FFTComplex *fft_hdata,
+                           AVFrame *in, int w, int h, int n, int plane, float scale)
+{
+    int y, x;
+
+    for (y = 0; y < h; y++) {
+        if (s->depth == 8) {
+            const uint8_t *src = in->data[plane] + in->linesize[plane] * y;
+
+            for (x = 0; x < w; x++) {
+                fft_hdata[(y) * n + x].re = src[x] * scale;
+                fft_hdata[(y) * n + x].im = 0;
+            }
+        } else {
+            const uint16_t *src = (const uint16_t *)(in->data[plane] + in->linesize[plane] * y);
+
+            for (x = 0; x < w; x++) {
+                fft_hdata[(y) * n + x].re = src[x] * scale;
+                fft_hdata[(y) * n + x].im = 0;
+            }
+        }
+        for (; x < n; x++) {
+            fft_hdata[y * n + x].re = 0;
+            fft_hdata[y * n + x].im = 0;
+        }
+    }
+
+    for (; y < n; y++) {
+        for (x = 0; x < n; x++) {
+            fft_hdata[y * n + x].re = 0;
+            fft_hdata[y * n + x].im = 0;
+        }
+    }
+
+    for (y = 0; y < n; y++) {
+        av_fft_permute(s->fft[plane], fft_hdata + y * n);
+        av_fft_calc(s->fft[plane], fft_hdata + y * n);
+    }
+}
+
+static void fft_vertical(ConvolveContext *s, FFTComplex *fft_hdata, FFTComplex *fft_vdata,
+                         int n, int plane)
+{
+    int y, x;
+
+    for (y = 0; y < n; y++) {
+        for (x = 0; x < n; x++) {
+            fft_vdata[y * n + x].re = fft_hdata[x * n + y].re;
+            fft_vdata[y * n + x].im = fft_hdata[x * n + y].im;
+        }
+        for (; x < n; x++) {
+            fft_vdata[y * n + x].re = 0;
+            fft_vdata[y * n + x].im = 0;
+        }
+        av_fft_permute(s->fft[plane], fft_vdata + y * n);
+        av_fft_calc(s->fft[plane], fft_vdata + y * n);
+    }
+}
+
+static void ifft_vertical(ConvolveContext *s, int n, int plane)
+{
+    int y, x;
+
+    for (y = 0; y < n; y++) {
+        av_fft_permute(s->ifft[plane], s->fft_vdata[plane] + y * n);
+        av_fft_calc(s->ifft[plane], s->fft_vdata[plane] + y * n);
+        for (x = 0; x < n; x++) {
+            s->fft_hdata[plane][x * n + y].re = s->fft_vdata[plane][y * n + x].re;
+            s->fft_hdata[plane][x * n + y].im = s->fft_vdata[plane][y * n + x].im;
+        }
+    }
+}
+
+static void ifft_horizontal(ConvolveContext *s, AVFrame *out,
+                            int w, int h, int n, int plane)
+{
+    const float scale = 1.f / (n * n);
+    const int max = (1 << s->depth) - 1;
+    const int oh = w == h && n == w ? 0 : h / 2;
+    const int ow = w == h && n == w ? 0 : w / 2;
+    int y, x;
+
+    for (y = 0; y < n; y++) {
+        av_fft_permute(s->ifft[plane], s->fft_hdata[plane] + y * n);
+        av_fft_calc(s->ifft[plane], s->fft_hdata[plane] + y * n);
+    }
+
+    if (s->depth == 8) {
+        for (y = 0; y < h; y++) {
+            uint8_t *dst = out->data[plane] + y * out->linesize[plane];
+            for (x = 0; x < w; x++)
+                dst[x] = av_clip(s->fft_hdata[plane][(y+oh) * n + x+ow].re * scale, 0, 255);
+        }
+    } else {
+        for (y = 0; y < h; y++) {
+            uint16_t *dst = (uint16_t *)(out->data[plane] + y * out->linesize[plane]);
+            for (x = 0; x < w; x++)
+                dst[x] = av_clip(s->fft_hdata[plane][(y+oh) * n + x+ow].re * scale, 0, max);
+        }
+    }
+}
+
+static void fftshift8(uint8_t *out, ptrdiff_t out_linesize,
+                      const uint8_t *in, ptrdiff_t in_linesize,
+                      int w, int h)
+{
+    int hw = (w >> 1), hh = (h >> 1);
+    int y, x;
+
+    for (x = 0; x < hw; x++)
+        out[x+1] = in[(h-hh) * in_linesize + x + hw];
+    for (; x < w-1; x++)
+        out[x+1] = in[(h-hh) * in_linesize + x - hw];
+    out[0] = in[(h-hh) * in_linesize + x - hw];
+    out += out_linesize;
+    for (y = 0; y < hh; y++) {
+        for (x = 0; x < hw; x++)
+            out[x+1] = in[(y+hh) * in_linesize + x + hw];
+        for (; x < w-1; x++)
+            out[x+1] = in[(y+hh) * in_linesize + x - hw];
+        out[0] = in[(y+hh) * in_linesize + x - hw];
+
+        out += out_linesize;
+    }
+    for (; y < h-1; y++) {
+        for (x = 0; x < hw; x++)
+            out[x+1] = in[(y-hh) * in_linesize + x + hw];
+        for (; x < w-1; x++)
+            out[x+1] = in[(y-hh) * in_linesize + x - hw];
+        out[0] = in[(y-hh) * in_linesize + x - hw];
+
+        out += out_linesize;
+    }
+}
+
+static void fftshift16(uint8_t *dst, ptrdiff_t out_linesize,
+                       const uint8_t *src, ptrdiff_t in_linesize,
+                       int w, int h)
+{
+    const uint16_t *in = (const uint16_t *)src;
+    uint16_t *out = (uint16_t *)dst;
+    int hw = (w >> 1), hh = (h >> 1);
+    int y, x;
+
+    out_linesize /= 2;
+    in_linesize /= 2;
+
+    for (x = 0; x < hw; x++)
+        out[x+1] = in[(h-hh) * in_linesize + x + hw];
+    for (; x < w-1; x++)
+        out[x+1] = in[(h-hh) * in_linesize + x - hw];
+    out[0] = in[(h-hh) * in_linesize + x - hw];
+    out += out_linesize;
+    for (y = 0; y < hh; y++) {
+        for (x = 0; x < hw; x++)
+            out[x+1] = in[(y+hh) * in_linesize + x + hw];
+        for (; x < w-1; x++)
+            out[x+1] = in[(y+hh) * in_linesize + x - hw];
+        out[0] = in[(y+hh) * in_linesize + x - hw];
+
+        out += out_linesize;
+    }
+    for (; y < h-1; y++) {
+        for (x = 0; x < hw; x++)
+            out[x+1] = in[(y-hh) * in_linesize + x + hw];
+        for (; x < w-1; x++)
+            out[x+1] = in[(y-hh) * in_linesize + x - hw];
+        out[0] = in[(y-hh) * in_linesize + x - hw];
+
+        out += out_linesize;
+    }
+}
+
+static int do_convolve(FFFrameSync *fs)
+{
+    AVFilterContext *ctx = fs->parent;
+    AVFilterLink *outlink = ctx->outputs[0];
+    ConvolveContext *s = ctx->priv;
+    AVFrame *out, *mainpic = NULL, *impulsepic = NULL;
+    int ret, y, x, plane;
+
+    ret = ff_framesync2_dualinput_get(fs, &mainpic, &impulsepic);
+    if (ret < 0)
+        return ret;
+    if (!impulsepic)
+        return ff_filter_frame(ctx->outputs[0], mainpic);
+
+    if (!s->direct) {
+        out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+        if (!out) {
+            av_frame_free(&mainpic);
+            av_frame_free(&impulsepic);
+            return AVERROR(ENOMEM);
+        }
+        av_frame_copy_props(out, mainpic);
+    }
+
+    for (plane = 0; plane < s->nb_planes; plane++) {
+        const int n = s->fft_len[plane];
+        const int w = s->planewidth[plane];
+        const int h = s->planeheight[plane];
+        float total = 0;
+
+        if (!(s->planes & (1 << plane)))
+            continue;
+
+        fft_horizontal(s, s->fft_hdata[plane], mainpic, w, h, n, plane, 1.f);
+        fft_vertical(s, s->fft_hdata[plane], s->fft_vdata[plane],
+                     n, plane);
+
+        if ((!s->impulse && !s->got_impulse[plane]) || s->impulse) {
+            if (s->depth == 8) {
+                for (y = 0; y < h; y++) {
+                    const uint8_t *src = (const uint8_t *)(impulsepic->data[plane] + y * impulsepic->linesize[plane]);
+                    for (x = 0; x < w; x++) {
+                        total += src[x];
+                    }
+                }
+            } else {
+                for (y = 0; y < h; y++) {
+                    const uint16_t *src = (const uint16_t *)(impulsepic->data[plane] + y * impulsepic->linesize[plane]);
+                    for (x = 0; x < w; x++) {
+                        total += src[x];
+                    }
+                }
+            }
+            total = FFMAX(1, total);
+
+            fft_horizontal(s, s->fft_hdata_impulse[plane], impulsepic, w, h, n, plane, 1 / total);
+            fft_vertical(s, s->fft_hdata_impulse[plane], s->fft_vdata_impulse[plane],
+                         n, plane);
+
+            s->got_impulse[plane] = 1;
+        }
+
+        for (y = 0; y < n; y++) {
+            for (x = 0; x < n; x++) {
+                FFTSample re, im, ire, iim;
+
+                re = s->fft_vdata[plane][y*n + x].re;
+                im = s->fft_vdata[plane][y*n + x].im;
+                ire = s->fft_vdata_impulse[plane][y*n + x].re;
+                iim = s->fft_vdata_impulse[plane][y*n + x].im;
+
+                s->fft_vdata[plane][y*n + x].re = ire * re - iim * im;
+                s->fft_vdata[plane][y*n + x].im = iim * re + ire * im;
+            }
+        }
+
+        ifft_vertical(s, n, plane);
+        ifft_horizontal(s, mainpic, w, h, n, plane);
+
+        if (!s->direct) {
+            if (s->depth == 8)
+                fftshift8(out->data[plane], out->linesize[plane],
+                          mainpic->data[plane], mainpic->linesize[plane], w, h);
+            else
+                fftshift16(out->data[plane], out->linesize[plane],
+                           mainpic->data[plane], mainpic->linesize[plane], w, h);
+        }
+    }
+
+    if (!s->direct) {
+        av_frame_free(&mainpic);
+        mainpic = out;
+    }
+
+    return ff_filter_frame(outlink, mainpic);
+}
+
+static int config_output(AVFilterLink *outlink)
+{
+    AVFilterContext *ctx = outlink->src;
+    ConvolveContext *s = ctx->priv;
+    AVFilterLink *mainlink = ctx->inputs[0];
+    int ret, i;
+
+    s->fs.on_event = do_convolve;
+    ret = ff_framesync2_init_dualinput(&s->fs, ctx);
+    if (ret < 0)
+        return ret;
+    outlink->w = mainlink->w;
+    outlink->h = mainlink->h;
+    outlink->time_base = mainlink->time_base;
+    outlink->sample_aspect_ratio = mainlink->sample_aspect_ratio;
+    outlink->frame_rate = mainlink->frame_rate;
+
+    if ((ret = ff_framesync2_configure(&s->fs)) < 0)
+        return ret;
+
+    for (i = 0; i < 4; i++) {
+        s->fft[i] = av_fft_init(s->fft_bits[i], 0);
+        s->ifft[i] = av_fft_init(s->fft_bits[i], 1);
+    }
+
+    return 0;
+}
+
+static int activate(AVFilterContext *ctx)
+{
+    ConvolveContext *s = ctx->priv;
+    return ff_framesync2_activate(&s->fs);
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+    ConvolveContext *s = ctx->priv;
+    int i;
+
+    for (i = 0; i < 4; i++) {
+        av_freep(&s->fft_hdata[i]);
+        av_freep(&s->fft_vdata[i]);
+        av_freep(&s->fft_hdata_impulse[i]);
+        av_freep(&s->fft_vdata_impulse[i]);
+        av_fft_end(s->fft[i]);
+        av_fft_end(s->ifft[i]);
+    }
+
+    ff_framesync2_uninit(&s->fs);
+}
+
+static const AVFilterPad convolve_inputs[] = {
+    {
+        .name         = "main",
+        .type         = AVMEDIA_TYPE_VIDEO,
+        .config_props = config_input_main,
+    },{
+        .name         = "impulse",
+        .type         = AVMEDIA_TYPE_VIDEO,
+        .config_props = config_input_impulse,
+    },
+    { NULL }
+};
+
+static const AVFilterPad convolve_outputs[] = {
+    {
+        .name         = "default",
+        .type         = AVMEDIA_TYPE_VIDEO,
+        .config_props = config_output,
+    },
+    { NULL }
+};
+
+AVFilter ff_vf_convolve = {
+    .name          = "convolve",
+    .description   = NULL_IF_CONFIG_SMALL("Convolve first video stream with second video stream."),
+    .preinit       = convolve_framesync_preinit,
+    .uninit        = uninit,
+    .query_formats = query_formats,
+    .activate      = activate,
+    .priv_size     = sizeof(ConvolveContext),
+    .priv_class    = &convolve_class,
+    .inputs        = convolve_inputs,
+    .outputs       = convolve_outputs,
+    .flags         = AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL,
+};