From patchwork Thu Jul 12 08:10:24 2018
X-Patchwork-Submitter: Stephen Seo
X-Patchwork-Id: 9691
From: Stephen Seo <seo.disparate@gmail.com>
To: ffmpeg-devel@ffmpeg.org
Cc: Stephen Seo <seo.disparate@gmail.com>
Date: Thu, 12 Jul 2018 17:10:24 +0900
Message-Id: <20180712081024.7114-1-seo.disparate@gmail.com>
X-Mailer: git-send-email 2.18.0
Subject: [FFmpeg-devel] [PATCH v3] Add lensfun filter
List-Id: FFmpeg development discussions and patches
Reply-To: FFmpeg development discussions and patches

Lensfun is a library that applies lens correction to an image using a database
of cameras/lenses (you provide the camera and lens models, and it uses the
corresponding database entry's parameters
to apply lens correction). It is licensed under LGPL3.

The lensfun filter uses the lensfun library to apply lens correction to videos
as well as images. This filter was created out of necessity, since I wanted to
apply lens correction to a video and the lenscorrection filter did not work for
me.

While this filter requires little information from the user to apply lens
correction, the limitation is that lensfun is intended to be used on individual
images. When used on a video, parameters such as the focal length are held
constant, so lens correction may fail on videos where the camera's focal length
changes (zooming in or out via a zoom lens). To use this filter correctly on
videos where such parameters change, timeline editing may be used, since this
filter supports it.

Note that valgrind shows a small memory leak which comes not from this filter
but from the lensfun library (memory is allocated when loading the lensfun
database but somehow is not deallocated even during cleanup; the database is
created briefly in the filter's init function and destroyed before init
returns). This may have been fixed by the latest commit in the lensfun
repository; the most recent release of lensfun is almost 3 years old.

Bilinear interpolation is used by default, as Lanczos interpolation shows more
artifacts in the corrected image in my tests. The Lanczos interpolation is
derived from lenstool's implementation of Lanczos interpolation. Lenstool is an
application within the lensfun repository and is licensed under GPL3.

v2 of this patch fixes the license notice in libavfilter/vf_lensfun.c.

v3 of this patch fixes code style and changes the dependency to gplv3 (thanks
to Paul B Mahol for pointing out the mentioned issues).

Signed-off-by: Stephen Seo <seo.disparate@gmail.com>
---
 configure                |   5 +
 doc/filters.texi         | 103 ++++++++
 libavfilter/Makefile     |   1 +
 libavfilter/allfilters.c |   1 +
 libavfilter/vf_lensfun.c | 532 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 642 insertions(+)
 create mode 100644 libavfilter/vf_lensfun.c

diff --git a/configure b/configure
index b1a4dcfc42..095390427d 100755
--- a/configure
+++ b/configure
@@ -217,6 +217,7 @@ External library support:
   --disable-iconv          disable iconv [autodetect]
   --enable-jni             enable JNI support [no]
   --enable-ladspa          enable LADSPA audio filtering [no]
+  --enable-lensfun         enable lensfun lens correction [no]
   --enable-libaom          enable AV1 video encoding/decoding via libaom [no]
   --enable-libass          enable libass subtitles rendering,
                            needed for subtitles and ass filter [no]
@@ -1656,6 +1657,7 @@ EXTERNAL_LIBRARY_NONFREE_LIST="
 EXTERNAL_LIBRARY_VERSION3_LIST="
     gmp
+    lensfun
     libopencore_amrnb
     libopencore_amrwb
     libvmaf
@@ -3353,6 +3355,8 @@ hqdn3d_filter_deps="gpl"
 interlace_filter_deps="gpl"
 kerndeint_filter_deps="gpl"
 ladspa_filter_deps="ladspa libdl"
+lensfun_filter_deps="gplv3 lensfun"
+lensfun_src_filter_deps="gplv3 lensfun"
 lv2_filter_deps="lv2"
 mcdeint_filter_deps="avcodec gpl"
 movie_filter_deps="avcodec avformat"
@@ -5994,6 +5998,7 @@ enabled gmp && require gmp gmp.h mpz_export -lgmp
 enabled gnutls            && require_pkg_config gnutls gnutls gnutls/gnutls.h gnutls_global_init
 enabled jni               && { [ $target_os = "android" ] && check_header jni.h && enabled pthreads || die "ERROR: jni not found"; }
 enabled ladspa            && require_header ladspa.h
+enabled lensfun           && require_pkg_config lensfun lensfun lensfun.h lf_db_new
 enabled libaom            && require_pkg_config libaom "aom >= 1.0.0" aom/aom_codec.h aom_codec_version
 enabled lv2               && require_pkg_config lv2 lilv-0 "lilv/lilv.h" lilv_world_new
 enabled libiec61883       && require libiec61883
libiec61883/iec61883.h iec61883_cmp_connect -lraw1394 -lavc1394 -lrom1394 -liec61883 diff --git a/doc/filters.texi b/doc/filters.texi index d236bd69b7..528756c2af 100644 --- a/doc/filters.texi +++ b/doc/filters.texi @@ -10700,6 +10700,109 @@ The formula that generates the correction is: where @var{r_0} is halve of the image diagonal and @var{r_src} and @var{r_tgt} are the distances from the focal point in the source and target images, respectively. +@section lensfun + +Apply lens correction via the lensfun library (@url{http://lensfun.sourceforge.net/}). + +The @code{lensfun} filter requires the camera make, camera model, and lens model +to apply the lens correction. The filter will load the lensfun database and +query it to find the corresponding camera and lens entries in the database. As +long as these entries can be found with the given options, the filter can +perform corrections on frames. Note that incomplete strings will result in the +filter choosing the best match with the given options, and the filter will +output the chosen camera and lens models (logged with level "info"). You must +provide the make, camera model, and lens model as they are required. + +The filter accepts the following options: + +@table @option +@item make +The make of the camera (for example, "Canon"). This option is required. +@item model +The model of the camera (for example, "Canon EOS 100D"). This option is +required. +@item lens_model +The model of the lens (for example, "Canon EF-S 18-55mm f/3.5-5.6 IS STM"). This +option is required. +@item mode +The type of correction to apply. The following values are valid options: +@table @samp +@item vignetting +Enables fixing lens vignetting. +@item geometry +Enables fixing lens geometry. This is the default. +@item subpixel +Enables fixing chromatic aberrations. +@item vig_geo +Enables fixing lens vignetting and lens geometry. +@item vig_subpixel +Enables fixing lens vignetting and chromatic aberrations. +@item distortion +Enables fixing both lens geometry and chromatic aberrations. +@item all +Enables all possible corrections. +@end table +@item focal_length +The focal length of the image/video (zoom; expected constant for video). For +example, a 18--55mm lens has focal length range of [18--55], so a value in that +range should be chosen when using that lens. Default 18. +@item aperture +The aperture of the image/video (expected constant for video). Note that +aperture is only used for vignetting correction. Default 3.5. +@item focus_distance +The focus distance of the image/video (expected constant for video). Note that +focus distance is only used for vignetting and only slightly affects the +vignetting correction process. If unknown, leave it at the default value (which +is 1000). +@item target_geometry +The target geometry of the output image/video. The following values are valid +options: +@table @samp +@item rectilinear +(default) +@item fisheye +@item panoramic +@item equirectangular +@item fisheye_orthographic +@item fisheye_stereographic +@item fisheye_equisolid +@item fisheye_thoby +@end table +@item reverse +Apply the reverse of image correction (instead of correcting distortion, apply +it). +@item interpolation +The type of interpolation used when correcting distortion. 
The following values +are valid options: +@table @samp +@item nearest +@item linear +(default) +@item lanczos +@end table +@end table + +@subsection Examples + +@itemize +@item +Apply lens correction with make "Canon", camera model "Canon EOS 100D", and lens +model "Canon EF-S 18-55mm f/3.5-5.6 IS STM" with focal length of "18" and +aperture of "8.0". + +@example +ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8 -c:v h264 -b:v 8000k output.mov +@end example + +@item +Apply the same as before, but only for the first 5 seconds of video. + +@example +ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8:enable='lte(t\,5)' -c:v h264 -b:v 8000k output.mov +@end example + +@end itemize + @section libvmaf Obtain the VMAF (Video Multi-Method Assessment Fusion) diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 7735c26529..c19848d203 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -391,6 +391,7 @@ OBJS-$(CONFIG_YADIF_FILTER) += vf_yadif.o OBJS-$(CONFIG_ZMQ_FILTER) += f_zmq.o OBJS-$(CONFIG_ZOOMPAN_FILTER) += vf_zoompan.o OBJS-$(CONFIG_ZSCALE_FILTER) += vf_zscale.o +OBJS-$(CONFIG_LENSFUN_FILTER) += vf_lensfun.o OBJS-$(CONFIG_ALLRGB_FILTER) += vsrc_testsrc.o OBJS-$(CONFIG_ALLYUV_FILTER) += vsrc_testsrc.o diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 0ded83ede2..521bc53164 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -237,6 +237,7 @@ extern AVFilter ff_vf_interlace; extern AVFilter ff_vf_interleave; extern AVFilter ff_vf_kerndeint; extern AVFilter ff_vf_lenscorrection; +extern AVFilter ff_vf_lensfun; extern AVFilter ff_vf_libvmaf; extern AVFilter ff_vf_limiter; extern AVFilter ff_vf_loop; diff --git a/libavfilter/vf_lensfun.c b/libavfilter/vf_lensfun.c new file mode 100644 index 0000000000..fe1a158c6e --- /dev/null +++ b/libavfilter/vf_lensfun.c @@ -0,0 +1,532 @@ +/* + * Copyright (C) 2007 by Andrew Zabolotny (author of lensfun, from which this filter derives from) + * Copyright (C) 2018 Stephen Seo + * + * This file is part of FFmpeg. + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation, either version 3 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see . 
+ */ + +/** + * @file + * Lensfun filter, applies lens correction with parameters from the lensfun database + * + * @see https://lensfun.sourceforge.net/ + */ + +#include +#include + +#include "libavutil/avassert.h" +#include "libavutil/imgutils.h" +#include "libavutil/opt.h" +#include "libswscale/swscale.h" +#include "avfilter.h" +#include "formats.h" +#include "internal.h" +#include "video.h" + +#include + +#define LANCZOS_RESOLUTION 256 + +enum Mode { + VIGNETTING = 0x1, + GEOMETRY_DISTORTION = 0x2, + SUBPIXEL_DISTORTION = 0x4 +}; + +enum InterpolationType { + NEAREST, + LINEAR, + LANCZOS +}; + +typedef struct VignettingThreadData { + int width, height; + uint8_t *data_in; + int linesize_in; + int pixel_composition; + lfModifier *modifier; +} VignettingThreadData; + +typedef struct DistortionCorrectionThreadData { + int width, height; + const float *distortion_coords; + const uint8_t *data_in; + uint8_t *data_out; + int linesize_in, linesize_out; + const float *interpolation; + int mode; + int interpolation_type; +} DistortionCorrectionThreadData; + +typedef struct LensfunContext { + const AVClass *class; + const char *make, *model, *lens_model; + int mode; + float focal_length; + float aperture; + float focus_distance; + int target_geometry; + int reverse; + int interpolation_type; + + float *distortion_coords; + float *interpolation; + + lfLens *lens; + lfCamera *camera; + lfModifier *modifier; +} LensfunContext; + +#define OFFSET(x) offsetof(LensfunContext, x) +#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM +static const AVOption lensfun_options[] = { + { "make", "set camera maker", OFFSET(make), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS }, + { "model", "set camera model", OFFSET(model), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS }, + { "lens_model", "set lens model", OFFSET(lens_model), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS }, + { "mode", "set mode", OFFSET(mode), AV_OPT_TYPE_INT, {.i64=GEOMETRY_DISTORTION}, 0, VIGNETTING | GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION, FLAGS, "mode" }, + { "vignetting", "fix lens vignetting", 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING}, 0, 0, FLAGS, "mode" }, + { "geometry", "correct geometry distortion", 0, AV_OPT_TYPE_CONST, {.i64=GEOMETRY_DISTORTION}, 0, 0, FLAGS, "mode" }, + { "subpixel", "fix chromatic aberrations", 0, AV_OPT_TYPE_CONST, {.i64=SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" }, + { "vig_geo", "fix lens vignetting and correct geometry distortion", 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING | GEOMETRY_DISTORTION}, 0, 0, FLAGS, "mode" }, + { "vig_subpixel", "fix lens vignetting and chromatic aberrations", 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING | SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" }, + { "distortion", "correct geometry distortion and chromatic aberrations", 0, AV_OPT_TYPE_CONST, {.i64=GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" }, + { "all", NULL, 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING | GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" }, + { "focal_length", "focal length of video (zoom; expected constant)", OFFSET(focal_length), AV_OPT_TYPE_FLOAT, {.dbl=18}, 0.0, DBL_MAX, FLAGS }, + { "aperture", "aperture (expected constant)", OFFSET(aperture), AV_OPT_TYPE_FLOAT, {.dbl=3.5}, 0.0, DBL_MAX, FLAGS }, + { "focus_distance", "focus distance (expected constant)", OFFSET(focus_distance), AV_OPT_TYPE_FLOAT, {.dbl=1000.0f}, 0.0, DBL_MAX, FLAGS }, + { "target_geometry", "target geometry of the lens correction (only when geometry correction is enabled)", OFFSET(target_geometry), 
AV_OPT_TYPE_INT, {.i64=LF_RECTILINEAR}, 0, INT_MAX, FLAGS, "lens_geometry" }, + { "rectilinear", "rectilinear lens (default)", 0, AV_OPT_TYPE_CONST, {.i64=LF_RECTILINEAR}, 0, 0, FLAGS, "lens_geometry" }, + { "fisheye", "fisheye lens", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE}, 0, 0, FLAGS, "lens_geometry" }, + { "panoramic", "panoramic (cylindrical)", 0, AV_OPT_TYPE_CONST, {.i64=LF_PANORAMIC}, 0, 0, FLAGS, "lens_geometry" }, + { "equirectangular", "equirectangular", 0, AV_OPT_TYPE_CONST, {.i64=LF_EQUIRECTANGULAR}, 0, 0, FLAGS, "lens_geometry" }, + { "fisheye_orthographic", "orthographic fisheye", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_ORTHOGRAPHIC}, 0, 0, FLAGS, "lens_geometry" }, + { "fisheye_stereographic", "stereographic fisheye", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_STEREOGRAPHIC}, 0, 0, FLAGS, "lens_geometry" }, + { "fisheye_equisolid", "equisolid fisheye", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_EQUISOLID}, 0, 0, FLAGS, "lens_geometry" }, + { "fisheye_thoby", "fisheye as measured by thoby", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_THOBY}, 0, 0, FLAGS, "lens_geometry" }, + { "reverse", "Does reverse correction (regular image to lens distorted)", OFFSET(reverse), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS }, + { "interpolation", "Type of interpolation", OFFSET(interpolation_type), AV_OPT_TYPE_INT, {.i64=LINEAR}, 0, LANCZOS, FLAGS, "interpolation" }, + { "nearest", NULL, 0, AV_OPT_TYPE_CONST, {.i64=NEAREST}, 0, 0, FLAGS, "interpolation" }, + { "linear", NULL, 0, AV_OPT_TYPE_CONST, {.i64=LINEAR}, 0, 0, FLAGS, "interpolation" }, + { "lanczos", NULL, 0, AV_OPT_TYPE_CONST, {.i64=LANCZOS}, 0, 0, FLAGS, "interpolation" }, + { NULL } +}; + +AVFILTER_DEFINE_CLASS(lensfun); + +static av_cold int init(AVFilterContext *ctx) +{ + LensfunContext *lensfun = ctx->priv; + lfDatabase *db; + const lfCamera **cameras; + const lfLens **lenses; + + if(!lensfun->make) { + av_log(NULL, AV_LOG_FATAL, "ERROR vf_lensfun: Option \"make\" not specified\n"); + return AVERROR(EINVAL); + } else if(!lensfun->model) { + av_log(NULL, AV_LOG_FATAL, "ERROR vf_lensfun: Option \"model\" not specified\n"); + return AVERROR(EINVAL); + } else if(!lensfun->lens_model) { + av_log(NULL, AV_LOG_FATAL, "ERROR vf_lensfun: Option \"lens_model\" not specified\n"); + return AVERROR(EINVAL); + } + + lensfun->lens = lf_lens_new(); + lensfun->camera = lf_camera_new(); + + db = lf_db_new(); + if(lf_db_load(db) != LF_NO_ERROR) { + lf_db_destroy(db); + av_log(NULL, AV_LOG_FATAL, "vf_lensfun: Failed to load lensfun database\n"); + return AVERROR_INVALIDDATA; + } + + cameras = lf_db_find_cameras(db, lensfun->make, lensfun->model); + if(cameras != NULL && *cameras != NULL) { + lf_camera_copy(lensfun->camera, *cameras); + av_log(NULL, AV_LOG_INFO, "vf_lensfun: Using camera %s\n", lensfun->camera->Model); + } else { + lf_free(cameras); + lf_db_destroy(db); + av_log(NULL, AV_LOG_FATAL, "vf_lensfun: Failed to find camera in lensfun database\n"); + return AVERROR_INVALIDDATA; + } + lf_free(cameras); + + lenses = lf_db_find_lenses_hd(db, lensfun->camera, NULL, lensfun->lens_model, 0); + if(lenses != NULL && *lenses != NULL) { + lf_lens_copy(lensfun->lens, *lenses); + av_log(NULL, AV_LOG_INFO, "vf_lensfun: Using lens %s\n", lensfun->lens->Model); + } else { + lf_free(lenses); + lf_db_destroy(db); + av_log(NULL, AV_LOG_FATAL, "vf_lensfun: Failed to find lens in lensfun database\n"); + return AVERROR_INVALIDDATA; + } + lf_free(lenses); + + lf_db_destroy(db); + return 0; +} + +static int query_formats(AVFilterContext *ctx) +{ + // Some of the functions 
provided by lensfun require pixels in RGB format + static const enum AVPixelFormat fmts[] = {AV_PIX_FMT_RGB24, AV_PIX_FMT_NONE}; + AVFilterFormats *fmts_list = ff_make_format_list(fmts); + return ff_set_common_formats(ctx, fmts_list); +} + +static float lanczos_kernel(float x) +{ + if(x == 0.0f) + return 1.0f; + else if(x > -2.0f && x < 2.0f) + return (2.0f * sin(M_PI * x) * sin(M_PI / 2.0f * x)) / (M_PI * M_PI * x * x); + else + return 0.0f; +} + +static int config_props(AVFilterLink *inlink) +{ + AVFilterContext *ctx = inlink->dst; + LensfunContext *lensfun = ctx->priv; + int index; + float a; + int lensfun_mode = 0; + + if(!lensfun->modifier) { + if(lensfun->camera && lensfun->lens) { + lensfun->modifier = lf_modifier_new(lensfun->lens, + lensfun->camera->CropFactor, + inlink->w, + inlink->h); + if(lensfun->mode & VIGNETTING) + lensfun_mode |= LF_MODIFY_VIGNETTING; + if(lensfun->mode & GEOMETRY_DISTORTION) + lensfun_mode |= LF_MODIFY_DISTORTION | LF_MODIFY_GEOMETRY | LF_MODIFY_SCALE; + if(lensfun->mode & SUBPIXEL_DISTORTION) + lensfun_mode |= LF_MODIFY_TCA; + lf_modifier_initialize(lensfun->modifier, + lensfun->lens, + LF_PF_U8, + lensfun->focal_length, + lensfun->aperture, + lensfun->focus_distance, + 0.0, + lensfun->target_geometry, + lensfun_mode, + lensfun->reverse); + } else { + return AVERROR_INVALIDDATA; + } + } + + if(!lensfun->distortion_coords) { + if(lensfun->mode & SUBPIXEL_DISTORTION) { + lensfun->distortion_coords = malloc(sizeof(float) * inlink->w * inlink->h * 2 * 3); + if(lensfun->mode & GEOMETRY_DISTORTION) { + // apply both geometry and subpixel distortion + lf_modifier_apply_subpixel_geometry_distortion(lensfun->modifier, + 0, 0, + inlink->w, inlink->h, + lensfun->distortion_coords); + } else { + // apply only subpixsel distortion + lf_modifier_apply_subpixel_distortion(lensfun->modifier, + 0, 0, + inlink->w, inlink->h, + lensfun->distortion_coords); + } + } else if(lensfun->mode & GEOMETRY_DISTORTION) { + lensfun->distortion_coords = malloc(sizeof(float) * inlink->w * inlink->h * 2); + // apply only geometry distortion + lf_modifier_apply_geometry_distortion(lensfun->modifier, + 0, 0, + inlink->w, inlink->h, + lensfun->distortion_coords); + } + } + + if(!lensfun->interpolation) + if(lensfun->interpolation_type == LANCZOS) { + lensfun->interpolation = malloc(sizeof(float) * 4 * LANCZOS_RESOLUTION); + for(index = 0; index < 4 * LANCZOS_RESOLUTION; ++index) { + a = sqrt((float)index / LANCZOS_RESOLUTION); + if(a == 0.0f) + lensfun->interpolation[index] = 1.0f; + else + lensfun->interpolation[index] = lanczos_kernel(a); + } + } + + return 0; +} + +static int vignetting_filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs) +{ + const VignettingThreadData *thread_data = arg; + const int slice_start = thread_data->height * jobnr / nb_jobs; + const int slice_end = thread_data->height * (jobnr + 1) / nb_jobs; + + lf_modifier_apply_color_modification(thread_data->modifier, + thread_data->data_in + slice_start * thread_data->linesize_in, + 0, + slice_start, + thread_data->width, + slice_end - slice_start, + thread_data->pixel_composition, + thread_data->linesize_in); + + return 0; +} + +static float square(float x) +{ + return x * x; +} + +static int distortion_correction_filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs) +{ + const DistortionCorrectionThreadData *thread_data = arg; + const int slice_start = thread_data->height * jobnr / nb_jobs; + const int slice_end = thread_data->height * (jobnr + 1) / nb_jobs; + + int x, y, i, j, 
rgb_index; + float interpolated, new_x, new_y, d, norm; + int new_x_int, new_y_int; + for(y = slice_start; y < slice_end; ++y) + for(x = 0; x < thread_data->width; ++x) + for(rgb_index = 0; rgb_index < 3; ++rgb_index) { + if(thread_data->mode & SUBPIXEL_DISTORTION) { + // subpixel (and possibly geometry) distortion correction was applied, correct distortion + switch(thread_data->interpolation_type) { + case NEAREST: + new_x_int = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2] + 0.5f; + new_y_int = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2 + 1] + 0.5f; + if(new_x_int < 0 || new_x_int >= thread_data->width || new_y_int < 0 || new_y_int >= thread_data->height) + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0; + else + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = thread_data->data_in[new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in]; + break; + case LINEAR: + interpolated = 0.0f; + new_x = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2]; + new_x_int = new_x; + new_y = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2 + 1]; + new_y_int = new_y; + if(new_x_int < 0 || new_x_int + 1 >= thread_data->width || new_y_int < 0 || new_y_int + 1 >= thread_data->height) + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0; + else + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = + thread_data->data_in[ new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y_int + 1 - new_y) + + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x - new_x_int) * (new_y_int + 1 - new_y) + + thread_data->data_in[ new_x_int * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y - new_y_int) + + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x - new_x_int) * (new_y - new_y_int); + break; + case LANCZOS: + interpolated = 0.0f; + norm = 0.0f; + new_x = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2]; + new_x_int = new_x; + new_y = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2 + 1]; + new_y_int = new_y; + for(j = 0; j < 4; ++j) + for(i = 0; i < 4; ++i) { + if(new_x_int + i - 2 < 0 || new_x_int + i - 2 >= thread_data->width || new_y_int + j - 2 < 0 || new_y_int + j - 2 >= thread_data->height) + continue; + d = square(new_x - (new_x_int + i - 2)) * square(new_y - (new_y_int + j - 2)); + if(d >= 4.0f) + continue; + d = thread_data->interpolation[(int)(d * LANCZOS_RESOLUTION)]; + norm += d; + interpolated += thread_data->data_in[(new_x_int + i - 2) * 3 + rgb_index + (new_y_int + j - 2) * thread_data->linesize_in] * d; + } + if(norm == 0.0f) + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0; + else { + interpolated /= norm; + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = interpolated < 0.0f ? 0.0f : interpolated > 255.0f ? 
255 : interpolated; + } + break; + } + } + else if(thread_data->mode & GEOMETRY_DISTORTION) { + // geometry distortion correction was applied, correct distortion + switch(thread_data->interpolation_type) { + case NEAREST: + new_x_int = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2] + 0.5f; + new_y_int = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2 + 1] + 0.5f; + if(new_x_int < 0 || new_x_int >= thread_data->width || new_y_int < 0 || new_y_int >= thread_data->height) + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0; + else + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = thread_data->data_in[new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in]; + break; + case LINEAR: + interpolated = 0.0f; + new_x = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2]; + new_x_int = new_x; + new_y = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2 + 1]; + new_y_int = new_y; + if(new_x_int < 0 || new_x_int + 1 >= thread_data->width || new_y_int < 0 || new_y_int + 1 >= thread_data->height) + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0; + else + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = + thread_data->data_in[ new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y_int + 1 - new_y) + + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x - new_x_int) * (new_y_int + 1 - new_y) + + thread_data->data_in[ new_x_int * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y - new_y_int) + + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x - new_x_int) * (new_y - new_y_int); + break; + case LANCZOS: + interpolated = 0.0f; + norm = 0.0f; + new_x = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2]; + new_x_int = new_x; + new_y = thread_data->distortion_coords[x * 2 + 1 + y * thread_data->width * 2]; + new_y_int = new_y; + for(j = 0; j < 4; ++j) + for(i = 0; i < 4; ++i) { + if(new_x_int + i - 2 < 0 || new_x_int + i - 2 >= thread_data->width || new_y_int + j - 2 < 0 || new_y_int + j - 2 >= thread_data->height) + continue; + d = square(new_x - (new_x_int + i - 2)) * square(new_y - (new_y_int + j - 2)); + if(d >= 4.0f) + continue; + d = thread_data->interpolation[(int)(d * LANCZOS_RESOLUTION)]; + norm += d; + interpolated += thread_data->data_in[(new_x_int + i - 2) * 3 + rgb_index + (new_y_int + j - 2) * thread_data->linesize_in] * d; + } + if(norm == 0.0f) + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0; + else { + interpolated /= norm; + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = interpolated < 0.0f ? 0.0f : interpolated > 255.0f ? 
255 : interpolated; + } + break; + } + } + else { + // no distortion correction was applied + thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = thread_data->data_in[x * 3 + rgb_index + y * thread_data->linesize_in]; + } + } + + return 0; +} + +static int filter_frame(AVFilterLink *inlink, AVFrame *in) +{ + AVFilterContext *ctx = inlink->dst; + LensfunContext *lensfun = ctx->priv; + AVFilterLink *outlink = ctx->outputs[0]; + AVFrame *out; + VignettingThreadData vignetting_thread_data; + DistortionCorrectionThreadData distortion_correction_thread_data; + + if(lensfun->mode & VIGNETTING) { + av_frame_make_writable(in); + + vignetting_thread_data.width = inlink->w; + vignetting_thread_data.height = inlink->h; + vignetting_thread_data.data_in = in->data[0]; + vignetting_thread_data.linesize_in = in->linesize[0]; + vignetting_thread_data.pixel_composition = LF_CR_3(RED, GREEN, BLUE); + vignetting_thread_data.modifier = lensfun->modifier; + + ctx->internal->execute(ctx, + vignetting_filter_slice, + &vignetting_thread_data, + NULL, + FFMIN(outlink->h, ctx->graph->nb_threads)); + } + + if(lensfun->mode & (GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION)) { + out = ff_get_video_buffer(outlink, outlink->w, outlink->h); + if(!out) { + av_frame_free(&in); + return AVERROR(ENOMEM); + } + av_frame_copy_props(out, in); + + distortion_correction_thread_data.width = inlink->w; + distortion_correction_thread_data.height = inlink->h; + distortion_correction_thread_data.distortion_coords = lensfun->distortion_coords; + distortion_correction_thread_data.data_in = in->data[0]; + distortion_correction_thread_data.data_out = out->data[0]; + distortion_correction_thread_data.linesize_in = in->linesize[0]; + distortion_correction_thread_data.linesize_out = out->linesize[0]; + distortion_correction_thread_data.interpolation = lensfun->interpolation; + distortion_correction_thread_data.mode = lensfun->mode; + distortion_correction_thread_data.interpolation_type = lensfun->interpolation_type; + + ctx->internal->execute(ctx, + distortion_correction_filter_slice, + &distortion_correction_thread_data, + NULL, + FFMIN(outlink->h, ctx->graph->nb_threads)); + + av_frame_free(&in); + return ff_filter_frame(outlink, out); + } else + return ff_filter_frame(outlink, in); +} + +static av_cold void uninit(AVFilterContext *ctx) +{ + LensfunContext *lensfun = ctx->priv; + + if(lensfun->camera) + lf_camera_destroy(lensfun->camera); + if(lensfun->lens) + lf_lens_destroy(lensfun->lens); + if(lensfun->modifier) + lf_modifier_destroy(lensfun->modifier); + if(lensfun->distortion_coords) + free(lensfun->distortion_coords); + if(lensfun->interpolation) + free(lensfun->interpolation); +} + +static const AVFilterPad lensfun_inputs[] = { + { + .name = "default", + .type = AVMEDIA_TYPE_VIDEO, + .config_props = config_props, + .filter_frame = filter_frame, + }, + { NULL } +}; + +static const AVFilterPad lensfun_outputs[] = { + { + .name = "default", + .type = AVMEDIA_TYPE_VIDEO, + }, + { NULL } +}; + +AVFilter ff_vf_lensfun = { + .name = "lensfun", + .description = NULL_IF_CONFIG_SMALL("Apply correction to an image based on info derived from the lensfun database."), + .priv_size = sizeof(LensfunContext), + .init = init, + .uninit = uninit, + .query_formats = query_formats, + .inputs = lensfun_inputs, + .outputs = lensfun_outputs, + .priv_class = &lensfun_class, + .flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS, +};
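
For reference, the weight evaluated by lanczos_kernel() above is the standard
Lanczos-2 windowed sinc; this is just the formula behind the existing C
expression, written out for readers of the patch:

    L(x) = 1                                          if x = 0
    L(x) = 2 sin(pi x) sin(pi x/2) / (pi^2 x^2)
         = sinc(x) sinc(x/2)                          if 0 < |x| < 2
    L(x) = 0                                          otherwise

config_props() tabulates this kernel into 4 * LANCZOS_RESOLUTION entries, with
entry i holding L(sqrt(i / LANCZOS_RESOLUTION)), so the distortion-correction
slice code can turn its precomputed d value into a weight with a single table
lookup instead of calling sin() for every tap.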