From patchwork Mon Dec 3 22:27:09 2018
X-Patchwork-Submitter: Mina
X-Patchwork-Id: 11266
From: Mina
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 4 Dec 2018 00:27:09 +0200
Message-ID: <3ab0e3d8-8e23-c248-42fd-6b0d79f0fd5f@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3] lavfi/vf_colorconstancy: add weighted_greyedge

Hi,

This patch was part of the GSoC Color Constancy filter project. It introduces an
improved color constancy filter (weighted_greyedge) based on the already pushed
greyedge filter.
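For illustration only, once FFmpeg is built with the filter enabled it can be
dropped into a filtergraph like the existing greyedge filter; the file names
below are placeholders and the option values simply spell out the defaults
documented in the patch:

    ffmpeg -i input.mkv -vf weighted_greyedge=minknorm=1:sigma=1:wmap_k=2:iters=5:angerr=0.001 output.mkv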
I'm sending it again after updating it against the latest version of master.

V3 changes:
- fixed inconsistency in filters.texi

Regards,
Mina Sami

From f4891ca38470893d29864821cd1988bb026cf160 Mon Sep 17 00:00:00 2001
From: Mina
Date: Tue, 4 Dec 2018 00:22:23 +0200
Subject: [PATCH] lavfi/vf_colorconstancy: add weighted_greyedge

Signed-off-by: Mina
---
 doc/filters.texi                |  39 ++++
 libavfilter/Makefile            |   1 +
 libavfilter/allfilters.c        |   1 +
 libavfilter/vf_colorconstancy.c | 322 ++++++++++++++++++++++++--------
 4 files changed, 288 insertions(+), 75 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 41fbbc5329..1cad919ba7 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -18038,6 +18038,45 @@ separatefields,select=eq(mod(n,4),0)+eq(mod(n,4),3),weave
 @end example
 @end itemize
 
+@section weighted_greyedge
+A color constancy variation filter which estimates scene illumination via the
+weighted grey edge algorithm and corrects the scene colors accordingly.
+
+See: @url{http://refbase.cvc.uab.es/files/GGW2012.pdf}
+
+The filter accepts the following options:
+
+@table @option
+@item minknorm
+The Minkowski parameter to be used for calculating the Minkowski distance. Must
+be chosen in the range [1,20] and default value is 1.
+
+@item sigma
+The standard deviation of Gaussian blur to be applied to the scene. Must be
+chosen in the range [0.2,1024.0] and default value is 1.
+
+@item wmap_k
+The power applied to the weight map to emphasize higher weights. Must be chosen
+in the range [1,20] and default value is 2.
+
+@item iters
+The maximum number of iterations for performing the algorithm. Must be chosen in the
+range [1,20] and default value is 5.
+
+@item angerr
+The angular error threshold in degrees for stopping the algorithm. Must be chosen
+in the range [0,360] and default value is 0.001.
+@end table
+
+@subsection Examples
+@itemize
+
+@item
+@example
+weighted_greyedge=minknorm=5:sigma=2:wmap_k=10:iters=10:angerr=0.0005
+@end example
+@end itemize
+
 @section xbr
 Apply the xBR high-quality magnification filter which is designed for
 pixel art. It follows a set of edge-detection rules, see
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 6e2658186d..d77ca697b2 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -414,6 +414,7 @@ OBJS-$(CONFIG_VSTACK_FILTER) += vf_stack.o framesync.o
 OBJS-$(CONFIG_W3FDIF_FILTER)                 += vf_w3fdif.o
 OBJS-$(CONFIG_WAVEFORM_FILTER)               += vf_waveform.o
 OBJS-$(CONFIG_WEAVE_FILTER)                  += vf_weave.o
+OBJS-$(CONFIG_WEIGHTED_GREYEDGE_FILTER)      += vf_colorconstancy.o
 OBJS-$(CONFIG_XBR_FILTER)                    += vf_xbr.o
 OBJS-$(CONFIG_XSTACK_FILTER)                 += vf_stack.o framesync.o
 OBJS-$(CONFIG_YADIF_FILTER)                  += vf_yadif.o yadif_common.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index a600069500..7dbfccb393 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -392,6 +392,7 @@ extern AVFilter ff_vf_vstack;
 extern AVFilter ff_vf_w3fdif;
 extern AVFilter ff_vf_waveform;
 extern AVFilter ff_vf_weave;
+extern AVFilter ff_vf_weighted_greyedge;
 extern AVFilter ff_vf_xbr;
 extern AVFilter ff_vf_xstack;
 extern AVFilter ff_vf_yadif;
diff --git a/libavfilter/vf_colorconstancy.c b/libavfilter/vf_colorconstancy.c
index e3bb39e51b..3b84a637c7 100644
--- a/libavfilter/vf_colorconstancy.c
+++ b/libavfilter/vf_colorconstancy.c
@@ -26,6 +26,14 @@
  *
  * @cite
  * J. van de Weijer, Th. Gevers, A. Gijsenij "Edge-Based Color Constancy".
+ *
+ * @cite
+ * J. van de Weijer, Th. Gevers, and J. Geusebroek,
+ * “Edge and corner detection by photometric quasi-invariants”.
+ *
+ * @cite
+ * A. Gijsenij, Th. Gevers, J. van de Weijer,
+ * "Improving Color Constancy by Photometric Edge Weighting".
  */
 
 #include "libavutil/imgutils.h"
@@ -39,7 +47,8 @@
 
 #include <math.h>
 
-#define GREY_EDGE "greyedge"
+#define GREY_EDGE          "greyedge"
+#define WEIGHTED_GREY_EDGE "weighted_greyedge"
 
 #define SQRT3 1.73205080757
 
@@ -79,6 +88,10 @@ typedef struct ColorConstancyContext {
     int minknorm; /**< @minknorm = 0 : getMax instead */
     double sigma;
 
+    int wmap_k;
+    int iters;
+    double angerr_thresh;
+
     int nb_threads;
     int planeheight[4];
     int planewidth[4];
@@ -477,53 +490,73 @@ static int filter_slice_grey_edge(AVFilterContext* ctx, void* arg, int jobnr, in
 }
 
 /**
- * Main control function for grey edge algorithm.
+ * Slice function for weighted grey edge algorithm that is called iteratively to
+ * calculate and apply weight scheme.
  *
  * @param ctx the filter context.
- * @param in frame to perfrom grey edge on.
+ * @param arg data to be passed between threads.
+ * @param jobnr current job number.
+ * @param nb_jobs total number of jobs.
  *
- * @return 0 in case of success, a negative value corresponding to an
- *         AVERROR code in case of failure.
+ * @return 0.
  */
-static int filter_grey_edge(AVFilterContext *ctx, AVFrame *in)
+static int filter_slice_weighted_grey_edge(AVFilterContext* ctx, void* arg, int jobnr, int nb_jobs)
 {
     ColorConstancyContext *s = ctx->priv;
-    ThreadData td;
-    int minknorm = s->minknorm;
-    int difford = s->difford;
-    double *white = s->white;
-    int nb_jobs = FFMIN3(s->planeheight[1], s->planewidth[1], s->nb_threads);
-    int plane, job, ret;
-
-    td.in = in;
-    ret = setup_derivative_buffers(ctx, &td);
-    if (ret) {
-        return ret;
-    }
-    get_derivative(ctx, &td);
-    if (difford > 0) {
-        ctx->internal->execute(ctx, slice_normalize, &td, NULL, nb_jobs);
-    }
+    ThreadData *td = arg;
+    AVFrame *in = td->in;
+    int minknorm = s->minknorm;
+    int wmap_k = s->wmap_k;
+    const uint8_t thresh = 255;
 
-    ctx->internal->execute(ctx, filter_slice_grey_edge, &td, NULL, nb_jobs);
-    if (!minknorm) {
-        for (plane = 0; plane < NUM_PLANES; ++plane) {
-            white[plane] = 0; // All values are absolute
-            for (job = 0; job < nb_jobs; ++job) {
-                white[plane] = FFMAX(white[plane] , td.data[INDEX_DST][plane][job]);
+    const int height = s->planeheight[1];
+    const int width = s->planewidth[1];
+    const int slice_start = (height * jobnr) / nb_jobs;
+    const int slice_end = (height * (jobnr+1)) / nb_jobs;
+    double dx, dy, d, dw[NUM_PLANES];
+    double o3_x, o3_y;
+    double spec_var, weight;
+    double result[NUM_PLANES];
+    int r, c, p;
+
+    memset(result, 0, NUM_PLANES * sizeof(result[0]));
+
+    for (r = slice_start; r < slice_end; ++r) {
+        for (c = 0; c < width; ++c) {
+            d = 0;
+            o3_x = 0;
+            o3_y = 0;
+            for (p = 0; p < NUM_PLANES; ++p) {
+                dx = td->data[INDEX_DX][p][INDX2D(r, c, width)];
+                dy = td->data[INDEX_DY][p][INDX2D(r, c, width)];
+                dw[p] = sqrt(dx * dx + dy * dy);
+
+                d += dw[p] * dw[p];
+                o3_x += dx;
+                o3_y += dy;
             }
-        }
-    } else {
-        for (plane = 0; plane < NUM_PLANES; ++plane) {
-            white[plane] = 0;
-            for (job = 0; job < nb_jobs; ++job) {
-                white[plane] += td.data[INDEX_DST][plane][job];
+            d = sqrt(d);
+            o3_x /= SQRT3;
+            o3_y /= SQRT3;
+
+            spec_var = sqrt( o3_x * o3_x + o3_y * o3_y);
+            weight = spec_var / d;
+            if (weight > 1) {
+                weight = 1;
+            }
+            weight = pow(weight, wmap_k);
+
+            for (p = 0; p < NUM_PLANES; ++p) {
+                result[p] += ( pow(dw[p] * weight , minknorm)
+                               * (in->data[p][INDX2D(r, c, in->linesize[p])] < thresh) );
             }
-            white[plane] = pow(white[plane], 1./minknorm);
         }
     }
 
-    cleanup_derivative_buffers(&td, difford + 1, NUM_PLANES);
+    for (p = 0; p < NUM_PLANES; ++p) {
+        /** To make sure you save in a place you won't use again */
+        td->data[INDEX_DST][p][slice_start] = result[p];
+    }
     return 0;
 }
 
@@ -554,32 +587,6 @@ static void normalize_light(double *light)
     }
 }
 
-/**
- * Redirects to corresponding algorithm estimation function and performs normalization
- * after estimation.
- *
- * @param ctx the filter context.
- * @param in frame to perfrom estimation on.
- *
- * @return 0 in case of success, a negative value corresponding to an
- *         AVERROR code in case of failure.
- */
-static int illumination_estimation(AVFilterContext *ctx, AVFrame *in)
-{
-    ColorConstancyContext *s = ctx->priv;
-    int ret;
-
-    ret = filter_grey_edge(ctx, in);
-
-    av_log(ctx, AV_LOG_DEBUG, "Estimated illumination= %f %f %f\n",
-           s->white[0], s->white[1], s->white[2]);
-    normalize_light(s->white);
-    av_log(ctx, AV_LOG_DEBUG, "Estimated illumination after normalization= %f %f %f\n",
-           s->white[0], s->white[1], s->white[2]);
-
-    return ret;
-}
-
 /**
  * Performs simple correction via diagonal transformation model.
  *
@@ -636,6 +643,146 @@ static void chromatic_adaptation(AVFilterContext *ctx, AVFrame *in, AVFrame *out
     ctx->internal->execute(ctx, diagonal_transformation, &td, NULL, nb_jobs);
 }
 
+/**
+ * Main control function for grey edge algorithm.
+ *
+ * @param ctx the filter context.
+ * @param in frame to perform grey edge on.
+ *
+ * @return 0 in case of success, a negative value corresponding to an
+ *         AVERROR code in case of failure.
+ */
+static int filter_grey_edge(AVFilterContext *ctx, AVFrame *in, AVFrame **out)
+{
+    ColorConstancyContext *s = ctx->priv;
+    AVFilterLink *outlink = ctx->outputs[0];
+    int minknorm = s->minknorm;
+    int difford = s->difford;
+    double *white = s->white;
+    int nb_jobs = FFMIN3(s->planeheight[1], s->planewidth[1], s->nb_threads);
+    ThreadData td;
+    int plane, job, ret;
+
+    td.in = in;
+    ret = setup_derivative_buffers(ctx, &td);
+    if (ret) {
+        return ret;
+    }
+    get_derivative(ctx, &td);
+    if (difford > 0) {
+        ctx->internal->execute(ctx, slice_normalize, &td, NULL, nb_jobs);
+    }
+
+    ctx->internal->execute(ctx, filter_slice_grey_edge, &td, NULL, nb_jobs);
+    if (!minknorm) {
+        for (plane = 0; plane < NUM_PLANES; ++plane) {
+            white[plane] = 0; // All values are absolute
+            for (job = 0; job < nb_jobs; ++job) {
+                white[plane] = FFMAX(white[plane] , td.data[INDEX_DST][plane][job]);
+            }
+        }
+    } else {
+        for (plane = 0; plane < NUM_PLANES; ++plane) {
+            white[plane] = 0;
+            for (job = 0; job < nb_jobs; ++job) {
+                white[plane] += td.data[INDEX_DST][plane][job];
+            }
+            white[plane] = pow(white[plane], 1./minknorm);
+        }
+    }
+
+    cleanup_derivative_buffers(&td, difford + 1, NUM_PLANES);
+
+    av_log(ctx, AV_LOG_DEBUG, "Estimated illumination= %f %f %f\n",
+           s->white[0], s->white[1], s->white[2]);
+    normalize_light(s->white);
+    av_log(ctx, AV_LOG_DEBUG, "Estimated illumination after normalization= %f %f %f\n",
+           s->white[0], s->white[1], s->white[2]);
+
+    if (av_frame_is_writable(in)) {
+        *out = in;
+    } else {
+        *out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+        if ( !(*out) ) {
+            av_log(ctx, AV_LOG_ERROR, "Out of memory while allocating output video buffer.\n");
+            return AVERROR(ENOMEM);
+        }
+        av_frame_copy_props(*out, in);
+    }
+    chromatic_adaptation(ctx, in, *out);
+
+    return 0;
+}
+
+#define ANG_ERR(w) acos( ( (w)[0] + (w)[1] + (w)[2] ) / sqrt(3) )
+#define RAD2DEG(r) (r) * 180 / M_PI
+
+/**
+ * Main control function for weighted grey edge algorithm.
+ *
+ * @param ctx the filter context.
+ * @param in frame to perform grey edge on.
+ *
+ * @return 0 in case of success, a negative value corresponding to an
+ *         AVERROR code in case of failure.
+ */
+static int filter_weighted_grey_edge(AVFilterContext *ctx, AVFrame *in, AVFrame **out)
+{
+    ColorConstancyContext *s = ctx->priv;
+    const double angerr_thresh = s->angerr_thresh;
+    double *curr_estim_illum = s->white;
+    int difford = s->difford;
+    int minknorm = s->minknorm;
+    int it = s->iters;
+    int nb_jobs = FFMIN3(s->planeheight[1], s->planewidth[1], s->nb_threads);
+    const int height = s->planeheight[1];
+    double final_estim_illum[NUM_PLANES];
+    ThreadData td;
+    int ret, plane, job;
+
+    ret = filter_grey_edge(ctx, in, out);
+    if (ret) {
+        return ret;
+    }
+    memcpy(final_estim_illum, curr_estim_illum, NUM_PLANES * sizeof(*curr_estim_illum));
+
+    td.in = in;
+    ret = setup_derivative_buffers(ctx, &td);
+    if (ret) {
+        return ret;
+    }
+
+    while (it-- && RAD2DEG(ANG_ERR(curr_estim_illum)) > angerr_thresh) {
+        get_derivative(ctx, &td);
+        ctx->internal->execute(ctx, filter_slice_weighted_grey_edge, &td, NULL, nb_jobs);
+
+        for (plane = 0; plane < NUM_PLANES; ++plane) {
+            curr_estim_illum[plane] = 0;
+            for (job = 0; job < nb_jobs; ++job) {
+                curr_estim_illum[plane] += td.data[INDEX_DST][plane][(height * job) / nb_jobs];
+            }
+            curr_estim_illum[plane] = pow(curr_estim_illum[plane], 1./minknorm);
+        }
+
+        normalize_light(curr_estim_illum);
+        chromatic_adaptation(ctx, td.in, td.in);
+
+        for (plane = 0; plane < NUM_PLANES; ++plane) {
+            final_estim_illum[plane] *= curr_estim_illum[plane];
+        }
+        normalize_light(final_estim_illum);
+
+        av_log(ctx, AV_LOG_DEBUG, "Current scene illumination=%f %f %f\n", curr_estim_illum[0],
+               curr_estim_illum[1], curr_estim_illum[2]);
+        av_log(ctx, AV_LOG_DEBUG, "Angular error = %.16f degrees\n", RAD2DEG(ANG_ERR(curr_estim_illum)) );
+        av_log(ctx, AV_LOG_DEBUG, "Iterative estimated illumination after normalization= %f %f %f\n",
+               final_estim_illum[0], final_estim_illum[1], final_estim_illum[2]);
+    }
+
+    cleanup_derivative_buffers(&td, difford + 1, NUM_PLANES);
+    return 0;
+}
+
 static int query_formats(AVFilterContext *ctx)
 {
     static const enum AVPixelFormat pix_fmts[] = {
@@ -657,6 +804,11 @@ static int config_props(AVFilterLink *inlink)
     double sigma = s->sigma;
     int ret;
 
+    if (!strcmp(ctx->filter->name, WEIGHTED_GREY_EDGE)) {
+        /** Enforce difford since it's not an option */
+        s->difford = 1;
+    }
+
    if (!floor(break_off_sigma * sigma + 0.5) && s->difford) {
         av_log(ctx, AV_LOG_ERROR, "floor(%f * sigma) must be > 0 when difford > 0.\n", break_off_sigma);
         return AVERROR(EINVAL);
@@ -683,22 +835,15 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     AVFrame *out;
     int ret;
 
-    ret = illumination_estimation(ctx, in);
-    if (ret) {
-        return ret;
+    if (!strcmp(ctx->filter->name, GREY_EDGE)) {
+        ret = filter_grey_edge(ctx, in, &out);
+    } else if (!strcmp(ctx->filter->name, WEIGHTED_GREY_EDGE)) {
+        ret = filter_weighted_grey_edge(ctx, in, &out);
     }
 
-    if (av_frame_is_writable(in)) {
-        out = in;
-    } else {
-        out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
-        if (!out) {
-            av_log(ctx, AV_LOG_ERROR, "Out of memory while allocating output video buffer.\n");
-            return AVERROR(ENOMEM);
-        }
-        av_frame_copy_props(out, in);
+    if (ret) {
+        return ret;
     }
-    chromatic_adaptation(ctx, in, out);
 
     return ff_filter_frame(outlink, out);
 }
@@ -756,3 +901,30 @@ AVFilter ff_vf_greyedge = {
 };
 
 #endif /* CONFIG_GREY_EDGE_FILTER */
+
+#if CONFIG_WEIGHTED_GREYEDGE_FILTER
+
+static const AVOption weighted_greyedge_options[] = {
+    { "minknorm", "set Minkowski norm",           OFFSET(minknorm),      AV_OPT_TYPE_INT,    {.i64=1},       1,     20, FLAGS },
+    { "sigma",    "set sigma",                    OFFSET(sigma),         AV_OPT_TYPE_DOUBLE, {.dbl=1},     0.2, 1024.0, FLAGS },
+    { "wmap_k",   "set power of weight map",      OFFSET(wmap_k),        AV_OPT_TYPE_INT,    {.i64=2},       1,     20, FLAGS },
+    { "iters",    "set max number of iterations", OFFSET(iters),         AV_OPT_TYPE_INT,    {.i64=5},       1,     20, FLAGS },
+    { "angerr",   "set angular error threshold",  OFFSET(angerr_thresh), AV_OPT_TYPE_DOUBLE, {.dbl=0.001},   0,    360, FLAGS },
+    { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(weighted_greyedge);
+
+AVFilter ff_vf_weighted_greyedge = {
+    .name          = WEIGHTED_GREY_EDGE,
+    .description   = NULL_IF_CONFIG_SMALL("Estimates scene illumination by weighted grey edge assumption."),
+    .priv_size     = sizeof(ColorConstancyContext),
+    .priv_class    = &weighted_greyedge_class,
+    .query_formats = query_formats,
+    .uninit        = uninit,
+    .inputs        = colorconstancy_inputs,
+    .outputs       = colorconstancy_outputs,
+    .flags         = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
+};
+
+#endif /* CONFIG_WEIGHTED_GREYEDGE_FILTER */
\ No newline at end of file
-- 
2.17.1