From patchwork Mon Feb 24 08:40:50 2020
From: "Guo, Yejun"
To: ffmpeg-devel@ffmpeg.org
Cc: yejun.guo@intel.com
Date: Mon, 24 Feb 2020 16:40:50 +0800
Message-Id: <1582533650-1496-1-git-send-email-yejun.guo@intel.com>
Subject: [FFmpeg-devel] [PATCH 1/4] avfilter/vf_sr.c: refine code to use AVPixFmtDescriptor.log2_chroma_h/w

Signed-off-by: Guo, Yejun
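The refactor below relies on the fact that each planar YUV format's chroma subsampling is fully described by the descriptor's log2_chroma_w/h fields, so the per-format switch collapses into two ceiling right-shifts. A minimal Python sketch of the C arithmetic (the macro re-implementation and the format table are mine, mirroring libavutil, not FFmpeg code itself):

```python
def ceil_rshift(a, shift):
    """Python model of FFmpeg's AV_CEIL_RSHIFT: right shift, rounding up."""
    return -((-a) >> shift)

# (log2_chroma_w, log2_chroma_h) as found in libavutil's pixel format
# descriptors for the five formats the old switch handled explicitly.
LOG2_CHROMA = {
    "yuv420p": (1, 1),
    "yuv422p": (1, 0),
    "yuv444p": (0, 0),
    "yuv410p": (2, 2),
    "yuv411p": (2, 0),
}

def chroma_plane_size(fmt, width, height):
    """Chroma plane dimensions via the descriptor fields; no switch needed."""
    cw, ch = LOG2_CHROMA[fmt]
    return ceil_rshift(width, cw), ceil_rshift(height, ch)
```

For yuv420p this gives half-size chroma planes rounded up, e.g. a 1921x1081 frame yields 961x541, exactly what the deleted switch computed case by case.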
---
 libavfilter/vf_sr.c | 40 ++++++----------------------------------
 1 file changed, 6 insertions(+), 34 deletions(-)

diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
index 562b030..f000eda 100644
--- a/libavfilter/vf_sr.c
+++ b/libavfilter/vf_sr.c
@@ -176,40 +176,12 @@ static int config_props(AVFilterLink *inlink)
         sr_context->sws_slice_h = inlink->h;
     } else {
         if (inlink->format != AV_PIX_FMT_GRAY8){
-            sws_src_h = sr_context->input.height;
-            sws_src_w = sr_context->input.width;
-            sws_dst_h = sr_context->output.height;
-            sws_dst_w = sr_context->output.width;
-
-            switch (inlink->format){
-            case AV_PIX_FMT_YUV420P:
-                sws_src_h = AV_CEIL_RSHIFT(sws_src_h, 1);
-                sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 1);
-                sws_dst_h = AV_CEIL_RSHIFT(sws_dst_h, 1);
-                sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 1);
-                break;
-            case AV_PIX_FMT_YUV422P:
-                sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 1);
-                sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 1);
-                break;
-            case AV_PIX_FMT_YUV444P:
-                break;
-            case AV_PIX_FMT_YUV410P:
-                sws_src_h = AV_CEIL_RSHIFT(sws_src_h, 2);
-                sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 2);
-                sws_dst_h = AV_CEIL_RSHIFT(sws_dst_h, 2);
-                sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
-                break;
-            case AV_PIX_FMT_YUV411P:
-                sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 2);
-                sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
-                break;
-            default:
-                av_log(context, AV_LOG_ERROR,
-                       "could not create SwsContext for scaling for given input pixel format: %s\n",
-                       av_get_pix_fmt_name(inlink->format));
-                return AVERROR(EIO);
-            }
+            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
+            sws_src_h = AV_CEIL_RSHIFT(sr_context->input.height, desc->log2_chroma_h);
+            sws_src_w = AV_CEIL_RSHIFT(sr_context->input.width, desc->log2_chroma_w);
+            sws_dst_h = AV_CEIL_RSHIFT(sr_context->output.height, desc->log2_chroma_h);
+            sws_dst_w = AV_CEIL_RSHIFT(sr_context->output.width, desc->log2_chroma_w);
+
             sr_context->sws_contexts[0] = sws_getContext(sws_src_w, sws_src_h, AV_PIX_FMT_GRAY8,
                                                          sws_dst_w, sws_dst_h,
                                                          AV_PIX_FMT_GRAY8, SWS_BICUBIC,
                                                          NULL, NULL, NULL);

From patchwork Mon Feb 24 08:41:02 2020
From: "Guo, Yejun"
To: ffmpeg-devel@ffmpeg.org
Cc: yejun.guo@intel.com
Date: Mon, 24 Feb 2020 16:41:02 +0800
Message-Id: <1582533662-1543-1-git-send-email-yejun.guo@intel.com>
Subject: [FFmpeg-devel] [PATCH 2/4] avfilter/vf_dnn_processing.c: use swscale for uint8<->float32 convert
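The hand-written loops this patch removes normalized uint8 to float by dividing by 255 and converted back with a clip; swscale's GRAY8<->GRAYF32 path does the equivalent, and packed RGB24/BGR24 data is fed through it as a single gray plane w*3 samples wide. A Python sketch of that arithmetic (the function names are mine, not the FFmpeg API):

```python
def gray8_to_grayf32(samples):
    """uint8 -> float32 in [0, 1], as the removed loop did with /255.0f."""
    return [v / 255.0 for v in samples]

def grayf32_to_gray8(samples):
    """float32 -> uint8, truncating and clipping like av_clip_uintp2(..., 8)."""
    return [min(max(int(v * 255.0), 0), 255) for v in samples]

def rgb24_row_to_float(row_bytes):
    # An RGB24 row of w pixels is just 3*w interleaved uint8 samples, which
    # is why one GRAY8<->GRAYF32 SwsContext of width w*3 converts the row.
    return gray8_to_grayf32(row_bytes)
```

The round trip is lossless for in-range values, and out-of-range floats are clipped to [0, 255] just as the old av_clip_uintp2() call did.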
Signed-off-by: Guo, Yejun
---
 libavfilter/vf_dnn_processing.c | 81 +++++++++++++++++++++++++++++++----------
 1 file changed, 61 insertions(+), 20 deletions(-)

diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 492df93..4d0ee78 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -32,6 +32,7 @@
 #include "dnn_interface.h"
 #include "formats.h"
 #include "internal.h"
+#include "libswscale/swscale.h"

 typedef struct DnnProcessingContext {
     const AVClass *class;
@@ -47,6 +48,9 @@ typedef struct DnnProcessingContext {
     // input & output of the model at execution time
     DNNData input;
     DNNData output;
+
+    struct SwsContext *sws_gray8_to_grayf32;
+    struct SwsContext *sws_grayf32_to_gray8;
 } DnnProcessingContext;

 #define OFFSET(x) offsetof(DnnProcessingContext, x)
@@ -211,6 +215,45 @@ static int config_input(AVFilterLink *inlink)
     return 0;
 }

+static int prepare_sws_context(AVFilterLink *outlink)
+{
+    AVFilterContext *context = outlink->src;
+    DnnProcessingContext *ctx = context->priv;
+    AVFilterLink *inlink = context->inputs[0];
+    enum AVPixelFormat fmt = inlink->format;
+    DNNDataType input_dt = ctx->input.dt;
+    DNNDataType output_dt = ctx->output.dt;
+
+    switch (fmt) {
+    case AV_PIX_FMT_RGB24:
+    case AV_PIX_FMT_BGR24:
+        if (input_dt == DNN_FLOAT) {
+            ctx->sws_gray8_to_grayf32 = sws_getContext(inlink->w * 3,
+                                                       inlink->h,
+                                                       AV_PIX_FMT_GRAY8,
+                                                       inlink->w * 3,
+                                                       inlink->h,
+                                                       AV_PIX_FMT_GRAYF32,
+                                                       0, NULL, NULL, NULL);
+        }
+        if (output_dt == DNN_FLOAT) {
+            ctx->sws_grayf32_to_gray8 = sws_getContext(outlink->w * 3,
+                                                       outlink->h,
+                                                       AV_PIX_FMT_GRAYF32,
+                                                       outlink->w * 3,
+                                                       outlink->h,
+                                                       AV_PIX_FMT_GRAY8,
+                                                       0, NULL, NULL, NULL);
+        }
+        return 0;
+    default:
+        //do nothing
+        break;
+    }
+
+    return 0;
+}
+
 static int config_output(AVFilterLink *outlink)
 {
     AVFilterContext *context = outlink->src;
@@ -227,25 +270,23 @@ static int config_output(AVFilterLink *outlink)
     outlink->w = ctx->output.width;
     outlink->h = ctx->output.height;
+    prepare_sws_context(outlink);
+
     return 0;
 }

-static int copy_from_frame_to_dnn(DNNData *dnn_input, const AVFrame *frame)
+static int copy_from_frame_to_dnn(DnnProcessingContext *ctx, const AVFrame *frame)
 {
     int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
+    DNNData *dnn_input = &ctx->input;
     switch (frame->format) {
     case AV_PIX_FMT_RGB24:
     case AV_PIX_FMT_BGR24:
         if (dnn_input->dt == DNN_FLOAT) {
-            float *dnn_input_data = dnn_input->data;
-            for (int i = 0; i < frame->height; i++) {
-                for(int j = 0; j < frame->width * 3; j++) {
-                    int k = i * frame->linesize[0] + j;
-                    int t = i * frame->width * 3 + j;
-                    dnn_input_data[t] = frame->data[0][k] / 255.0f;
-                }
-            }
+            sws_scale(ctx->sws_gray8_to_grayf32, (const uint8_t **)frame->data, frame->linesize,
+                      0, frame->height, (uint8_t * const*)(&dnn_input->data),
+                      (const int [4]){frame->linesize[0] * sizeof(float), 0, 0, 0});
         } else {
             av_assert0(dnn_input->dt == DNN_UINT8);
             av_image_copy_plane(dnn_input->data, bytewidth,
@@ -266,22 +307,19 @@ static int copy_from_frame_to_dnn(DNNData *dnn_input, const AVFrame *frame)
     return 0;
 }

-static int copy_from_dnn_to_frame(AVFrame *frame, const DNNData *dnn_output)
+static int copy_from_dnn_to_frame(DnnProcessingContext *ctx, AVFrame *frame)
 {
     int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
+    DNNData *dnn_output = &ctx->output;
     switch (frame->format) {
     case AV_PIX_FMT_RGB24:
     case AV_PIX_FMT_BGR24:
         if (dnn_output->dt == DNN_FLOAT) {
-            float *dnn_output_data = dnn_output->data;
-            for (int i = 0; i < frame->height; i++) {
-                for(int j = 0; j < frame->width * 3; j++) {
-                    int k = i * frame->linesize[0] + j;
-                    int t = i * frame->width * 3 + j;
-                    frame->data[0][k] = av_clip_uintp2((int)(dnn_output_data[t] * 255.0f), 8);
-                }
-            }
+            sws_scale(ctx->sws_grayf32_to_gray8, (const uint8_t *[4]){(const uint8_t *)dnn_output->data, 0, 0, 0},
+                      (const int[4]){frame->linesize[0] * sizeof(float), 0, 0, 0},
+                      0, frame->height, (uint8_t * const*)frame->data, frame->linesize);
+        } else {
             av_assert0(dnn_output->dt == DNN_UINT8);
             av_image_copy_plane(frame->data[0], frame->linesize[0],
@@ -318,7 +356,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     DNNReturnType dnn_result;
     AVFrame *out;

-    copy_from_frame_to_dnn(&ctx->input, in);
+    copy_from_frame_to_dnn(ctx, in);
     dnn_result = (ctx->dnn_module->execute_model)(ctx->model, &ctx->output, 1);
     if (dnn_result != DNN_SUCCESS){
@@ -334,7 +372,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     }
     av_frame_copy_props(out, in);
-    copy_from_dnn_to_frame(out, &ctx->output);
+    copy_from_dnn_to_frame(ctx, out);
     av_frame_free(&in);
     return ff_filter_frame(outlink, out);
 }
@@ -343,6 +381,9 @@ static av_cold void uninit(AVFilterContext *ctx)
 {
     DnnProcessingContext *context = ctx->priv;
+    sws_freeContext(context->sws_gray8_to_grayf32);
+    sws_freeContext(context->sws_grayf32_to_gray8);
+
     if (context->dnn_module)
         (context->dnn_module->free_model)(&context->model);
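Patch 3/4 below extends the filter to planar YUV, where only the Y plane goes through the network and U and V are copied verbatim at their subsampled size. A hedged Python sketch of that per-frame flow (`model_fn` is a hypothetical stand-in for the DNN backend, not an FFmpeg API):

```python
def process_planar_yuv(y, u, v, model_fn):
    """Run only the Y plane through the model; pass U and V through as-is."""
    # gray8 -> grayf32, as prepare_sws_context() sets up for planar YUV
    y_f32 = [[p / 255.0 for p in row] for row in y]
    y_out = model_fn(y_f32)                     # the DNN sees a 1-channel input
    # grayf32 -> gray8 on the way back, truncate-and-clip
    y_u8 = [[min(max(int(p * 255.0), 0), 255) for p in row] for row in y_out]
    # UV planes are copied without changes (copy_uv_planes in the patch)
    return y_u8, [row[:] for row in u], [row[:] for row in v]
```

This is why the patch asserts the model input and output are single-channel float: chroma never reaches the network at all.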
From patchwork Mon Feb 24 08:41:10 2020
From: "Guo, Yejun"
To: ffmpeg-devel@ffmpeg.org
Cc: yejun.guo@intel.com
Date: Mon, 24 Feb 2020 16:41:10 +0800
Message-Id: <1582533670-1591-1-git-send-email-yejun.guo@intel.com>
Subject: [FFmpeg-devel] [PATCH 3/4] avfilter/vf_dnn_processing.c: add planar yuv format support

Only the Y channel is handled by dnn, the UV channels are copied
without changes.

The command to use srcnn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.jpg

Signed-off-by: Guo, Yejun
---
 doc/filters.texi                |  8 +++++
 libavfilter/vf_dnn_processing.c | 72 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/doc/filters.texi b/doc/filters.texi
index 70fd7a4..71ea822 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9180,6 +9180,8 @@ Set the output name of the dnn network.
 @end table

+@subsection Examples
+
 @itemize
 @item
 Halve the red channle of the frame with format rgb24:
@@ -9193,6 +9195,12 @@ Halve the pixel value of the frame with format gray32f:
 ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
 @end example

+@item
+Handle the Y channel with srcnn.pb (see @ref{sr}) for frame with yuv420p (planar YUV formats supported):
+@example
+./ffmpeg -i 480p.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.jpg
+@end example
+
 @end itemize

 @section drawbox

diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 4d0ee78..f9458f0 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -110,6 +110,8 @@ static int query_formats(AVFilterContext *context)
     static const enum AVPixelFormat pix_fmts[] = {
         AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
         AV_PIX_FMT_GRAY8, AV_PIX_FMT_GRAYF32,
+        AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
+        AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
         AV_PIX_FMT_NONE
     };
     AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
@@ -163,6 +165,11 @@ static int check_modelinput_inlink(const DNNData *model_input, const AVFilterLin
         }
         return 0;
     case AV_PIX_FMT_GRAYF32:
+    case AV_PIX_FMT_YUV420P:
+    case AV_PIX_FMT_YUV422P:
+    case AV_PIX_FMT_YUV444P:
+    case AV_PIX_FMT_YUV410P:
+    case AV_PIX_FMT_YUV411P:
         if (model_input->channels != 1) {
             LOG_FORMAT_CHANNEL_MISMATCH();
             return AVERROR(EIO);
@@ -246,6 +253,28 @@ static int prepare_sws_context(AVFilterLink *outlink)
                                                        0, NULL, NULL, NULL);
         }
         return 0;
+    case AV_PIX_FMT_YUV420P:
+    case AV_PIX_FMT_YUV422P:
+    case AV_PIX_FMT_YUV444P:
+    case AV_PIX_FMT_YUV410P:
+    case AV_PIX_FMT_YUV411P:
+        av_assert0(input_dt == DNN_FLOAT);
+        av_assert0(output_dt == DNN_FLOAT);
+        ctx->sws_gray8_to_grayf32 = sws_getContext(inlink->w,
+                                                   inlink->h,
+                                                   AV_PIX_FMT_GRAY8,
+                                                   inlink->w,
+                                                   inlink->h,
+                                                   AV_PIX_FMT_GRAYF32,
+                                                   0, NULL, NULL, NULL);
+        ctx->sws_grayf32_to_gray8 = sws_getContext(outlink->w,
+                                                   outlink->h,
+                                                   AV_PIX_FMT_GRAYF32,
+                                                   outlink->w,
+                                                   outlink->h,
+                                                   AV_PIX_FMT_GRAY8,
+                                                   0, NULL, NULL, NULL);
+        return 0;
     default:
         //do nothing
         break;
@@ -300,6 +329,15 @@ static int copy_from_frame_to_dnn(DnnProcessingContext *ctx, const AVFrame *fram
                             frame->data[0], frame->linesize[0],
                             bytewidth, frame->height);
         return 0;
+    case AV_PIX_FMT_YUV420P:
+    case AV_PIX_FMT_YUV422P:
+    case AV_PIX_FMT_YUV444P:
+    case AV_PIX_FMT_YUV410P:
+    case AV_PIX_FMT_YUV411P:
+        sws_scale(ctx->sws_gray8_to_grayf32, (const uint8_t **)frame->data, frame->linesize,
+                  0, frame->height, (uint8_t * const*)(&dnn_input->data),
+                  (const int [4]){frame->width * sizeof(float), 0, 0, 0});
+        return 0;
     default:
         return AVERROR(EIO);
     }
@@ -341,6 +379,15 @@ static int copy_from_dnn_to_frame(DnnProcessingContext *ctx, AVFrame *frame)
                             dnn_output->data, bytewidth,
                             bytewidth, frame->height);
         return 0;
+    case AV_PIX_FMT_YUV420P:
+    case AV_PIX_FMT_YUV422P:
+    case AV_PIX_FMT_YUV444P:
+    case AV_PIX_FMT_YUV410P:
+    case AV_PIX_FMT_YUV411P:
+        sws_scale(ctx->sws_grayf32_to_gray8, (const uint8_t *[4]){(const uint8_t *)dnn_output->data, 0, 0, 0},
+                  (const int[4]){frame->width * sizeof(float), 0, 0, 0},
+                  0, frame->height, (uint8_t * const*)frame->data, frame->linesize);
+        return 0;
     default:
         return AVERROR(EIO);
     }
@@ -348,6 +395,27 @@ static int copy_from_dnn_to_frame(DnnProcessingContext *ctx, AVFrame *frame)
     return 0;
 }

+static av_always_inline int isPlanarYUV(enum AVPixelFormat pix_fmt)
+{
+    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
+    av_assert0(desc);
+    return !(desc->flags & AV_PIX_FMT_FLAG_RGB) && desc->nb_components == 3;
+}
+
+static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const AVFrame *in)
+{
+    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(in->format);
+    int uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
+    for (int i = 1; i < 3; ++i) {
+        int bytewidth = av_image_get_linesize(in->format, in->width, i);
+        av_image_copy_plane(out->data[i], out->linesize[i],
+                            in->data[i], in->linesize[i],
+                            bytewidth, uv_height);
+    }
+
+    return 0;
+}
+
 static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 {
     AVFilterContext *context = inlink->dst;
@@ -373,6 +441,10 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     av_frame_copy_props(out, in);
     copy_from_dnn_to_frame(ctx, out);
+
+    if (isPlanarYUV(in->format))
+        copy_uv_planes(ctx, out, in);
+
     av_frame_free(&in);
     return ff_filter_frame(outlink, out);
 }

From patchwork Mon Feb 24 08:41:20 2020
From: "Guo, Yejun"
To: ffmpeg-devel@ffmpeg.org
Cc: yejun.guo@intel.com
Date: Mon, 24 Feb 2020 16:41:20 +0800
Message-Id: <1582533680-1650-1-git-send-email-yejun.guo@intel.com>
Subject: [FFmpeg-devel] [PATCH 4/4] avfilter/vf_dnn_processing.c: add frame size change support for planar yuv format

The Y channel is handled by dnn, and also resized by dnn. The UV channels
are resized with swscale.

The command to use espcn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y -y tmp.espcn.jpg

Signed-off-by: Guo, Yejun
---
 doc/filters.texi                |  8 ++++++++
 libavfilter/vf_dnn_processing.c | 37 ++++++++++++++++++++++++++++++-------
 2 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 71ea822..00a2e5c 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9201,6 +9201,12 @@ Handle the Y channel with srcnn.pb (see @ref{sr}) for frame with yuv420p (plana
 ./ffmpeg -i 480p.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.jpg
 @end example

+@item
+Handle the Y channel with espcn.pb (see @ref{sr}), which changes frame size, for format yuv420p (planar YUV formats supported):
+@example
+./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y -y tmp.espcn.jpg
+@end example
+
 @end itemize

 @section drawbox

@@ -17353,6 +17359,8 @@ Default value is @code{2}. Scale factor is necessary for SRCNN model, because it
 input upscaled using bicubic upscaling with proper scale factor.
 @end table

+This feature can also be finished with @ref{dnn_processing}.
+
 @section ssim

 Obtain the SSIM (Structural SImilarity Metric) between two input videos.

diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index f9458f0..7f40f85 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -51,6 +51,8 @@ typedef struct DnnProcessingContext {

     struct SwsContext *sws_gray8_to_grayf32;
     struct SwsContext *sws_grayf32_to_gray8;
+    struct SwsContext *sws_uv_scale;
+    int sws_uv_height;
 } DnnProcessingContext;

 #define OFFSET(x) offsetof(DnnProcessingContext, x)
@@ -274,6 +276,18 @@ static int prepare_sws_context(AVFilterLink *outlink)
                                                    outlink->h,
                                                    AV_PIX_FMT_GRAY8,
                                                    0, NULL, NULL, NULL);
+
+        if (inlink->w != outlink->w || inlink->h != outlink->h) {
+            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
+            int sws_src_h = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
+            int sws_src_w = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
+            int sws_dst_h = AV_CEIL_RSHIFT(outlink->h, desc->log2_chroma_h);
+            int sws_dst_w = AV_CEIL_RSHIFT(outlink->w, desc->log2_chroma_w);
+            ctx->sws_uv_scale = sws_getContext(sws_src_w, sws_src_h, AV_PIX_FMT_GRAY8,
+                                               sws_dst_w, sws_dst_h, AV_PIX_FMT_GRAY8,
+                                               SWS_BICUBIC, NULL, NULL, NULL);
+            ctx->sws_uv_height = sws_src_h;
+        }
         return 0;
     default:
         //do nothing
@@ -404,13 +418,21 @@ static av_always_inline int isPlanarYUV(enum AVPixelFormat pix_fmt)

 static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const AVFrame *in)
 {
-    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(in->format);
-    int uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
-    for (int i = 1; i < 3; ++i) {
-        int bytewidth = av_image_get_linesize(in->format, in->width, i);
-        av_image_copy_plane(out->data[i], out->linesize[i],
-                            in->data[i], in->linesize[i],
-                            bytewidth, uv_height);
+    if (!ctx->sws_uv_scale) {
+        av_assert0(in->height == out->height && in->width == out->width);
+        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(in->format);
+        int uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
+        for (int i = 1; i < 3; ++i) {
+            int bytewidth = av_image_get_linesize(in->format, in->width, i);
+            av_image_copy_plane(out->data[i], out->linesize[i],
+                                in->data[i], in->linesize[i],
+                                bytewidth, uv_height);
+        }
+    } else {
+        sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), in->linesize + 1,
+                  0, ctx->sws_uv_height, out->data + 1, out->linesize + 1);
+        sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 2), in->linesize + 2,
+                  0, ctx->sws_uv_height, out->data + 2, out->linesize + 2);
     }

     return 0;
@@ -455,6 +477,7 @@ static av_cold void uninit(AVFilterContext *ctx)

     sws_freeContext(context->sws_gray8_to_grayf32);
     sws_freeContext(context->sws_grayf32_to_gray8);
+    sws_freeContext(context->sws_uv_scale);

     if (context->dnn_module)
         (context->dnn_module->free_model)(&context->model);
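Putting patch 4/4 together: when the model changes the frame size, the chroma planes can no longer be copied and are rescaled to the subsampled output size instead. A Python sketch of that decision (the real code uses an swscale context with SWS_BICUBIC; the nearest-neighbour scaler here is a simplified stand-in, and all names are mine):

```python
def ceil_rshift(a, shift):
    # Python model of AV_CEIL_RSHIFT
    return -((-a) >> shift)

def scale_plane_nearest(plane, dst_w, dst_h):
    """Toy stand-in for the SWS_BICUBIC chroma scaler set up in the patch."""
    src_h, src_w = len(plane), len(plane[0])
    return [[plane[y * src_h // dst_h][x * src_w // dst_w]
             for x in range(dst_w)] for y in range(dst_h)]

def handle_uv(u, v, out_w, out_h, log2_cw, log2_ch):
    """Copy UV when sizes match, otherwise rescale (copy_uv_planes logic)."""
    dst_w = ceil_rshift(out_w, log2_cw)
    dst_h = ceil_rshift(out_h, log2_ch)
    if (len(u), len(u[0])) == (dst_h, dst_w):   # no size change: plain copy
        return [row[:] for row in u], [row[:] for row in v]
    return (scale_plane_nearest(u, dst_w, dst_h),
            scale_plane_nearest(v, dst_w, dst_h))
```

For a yuv420p 4x4 input upscaled to 8x8, the 2x2 chroma planes become 4x4; when input and output sizes match, the function degenerates to the plain plane copy from patch 3/4.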