
[FFmpeg-devel,4/4] avfilter/vf_derain.c: put all the calculation in model file.

Message ID 1584936512-9306-1-git-send-email-yejun.guo@intel.com
State Accepted
Series [FFmpeg-devel,1/4] avfilter/vf_dnn_processing.c: fix typo for the linesize of dnn data

Checks

Context                   Check     Description
andriy/ffmpeg-patchwork   success   Make fate finished

Commit Message

Guo, Yejun March 23, 2020, 4:08 a.m. UTC
Currently the model outputs the rain layer, so a subtraction is needed
in the filter's C code to get the final derained result.

I've sent a PR to update the model file and it has been accepted, see
https://github.com/XueweiMeng/derain_filter/pull/3
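
To make the change concrete, below is a rough standalone sketch of the
per-pixel math before and after this patch (simplified C for illustration
only; clip_u8, old_pixel and new_pixel are made-up names, not the actual
filter code):

    #include <stdint.h>

    static uint8_t clip_u8(int v)
    {
        return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
    }

    /* before: the model predicted the rain layer, so the filter had to
     * subtract it from the (padded) input to get the clean pixel */
    static uint8_t old_pixel(float input_val, float rain_val)
    {
        return clip_u8((int)((input_val - rain_val) * 255));
    }

    /* after: the subtraction now lives inside the model graph, so the
     * model output is already the derained pixel and only needs to be
     * scaled back to 8-bit */
    static uint8_t new_pixel(float derained_val)
    {
        return clip_u8((int)(derained_val * 255));
    }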

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
---
 doc/filters.texi        | 7 +++++--
 libavfilter/vf_derain.c | 6 +-----
 2 files changed, 6 insertions(+), 7 deletions(-)

Comments

Liu Steven March 23, 2020, 5:04 a.m. UTC | #1
> On March 23, 2020, at 12:08 PM, Guo, Yejun <yejun.guo@intel.com> wrote:
> 
> Currently the model outputs the rain layer, so a subtraction is needed
> in the filter's C code to get the final derained result.
> 
> I've sent a PR to update the model file and it has been accepted, see
> https://github.com/XueweiMeng/derain_filter/pull/3
> 
> Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> ---
> doc/filters.texi        | 7 +++++--
> libavfilter/vf_derain.c | 6 +-----
> 2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 328e984..3dc7715 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -8878,6 +8878,7 @@ delogo=x=0:y=0:w=100:h=77:band=10
> 
> @end itemize
> 
> +@anchor{derain}
> @section derain
> 
> Remove the rain in the input image/video by applying the derain methods based on
> @@ -8932,6 +8933,8 @@ Note that different backends use different file formats. TensorFlow and native
> backend can load files for only its format.
> @end table
> 
> +It can also be finished with @ref{dnn_processing} filter.
> +
> @section deshake
> 
> Attempt to fix small changes in horizontal and/or vertical shift. This
> @@ -9201,9 +9204,9 @@ Set the output name of the dnn network.
> 
> @itemize
> @item
> -Halve the red channle of the frame with format rgb24:
> +Remove rain in rgb24 frame with can.pb (see @ref{derain} filter):
> @example
> -ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
> +./ffmpeg -i rain.jpg -vf format=rgb24,dnn_processing=dnn_backend=tensorflow:model=can.pb:input=x:output=y derain.jpg
> @end example
> 
> @item
> diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c
> index 89f9d5a..7432260 100644
> --- a/libavfilter/vf_derain.c
> +++ b/libavfilter/vf_derain.c
> @@ -100,7 +100,6 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
>     AVFilterLink *outlink = ctx->outputs[0];
>     DRContext *dr_context = ctx->priv;
>     DNNReturnType dnn_result;
> -    int pad_size;
> 
>     AVFrame *out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
>     if (!out) {
> @@ -129,15 +128,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
>     out->width  = dr_context->output.width;
>     outlink->h  = dr_context->output.height;
>     outlink->w  = dr_context->output.width;
> -    pad_size    = (in->height - out->height) >> 1;
> 
>     for (int i = 0; i < out->height; i++){
>         for(int j = 0; j < out->width * 3; j++){
>             int k = i * out->linesize[0] + j;
>             int t = i * out->width * 3 + j;
> -
> -            int t_in =  (i + pad_size) * in->width * 3 + j + pad_size * 3;
> -            out->data[0][k] = CLIP((int)((((float *)dr_context->input.data)[t_in] - ((float *)dr_context->output.data)[t]) * 255), 0, 255);
> +            out->data[0][k] = CLIP((int)((((float *)dr_context->output.data)[t]) * 255), 0, 255);
>         }
>     }
> 
> -- 
> 2.7.4
> 

LGTM

Thanks

Steven Liu
Guo, Yejun March 30, 2020, 2 a.m. UTC | #2
> -----Original Message-----
> From: Steven Liu [mailto:lq@chinaffmpeg.org]
> Sent: Monday, March 23, 2020 1:05 PM
> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> Cc: Steven Liu <lq@chinaffmpeg.org>; Guo, Yejun <yejun.guo@intel.com>
> Subject: Re: [FFmpeg-devel] [PATCH 4/4] avfilter/vf_derain.c: put all the
> calculation in model file.
> 
> 
> LGTM

thanks, and the remaining patches of this patch set are still waiting for review, thanks.

> 
> Thanks
> 
> Steven Liu
> 
>
Guo, Yejun April 6, 2020, 1:12 a.m. UTC | #3
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces@ffmpeg.org] On Behalf Of
> Guo, Yejun
> Sent: Monday, March 30, 2020 10:01 AM
> To: Steven Liu <lq@chinaffmpeg.org>; FFmpeg development discussions and
> patches <ffmpeg-devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 4/4] avfilter/vf_derain.c: put all the
> calculation in model file.
> 
> 
> 
> > -----Original Message-----
> > From: Steven Liu [mailto:lq@chinaffmpeg.org]
> > Sent: Monday, March 23, 2020 1:05 PM
> > To: FFmpeg development discussions and patches
> <ffmpeg-devel@ffmpeg.org>
> > Cc: Steven Liu <lq@chinaffmpeg.org>; Guo, Yejun <yejun.guo@intel.com>
> > Subject: Re: [FFmpeg-devel] [PATCH 4/4] avfilter/vf_derain.c: put all the
> > calculation in model file.
> >
> >
> > LGTM
> 
> thanks, and the remaining patches of this patch set are still waiting for review, thanks.

Will push tomorrow if there are no more comments, thanks.

> 
> >
> > Thanks
> >
> > Steven Liu
> >
> >
> 

Patch

diff --git a/doc/filters.texi b/doc/filters.texi
index 328e984..3dc7715 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -8878,6 +8878,7 @@  delogo=x=0:y=0:w=100:h=77:band=10
 
 @end itemize
 
+@anchor{derain}
 @section derain
 
 Remove the rain in the input image/video by applying the derain methods based on
@@ -8932,6 +8933,8 @@  Note that different backends use different file formats. TensorFlow and native
 backend can load files for only its format.
 @end table
 
+It can also be finished with @ref{dnn_processing} filter.
+
 @section deshake
 
 Attempt to fix small changes in horizontal and/or vertical shift. This
@@ -9201,9 +9204,9 @@  Set the output name of the dnn network.
 
 @itemize
 @item
-Halve the red channle of the frame with format rgb24:
+Remove rain in rgb24 frame with can.pb (see @ref{derain} filter):
 @example
-ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
+./ffmpeg -i rain.jpg -vf format=rgb24,dnn_processing=dnn_backend=tensorflow:model=can.pb:input=x:output=y derain.jpg
 @end example
 
 @item
diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c
index 89f9d5a..7432260 100644
--- a/libavfilter/vf_derain.c
+++ b/libavfilter/vf_derain.c
@@ -100,7 +100,6 @@  static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     AVFilterLink *outlink = ctx->outputs[0];
     DRContext *dr_context = ctx->priv;
     DNNReturnType dnn_result;
-    int pad_size;
 
     AVFrame *out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
     if (!out) {
@@ -129,15 +128,12 @@  static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     out->width  = dr_context->output.width;
     outlink->h  = dr_context->output.height;
     outlink->w  = dr_context->output.width;
-    pad_size    = (in->height - out->height) >> 1;
 
     for (int i = 0; i < out->height; i++){
         for(int j = 0; j < out->width * 3; j++){
             int k = i * out->linesize[0] + j;
             int t = i * out->width * 3 + j;
-
-            int t_in =  (i + pad_size) * in->width * 3 + j + pad_size * 3;
-            out->data[0][k] = CLIP((int)((((float *)dr_context->input.data)[t_in] - ((float *)dr_context->output.data)[t]) * 255), 0, 255);
+            out->data[0][k] = CLIP((int)((((float *)dr_context->output.data)[t]) * 255), 0, 255);
         }
     }