
[FFmpeg-devel,V2,2/2] vf_dnn_processing.c: add dnn backend openvino

Message ID 1591880449-24988-1-git-send-email-yejun.guo@intel.com
State Accepted
Series [FFmpeg-devel,V2,1/2] dnn: add openvino as one of dnn backend

Checks

Context Check Description
andriy/default pending
andriy/make success Make finished
andriy/make_fate success Make fate finished

Commit Message

Guo, Yejun June 11, 2020, 1 p.m. UTC
We can try it with the srcnn model from the sr filter.
1) get the srcnn.pb model file, see the sr filter
2) convert srcnn.pb into openvino model with command:
python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops

See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
srcnn.xml and srcnn.bin will appear in the current path; copy them to the
directory where ffmpeg is run.

I have also uploaded the model files at https://github.com/guoyejun/dnn_processing/tree/master/models

3) run with openvino backend:
ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
(The input.jpg resolution is 720*480)

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
---
 doc/filters.texi                | 10 +++++++++-
 libavfilter/vf_dnn_processing.c |  5 ++++-
 2 files changed, 13 insertions(+), 2 deletions(-)

Comments

Guo, Yejun June 24, 2020, 6:39 a.m. UTC | #1
> -----Original Message-----
> From: Guo, Yejun <yejun.guo@intel.com>
> Sent: June 11, 2020 21:01
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun <yejun.guo@intel.com>
> Subject: [PATCH V2 2/2] vf_dnn_processing.c: add dnn backend openvino
> 
> We can try with the srcnn model from sr filter.
> 1) get srcnn.pb model file, see filter sr
> 2) convert srcnn.pb into openvino model with command:
> python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape
> [1,960,1440,1] --keep_shape_ops
> 
> See the script at
> https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
> We'll see srcnn.xml and srcnn.bin at current path, copy them to the directory
> where ffmpeg is.
> 
> I have also uploaded the model files at
> https://github.com/guoyejun/dnn_processing/tree/master/models
> 
> 3) run with openvino backend:
> ffmpeg -i input.jpg -vf
> format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino
> :model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg (The
> input.jpg resolution is 720*480)
> 
> Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> ---
>  doc/filters.texi                | 10 +++++++++-
>  libavfilter/vf_dnn_processing.c |  5 ++++-
>  2 files changed, 13 insertions(+), 2 deletions(-)

any comment for this patch set? thanks.
Pedro Arthur June 28, 2020, 3:28 p.m. UTC | #2
Hi,

On Wed, Jun 24, 2020 at 03:40, Guo, Yejun <yejun.guo@intel.com>
wrote:

>
>
> > -----Original Message-----
> > From: Guo, Yejun <yejun.guo@intel.com>
> > Sent: June 11, 2020 21:01
> > To: ffmpeg-devel@ffmpeg.org
> > Cc: Guo, Yejun <yejun.guo@intel.com>
> > Subject: [PATCH V2 2/2] vf_dnn_processing.c: add dnn backend openvino
> >
> > We can try with the srcnn model from sr filter.
> > 1) get srcnn.pb model file, see filter sr
> > 2) convert srcnn.pb into openvino model with command:
> > python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape
> > [1,960,1440,1] --keep_shape_ops
> >
> > See the script at
> > https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
> > We'll see srcnn.xml and srcnn.bin at current path, copy them to the
> directory
> > where ffmpeg is.
> >
> > I have also uploaded the model files at
> > https://github.com/guoyejun/dnn_processing/tree/master/models
> >
> > 3) run with openvino backend:
> > ffmpeg -i input.jpg -vf
> > format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino
> > :model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg (The
> > input.jpg resolution is 720*480)
> >
> > Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> > ---
> >  doc/filters.texi                | 10 +++++++++-
> >  libavfilter/vf_dnn_processing.c |  5 ++++-
> >  2 files changed, 13 insertions(+), 2 deletions(-)
>
> any comment for this patch set? thanks.
>
It would be nice if you included some benchmark numbers comparing it with
the other backends.
Rest LGTM, thanks!

> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
Guo, Yejun June 29, 2020, 3:17 a.m. UTC | #3
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Pedro
> Arthur
> Sent: June 28, 2020 23:29
> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH V2 2/2] vf_dnn_processing.c: add dnn
> backend openvino
> 
> Hi,
> 
> On Wed, Jun 24, 2020 at 03:40, Guo, Yejun <yejun.guo@intel.com>
> wrote:
> 
> >
> >
> > > -----Original Message-----
> > > From: Guo, Yejun <yejun.guo@intel.com>
> > > Sent: June 11, 2020 21:01
> > > To: ffmpeg-devel@ffmpeg.org
> > > Cc: Guo, Yejun <yejun.guo@intel.com>
> > > Subject: [PATCH V2 2/2] vf_dnn_processing.c: add dnn backend
> > > openvino
> > >
> > > We can try with the srcnn model from sr filter.
> > > 1) get srcnn.pb model file, see filter sr
> > > 2) convert srcnn.pb into openvino model with command:
> > > python mo_tf.py --input_model srcnn.pb --data_type=FP32
> > > --input_shape [1,960,1440,1] --keep_shape_ops
> > >
> > > See the script at
> > > https://github.com/openvinotoolkit/openvino/tree/master/model-optimi
> > > zer We'll see srcnn.xml and srcnn.bin at current path, copy them to
> > > the
> > directory
> > > where ffmpeg is.
> > >
> > > I have also uploaded the model files at
> > > https://github.com/guoyejun/dnn_processing/tree/master/models
> > >
> > > 3) run with openvino backend:
> > > ffmpeg -i input.jpg -vf
> > >
> format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvi
> > > no :model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
> > > (The input.jpg resolution is 720*480)
> > >
> > > Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> > > ---
> > >  doc/filters.texi                | 10 +++++++++-
> > >  libavfilter/vf_dnn_processing.c |  5 ++++-
> > >  2 files changed, 13 insertions(+), 2 deletions(-)
> >
> > any comment for this patch set? thanks.
> >
> It would be nice if you include some benchmark numbers, comparing it with the
> others backends.
> Rest LGTM, thanks!

Thanks Pedro, I tried the srcnn model on a half-minute video to compare the performance of the
tensorflow backend (libtensorflow-cpu-linux-x86_64-1.14.0.tar.gz) and the openvino backend on
the cpu path. The openvino backend does not cost more time. Note that the current openvino
patch does not yet enable the throughput mode, which increases performance a lot; see more
detail at
https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html#cpu-streams
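
For context, a minimal sketch of what enabling the CPU throughput mode could look like with the
OpenVINO inference engine C API; the ie_core_set_config() call and the "CPU_THROUGHPUT_STREAMS"
key are my reading of the OpenVINO documentation, not code that is part of this patch:

#include <c_api/ie_c_api.h>

/* sketch: ask the CPU plugin to run multiple inference streams in parallel */
static IEStatusCode enable_cpu_throughput(ie_core_t *core)
{
    /* "CPU_THROUGHPUT_AUTO" lets the plugin pick the number of streams */
    ie_config_t cfg = { "CPU_THROUGHPUT_STREAMS", "CPU_THROUGHPUT_AUTO", NULL };
    return ie_core_set_config(core, &cfg, "CPU");
}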

I plan to enable the performance features next: for example, using the filter's activate
interface and batching frames into one inference (these are common to the different backends),
and also the throughput mode (in the openvino backend); see the rough sketch of the activate
pattern below.
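
As an illustration of the activate part, the usual libavfilter pattern would look roughly like
the sketch below (generic helpers from libavfilter/filters.h; the inference call itself is only
a placeholder, this is not code from this patch):

static int activate_sketch(AVFilterContext *ctx)
{
    AVFilterLink *inlink  = ctx->inputs[0];
    AVFilterLink *outlink = ctx->outputs[0];
    AVFrame *in = NULL;
    int64_t pts;
    int ret, status;

    FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);

    /* pull one frame; later this is where frames would be queued for batched inference */
    ret = ff_inlink_consume_frame(inlink, &in);
    if (ret < 0)
        return ret;
    if (ret > 0)
        return ff_filter_frame(outlink, in); /* placeholder: run inference here */

    if (ff_inlink_acknowledge_status(inlink, &status, &pts)) {
        ff_outlink_set_status(outlink, status, pts);
        return 0;
    }

    FF_FILTER_FORWARD_WANTED(outlink, inlink);
    return FFERROR_NOT_READY;
}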

I will push the patch tomorrow if there are no other comments, thanks.

Patch

diff --git a/doc/filters.texi b/doc/filters.texi
index 84567de..d197d33 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9288,13 +9288,21 @@  TensorFlow backend. To enable this backend you
 need to install the TensorFlow for C library (see
 @url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
 @code{--enable-libtensorflow}
+
+@item openvino
+OpenVINO backend. To enable this backend you
+need to build and install the OpenVINO for C library (see
+@url{https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md}) and configure FFmpeg with
+@code{--enable-libopenvino} (--extra-cflags=-I... --extra-ldflags=-L... might
+be needed if the header files and libraries are not installed into system path)
+
 @end table
 
 Default value is @samp{native}.
 
 @item model
 Set path to model file specifying network architecture and its parameters.
-Note that different backends use different file formats. TensorFlow and native
+Note that different backends use different file formats. TensorFlow, OpenVINO and native
 backend can load files for only its format.
 
 Native model file (.model) can be generated from TensorFlow model file (.pb) by using tools/python/convert.py
diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index cf589ac..4b31808 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -58,11 +58,14 @@  typedef struct DnnProcessingContext {
 #define OFFSET(x) offsetof(DnnProcessingContext, x)
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM
 static const AVOption dnn_processing_options[] = {
-    { "dnn_backend", "DNN backend",                OFFSET(backend_type),     AV_OPT_TYPE_INT,       { .i64 = 0 },    0, 1, FLAGS, "backend" },
+    { "dnn_backend", "DNN backend",                OFFSET(backend_type),     AV_OPT_TYPE_INT,       { .i64 = 0 },    INT_MIN, INT_MAX, FLAGS, "backend" },
     { "native",      "native backend flag",        0,                        AV_OPT_TYPE_CONST,     { .i64 = 0 },    0, 0, FLAGS, "backend" },
 #if (CONFIG_LIBTENSORFLOW == 1)
     { "tensorflow",  "tensorflow backend flag",    0,                        AV_OPT_TYPE_CONST,     { .i64 = 1 },    0, 0, FLAGS, "backend" },
 #endif
+#if (CONFIG_LIBOPENVINO == 1)
+    { "openvino",    "openvino backend flag",      0,                        AV_OPT_TYPE_CONST,     { .i64 = 2 },    0, 0, FLAGS, "backend" },
+#endif
     { "model",       "path to model file",         OFFSET(model_filename),   AV_OPT_TYPE_STRING,    { .str = NULL }, 0, 0, FLAGS },
     { "input",       "input name of the model",    OFFSET(model_inputname),  AV_OPT_TYPE_STRING,    { .str = NULL }, 0, 0, FLAGS },
     { "output",      "output name of the model",   OFFSET(model_outputname), AV_OPT_TYPE_STRING,    { .str = NULL }, 0, 0, FLAGS },