From patchwork Tue May 26 01:53:58 2020
X-Patchwork-Submitter: "Guo, Yejun"
X-Patchwork-Id: 19869
From: "Guo, Yejun"
To: ffmpeg-devel@ffmpeg.org
Cc: yejun.guo@intel.com
Date: Tue, 26 May 2020 09:53:58 +0800
Message-Id: <1590458038-16035-1-git-send-email-yejun.guo@intel.com>
X-Mailer: git-send-email 2.7.4
Subject: [FFmpeg-devel] [PATCH 2/2] vf_dnn_processing.c: add dnn backend openvino

We can try it with the srcnn model from the sr filter.

1) get the srcnn.pb model file, see the sr filter

2) convert srcnn.pb into an OpenVINO model with the command:
python mo_tf.py --input_model srcnn.pb --data_type=FP32 --input_shape [1,960,1440,1] --keep_shape_ops

See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer

This produces srcnn.xml and srcnn.bin in the current directory; copy them to
the directory where ffmpeg is run. I have also uploaded the model files at
https://github.com/guoyejun/dnn_processing/tree/master/models

3) run with the openvino backend:
ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
(The input.jpg resolution is 720x480.)

Signed-off-by: Guo, Yejun
---
 doc/filters.texi                | 10 +++++++++-
 libavfilter/vf_dnn_processing.c |  5 ++++-
 2 files changed, 13 insertions(+), 2 deletions(-)
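A note for anyone trying the steps above: as the documentation hunk below
mentions, --extra-cflags/--extra-ldflags may be needed when the OpenVINO
headers and libraries are not installed into the system path. A minimal
configure sketch, assuming a hypothetical install prefix /opt/openvino
(adjust -I/-L to wherever the OpenVINO C library actually lives on your
machine):

# /opt/openvino is only a placeholder for the real install location
./configure --enable-libopenvino \
            --extra-cflags="-I/opt/openvino/include" \
            --extra-ldflags="-L/opt/openvino/lib"
make
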
diff --git a/doc/filters.texi b/doc/filters.texi
index 85a511b..b64a4a7 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9260,13 +9260,21 @@ TensorFlow backend. To enable this backend you
 need to install the TensorFlow for C library (see
 @url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
 @code{--enable-libtensorflow}
+
+@item openvino
+OpenVINO backend. To enable this backend you
+need to build and install the OpenVINO for C library (see
+@url{https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md}) and configure FFmpeg with
+@code{--enable-libopenvino} (--extra-cflags=-I... --extra-ldflags=-L... might
+be needed if the header files and libraries are not installed into system path)
+
 @end table
 
 Default value is @samp{native}.
 
 @item model
 Set path to model file specifying network architecture and its parameters.
-Note that different backends use different file formats. TensorFlow and native
+Note that different backends use different file formats. TensorFlow, OpenVINO and native
 backend can load files for only its format.
 
 Native model file (.model) can be generated from TensorFlow model file (.pb) by using tools/python/convert.py
diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index cf589ac..4b31808 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -58,11 +58,14 @@ typedef struct DnnProcessingContext {
 #define OFFSET(x) offsetof(DnnProcessingContext, x)
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_VIDEO_PARAM
 static const AVOption dnn_processing_options[] = {
-    { "dnn_backend", "DNN backend", OFFSET(backend_type), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS, "backend" },
+    { "dnn_backend", "DNN backend", OFFSET(backend_type), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS, "backend" },
     { "native", "native backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, 0, 0, FLAGS, "backend" },
 #if (CONFIG_LIBTENSORFLOW == 1)
     { "tensorflow", "tensorflow backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, 0, 0, FLAGS, "backend" },
 #endif
+#if (CONFIG_LIBOPENVINO == 1)
+    { "openvino", "openvino backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 2 }, 0, 0, FLAGS, "backend" },
+#endif
     { "model", "path to model file", OFFSET(model_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
     { "input", "input name of the model", OFFSET(model_inputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
     { "output", "output name of the model", OFFSET(model_outputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
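
Once ffmpeg has been rebuilt with --enable-libopenvino, a quick way to confirm
that the new constant (guarded by CONFIG_LIBOPENVINO above) was actually
compiled in is to list the filter's options; "openvino" should then show up as
an accepted value of dnn_backend:

ffmpeg -h filter=dnn_processing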