From patchwork Tue Dec 12 02:33:31 2023
X-Patchwork-Submitter: "Chen, Wenbin"
X-Patchwork-Id: 45083
From: wenbin.chen-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 12 Dec 2023 10:33:31 +0800
Message-Id: <20231212023334.2506376-1-wenbin.chen@intel.com>
X-Mailer: git-send-email 2.34.1
Subject: [FFmpeg-devel] [PATCH v2 1/4] libavfilter/dnn_backend_openvino: Add multiple output support
List-Id: FFmpeg development discussions and patches

From: Wenbin Chen

Add multiple output support to the OpenVINO backend. When specifying the
output names on the command line, use '&' to separate the individual
outputs.

Signed-off-by: Wenbin Chen
---
 libavfilter/dnn/dnn_backend_common.c   |   7 -
 libavfilter/dnn/dnn_backend_openvino.c | 216 +++++++++++++++++--------
 libavfilter/vf_dnn_detect.c            |  11 +-
 3 files changed, 150 insertions(+), 84 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c
index 91a4a3c4bf..632832ec36 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -43,13 +43,6 @@ int ff_check_exec_params(void *ctx, DNNBackendType backend, DNNFunctionType func
         return AVERROR(EINVAL);
     }
 
-    if (exec_params->nb_output != 1 && backend != DNN_TF) {
-        // currently, the filter does not need multiple outputs,
-        // so we just pending the support until we really need it.
-        avpriv_report_missing_feature(ctx, "multiple outputs");
-        return AVERROR(ENOSYS);
-    }
-
     return 0;
 }
 
diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index 6fe8b9c243..089e028818 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -64,7 +64,7 @@ typedef struct OVModel{
     ov_compiled_model_t *compiled_model;
     ov_output_const_port_t* input_port;
     ov_preprocess_input_info_t* input_info;
-    ov_output_const_port_t* output_port;
+    ov_output_const_port_t** output_ports;
     ov_preprocess_output_info_t* output_info;
     ov_preprocess_prepostprocessor_t* preprocess;
 #else
@@ -77,6 +77,7 @@ typedef struct OVModel{
     SafeQueue *request_queue;   // holds OVRequestItem
     Queue *task_queue;          // holds TaskItem
     Queue *lltask_queue;        // holds LastLevelTaskItem
+    int nb_outputs;
 } OVModel;
 
 // one request for one call to openvino
@@ -349,7 +350,7 @@ static void infer_completion_callback(void *args)
     TaskItem *task = lltask->task;
     OVModel *ov_model = task->model;
     SafeQueue *requestq = ov_model->request_queue;
-    DNNData output;
+    DNNData *outputs;
     OVContext *ctx = &ov_model->ctx;
 #if HAVE_OPENVINO2
     size_t* dims;
@@ -358,45 +359,61 @@ static void infer_completion_callback(void *args)
     ov_shape_t output_shape = {0};
     ov_element_type_e precision;
 
-    memset(&output, 0, sizeof(output));
-    status = ov_infer_request_get_output_tensor_by_index(request->infer_request, 0, &output_tensor);
-    if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR,
-               "Failed to get output tensor.");
+    outputs = av_calloc(ov_model->nb_outputs, sizeof(*outputs));
+    if (!outputs) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to alloc outputs.");
         return;
     }
 
-    status = ov_tensor_data(output_tensor, &output.data);
-    if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR,
-               "Failed to get output data.");
-        return;
-    }
+    for (int i = 0; i < ov_model->nb_outputs; i++) {
+        status = ov_infer_request_get_tensor_by_const_port(request->infer_request,
+                                                           ov_model->output_ports[i],
+                                                           &output_tensor);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR,
+                   "Failed to get output tensor.");
+            goto end;
+        }
 
-    status = ov_tensor_get_shape(output_tensor, &output_shape);
-    if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to get output port shape.\n");
-        return;
-    }
-    dims = output_shape.dims;
+        status = ov_tensor_data(output_tensor, &outputs[i].data);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR,
+                   "Failed to get output data.");
+            goto end;
+        }
 
-    status = ov_port_get_element_type(ov_model->output_port, &precision);
-    if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to get output port data type.\n");
+        status = ov_tensor_get_shape(output_tensor, &output_shape);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to get output port shape.\n");
+            goto end;
+        }
+        dims = output_shape.dims;
+
+        status = ov_port_get_element_type(ov_model->output_ports[i], &precision);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to get output port data type.\n");
+            goto end;
+        }
+        outputs[i].dt = precision_to_datatype(precision);
+
+        outputs[i].channels = output_shape.rank > 2 ? dims[output_shape.rank - 3] : 1;
+        outputs[i].height = output_shape.rank > 1 ? dims[output_shape.rank - 2] : 1;
+        outputs[i].width = output_shape.rank > 0 ? dims[output_shape.rank - 1] : 1;
+        av_assert0(request->lltask_count <= dims[0]);
+        outputs[i].layout = ctx->options.layout;
+        outputs[i].scale = ctx->options.scale;
+        outputs[i].mean = ctx->options.mean;
         ov_shape_free(&output_shape);
-        return;
+        ov_tensor_free(output_tensor);
+        output_tensor = NULL;
     }
-    output.channels = output_shape.rank > 2 ? dims[output_shape.rank - 3] : 1;
-    output.height = output_shape.rank > 1 ? dims[output_shape.rank - 2] : 1;
-    output.width = output_shape.rank > 0 ? dims[output_shape.rank - 1] : 1;
-    av_assert0(request->lltask_count <= dims[0]);
-    ov_shape_free(&output_shape);
 #else
     IEStatusCode status;
     dimensions_t dims;
     ie_blob_t *output_blob = NULL;
     ie_blob_buffer_t blob_buffer;
     precision_e precision;
+    DNNData output;
     status = ie_infer_request_get_blob(request->infer_request, task->output_names[0], &output_blob);
     if (status != OK) {
         av_log(ctx, AV_LOG_ERROR,
@@ -424,11 +441,12 @@ static void infer_completion_callback(void *args)
     output.height = dims.dims[2];
     output.width = dims.dims[3];
     av_assert0(request->lltask_count <= dims.dims[0]);
-#endif
     output.dt = precision_to_datatype(precision);
     output.layout = ctx->options.layout;
     output.scale = ctx->options.scale;
     output.mean = ctx->options.mean;
+    outputs = &output;
+#endif
 
     av_assert0(request->lltask_count >= 1);
     for (int i = 0; i < request->lltask_count; ++i) {
@@ -438,28 +456,33 @@ static void infer_completion_callback(void *args)
         case DFT_PROCESS_FRAME:
             if (task->do_ioproc) {
                 if (ov_model->model->frame_post_proc != NULL) {
-                    ov_model->model->frame_post_proc(task->out_frame, &output, ov_model->model->filter_ctx);
+                    ov_model->model->frame_post_proc(task->out_frame, outputs, ov_model->model->filter_ctx);
                 } else {
-                    ff_proc_from_dnn_to_frame(task->out_frame, &output, ctx);
+                    ff_proc_from_dnn_to_frame(task->out_frame, outputs, ctx);
                 }
             } else {
-                task->out_frame->width = output.width;
-                task->out_frame->height = output.height;
+                task->out_frame->width = outputs[0].width;
+                task->out_frame->height = outputs[0].height;
             }
             break;
         case DFT_ANALYTICS_DETECT:
             if (!ov_model->model->detect_post_proc) {
                 av_log(ctx, AV_LOG_ERROR, "detect filter needs to provide post proc\n");
-                return;
+                goto end;
             }
-            ov_model->model->detect_post_proc(task->in_frame, &output, 1, ov_model->model->filter_ctx);
+            ov_model->model->detect_post_proc(task->in_frame, outputs,
+                                              ov_model->nb_outputs,
+                                              ov_model->model->filter_ctx);
            break;
         case DFT_ANALYTICS_CLASSIFY:
             if (!ov_model->model->classify_post_proc) {
                 av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n");
-                return;
+                goto end;
             }
-            ov_model->model->classify_post_proc(task->in_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx);
+            for (int output_i = 0; output_i < ov_model->nb_outputs; output_i++)
+                ov_model->model->classify_post_proc(task->in_frame, outputs,
+                                                    request->lltasks[i]->bbox_index,
+                                                    ov_model->model->filter_ctx);
             break;
         default:
             av_assert0(!"should not reach here");
@@ -468,10 +491,17 @@ static void infer_completion_callback(void *args)
         task->inference_done++;
         av_freep(&request->lltasks[i]);
-        output.data = (uint8_t *)output.data
-                      + output.width * output.height * output.channels * get_datatype_size(output.dt);
+        for (int i = 0; i < ov_model->nb_outputs; i++)
+            outputs[i].data = (uint8_t *)outputs[i].data +
+                              outputs[i].width * outputs[i].height * outputs[i].channels * get_datatype_size(outputs[i].dt);
     }
-#if !HAVE_OPENVINO2
+end:
+#if HAVE_OPENVINO2
+    av_freep(&outputs);
+    ov_shape_free(&output_shape);
+    if (output_tensor)
+        ov_tensor_free(output_tensor);
+#else
     ie_blob_free(&output_blob);
 #endif
     request->lltask_count = 0;
@@ -525,8 +555,10 @@ static void dnn_free_model_ov(DNNModel **model)
 #if HAVE_OPENVINO2
         if (ov_model->input_port)
             ov_output_const_port_free(ov_model->input_port);
-        if (ov_model->output_port)
-            ov_output_const_port_free(ov_model->output_port);
+        for (int i = 0; i < ov_model->nb_outputs; i++)
+            if (ov_model->output_ports[i])
+                ov_output_const_port_free(ov_model->output_ports[i]);
+        av_freep(&ov_model->output_ports);
         if (ov_model->preprocess)
             ov_preprocess_prepostprocessor_free(ov_model->preprocess);
         if (ov_model->compiled_model)
@@ -551,7 +583,7 @@ static void dnn_free_model_ov(DNNModel **model)
     }
 }
 
-static int init_model_ov(OVModel *ov_model, const char *input_name, const char *output_name)
+static int init_model_ov(OVModel *ov_model, const char *input_name, const char **output_names, int nb_outputs)
 {
     int ret = 0;
     OVContext *ctx = &ov_model->ctx;
@@ -594,17 +626,15 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
     }
 
     status = ov_preprocess_prepostprocessor_get_input_info_by_name(ov_model->preprocess, input_name, &ov_model->input_info);
-    status |= ov_preprocess_prepostprocessor_get_output_info_by_name(ov_model->preprocess, output_name, &ov_model->output_info);
     if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to get input/output info from preprocess.\n");
+        av_log(ctx, AV_LOG_ERROR, "Failed to get input info from preprocess.\n");
         ret = ov2_map_error(status, NULL);
         goto err;
     }
 
     status = ov_preprocess_input_info_get_tensor_info(ov_model->input_info, &input_tensor_info);
-    status |= ov_preprocess_output_info_get_tensor_info(ov_model->output_info, &output_tensor_info);
     if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to get tensor info from input/output.\n");
+        av_log(ctx, AV_LOG_ERROR, "Failed to get tensor info from input.\n");
         ret = ov2_map_error(status, NULL);
         goto err;
     }
@@ -642,17 +672,43 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
     }
 
     status = ov_preprocess_input_tensor_info_set_element_type(input_tensor_info, U8);
-    if (ov_model->model->func_type != DFT_PROCESS_FRAME)
-        status |= ov_preprocess_output_set_element_type(output_tensor_info, F32);
-    else if (fabsf(ctx->options.scale - 1) > 1e-6f || fabsf(ctx->options.mean) > 1e-6f)
-        status |= ov_preprocess_output_set_element_type(output_tensor_info, F32);
-    else
-        status |= ov_preprocess_output_set_element_type(output_tensor_info, U8);
     if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to set input/output element type\n");
+        av_log(ctx, AV_LOG_ERROR, "Failed to set input element type\n");
         ret = ov2_map_error(status, NULL);
         goto err;
     }
+
+    ov_model->nb_outputs = nb_outputs;
+    for (int i = 0; i < nb_outputs; i++) {
+        status = ov_preprocess_prepostprocessor_get_output_info_by_name(
+                ov_model->preprocess, output_names[i], &ov_model->output_info);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to get output info from preprocess.\n");
+            ret = ov2_map_error(status, NULL);
+            goto err;
+        }
+        status |= ov_preprocess_output_info_get_tensor_info(ov_model->output_info, &output_tensor_info);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to get tensor info from input/output.\n");
+            ret = ov2_map_error(status, NULL);
+            goto err;
+        }
+        if (ov_model->model->func_type != DFT_PROCESS_FRAME)
+            status |= ov_preprocess_output_set_element_type(output_tensor_info, F32);
+        else if (fabsf(ctx->options.scale - 1) > 1e-6f || fabsf(ctx->options.mean) > 1e-6f)
+            status |= ov_preprocess_output_set_element_type(output_tensor_info, F32);
+        else
+            status |= ov_preprocess_output_set_element_type(output_tensor_info, U8);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to set output element type\n");
+            ret = ov2_map_error(status, NULL);
+            goto err;
+        }
+        ov_preprocess_output_tensor_info_free(output_tensor_info);
+        output_tensor_info = NULL;
+        ov_preprocess_output_info_free(ov_model->output_info);
+        ov_model->output_info = NULL;
+    }
 
     // set preprocess steps.
     if (fabsf(ctx->options.scale - 1) > 1e-6f || fabsf(ctx->options.mean) > 1e-6f) {
         ov_preprocess_preprocess_steps_t* input_process_steps = NULL;
@@ -667,11 +723,18 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
         status |= ov_preprocess_preprocess_steps_scale(input_process_steps, ctx->options.scale);
         if (status != OK) {
             av_log(ctx, AV_LOG_ERROR, "Failed to set preprocess steps\n");
+            ov_preprocess_preprocess_steps_free(input_process_steps);
+            input_process_steps = NULL;
             ret = ov2_map_error(status, NULL);
             goto err;
         }
         ov_preprocess_preprocess_steps_free(input_process_steps);
+        input_process_steps = NULL;
     }
+    ov_preprocess_input_tensor_info_free(input_tensor_info);
+    input_tensor_info = NULL;
+    ov_preprocess_input_info_free(ov_model->input_info);
+    ov_model->input_info = NULL;
 
     //update model
     if(ov_model->ov_model)
@@ -679,20 +742,33 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
     status = ov_preprocess_prepostprocessor_build(ov_model->preprocess, &ov_model->ov_model);
     if (status != OK) {
         av_log(ctx, AV_LOG_ERROR, "Failed to update OV model\n");
+        ov_model_free(tmp_ov_model);
+        tmp_ov_model = NULL;
         ret = ov2_map_error(status, NULL);
         goto err;
     }
     ov_model_free(tmp_ov_model);
 
     //update output_port
-    if (ov_model->output_port) {
-        ov_output_const_port_free(ov_model->output_port);
-        ov_model->output_port = NULL;
-    }
-    status = ov_model_const_output_by_name(ov_model->ov_model, output_name, &ov_model->output_port);
-    if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to get output port.\n");
-        goto err;
+    if (!ov_model->output_ports) {
+        ov_model->output_ports = av_calloc(nb_outputs, sizeof(*ov_model->output_ports));
+        if (!ov_model->output_ports) {
+            ret = AVERROR(ENOMEM);
+            goto err;
+        }
+    } else
+        for (int i = 0; i < nb_outputs; i++) {
+            ov_output_const_port_free(ov_model->output_ports[i]);
+            ov_model->output_ports[i] = NULL;
+        }
+
+    for (int i = 0; i < nb_outputs; i++) {
+        status = ov_model_const_output_by_name(ov_model->ov_model, output_names[i],
+                                               &ov_model->output_ports[i]);
+        if (status != OK) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to get output port %s.\n", output_names[i]);
+            goto err;
+        }
     }
 
     //compile network
     status = ov_core_compile_model(ov_model->core, ov_model->ov_model, device, 0, &ov_model->compiled_model);
@@ -701,6 +777,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
         goto err;
     }
     ov_preprocess_input_model_info_free(input_model_info);
+    input_model_info = NULL;
     ov_layout_free(NCHW_layout);
     ov_layout_free(NHWC_layout);
 #else
@@ -745,6 +822,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
         ret = DNN_GENERIC_ERROR;
         goto err;
     }
+    ov_model->nb_outputs = 1;
 
     // all models in openvino open model zoo use BGR with range [0.0f, 255.0f] as input,
     // we don't have a AVPixelFormat to describe it, so we'll use AV_PIX_FMT_BGR24 and
@@ -848,6 +926,10 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char *
 
 err:
 #if HAVE_OPENVINO2
+    if (output_tensor_info)
+        ov_preprocess_output_tensor_info_free(output_tensor_info);
+    if (ov_model->output_info)
+        ov_preprocess_output_info_free(ov_model->output_info);
     if (NCHW_layout)
         ov_layout_free(NCHW_layout);
     if (NHWC_layout)
@@ -1204,11 +1286,6 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i
         }
     }
 
-    status = ov_model_const_output_by_name(ov_model->ov_model, output_name, &ov_model->output_port);
-    if (status != OK) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to get output port.\n");
-        return ov2_map_error(status, NULL);
-    }
     if (!ov_model->compiled_model) {
 #else
     if (ctx->options.input_resizable) {
@@ -1224,7 +1301,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i
     }
     if (!ov_model->exe_network) {
 #endif
-        ret = init_model_ov(ov_model, input_name, output_name);
+        ret = init_model_ov(ov_model, input_name, &output_name, 1);
         if (ret != 0) {
             av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n");
             return ret;
@@ -1397,7 +1474,8 @@ static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_p
 #else
     if (!ov_model->exe_network) {
 #endif
-        ret = init_model_ov(ov_model, exec_params->input_name, exec_params->output_names[0]);
+        ret = init_model_ov(ov_model, exec_params->input_name,
+                            exec_params->output_names, exec_params->nb_output);
         if (ret != 0) {
             av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n");
             return ret;
diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
index 7ac3bb0b58..373dda58bf 100644
--- a/libavfilter/vf_dnn_detect.c
+++ b/libavfilter/vf_dnn_detect.c
@@ -354,11 +354,11 @@ static int dnn_detect_post_proc_ssd(AVFrame *frame, DNNData *output, AVFilterCon
             break;
         }
     }
-
     return 0;
 }
 
-static int dnn_detect_post_proc_ov(AVFrame *frame, DNNData *output, AVFilterContext *filter_ctx)
+static int dnn_detect_post_proc_ov(AVFrame *frame, DNNData *output, int nb_outputs,
+                                   AVFilterContext *filter_ctx)
 {
     AVFrameSideData *sd;
     DnnDetectContext *ctx = filter_ctx->priv;
@@ -466,7 +466,7 @@ static int dnn_detect_post_proc(AVFrame *frame, DNNData *output, uint32_t nb, AV
     DnnContext *dnn_ctx = &ctx->dnnctx;
     switch (dnn_ctx->backend_type) {
     case DNN_OV:
-        return dnn_detect_post_proc_ov(frame, output, filter_ctx);
+        return dnn_detect_post_proc_ov(frame, output, nb, filter_ctx);
     case DNN_TF:
         return dnn_detect_post_proc_tf(frame, output, filter_ctx);
     default:
@@ -553,11 +553,6 @@ static int check_output_nb(DnnDetectContext *ctx, DNNBackendType backend_type, i
         }
         return 0;
     case DNN_OV:
-        if (output_nb != 1) {
-            av_log(ctx, AV_LOG_ERROR, "Dnn detect filter with openvino backend needs 1 output only, \
-                but get %d instead\n", output_nb);
-            return AVERROR(EINVAL);
-        }
         return 0;
     default:
         avpriv_report_missing_feature(ctx, "Dnn detect filter does not support current backend\n");