From patchwork Tue Aug 24 05:31:45 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29749
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Cc: Shubhanshu Saxena
Date: Tue, 24 Aug 2021 11:01:45 +0530
Message-Id: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3 1/7] lavfi/dnn: Task-based Inference in Native Backend

This commit rearranges the code in the Native backend to use the
TaskItem for inference.
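
In rough outline (condensed from the diff below, with the parameter checks
and error logging dropped), ff_dnn_execute_model_native() now pushes each
request through the task/inference queue:

    TaskItem task;

    /* wrap the filter's request (frames and tensor names) in a TaskItem */
    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS)
        return DNN_ERROR;

    /* expand the task into an InferenceItem and queue it */
    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS)
        return DNN_ERROR;

    /* pop the queued inference and execute it synchronously */
    return execute_model_native(native_model->inference_queue);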
Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_native.c | 176 ++++++++++++++++++---------
 libavfilter/dnn/dnn_backend_native.h |   2 +
 2 files changed, 121 insertions(+), 57 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index a6be27f1fd..3b2a3aa55d 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -45,9 +45,29 @@ static const AVClass dnn_native_class = {
     .category   = AV_CLASS_CATEGORY_FILTER,
 };
 
-static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame,
-                                          const char **output_names, uint32_t nb_output, AVFrame *out_frame,
-                                          int do_ioproc);
+static DNNReturnType execute_model_native(Queue *inference_queue);
+
+static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue)
+{
+    NativeModel *native_model = task->model;
+    NativeContext *ctx = &native_model->ctx;
+    InferenceItem *inference = av_malloc(sizeof(*inference));
+
+    if (!inference) {
+        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n");
+        return DNN_ERROR;
+    }
+    task->inference_todo = 1;
+    task->inference_done = 0;
+    inference->task = task;
+
+    if (ff_queue_push_back(inference_queue, inference) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n");
+        av_freep(&inference);
+        return DNN_ERROR;
+    }
+    return DNN_SUCCESS;
+}
 
 static DNNReturnType get_input_native(void *model, DNNData *input, const char *input_name)
 {
@@ -78,34 +98,36 @@ static DNNReturnType get_input_native(void *model, DNNData *input, const char *i
 static DNNReturnType get_output_native(void *model, const char *input_name, int input_width, int input_height,
                                        const char *output_name, int *output_width, int *output_height)
 {
-    DNNReturnType ret;
+    DNNReturnType ret = 0;
     NativeModel *native_model = model;
     NativeContext *ctx = &native_model->ctx;
-    AVFrame *in_frame = av_frame_alloc();
-    AVFrame *out_frame = NULL;
-
-    if (!in_frame) {
-        av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for input frame\n");
-        return DNN_ERROR;
+    TaskItem task;
+    DNNExecBaseParams exec_params = {
+        .input_name     = input_name,
+        .output_names   = &output_name,
+        .nb_output      = 1,
+        .in_frame       = NULL,
+        .out_frame      = NULL,
+    };
+
+    if (ff_dnn_fill_gettingoutput_task(&task, &exec_params, native_model, input_height, input_width, ctx) != DNN_SUCCESS) {
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    out_frame = av_frame_alloc();
-
-    if (!out_frame) {
-        av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for output frame\n");
-        av_frame_free(&in_frame);
-        return DNN_ERROR;
+    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    in_frame->width = input_width;
-    in_frame->height = input_height;
-
-    ret = execute_model_native(native_model->model, input_name, in_frame, &output_name, 1, out_frame, 0);
-    *output_width = out_frame->width;
-    *output_height = out_frame->height;
+    ret = execute_model_native(native_model->inference_queue);
+    *output_width = task.out_frame->width;
+    *output_height = task.out_frame->height;
 
-    av_frame_free(&out_frame);
-    av_frame_free(&in_frame);
+err:
+    av_frame_free(&task.out_frame);
+    av_frame_free(&task.in_frame);
     return ret;
 }
 
@@ -190,6 +212,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     }
 
+    native_model->inference_queue = ff_queue_create();
+    if (!native_model->inference_queue) {
+        goto fail;
+    }
+
     for (layer = 0; layer < native_model->layers_num; ++layer){
         layer_type = (int32_t)avio_rl32(model_file_context);
         dnn_size += 4;
@@ -259,50 +286,66 @@ fail:
     return NULL;
 }
 
-static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame,
-                                          const char **output_names, uint32_t nb_output, AVFrame *out_frame,
-                                          int do_ioproc)
+static DNNReturnType execute_model_native(Queue *inference_queue)
 {
-    NativeModel *native_model = model->model;
-    NativeContext *ctx = &native_model->ctx;
+    NativeModel *native_model = NULL;
+    NativeContext *ctx = NULL;
     int32_t layer;
     DNNData input, output;
     DnnOperand *oprd = NULL;
+    InferenceItem *inference = NULL;
+    TaskItem *task = NULL;
+    DNNReturnType ret = 0;
+
+    inference = ff_queue_pop_front(inference_queue);
+    if (!inference) {
+        av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n");
+        ret = DNN_ERROR;
+        goto err;
+    }
+    task = inference->task;
+    native_model = task->model;
+    ctx = &native_model->ctx;
 
     if (native_model->layers_num <= 0 || native_model->operands_num <= 0) {
         av_log(ctx, AV_LOG_ERROR, "No operands or layers in model\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     for (int i = 0; i < native_model->operands_num; ++i) {
         oprd = &native_model->operands[i];
-        if (strcmp(oprd->name, input_name) == 0) {
+        if (strcmp(oprd->name, task->input_name) == 0) {
             if (oprd->type != DOT_INPUT) {
-                av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", input_name);
-                return DNN_ERROR;
+                av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", task->input_name);
+                ret = DNN_ERROR;
+                goto err;
             }
             break;
         }
         oprd = NULL;
     }
     if (!oprd) {
-        av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", input_name);
-        return DNN_ERROR;
+        av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", task->input_name);
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    oprd->dims[1] = in_frame->height;
-    oprd->dims[2] = in_frame->width;
+    oprd->dims[1] = task->in_frame->height;
+    oprd->dims[2] = task->in_frame->width;
 
     av_freep(&oprd->data);
     oprd->length = ff_calculate_operand_data_length(oprd);
     if (oprd->length <= 0) {
         av_log(ctx, AV_LOG_ERROR, "The input data length overflow\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
     oprd->data = av_malloc(oprd->length);
     if (!oprd->data) {
         av_log(ctx, AV_LOG_ERROR, "Failed to malloc memory for input data\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     input.height = oprd->dims[1];
@@ -310,19 +353,20 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
     input.channels = oprd->dims[3];
     input.data = oprd->data;
     input.dt = oprd->data_type;
-    if (do_ioproc) {
+    if (task->do_ioproc) {
         if (native_model->model->frame_pre_proc != NULL) {
-            native_model->model->frame_pre_proc(in_frame, &input, native_model->model->filter_ctx);
+            native_model->model->frame_pre_proc(task->in_frame, &input, native_model->model->filter_ctx);
         } else {
-            ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
+            ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
         }
     }
 
-    if (nb_output != 1) {
+    if (task->nb_output != 1) {
         // currently, the filter does not need multiple outputs,
         // so we just pending the support until we really need it.
         avpriv_report_missing_feature(ctx, "multiple outputs");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     for (layer = 0; layer < native_model->layers_num; ++layer){
@@ -333,13 +377,14 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
                                    native_model->layers[layer].params,
                                    &native_model->ctx) == DNN_ERROR) {
             av_log(ctx, AV_LOG_ERROR, "Failed to execute model\n");
-            return DNN_ERROR;
+            ret = DNN_ERROR;
+            goto err;
         }
     }
 
-    for (uint32_t i = 0; i < nb_output; ++i) {
+    for (uint32_t i = 0; i < task->nb_output; ++i) {
         DnnOperand *oprd = NULL;
-        const char *output_name = output_names[i];
+        const char *output_name = task->output_names[i];
         for (int j = 0; j < native_model->operands_num; ++j) {
             if (strcmp(native_model->operands[j].name, output_name) == 0) {
                 oprd = &native_model->operands[j];
@@ -349,7 +394,8 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
 
         if (oprd == NULL) {
             av_log(ctx, AV_LOG_ERROR, "Could not find output in model\n");
-            return DNN_ERROR;
+            ret = DNN_ERROR;
+            goto err;
         }
 
         output.data = oprd->data;
@@ -358,32 +404,43 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
         output.channels = oprd->dims[3];
         output.dt = oprd->data_type;
 
-        if (do_ioproc) {
+        if (task->do_ioproc) {
             if (native_model->model->frame_post_proc != NULL) {
-                native_model->model->frame_post_proc(out_frame, &output, native_model->model->filter_ctx);
+                native_model->model->frame_post_proc(task->out_frame, &output, native_model->model->filter_ctx);
             } else {
-                ff_proc_from_dnn_to_frame(out_frame, &output, ctx);
+                ff_proc_from_dnn_to_frame(task->out_frame, &output, ctx);
             }
         } else {
-            out_frame->width = output.width;
-            out_frame->height = output.height;
+            task->out_frame->width = output.width;
+            task->out_frame->height = output.height;
         }
     }
-
-    return DNN_SUCCESS;
+    task->inference_done++;
+err:
+    av_freep(&inference);
+    return ret;
 }
 
 DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     NativeModel *native_model = model->model;
     NativeContext *ctx = &native_model->ctx;
+    TaskItem task;
 
     if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) {
         return DNN_ERROR;
     }
 
-    return execute_model_native(model, exec_params->input_name, exec_params->in_frame,
-                                exec_params->output_names, exec_params->nb_output, exec_params->out_frame, 1);
+    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) {
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        return DNN_ERROR;
+    }
+
+    return execute_model_native(native_model->inference_queue);
 }
 
 int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd)
@@ -435,6 +492,11 @@ void ff_dnn_free_model_native(DNNModel **model)
             av_freep(&native_model->operands);
         }
 
+        while (ff_queue_size(native_model->inference_queue) != 0) {
+            InferenceItem *item = ff_queue_pop_front(native_model->inference_queue);
+            av_freep(&item);
+        }
+        ff_queue_destroy(native_model->inference_queue);
         av_freep(&native_model);
     }
     av_freep(model);
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 89bcb8e358..1b9d5bdf2d 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -30,6 +30,7 @@
 #include "../dnn_interface.h"
 #include "libavformat/avio.h"
 #include "libavutil/opt.h"
+#include "queue.h"
 
 /**
  * the enum value of DNNLayerType should not be changed,
@@ -126,6 +127,7 @@ typedef struct NativeModel{
     int32_t layers_num;
     DnnOperand *operands;
     int32_t operands_num;
+    Queue *inference_queue;
 } NativeModel;
 
 DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);

From patchwork Tue Aug 24 05:31:46 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29755
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Cc: Shubhanshu Saxena
Date: Tue, 24 Aug 2021 11:01:46 +0530
Message-Id: <20210824053151.14305-2-shubhanshu.e01@gmail.com>
In-Reply-To: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
References: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3 2/7] libavfilter: Unify Execution Modes in DNN Filters
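
As a condensed illustration of the single calling pattern every DNN filter
ends up with after this change (lifted from the vf_sr/vf_derain hunks below,
with declarations and frame bookkeeping trimmed), both modes go through the
same execute/poll pair:

    if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS)
        return AVERROR(EIO);

    do {
        /* with async=0 the backend has already finished by the first poll */
        async_state = ff_dnn_get_result(&ctx->dnnctx, &in, &out);
    } while (async_state == DAST_NOT_READY);

    if (async_state != DAST_SUCCESS)
        return AVERROR(EINVAL);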

This commit unifies the async and sync modes from the DNN filters'
perspective. As of this commit, the Native backend only supports the
synchronous execution mode. The user can now switch between async and sync
mode with the 'async' option in backend_configs: 1 selects async, 0 selects
sync execution.

This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.c   |  2 +-
 libavfilter/dnn/dnn_backend_common.h   |  5 +-
 libavfilter/dnn/dnn_backend_native.c   | 59 +++++++++++++++-
 libavfilter/dnn/dnn_backend_native.h   |  6 ++
 libavfilter/dnn/dnn_backend_openvino.c | 94 ++++++++++----------------
 libavfilter/dnn/dnn_backend_openvino.h |  3 +-
 libavfilter/dnn/dnn_backend_tf.c       | 35 ++--------
 libavfilter/dnn/dnn_backend_tf.h       |  3 +-
 libavfilter/dnn/dnn_interface.c        |  8 +--
 libavfilter/dnn_filter_common.c        | 23 +------
 libavfilter/dnn_filter_common.h        |  3 +-
 libavfilter/dnn_interface.h            |  4 +-
 libavfilter/vf_derain.c                |  7 ++
 libavfilter/vf_dnn_classify.c          |  4 +-
 libavfilter/vf_dnn_detect.c            | 10 +--
 libavfilter/vf_dnn_processing.c        | 10 +--
 libavfilter/vf_sr.c                    |  8 +++
 17 files changed, 142 insertions(+), 142 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c
index 426683b73d..d2bc016fef 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -138,7 +138,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
     return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
 {
     TaskItem *task = ff_queue_peek_front(task_queue);
 
diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h
index 604c1d3bd7..78e62a94a2 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -29,7 +29,8 @@
 #include "libavutil/thread.h"
 
 #define DNN_BACKEND_COMMON_OPTIONS \
-    { "nireq",  "number of request",       OFFSET(options.nireq),  AV_OPT_TYPE_INT,  { .i64 = 0 }, 0, INT_MAX, FLAGS },
+    { "nireq",  "number of request",       OFFSET(options.nireq),  AV_OPT_TYPE_INT,  { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
+    { "async",  "use DNN async inference", OFFSET(options.async),  AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1,       FLAGS },
 
 // one task for one function call from dnn interface
 typedef struct TaskItem {
@@ -135,7 +136,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
  * @retval DAST_NOT_READY if inference not completed yet.
* @retval DAST_SUCCESS if result successfully extracted */ -DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out); /** * Allocate input and output frames and fill the Task diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index 3b2a3aa55d..2d34b88f8a 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -34,6 +34,7 @@ #define FLAGS AV_OPT_FLAG_FILTERING_PARAM static const AVOption dnn_native_options[] = { { "conv2d_threads", "threads num for conv2d layer", OFFSET(options.conv2d_threads), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS }, + { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS }, { NULL }, }; @@ -189,6 +190,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; native_model->model = model; + if (native_model->ctx.options.async) { + av_log(&native_model->ctx, AV_LOG_WARNING, "Async not supported. Rolling back to sync\n"); + native_model->ctx.options.async = 0; + } + #if !HAVE_PTHREAD_CANCEL if (native_model->ctx.options.conv2d_threads > 1){ av_log(&native_model->ctx, AV_LOG_WARNING, "'conv2d_threads' option was set but it is not supported " @@ -212,6 +218,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; } + native_model->task_queue = ff_queue_create(); + if (!native_model->task_queue) { + goto fail; + } + native_model->inference_queue = ff_queue_create(); if (!native_model->inference_queue) { goto fail; @@ -425,17 +436,30 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara { NativeModel *native_model = model->model; NativeContext *ctx = &native_model->ctx; - TaskItem task; + TaskItem *task; if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) { return DNN_ERROR; } - if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) { + task = av_malloc(sizeof(*task)); + if (!task) { + av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n"); return DNN_ERROR; } - if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { + if (ff_dnn_fill_task(task, exec_params, native_model, ctx->options.async, 1) != DNN_SUCCESS) { + av_freep(&task); + return DNN_ERROR; + } + + if (ff_queue_push_back(native_model->task_queue, task) < 0) { + av_freep(&task); + av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n"); + return DNN_ERROR; + } + + if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); return DNN_ERROR; } @@ -443,6 +467,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara return execute_model_native(native_model->inference_queue); } +DNNReturnType ff_dnn_flush_native(const DNNModel *model) +{ + NativeModel *native_model = model->model; + + if (ff_queue_size(native_model->inference_queue) == 0) { + // no pending task need to flush + return DNN_SUCCESS; + } + + // for now, use sync node with flush operation + // Switch to async when it is supported + return execute_model_native(native_model->inference_queue); +} + +DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out) +{ + NativeModel *native_model = model->model; + return 
ff_dnn_get_result_common(native_model->task_queue, in, out); +} + int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd) { int32_t result = 1; @@ -497,6 +541,15 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&item); } ff_queue_destroy(native_model->inference_queue); + + while (ff_queue_size(native_model->task_queue) != 0) { + TaskItem *item = ff_queue_pop_front(native_model->task_queue); + av_frame_free(&item->in_frame); + av_frame_free(&item->out_frame); + av_freep(&item); + } + ff_queue_destroy(native_model->task_queue); + av_freep(&native_model); } av_freep(model); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index 1b9d5bdf2d..ca61bb353f 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -111,6 +111,7 @@ typedef struct InputParams{ } InputParams; typedef struct NativeOptions{ + uint8_t async; uint32_t conv2d_threads; } NativeOptions; @@ -127,6 +128,7 @@ typedef struct NativeModel{ int32_t layers_num; DnnOperand *operands; int32_t operands_num; + Queue *task_queue; Queue *inference_queue; } NativeModel; @@ -134,6 +136,10 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params); +DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out); + +DNNReturnType ff_dnn_flush_native(const DNNModel *model); + void ff_dnn_free_model_native(DNNModel **model); // NOTE: User must check for error (return value <= 0) to handle diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index c825e70c82..abf3f4b6b6 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -39,6 +39,7 @@ typedef struct OVOptions{ char *device_type; int nireq; + uint8_t async; int batch_size; int input_resizable; } OVOptions; @@ -758,55 +759,6 @@ err: } DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params) -{ - OVModel *ov_model = model->model; - OVContext *ctx = &ov_model->ctx; - TaskItem task; - OVRequestItem *request; - - if (ff_check_exec_params(ctx, DNN_OV, model->func_type, exec_params) != 0) { - return DNN_ERROR; - } - - if (model->func_type == DFT_ANALYTICS_CLASSIFY) { - // Once we add async support for tensorflow backend and native backend, - // we'll combine the two sync/async functions in dnn_interface.h to - // simplify the code in filter, and async will be an option within backends. - // so, do not support now, and classify filter will not call this function. 
- return DNN_ERROR; - } - - if (ctx->options.batch_size > 1) { - avpriv_report_missing_feature(ctx, "batch mode for sync execution"); - return DNN_ERROR; - } - - if (!ov_model->exe_network) { - if (init_model_ov(ov_model, exec_params->input_name, exec_params->output_names[0]) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n"); - return DNN_ERROR; - } - } - - if (ff_dnn_fill_task(&task, exec_params, ov_model, 0, 1) != DNN_SUCCESS) { - return DNN_ERROR; - } - - if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); - return DNN_ERROR; - } - - request = ff_safe_queue_pop_front(ov_model->request_queue); - if (!request) { - av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); - return DNN_ERROR; - } - - return execute_model_ov(request, ov_model->inference_queue); -} - -DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params) { OVModel *ov_model = model->model; OVContext *ctx = &ov_model->ctx; @@ -831,7 +783,8 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa return DNN_ERROR; } - if (ff_dnn_fill_task(task, exec_params, ov_model, 1, 1) != DNN_SUCCESS) { + if (ff_dnn_fill_task(task, exec_params, ov_model, ctx->options.async, 1) != DNN_SUCCESS) { + av_freep(&task); return DNN_ERROR; } @@ -846,26 +799,47 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa return DNN_ERROR; } - while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + if (ctx->options.async) { + while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + request = ff_safe_queue_pop_front(ov_model->request_queue); + if (!request) { + av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); + return DNN_ERROR; + } + + ret = execute_model_ov(request, ov_model->inference_queue); + if (ret != DNN_SUCCESS) { + return ret; + } + } + + return DNN_SUCCESS; + } + else { + if (model->func_type == DFT_ANALYTICS_CLASSIFY) { + // Classification filter has not been completely + // tested with the sync mode. So, do not support now. 
+ return DNN_ERROR; + } + + if (ctx->options.batch_size > 1) { + avpriv_report_missing_feature(ctx, "batch mode for sync execution"); + return DNN_ERROR; + } + request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - - ret = execute_model_ov(request, ov_model->inference_queue); - if (ret != DNN_SUCCESS) { - return ret; - } + return execute_model_ov(request, ov_model->inference_queue); } - - return DNN_SUCCESS; } -DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out) { OVModel *ov_model = model->model; - return ff_dnn_get_async_result_common(ov_model->task_queue, in, out); + return ff_dnn_get_result_common(ov_model->task_queue, in, out); } DNNReturnType ff_dnn_flush_ov(const DNNModel *model) diff --git a/libavfilter/dnn/dnn_backend_openvino.h b/libavfilter/dnn/dnn_backend_openvino.h index 046d0c5b5a..0bbca0c057 100644 --- a/libavfilter/dnn/dnn_backend_openvino.h +++ b/libavfilter/dnn/dnn_backend_openvino.h @@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out); DNNReturnType ff_dnn_flush_ov(const DNNModel *model); void ff_dnn_free_model_ov(DNNModel **model); diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index ffec1b1328..4a0b561f29 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -42,6 +42,7 @@ typedef struct TFOptions{ char *sess_config; + uint8_t async; uint32_t nireq; } TFOptions; @@ -1121,34 +1122,6 @@ err: DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params) { - TFModel *tf_model = model->model; - TFContext *ctx = &tf_model->ctx; - TaskItem task; - TFRequestItem *request; - - if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0) { - return DNN_ERROR; - } - - if (ff_dnn_fill_task(&task, exec_params, tf_model, 0, 1) != DNN_SUCCESS) { - return DNN_ERROR; - } - - if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); - return DNN_ERROR; - } - - request = ff_safe_queue_pop_front(tf_model->request_queue); - if (!request) { - av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); - return DNN_ERROR; - } - - return execute_model_tf(request, tf_model->inference_queue); -} - -DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params) { TFModel *tf_model = model->model; TFContext *ctx = &tf_model->ctx; TaskItem *task; @@ -1164,7 +1137,7 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa return DNN_ERROR; } - if (ff_dnn_fill_task(task, exec_params, tf_model, 1, 1) != DNN_SUCCESS) { + if (ff_dnn_fill_task(task, exec_params, tf_model, ctx->options.async, 1) != DNN_SUCCESS) { av_freep(&task); return DNN_ERROR; } @@ -1188,10 +1161,10 @@ DNNReturnType 
ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa return execute_model_tf(request, tf_model->inference_queue); } -DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) { TFModel *tf_model = model->model; - return ff_dnn_get_async_result_common(tf_model->task_queue, in, out); + return ff_dnn_get_result_common(tf_model->task_queue, in, out); } DNNReturnType ff_dnn_flush_tf(const DNNModel *model) diff --git a/libavfilter/dnn/dnn_backend_tf.h b/libavfilter/dnn/dnn_backend_tf.h index aec0fc2011..f14ea8c47a 100644 --- a/libavfilter/dnn/dnn_backend_tf.h +++ b/libavfilter/dnn/dnn_backend_tf.h @@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); DNNReturnType ff_dnn_flush_tf(const DNNModel *model); void ff_dnn_free_model_tf(DNNModel **model); diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c index 81af934dd5..554a36b0dc 100644 --- a/libavfilter/dnn/dnn_interface.c +++ b/libavfilter/dnn/dnn_interface.c @@ -42,14 +42,15 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) case DNN_NATIVE: dnn_module->load_model = &ff_dnn_load_model_native; dnn_module->execute_model = &ff_dnn_execute_model_native; + dnn_module->get_result = &ff_dnn_get_result_native; + dnn_module->flush = &ff_dnn_flush_native; dnn_module->free_model = &ff_dnn_free_model_native; break; case DNN_TF: #if (CONFIG_LIBTENSORFLOW == 1) dnn_module->load_model = &ff_dnn_load_model_tf; dnn_module->execute_model = &ff_dnn_execute_model_tf; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_tf; - dnn_module->get_async_result = &ff_dnn_get_async_result_tf; + dnn_module->get_result = &ff_dnn_get_result_tf; dnn_module->flush = &ff_dnn_flush_tf; dnn_module->free_model = &ff_dnn_free_model_tf; #else @@ -61,8 +62,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) #if (CONFIG_LIBOPENVINO == 1) dnn_module->load_model = &ff_dnn_load_model_ov; dnn_module->execute_model = &ff_dnn_execute_model_ov; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_ov; - dnn_module->get_async_result = &ff_dnn_get_async_result_ov; + dnn_module->get_result = &ff_dnn_get_result_ov; dnn_module->flush = &ff_dnn_flush_ov; dnn_module->free_model = &ff_dnn_free_model_ov; #else diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index fcee7482c3..455eaa37f4 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -84,11 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } - if (!ctx->dnn_module->execute_model_async && ctx->async) { - ctx->async = 0; - av_log(filter_ctx, AV_LOG_WARNING, "this backend does not support async execution, roll back to sync.\n"); - } - #if !HAVE_PTHREAD_CANCEL if (ctx->async) { ctx->async = 0; @@ -141,18 +136,6 @@ DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame * return 
(ctx->dnn_module->execute_model)(ctx->model, &exec_params); } -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame) -{ - DNNExecBaseParams exec_params = { - .input_name = ctx->model_inputname, - .output_names = (const char **)ctx->model_outputnames, - .nb_output = ctx->nb_outputs, - .in_frame = in_frame, - .out_frame = out_frame, - }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &exec_params); -} - DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target) { DNNExecClassificationParams class_params = { @@ -165,12 +148,12 @@ DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_f }, .target = target, }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &class_params.base); + return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base); } -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) { - return (ctx->dnn_module->get_async_result)(ctx->model, in_frame, out_frame); + return (ctx->dnn_module->get_result)(ctx->model, in_frame, out_frame); } DNNReturnType ff_dnn_flush(DnnContext *ctx) diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 8e43d8c8dc..4d92c1dc36 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -56,9 +56,8 @@ int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc); DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input); DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height); DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target); -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); DNNReturnType ff_dnn_flush(DnnContext *ctx); void ff_dnn_uninit(DnnContext *ctx); diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h index 5e9ffeb077..37e89d9789 100644 --- a/libavfilter/dnn_interface.h +++ b/libavfilter/dnn_interface.h @@ -114,10 +114,8 @@ typedef struct DNNModule{ DNNModel *(*load_model)(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); // Executes model with specified input and output. Returns DNN_ERROR otherwise. DNNReturnType (*execute_model)(const DNNModel *model, DNNExecBaseParams *exec_params); - // Executes model with specified input and output asynchronously. Returns DNN_ERROR otherwise. - DNNReturnType (*execute_model_async)(const DNNModel *model, DNNExecBaseParams *exec_params); // Retrieve inference result. - DNNAsyncStatusType (*get_async_result)(const DNNModel *model, AVFrame **in, AVFrame **out); + DNNAsyncStatusType (*get_result)(const DNNModel *model, AVFrame **in, AVFrame **out); // Flush all the pending tasks. DNNReturnType (*flush)(const DNNModel *model); // Frees memory allocated for model. 
diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c index 720cc70f5c..97bdb4e843 100644 --- a/libavfilter/vf_derain.c +++ b/libavfilter/vf_derain.c @@ -68,6 +68,7 @@ static int query_formats(AVFilterContext *ctx) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *ctx = inlink->dst; AVFilterLink *outlink = ctx->outputs[0]; DRContext *dr_context = ctx->priv; @@ -88,6 +89,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) av_frame_free(&in); return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&dr_context->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); av_frame_free(&in); diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c index 904067f5f6..d5ee65d403 100644 --- a/libavfilter/vf_dnn_classify.c +++ b/libavfilter/vf_dnn_classify.c @@ -224,7 +224,7 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -268,7 +268,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 200b5897c1..11de376753 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -424,7 +424,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -458,7 +458,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) if (ret < 0) return ret; if (ret > 0) { - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, in) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -468,7 +468,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -496,7 +496,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static int dnn_detect_activate(AVFilterContext *filter_ctx) +static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx) { DnnDetectContext *ctx = filter_ctx->priv; @@ -537,5 +537,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate, + .activate = dnn_detect_activate_async, }; diff --git a/libavfilter/vf_dnn_processing.c 
b/libavfilter/vf_dnn_processing.c index 5f24955263..7435dd4959 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -328,7 +328,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -370,7 +370,7 @@ static int activate_async(AVFilterContext *filter_ctx) return AVERROR(ENOMEM); } av_frame_copy_props(out, in); - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, out) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -380,7 +380,7 @@ static int activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -410,7 +410,7 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static int activate(AVFilterContext *filter_ctx) +static av_unused int activate(AVFilterContext *filter_ctx) { DnnProcessingContext *ctx = filter_ctx->priv; @@ -454,5 +454,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = activate, + .activate = activate_async, }; diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c index f009a868f8..f4fc84d251 100644 --- a/libavfilter/vf_sr.c +++ b/libavfilter/vf_sr.c @@ -119,6 +119,7 @@ static int config_output(AVFilterLink *outlink) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *context = inlink->dst; SRContext *ctx = context->priv; AVFilterLink *outlink = context->outputs[0]; @@ -148,6 +149,13 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&ctx->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); + if (ctx->sws_uv_scale) { sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), in->linesize + 1, 0, ctx->sws_uv_height, out->data + 1, out->linesize + 1); From patchwork Tue Aug 24 05:31:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shubhanshu Saxena X-Patchwork-Id: 29754 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6602:2a4a:0:0:0:0 with SMTP id k10csp2724220iov; Mon, 23 Aug 2021 22:33:08 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxwXx7cGHE0fFSANqwM7GHxbJ911ar0eR0ptBFY5YROmbgCTnrQFBc0TvXxVJenLcvsupVR X-Received: by 2002:a17:907:7718:: with SMTP id kw24mr40542447ejc.316.1629783188640; Mon, 23 Aug 2021 22:33:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1629783188; cv=none; d=google.com; s=arc-20160816; b=xFsZvPQAosI4mBdIaaETRbr6Wq+YDGcV/zHVfm91ROIyU43UXo3Dge29Esog2WAwK1 DtMkeBLY8xmnyBLkJvGVzcm9vfv/ztqa9q5QWmL9yCqcW83ai0yKyA9Osw/mnOGmNBAG 9K2UhKj8X5nzjR0Jx+xwKiQILIceA3LWnYIBTC7N19nEz9TyiGI+WnqqLwY1KHXVvsXa Jew8SGK83WdVZHg+EqhqPfpVrvsvWmtrHgAiQddc6Jiigi/YiXSHbJVNJIL4hqau0CsU 
zpGntyazWJyPjhR50RZhxSUoJQhBU4Bqo4htOkqw5tnHZeIChmxbvGPqSIQi6EQjGDo/ 5Rag== X-Gm-Message-State: AOAM532zPiZpnvIjbKljeT6GNcLz3bM8PqwRvXtCY81Z5zf0BR1MSomV 95xOD3vhcIv/MOCJkj5tI9Fpv68NAQT2SQ== X-Received: by 2002:a05:6a00:158a:b0:3eb:432e:9acd with SMTP id u10-20020a056a00158a00b003eb432e9acdmr5190575pfk.0.1629783157973; Mon, 23 Aug 2021 22:32:37 -0700 (PDT) Received: from Pavilion-x360.bbrouter ([103.157.221.236]) by smtp.googlemail.com with ESMTPSA id z12sm7095207pfe.79.2021.08.23.22.32.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 23 Aug 2021 22:32:37 -0700 (PDT) From: Shubhanshu Saxena To: ffmpeg-devel@ffmpeg.org Date: Tue, 24 Aug 2021 11:01:47 +0530 Message-Id: <20210824053151.14305-3-shubhanshu.e01@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210824053151.14305-1-shubhanshu.e01@gmail.com> References: <20210824053151.14305-1-shubhanshu.e01@gmail.com> MIME-Version: 1.0 Subject: [FFmpeg-devel] [PATCH v3 3/7] libavfilter: Remove synchronous functions from DNN filters X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Cc: Shubhanshu Saxena Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: KoRbLuk1KGFU This commit removes the unused sync mode specific code from the DNN filters since the sync and async mode are now unified from the filters' perspective. Signed-off-by: Shubhanshu Saxena --- libavfilter/vf_dnn_detect.c | 71 +--------------------------- libavfilter/vf_dnn_processing.c | 84 +-------------------------------- 2 files changed, 4 insertions(+), 151 deletions(-) diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 11de376753..809d70b930 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -353,63 +353,6 @@ static int dnn_detect_query_formats(AVFilterContext *context) return ff_set_common_formats_from_list(context, pix_fmts); } -static int dnn_detect_filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnDetectContext *ctx = context->priv; - DNNReturnType dnn_result; - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, in); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - return AVERROR(EIO); - } - - return ff_filter_frame(outlink, in); -} - -static int dnn_detect_activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = dnn_detect_filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnDetectContext *ctx = outlink->src->priv; 
@@ -439,7 +382,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o return 0; } -static int dnn_detect_activate_async(AVFilterContext *filter_ctx) +static int dnn_detect_activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -496,16 +439,6 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx) -{ - DnnDetectContext *ctx = filter_ctx->priv; - - if (ctx->dnnctx.async) - return dnn_detect_activate_async(filter_ctx); - else - return dnn_detect_activate_sync(filter_ctx); -} - static av_cold void dnn_detect_uninit(AVFilterContext *context) { DnnDetectContext *ctx = context->priv; @@ -537,5 +470,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate_async, + .activate = dnn_detect_activate, }; diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index 7435dd4959..55634efde5 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -244,76 +244,6 @@ static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const AVFrame return 0; } -static int filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnProcessingContext *ctx = context->priv; - DNNReturnType dnn_result; - AVFrame *out; - - out = ff_get_video_buffer(outlink, outlink->w, outlink->h); - if (!out) { - av_frame_free(&in); - return AVERROR(ENOMEM); - } - av_frame_copy_props(out, in); - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, out); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - av_frame_free(&out); - return AVERROR(EIO); - } - - if (isPlanarYUV(in->format)) - copy_uv_planes(ctx, out, in); - - av_frame_free(&in); - return ff_filter_frame(outlink, out); -} - -static int activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnProcessingContext *ctx = outlink->src->priv; @@ -345,7 +275,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) return 0; } -static int activate_async(AVFilterContext *filter_ctx) +static int activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -410,16 +340,6 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static av_unused int activate(AVFilterContext *filter_ctx) -{ - DnnProcessingContext *ctx 
= filter_ctx->priv; - - if (ctx->dnnctx.async) - return activate_async(filter_ctx); - else - return activate_sync(filter_ctx); -} - static av_cold void uninit(AVFilterContext *ctx) { DnnProcessingContext *context = ctx->priv; @@ -454,5 +374,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = activate_async, + .activate = activate, };
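To summarize what the filters are left with after this removal: each DNN filter keeps a single activate() callback that feeds input frames to the backend and then polls for finished results, regardless of whether the backend runs the request synchronously or asynchronously. The condensed sketch below only illustrates that retained flow; it is a simplified rewrite of the activate paths already in the tree (EOF acknowledgement and backend flushing omitted, DnnDetectContext standing in for any DNN filter context that embeds a DnnContext), not the exact code of the patch.

static int dnn_filter_activate(AVFilterContext *filter_ctx)
{
    AVFilterLink *inlink  = filter_ctx->inputs[0];
    AVFilterLink *outlink = filter_ctx->outputs[0];
    DnnDetectContext *ctx = filter_ctx->priv; /* any DNN filter context with a DnnContext dnnctx */
    AVFrame *in = NULL;
    DNNAsyncStatusType async_state;
    int ret, got_frame = 0;

    FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);

    /* hand every pending input frame to the backend; the backend decides
     * internally whether the request runs synchronously or asynchronously */
    do {
        ret = ff_inlink_consume_frame(inlink, &in);
        if (ret < 0)
            return ret;
        if (ret > 0 && ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS)
            return AVERROR(EIO);
        /* dnn_processing allocates a separate output frame instead of reusing in */
    } while (ret > 0);

    /* collect whatever the backend has already finished */
    do {
        AVFrame *in_frame = NULL, *out_frame = NULL;
        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
        if (async_state == DAST_SUCCESS) {
            ret = ff_filter_frame(outlink, out_frame ? out_frame : in_frame);
            if (ret < 0)
                return ret;
            got_frame = 1;
        }
    } while (async_state == DAST_SUCCESS);

    if (got_frame)
        return 0;

    /* EOF propagation and backend flushing are handled by the flush_frame()
     * helpers shown above */
    FF_FILTER_FORWARD_WANTED(outlink, inlink);
    return FFERROR_NOT_READY;
}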
From patchwork Tue Aug 24 05:31:48 2021
X-Patchwork-Id: 29753
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 24 Aug 2021 11:01:48 +0530
Message-Id: <20210824053151.14305-4-shubhanshu.e01@gmail.com>
In-Reply-To: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
References: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3 4/7] libavfilter: Remove Async Flag from DNN Filter Side
Cc: Shubhanshu Saxena
Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: B/7aQ80j2NyJ Remove async flag from filter's perspective after the unification of async and sync modes in the DNN backend. Signed-off-by: Shubhanshu Saxena --- doc/filters.texi | 14 ++++---------- libavfilter/dnn/dnn_backend_tf.c | 7 +++++++ libavfilter/dnn_filter_common.c | 7 ------- libavfilter/dnn_filter_common.h | 2 +- 4 files changed, 12 insertions(+), 18 deletions(-) diff --git a/doc/filters.texi b/doc/filters.texi index b902aca12d..d99368e64b 100644 --- a/doc/filters.texi +++ b/doc/filters.texi @@ -10283,11 +10283,8 @@ and the second line is the name of label id 1, etc. The label id is considered as name if the label file is not provided. @item backend_configs -Set the configs to be passed into backend - -@item async -use DNN async execution if set (default: set), -roll back to sync execution if the backend does not support async. +Set the configs to be passed into backend. To use async execution, set async (default: set). +Roll back to sync execution if the backend does not support async. @end table @@ -10339,15 +10336,12 @@ Set the input name of the dnn network. Set the output name of the dnn network. @item backend_configs -Set the configs to be passed into backend +Set the configs to be passed into backend. To use async execution, set async (default: set). +Roll back to sync execution if the backend does not support async. For tensorflow backend, you can set its configs with @option{sess_config} options, please use tools/python/tf_sess_config.py to get the configs of TensorFlow backend for your system. -@item async -use DNN async execution if set (default: set), -roll back to sync execution if the backend does not support async. - @end table @subsection Examples diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 4a0b561f29..906934d8c0 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -884,6 +884,13 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ ctx->options.nireq = av_cpu_count() / 2 + 1; } +#if !HAVE_PTHREAD_CANCEL + if (ctx->options.async) { + ctx->options.async = 0; + av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n"); + } +#endif + tf_model->request_queue = ff_safe_queue_create(); if (!tf_model->request_queue) { goto err; diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index 455eaa37f4..3045ce0131 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -84,13 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } -#if !HAVE_PTHREAD_CANCEL - if (ctx->async) { - ctx->async = 0; - av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n"); - } -#endif - return 0; } diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 4d92c1dc36..635ae631c1 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -46,7 +46,7 @@ typedef struct DnnContext { { "output", "output name of the model", OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ { "backend_configs", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ { "options", "backend configs (deprecated, use backend_configs)", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS | AV_OPT_FLAG_DEPRECATED},\ - { "async", 
"use DNN async inference", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS}, + { "async", "use DNN async inference (ignored, use backend_configs='async=1')", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS}, int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx); From patchwork Tue Aug 24 05:31:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shubhanshu Saxena X-Patchwork-Id: 29751 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6602:2a4a:0:0:0:0 with SMTP id k10csp2724420iov; Mon, 23 Aug 2021 22:33:32 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxD+KdpRw3pJKymMtRQkHRh17fwsr59JC3IMHe5swt+CTVbWNYid+OdZ+QgLAUQR/XRDu/9 X-Received: by 2002:a17:906:a108:: with SMTP id t8mr39037734ejy.407.1629783212056; Mon, 23 Aug 2021 22:33:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1629783212; cv=none; d=google.com; s=arc-20160816; b=oxLYOV/qY1AXAHGK0VFzp9XWQNkQmUMAZX8yA1qxMm3ogZZ1a6dR4K7O+LDxb59CBR OimI/xfUTmocO8Uqgn0GjqMJpPbzlD86YqRlktxT/5y3TAoAxKdz0vsRMiueY3DQj/Im mV3V6W/wbqfV2vlSrnfpngB8BgOBhOEgceWcKorDNtxQGu7X0npYwJ4LTogdcRQ+T7Jz LEVKcbaIoaLeQ06D9KuXmStY5xEcQ/iyGY/vH99FgorbLcwN/rx7GwCW/Tr3dUJ97+yO ElYDeMkfXiZbpn1PLB0F+Pldsm+RB1qQnQNcyU4987oyqNX8FGGvipz558RqPFyZ7Gws hAqQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to :message-id:date:to:from:dkim-signature:delivered-to; bh=wPmC1V9xA7xcIJpal4W9btuetXoWmeljT0mNz7RTB4g=; b=h5K3+uKR42gmZKcaarX20Euh1ErmK05yGSLB4Q035Wl4MmKYGGTeEQIbASKHvPR3eq 1qEQ1LUTHCWf+GQ9FzP9IHa6pERsH08KFRfhg7EHS73Hckd4N0b8ug1/glIhvoO+qSP0 jqrtKqxRu5PZizfnZ6GixuCvViD3nrI+NmrzNqvi137Ecdk7+kWPQXJODqC+yV3et8xc 8t0R5qS9Z7ysmAmI7+06XJhoC9KO6COxDLlJwvY38us6G8QkP5G8+SBE1WbZz/O6DC3p RGqva1iTD+GVvJHYFgaLm23nYLUJt3bGJ8pxbDherRWDDmxxk6IHt8zFt/GzfEF7x5Jz vLLA== ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@gmail.com header.s=20161025 header.b=Muw2AoVM; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org; dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 24 Aug 2021 11:01:49 +0530
Message-Id: <20210824053151.14305-5-shubhanshu.e01@gmail.com>
In-Reply-To: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
References: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3 5/7] lavfi/dnn: Rename InferenceItem to LastLevelTaskItem
Cc: Shubhanshu Saxena
Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: jLpCosWOgI2h This patch renames the InferenceItem to LastLevelTaskItem in the three backends to avoid confusion among the meanings of these structs. The following are the renames done in this patch: 1. extract_inference_from_task -> extract_lltask_from_task 2. InferenceItem -> LastLevelTaskItem 3. inference_queue -> lltask_queue 4. inference -> lltask 5. inference_count -> lltask_count Signed-off-by: Shubhanshu Saxena --- libavfilter/dnn/dnn_backend_common.h | 4 +- libavfilter/dnn/dnn_backend_native.c | 58 ++++++------- libavfilter/dnn/dnn_backend_native.h | 2 +- libavfilter/dnn/dnn_backend_openvino.c | 110 ++++++++++++------------- libavfilter/dnn/dnn_backend_tf.c | 76 ++++++++--------- 5 files changed, 125 insertions(+), 125 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h index 78e62a94a2..6b6a5e21ae 100644 --- a/libavfilter/dnn/dnn_backend_common.h +++ b/libavfilter/dnn/dnn_backend_common.h @@ -47,10 +47,10 @@ typedef struct TaskItem { } TaskItem; // one task might have multiple inferences -typedef struct InferenceItem { +typedef struct LastLevelTaskItem { TaskItem *task; uint32_t bbox_index; -} InferenceItem; +} LastLevelTaskItem; /** * Common Async Execution Mechanism for the DNN Backends. diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index 2d34b88f8a..13436c0484 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -46,25 +46,25 @@ static const AVClass dnn_native_class = { .category = AV_CLASS_CATEGORY_FILTER, }; -static DNNReturnType execute_model_native(Queue *inference_queue); +static DNNReturnType execute_model_native(Queue *lltask_queue); -static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { NativeModel *native_model = task->model; NativeContext *ctx = &native_model->ctx; - InferenceItem *inference = av_malloc(sizeof(*inference)); + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); - if (!inference) { - av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + if (!lltask) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; + lltask->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); - av_freep(&inference); + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n"); + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -116,13 +116,13 @@ static DNNReturnType get_output_native(void *model, const char *input_name, int goto err; } - if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(&task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } - ret = execute_model_native(native_model->inference_queue); + ret = execute_model_native(native_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; @@ -223,8 +223,8 @@ DNNModel *ff_dnn_load_model_native(const char 
*model_filename, DNNFunctionType f goto fail; } - native_model->inference_queue = ff_queue_create(); - if (!native_model->inference_queue) { + native_model->lltask_queue = ff_queue_create(); + if (!native_model->lltask_queue) { goto fail; } @@ -297,24 +297,24 @@ fail: return NULL; } -static DNNReturnType execute_model_native(Queue *inference_queue) +static DNNReturnType execute_model_native(Queue *lltask_queue) { NativeModel *native_model = NULL; NativeContext *ctx = NULL; int32_t layer; DNNData input, output; DnnOperand *oprd = NULL; - InferenceItem *inference = NULL; + LastLevelTaskItem *lltask = NULL; TaskItem *task = NULL; DNNReturnType ret = 0; - inference = ff_queue_pop_front(inference_queue); - if (!inference) { - av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n"); + lltask = ff_queue_pop_front(lltask_queue); + if (!lltask) { + av_log(NULL, AV_LOG_ERROR, "Failed to get LastLevelTaskItem\n"); ret = DNN_ERROR; goto err; } - task = inference->task; + task = lltask->task; native_model = task->model; ctx = &native_model->ctx; @@ -428,7 +428,7 @@ static DNNReturnType execute_model_native(Queue *inference_queue) } task->inference_done++; err: - av_freep(&inference); + av_freep(&lltask); return ret; } @@ -459,26 +459,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara return DNN_ERROR; } - if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); return DNN_ERROR; } - return execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNReturnType ff_dnn_flush_native(const DNNModel *model) { NativeModel *native_model = model->model; - if (ff_queue_size(native_model->inference_queue) == 0) { + if (ff_queue_size(native_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } // for now, use sync node with flush operation // Switch to async when it is supported - return execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out) @@ -536,11 +536,11 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&native_model->operands); } - while (ff_queue_size(native_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(native_model->inference_queue); + while (ff_queue_size(native_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(native_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(native_model->inference_queue); + ff_queue_destroy(native_model->lltask_queue); while (ff_queue_size(native_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(native_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index ca61bb353f..e8017ee4b4 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -129,7 +129,7 @@ typedef struct NativeModel{ DnnOperand *operands; int32_t operands_num; Queue *task_queue; - Queue *inference_queue; + Queue *lltask_queue; } NativeModel; DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); diff --git 
a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index abf3f4b6b6..76dc06c6d7 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -57,14 +57,14 @@ typedef struct OVModel{ ie_executable_network_t *exe_network; SafeQueue *request_queue; // holds OVRequestItem Queue *task_queue; // holds TaskItem - Queue *inference_queue; // holds InferenceItem + Queue *lltask_queue; // holds LastLevelTaskItem } OVModel; // one request for one call to openvino typedef struct OVRequestItem { ie_infer_request_t *infer_request; - InferenceItem **inferences; - uint32_t inference_count; + LastLevelTaskItem **lltasks; + uint32_t lltask_count; ie_complete_call_back_t callback; } OVRequestItem; @@ -121,12 +121,12 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque IEStatusCode status; DNNData input; ie_blob_t *input_blob = NULL; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; - inference = ff_queue_peek_front(ov_model->inference_queue); - av_assert0(inference); - task = inference->task; + lltask = ff_queue_peek_front(ov_model->lltask_queue); + av_assert0(lltask); + task = lltask->task; status = ie_infer_request_get_blob(request->infer_request, task->input_name, &input_blob); if (status != OK) { @@ -159,13 +159,13 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque input.order = DCO_BGR; for (int i = 0; i < ctx->options.batch_size; ++i) { - inference = ff_queue_pop_front(ov_model->inference_queue); - if (!inference) { + lltask = ff_queue_pop_front(ov_model->lltask_queue); + if (!lltask) { break; } - request->inferences[i] = inference; - request->inference_count = i + 1; - task = inference->task; + request->lltasks[i] = lltask; + request->lltask_count = i + 1; + task = lltask->task; switch (ov_model->model->func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { @@ -180,7 +180,7 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque ff_frame_to_dnn_detect(task->in_frame, &input, ctx); break; case DFT_ANALYTICS_CLASSIFY: - ff_frame_to_dnn_classify(task->in_frame, &input, inference->bbox_index, ctx); + ff_frame_to_dnn_classify(task->in_frame, &input, lltask->bbox_index, ctx); break; default: av_assert0(!"should not reach here"); @@ -200,8 +200,8 @@ static void infer_completion_callback(void *args) precision_e precision; IEStatusCode status; OVRequestItem *request = args; - InferenceItem *inference = request->inferences[0]; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltasks[0]; + TaskItem *task = lltask->task; OVModel *ov_model = task->model; SafeQueue *requestq = ov_model->request_queue; ie_blob_t *output_blob = NULL; @@ -248,10 +248,10 @@ static void infer_completion_callback(void *args) output.dt = precision_to_datatype(precision); output.data = blob_buffer.buffer; - av_assert0(request->inference_count <= dims.dims[0]); - av_assert0(request->inference_count >= 1); - for (int i = 0; i < request->inference_count; ++i) { - task = request->inferences[i]->task; + av_assert0(request->lltask_count <= dims.dims[0]); + av_assert0(request->lltask_count >= 1); + for (int i = 0; i < request->lltask_count; ++i) { + task = request->lltasks[i]->task; task->inference_done++; switch (ov_model->model->func_type) { @@ -279,20 +279,20 @@ static void infer_completion_callback(void *args) av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n"); return; } - 
ov_model->model->classify_post_proc(task->out_frame, &output, request->inferences[i]->bbox_index, ov_model->model->filter_ctx); + ov_model->model->classify_post_proc(task->out_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx); break; default: av_assert0(!"should not reach here"); break; } - av_freep(&request->inferences[i]); + av_freep(&request->lltasks[i]); output.data = (uint8_t *)output.data + output.width * output.height * output.channels * get_datatype_size(output.dt); } ie_blob_free(&output_blob); - request->inference_count = 0; + request->lltask_count = 0; if (ff_safe_queue_push_back(requestq, request) < 0) { ie_infer_request_free(&request->infer_request); av_freep(&request); @@ -399,11 +399,11 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - item->inferences = av_malloc_array(ctx->options.batch_size, sizeof(*item->inferences)); - if (!item->inferences) { + item->lltasks = av_malloc_array(ctx->options.batch_size, sizeof(*item->lltasks)); + if (!item->lltasks) { goto err; } - item->inference_count = 0; + item->lltask_count = 0; } ov_model->task_queue = ff_queue_create(); @@ -411,8 +411,8 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - ov_model->inference_queue = ff_queue_create(); - if (!ov_model->inference_queue) { + ov_model->lltask_queue = ff_queue_create(); + if (!ov_model->lltask_queue) { goto err; } @@ -427,7 +427,7 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) { IEStatusCode status; DNNReturnType ret; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; OVContext *ctx; OVModel *ov_model; @@ -438,8 +438,8 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) return DNN_SUCCESS; } - inference = ff_queue_peek_front(inferenceq); - task = inference->task; + lltask = ff_queue_peek_front(inferenceq); + task = lltask->task; ov_model = task->model; ctx = &ov_model->ctx; @@ -567,21 +567,21 @@ static int contain_valid_detection_bbox(AVFrame *frame) return 1; } -static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, TaskItem *task, Queue *inference_queue, DNNExecBaseParams *exec_params) +static DNNReturnType extract_lltask_from_task(DNNFunctionType func_type, TaskItem *task, Queue *lltask_queue, DNNExecBaseParams *exec_params) { switch (func_type) { case DFT_PROCESS_FRAME: case DFT_ANALYTICS_DETECT: { - InferenceItem *inference = av_malloc(sizeof(*inference)); - if (!inference) { + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -604,7 +604,7 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task header = (const AVDetectionBBoxHeader *)sd->data; for (uint32_t i = 0; i < header->nb_bboxes; i++) { - InferenceItem *inference; + LastLevelTaskItem *lltask; const AVDetectionBBox *bbox = av_get_detection_bbox(header, i); if (params->target) { @@ -613,15 +613,15 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task } } - inference = av_malloc(sizeof(*inference)); - if (!inference) { + lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } 
task->inference_todo++; - inference->task = task; - inference->bbox_index = i; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + lltask->bbox_index = i; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } } @@ -679,8 +679,8 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu return DNN_ERROR; } - if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, NULL) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(ov_model->model->func_type, &task, ov_model->lltask_queue, NULL) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } @@ -692,7 +692,7 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu goto err; } - ret = execute_model_ov(request, ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; err: @@ -794,20 +794,20 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * return DNN_ERROR; } - if (extract_inference_from_task(model->func_type, task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) { + if (extract_lltask_from_task(model->func_type, task, ov_model->lltask_queue, exec_params) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); return DNN_ERROR; } if (ctx->options.async) { - while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + while (ff_queue_size(ov_model->lltask_queue) >= ctx->options.batch_size) { request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - ret = execute_model_ov(request, ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); if (ret != DNN_SUCCESS) { return ret; } @@ -832,7 +832,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - return execute_model_ov(request, ov_model->inference_queue); + return execute_model_ov(request, ov_model->lltask_queue); } } @@ -850,7 +850,7 @@ DNNReturnType ff_dnn_flush_ov(const DNNModel *model) IEStatusCode status; DNNReturnType ret; - if (ff_queue_size(ov_model->inference_queue) == 0) { + if (ff_queue_size(ov_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } @@ -889,16 +889,16 @@ void ff_dnn_free_model_ov(DNNModel **model) if (item && item->infer_request) { ie_infer_request_free(&item->infer_request); } - av_freep(&item->inferences); + av_freep(&item->lltasks); av_freep(&item); } ff_safe_queue_destroy(ov_model->request_queue); - while (ff_queue_size(ov_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(ov_model->inference_queue); + while (ff_queue_size(ov_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(ov_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(ov_model->inference_queue); + ff_queue_destroy(ov_model->lltask_queue); while (ff_queue_size(ov_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(ov_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 906934d8c0..dfac58b357 
100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -58,7 +58,7 @@ typedef struct TFModel{ TF_Session *session; TF_Status *status; SafeQueue *request_queue; - Queue *inference_queue; + Queue *lltask_queue; Queue *task_queue; } TFModel; @@ -75,7 +75,7 @@ typedef struct TFInferRequest { typedef struct TFRequestItem { TFInferRequest *infer_request; - InferenceItem *inference; + LastLevelTaskItem *lltask; TF_Status *status; DNNAsyncExecModule exec_module; } TFRequestItem; @@ -90,7 +90,7 @@ static const AVOption dnn_tensorflow_options[] = { AVFILTER_DEFINE_CLASS(dnn_tensorflow); -static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue); +static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue); static void infer_completion_callback(void *args); static inline void destroy_request_item(TFRequestItem **arg); @@ -158,8 +158,8 @@ static DNNReturnType tf_start_inference(void *args) { TFRequestItem *request = args; TFInferRequest *infer_request = request->infer_request; - InferenceItem *inference = request->inference; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltask; + TaskItem *task = lltask->task; TFModel *tf_model = task->model; if (!request) { @@ -196,27 +196,27 @@ static inline void destroy_request_item(TFRequestItem **arg) { request = *arg; tf_free_request(request->infer_request); av_freep(&request->infer_request); - av_freep(&request->inference); + av_freep(&request->lltask); TF_DeleteStatus(request->status); ff_dnn_async_module_cleanup(&request->exec_module); av_freep(arg); } -static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { TFModel *tf_model = task->model; TFContext *ctx = &tf_model->ctx; - InferenceItem *inference = av_malloc(sizeof(*inference)); - if (!inference) { - av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); - av_freep(&inference); + lltask->task = task; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n"); + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -333,7 +333,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu goto err; } - if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) { + if (extract_lltask_from_task(&task, tf_model->lltask_queue) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); ret = DNN_ERROR; goto err; @@ -346,7 +346,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu goto err; } - ret = execute_model_tf(request, tf_model->inference_queue); + ret = execute_model_tf(request, tf_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; @@ -901,7 +901,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ if (!item) { goto err; } - item->inference = NULL; + item->lltask = NULL; item->infer_request = tf_create_inference_request(); 
if (!item->infer_request) { av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for TensorFlow inference request\n"); @@ -919,8 +919,8 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ } } - tf_model->inference_queue = ff_queue_create(); - if (!tf_model->inference_queue) { + tf_model->lltask_queue = ff_queue_create(); + if (!tf_model->lltask_queue) { goto err; } @@ -944,15 +944,15 @@ err: static DNNReturnType fill_model_input_tf(TFModel *tf_model, TFRequestItem *request) { DNNData input; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; TFInferRequest *infer_request; TFContext *ctx = &tf_model->ctx; - inference = ff_queue_pop_front(tf_model->inference_queue); - av_assert0(inference); - task = inference->task; - request->inference = inference; + lltask = ff_queue_pop_front(tf_model->lltask_queue); + av_assert0(lltask); + task = lltask->task; + request->lltask = lltask; if (get_input_tf(tf_model, &input, task->input_name) != DNN_SUCCESS) { goto err; @@ -1030,8 +1030,8 @@ err: static void infer_completion_callback(void *args) { TFRequestItem *request = args; - InferenceItem *inference = request->inference; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltask; + TaskItem *task = lltask->task; DNNData *outputs; TFInferRequest *infer_request = request->infer_request; TFModel *tf_model = task->model; @@ -1086,20 +1086,20 @@ err: } } -static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue) +static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue) { TFModel *tf_model; TFContext *ctx; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; - if (ff_queue_size(inference_queue) == 0) { + if (ff_queue_size(lltask_queue) == 0) { destroy_request_item(&request); return DNN_SUCCESS; } - inference = ff_queue_peek_front(inference_queue); - task = inference->task; + lltask = ff_queue_peek_front(lltask_queue); + task = lltask->task; tf_model = task->model; ctx = &tf_model->ctx; @@ -1155,8 +1155,8 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams * return DNN_ERROR; } - if (extract_inference_from_task(task, tf_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(task, tf_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); return DNN_ERROR; } @@ -1165,7 +1165,7 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams * av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - return execute_model_tf(request, tf_model->inference_queue); + return execute_model_tf(request, tf_model->lltask_queue); } DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) @@ -1181,7 +1181,7 @@ DNNReturnType ff_dnn_flush_tf(const DNNModel *model) TFRequestItem *request; DNNReturnType ret; - if (ff_queue_size(tf_model->inference_queue) == 0) { + if (ff_queue_size(tf_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } @@ -1216,11 +1216,11 @@ void ff_dnn_free_model_tf(DNNModel **model) } ff_safe_queue_destroy(tf_model->request_queue); - while (ff_queue_size(tf_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(tf_model->inference_queue); + while (ff_queue_size(tf_model->lltask_queue) != 0) { + LastLevelTaskItem *item = 
ff_queue_pop_front(tf_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(tf_model->inference_queue); + ff_queue_destroy(tf_model->lltask_queue); while (ff_queue_size(tf_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(tf_model->task_queue);
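The value of the new name is easiest to see in the classify path, where one TaskItem (one frame handed to the filter) fans out into one LastLevelTaskItem per detected bounding box, and the task is only complete once inference_done catches up with inference_todo. The standalone snippet below models that fan-out with simplified stand-ins for the FFmpeg structs and queues; it is illustrative only and not part of the patch (the real logic lives in extract_lltask_from_task() above).

#include <stdio.h>
#include <stdlib.h>

/* simplified stand-ins for the structs touched by this patch */
typedef struct TaskItem {
    const char *frame_name;   /* stands in for the AVFrame of one filter-level task */
    int inference_todo;
    int inference_done;
} TaskItem;

typedef struct LastLevelTaskItem {
    TaskItem *task;
    unsigned bbox_index;      /* which detected bbox this backend request covers */
} LastLevelTaskItem;

int main(void)
{
    TaskItem task = { "frame0", 0, 0 };
    unsigned nb_bboxes = 3;   /* pretend the detect filter found three boxes */
    LastLevelTaskItem **queue = calloc(nb_bboxes, sizeof(*queue));
    if (!queue)
        return 1;

    /* fan-out: one last-level task per bbox, mirroring extract_lltask_from_task() */
    for (unsigned i = 0; i < nb_bboxes; i++) {
        LastLevelTaskItem *lltask = malloc(sizeof(*lltask));
        if (!lltask)
            return 1;
        lltask->task = &task;
        lltask->bbox_index = i;
        task.inference_todo++;
        queue[i] = lltask;
    }

    /* the backend pops and completes them one by one */
    for (unsigned i = 0; i < nb_bboxes; i++) {
        queue[i]->task->inference_done++;
        free(queue[i]);
    }
    free(queue);

    printf("%s: %d/%d last-level tasks done\n",
           task.frame_name, task.inference_done, task.inference_todo);
    return 0;
}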
From patchwork Tue Aug 24 05:31:50 2021
X-Patchwork-Id: 29752
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 24 Aug 2021 11:01:50 +0530
Message-Id: <20210824053151.14305-6-shubhanshu.e01@gmail.com>
In-Reply-To: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
References: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3 6/7] doc/filters.texi: Include dnn_processing in docs of sr and derain filter
Cc: Shubhanshu Saxena
Signed-off-by: Shubhanshu Saxena --- doc/filters.texi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/filters.texi b/doc/filters.texi index d99368e64b..112adc5d94 100644 --- a/doc/filters.texi +++ b/doc/filters.texi @@ -9982,7 +9982,7 @@ Note that different backends use different file formats. TensorFlow and native backend can load files for only its format. @end table -It can also be finished with @ref{dnn_processing} filter. +To get full functionality (such as async execution), please use the @ref{dnn_processing} filter. @section deshake @@ -19271,7 +19271,7 @@ Default value is @code{2}. Scale factor is necessary for SRCNN model, because it input upscaled using bicubic upscaling with proper scale factor. @end table -This feature can also be finished with @ref{dnn_processing} filter. +To get full functionality (such as async execution), please use the @ref{dnn_processing} filter. @section ssim
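To make the pointer to dnn_processing concrete, an invocation of the kind this documentation change suggests might look roughly like the example below. The model file srcnn.pb and the tensor names x/y are placeholders for whatever the loaded model actually defines, and async=1 is already the backend default; it is spelled out here only to show where the former async option now goes after patch 4/7.

@example
ffmpeg -i input.jpg -vf "dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y:backend_configs='async=1'" output.jpg
@end example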
From patchwork Tue Aug 24 05:31:51 2021
X-Patchwork-Id: 29750
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 24 Aug 2021 11:01:51 +0530
Message-Id: <20210824053151.14305-7-shubhanshu.e01@gmail.com>
In-Reply-To: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
References: <20210824053151.14305-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v3 7/7] libavfilter: Send only input frame for DNN Detect and Classify
Cc:
Shubhanshu Saxena Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: Z2fMOWY5IhQf This commit updates the following two filters to send only the input frame and send NULL as output frame instead of input frame to the DNN backends. 1. vf_dnn_detect 2. vf_dnn_classify Signed-off-by: Shubhanshu Saxena --- libavfilter/dnn/dnn_backend_common.c | 2 +- libavfilter/dnn/dnn_backend_openvino.c | 5 +++-- libavfilter/vf_dnn_classify.c | 14 ++++++-------- libavfilter/vf_dnn_detect.c | 14 ++++++-------- 4 files changed, 16 insertions(+), 19 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c index d2bc016fef..6a9c4cc87f 100644 --- a/libavfilter/dnn/dnn_backend_common.c +++ b/libavfilter/dnn/dnn_backend_common.c @@ -38,7 +38,7 @@ int ff_check_exec_params(void *ctx, DNNBackendType backend, DNNFunctionType func return AVERROR(EINVAL); } - if (!exec_params->out_frame) { + if (!exec_params->out_frame && func_type == DFT_PROCESS_FRAME) { av_log(ctx, AV_LOG_ERROR, "out frame is NULL when execute model.\n"); return AVERROR(EINVAL); } diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index 76dc06c6d7..f5b1454d21 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -272,14 +272,14 @@ static void infer_completion_callback(void *args) av_log(ctx, AV_LOG_ERROR, "detect filter needs to provide post proc\n"); return; } - ov_model->model->detect_post_proc(task->out_frame, &output, 1, ov_model->model->filter_ctx); + ov_model->model->detect_post_proc(task->in_frame, &output, 1, ov_model->model->filter_ctx); break; case DFT_ANALYTICS_CLASSIFY: if (!ov_model->model->classify_post_proc) { av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n"); return; } - ov_model->model->classify_post_proc(task->out_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx); + ov_model->model->classify_post_proc(task->in_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx); break; default: av_assert0(!"should not reach here"); @@ -819,6 +819,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * if (model->func_type == DFT_ANALYTICS_CLASSIFY) { // Classification filter has not been completely // tested with the sync mode. So, do not support now. 
+ avpriv_report_missing_feature(ctx, "classify for sync execution"); return DNN_ERROR; } diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c index d5ee65d403..d1ba8dffbc 100644 --- a/libavfilter/vf_dnn_classify.c +++ b/libavfilter/vf_dnn_classify.c @@ -225,13 +225,12 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); - if (out_frame) { - av_assert0(in_frame == out_frame); - ret = ff_filter_frame(outlink, out_frame); + if (async_state == DAST_SUCCESS) { + ret = ff_filter_frame(outlink, in_frame); if (ret < 0) return ret; if (out_pts) - *out_pts = out_frame->pts + pts; + *out_pts = in_frame->pts + pts; } av_usleep(5000); } while (async_state >= DAST_NOT_READY); @@ -258,7 +257,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx) if (ret < 0) return ret; if (ret > 0) { - if (ff_dnn_execute_model_classification(&ctx->dnnctx, in, in, ctx->target) != DNN_SUCCESS) { + if (ff_dnn_execute_model_classification(&ctx->dnnctx, in, NULL, ctx->target) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -269,9 +268,8 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx) AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); - if (out_frame) { - av_assert0(in_frame == out_frame); - ret = ff_filter_frame(outlink, out_frame); + if (async_state == DAST_SUCCESS) { + ret = ff_filter_frame(outlink, in_frame); if (ret < 0) return ret; got_frame = 1; diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 809d70b930..637874b2a1 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -368,13 +368,12 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); - if (out_frame) { - av_assert0(in_frame == out_frame); - ret = ff_filter_frame(outlink, out_frame); + if (async_state == DAST_SUCCESS) { + ret = ff_filter_frame(outlink, in_frame); if (ret < 0) return ret; if (out_pts) - *out_pts = out_frame->pts + pts; + *out_pts = in_frame->pts + pts; } av_usleep(5000); } while (async_state >= DAST_NOT_READY); @@ -401,7 +400,7 @@ static int dnn_detect_activate(AVFilterContext *filter_ctx) if (ret < 0) return ret; if (ret > 0) { - if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, NULL) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -412,9 +411,8 @@ static int dnn_detect_activate(AVFilterContext *filter_ctx) AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); - if (out_frame) { - av_assert0(in_frame == out_frame); - ret = ff_filter_frame(outlink, out_frame); + if (async_state == DAST_SUCCESS) { + ret = ff_filter_frame(outlink, in_frame); if (ret < 0) return ret; got_frame = 1;