From patchwork Wed Aug 25 10:39:34 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29770
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:34 +0530
Message-Id: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 1/7] lavfi/dnn: Task-based Inference in Native Backend

This commit rearranges the code in the
Native Backend to use the TaskItem for inference. Signed-off-by: Shubhanshu Saxena --- libavfilter/dnn/dnn_backend_native.c | 176 ++++++++++++++++++--------- libavfilter/dnn/dnn_backend_native.h | 2 + 2 files changed, 121 insertions(+), 57 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index a6be27f1fd..3b2a3aa55d 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -45,9 +45,29 @@ static const AVClass dnn_native_class = { .category = AV_CLASS_CATEGORY_FILTER, }; -static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame, - const char **output_names, uint32_t nb_output, AVFrame *out_frame, - int do_ioproc); +static DNNReturnType execute_model_native(Queue *inference_queue); + +static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +{ + NativeModel *native_model = task->model; + NativeContext *ctx = &native_model->ctx; + InferenceItem *inference = av_malloc(sizeof(*inference)); + + if (!inference) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + return DNN_ERROR; + } + task->inference_todo = 1; + task->inference_done = 0; + inference->task = task; + + if (ff_queue_push_back(inference_queue, inference) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); + av_freep(&inference); + return DNN_ERROR; + } + return DNN_SUCCESS; +} static DNNReturnType get_input_native(void *model, DNNData *input, const char *input_name) { @@ -78,34 +98,36 @@ static DNNReturnType get_input_native(void *model, DNNData *input, const char *i static DNNReturnType get_output_native(void *model, const char *input_name, int input_width, int input_height, const char *output_name, int *output_width, int *output_height) { - DNNReturnType ret; + DNNReturnType ret = 0; NativeModel *native_model = model; NativeContext *ctx = &native_model->ctx; - AVFrame *in_frame = av_frame_alloc(); - AVFrame *out_frame = NULL; - - if (!in_frame) { - av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for input frame\n"); - return DNN_ERROR; + TaskItem task; + DNNExecBaseParams exec_params = { + .input_name = input_name, + .output_names = &output_name, + .nb_output = 1, + .in_frame = NULL, + .out_frame = NULL, + }; + + if (ff_dnn_fill_gettingoutput_task(&task, &exec_params, native_model, input_height, input_width, ctx) != DNN_SUCCESS) { + ret = DNN_ERROR; + goto err; } - out_frame = av_frame_alloc(); - - if (!out_frame) { - av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for output frame\n"); - av_frame_free(&in_frame); - return DNN_ERROR; + if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + ret = DNN_ERROR; + goto err; } - in_frame->width = input_width; - in_frame->height = input_height; - - ret = execute_model_native(native_model->model, input_name, in_frame, &output_name, 1, out_frame, 0); - *output_width = out_frame->width; - *output_height = out_frame->height; + ret = execute_model_native(native_model->inference_queue); + *output_width = task.out_frame->width; + *output_height = task.out_frame->height; - av_frame_free(&out_frame); - av_frame_free(&in_frame); +err: + av_frame_free(&task.out_frame); + av_frame_free(&task.in_frame); return ret; } @@ -190,6 +212,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; } + native_model->inference_queue 
= ff_queue_create(); + if (!native_model->inference_queue) { + goto fail; + } + for (layer = 0; layer < native_model->layers_num; ++layer){ layer_type = (int32_t)avio_rl32(model_file_context); dnn_size += 4; @@ -259,50 +286,66 @@ fail: return NULL; } -static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame, - const char **output_names, uint32_t nb_output, AVFrame *out_frame, - int do_ioproc) +static DNNReturnType execute_model_native(Queue *inference_queue) { - NativeModel *native_model = model->model; - NativeContext *ctx = &native_model->ctx; + NativeModel *native_model = NULL; + NativeContext *ctx = NULL; int32_t layer; DNNData input, output; DnnOperand *oprd = NULL; + InferenceItem *inference = NULL; + TaskItem *task = NULL; + DNNReturnType ret = 0; + + inference = ff_queue_pop_front(inference_queue); + if (!inference) { + av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n"); + ret = DNN_ERROR; + goto err; + } + task = inference->task; + native_model = task->model; + ctx = &native_model->ctx; if (native_model->layers_num <= 0 || native_model->operands_num <= 0) { av_log(ctx, AV_LOG_ERROR, "No operands or layers in model\n"); - return DNN_ERROR; + ret = DNN_ERROR; + goto err; } for (int i = 0; i < native_model->operands_num; ++i) { oprd = &native_model->operands[i]; - if (strcmp(oprd->name, input_name) == 0) { + if (strcmp(oprd->name, task->input_name) == 0) { if (oprd->type != DOT_INPUT) { - av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", input_name); - return DNN_ERROR; + av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", task->input_name); + ret = DNN_ERROR; + goto err; } break; } oprd = NULL; } if (!oprd) { - av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", input_name); - return DNN_ERROR; + av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", task->input_name); + ret = DNN_ERROR; + goto err; } - oprd->dims[1] = in_frame->height; - oprd->dims[2] = in_frame->width; + oprd->dims[1] = task->in_frame->height; + oprd->dims[2] = task->in_frame->width; av_freep(&oprd->data); oprd->length = ff_calculate_operand_data_length(oprd); if (oprd->length <= 0) { av_log(ctx, AV_LOG_ERROR, "The input data length overflow\n"); - return DNN_ERROR; + ret = DNN_ERROR; + goto err; } oprd->data = av_malloc(oprd->length); if (!oprd->data) { av_log(ctx, AV_LOG_ERROR, "Failed to malloc memory for input data\n"); - return DNN_ERROR; + ret = DNN_ERROR; + goto err; } input.height = oprd->dims[1]; @@ -310,19 +353,20 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp input.channels = oprd->dims[3]; input.data = oprd->data; input.dt = oprd->data_type; - if (do_ioproc) { + if (task->do_ioproc) { if (native_model->model->frame_pre_proc != NULL) { - native_model->model->frame_pre_proc(in_frame, &input, native_model->model->filter_ctx); + native_model->model->frame_pre_proc(task->in_frame, &input, native_model->model->filter_ctx); } else { - ff_proc_from_frame_to_dnn(in_frame, &input, ctx); + ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx); } } - if (nb_output != 1) { + if (task->nb_output != 1) { // currently, the filter does not need multiple outputs, // so we just pending the support until we really need it. 
avpriv_report_missing_feature(ctx, "multiple outputs"); - return DNN_ERROR; + ret = DNN_ERROR; + goto err; } for (layer = 0; layer < native_model->layers_num; ++layer){ @@ -333,13 +377,14 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp native_model->layers[layer].params, &native_model->ctx) == DNN_ERROR) { av_log(ctx, AV_LOG_ERROR, "Failed to execute model\n"); - return DNN_ERROR; + ret = DNN_ERROR; + goto err; } } - for (uint32_t i = 0; i < nb_output; ++i) { + for (uint32_t i = 0; i < task->nb_output; ++i) { DnnOperand *oprd = NULL; - const char *output_name = output_names[i]; + const char *output_name = task->output_names[i]; for (int j = 0; j < native_model->operands_num; ++j) { if (strcmp(native_model->operands[j].name, output_name) == 0) { oprd = &native_model->operands[j]; @@ -349,7 +394,8 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp if (oprd == NULL) { av_log(ctx, AV_LOG_ERROR, "Could not find output in model\n"); - return DNN_ERROR; + ret = DNN_ERROR; + goto err; } output.data = oprd->data; @@ -358,32 +404,43 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp output.channels = oprd->dims[3]; output.dt = oprd->data_type; - if (do_ioproc) { + if (task->do_ioproc) { if (native_model->model->frame_post_proc != NULL) { - native_model->model->frame_post_proc(out_frame, &output, native_model->model->filter_ctx); + native_model->model->frame_post_proc(task->out_frame, &output, native_model->model->filter_ctx); } else { - ff_proc_from_dnn_to_frame(out_frame, &output, ctx); + ff_proc_from_dnn_to_frame(task->out_frame, &output, ctx); } } else { - out_frame->width = output.width; - out_frame->height = output.height; + task->out_frame->width = output.width; + task->out_frame->height = output.height; } } - - return DNN_SUCCESS; + task->inference_done++; +err: + av_freep(&inference); + return ret; } DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params) { NativeModel *native_model = model->model; NativeContext *ctx = &native_model->ctx; + TaskItem task; if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) { return DNN_ERROR; } - return execute_model_native(model, exec_params->input_name, exec_params->in_frame, - exec_params->output_names, exec_params->nb_output, exec_params->out_frame, 1); + if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) { + return DNN_ERROR; + } + + if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + return DNN_ERROR; + } + + return execute_model_native(native_model->inference_queue); } int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd) @@ -435,6 +492,11 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&native_model->operands); } + while (ff_queue_size(native_model->inference_queue) != 0) { + InferenceItem *item = ff_queue_pop_front(native_model->inference_queue); + av_freep(&item); + } + ff_queue_destroy(native_model->inference_queue); av_freep(&native_model); } av_freep(model); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index 89bcb8e358..1b9d5bdf2d 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -30,6 +30,7 @@ #include "../dnn_interface.h" #include "libavformat/avio.h" #include "libavutil/opt.h" +#include "queue.h" /** * the enum value of 
DNNLayerType should not be changed, @@ -126,6 +127,7 @@ typedef struct NativeModel{ int32_t layers_num; DnnOperand *operands; int32_t operands_num; + Queue *inference_queue; } NativeModel; DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
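In short, the flow this patch introduces is: a filter request is first described by a TaskItem, the TaskItem is broken into one or more InferenceItems pushed onto the model's inference queue, and the backend pops and executes items from that queue while counting progress via inference_todo/inference_done. Below is a minimal, self-contained sketch of that pattern; the simplified structs and the toy linked-list queue are stand-ins for illustration only, not the actual TaskItem/InferenceItem/Queue API from libavfilter/dnn.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the real TaskItem/InferenceItem types in
 * libavfilter/dnn/dnn_backend_common.h; the real ones also carry frames,
 * output names and pre/post-processing callbacks. */
typedef struct Task {
    const char *input_name;
    int inference_todo;
    int inference_done;
} Task;

typedef struct Inference {
    Task *task;
    struct Inference *next;  /* toy linked-list queue instead of FFmpeg's Queue */
} Inference;

static Inference *q_head, *q_tail;

/* Same role as extract_inference_from_task(): wrap one task into an
 * inference item and push it onto the backend's inference queue. */
static int extract_inference_from_task(Task *task)
{
    Inference *inf = malloc(sizeof(*inf));
    if (!inf)
        return -1;
    task->inference_todo = 1;
    task->inference_done = 0;
    inf->task = task;
    inf->next = NULL;
    if (q_tail)
        q_tail->next = inf;
    else
        q_head = inf;
    q_tail = inf;
    return 0;
}

/* Same role as execute_model_native(): pop one inference item off the
 * queue, run it, and mark progress on the owning task. */
static int execute_one(void)
{
    Inference *inf = q_head;
    if (!inf)
        return -1;
    q_head = inf->next;
    if (!q_head)
        q_tail = NULL;
    printf("running inference for input '%s'\n", inf->task->input_name);
    inf->task->inference_done++;
    free(inf);
    return 0;
}

int main(void)
{
    Task task = { .input_name = "x" };
    if (extract_inference_from_task(&task) == 0)
        execute_one();
    printf("todo=%d done=%d\n", task.inference_todo, task.inference_done);
    return 0;
}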
From patchwork Wed Aug 25 10:39:35 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29773
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:35 +0530
Message-Id: <20210825103940.28798-2-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 2/7] libavfilter: Unify Execution Modes in DNN Filters
Reply-To: FFmpeg development
discussions and patches Cc: Shubhanshu Saxena Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: Y5qWjFY7rnaZ This commit unifies the async and sync mode from the DNN filters' perspective. As of this commit, the Native backend only supports synchronous execution mode. Now the user can switch between async and sync mode by using the 'async' option in the backend_configs. The values can be 1 for async and 0 for sync mode of execution. This commit affects the following filters: 1. vf_dnn_classify 2. vf_dnn_detect 3. vf_dnn_processing 4. vf_sr 5. vf_derain Signed-off-by: Shubhanshu Saxena --- libavfilter/dnn/dnn_backend_common.c | 2 +- libavfilter/dnn/dnn_backend_common.h | 5 +- libavfilter/dnn/dnn_backend_native.c | 59 +++++++++++++++- libavfilter/dnn/dnn_backend_native.h | 6 ++ libavfilter/dnn/dnn_backend_openvino.c | 94 ++++++++++---------------- libavfilter/dnn/dnn_backend_openvino.h | 3 +- libavfilter/dnn/dnn_backend_tf.c | 35 ++-------- libavfilter/dnn/dnn_backend_tf.h | 3 +- libavfilter/dnn/dnn_interface.c | 8 +-- libavfilter/dnn_filter_common.c | 23 +------ libavfilter/dnn_filter_common.h | 3 +- libavfilter/dnn_interface.h | 4 +- libavfilter/vf_derain.c | 7 ++ libavfilter/vf_dnn_classify.c | 4 +- libavfilter/vf_dnn_detect.c | 10 +-- libavfilter/vf_dnn_processing.c | 10 +-- libavfilter/vf_sr.c | 8 +++ 17 files changed, 142 insertions(+), 142 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c index 426683b73d..d2bc016fef 100644 --- a/libavfilter/dnn/dnn_backend_common.c +++ b/libavfilter/dnn/dnn_backend_common.c @@ -138,7 +138,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_ return DNN_SUCCESS; } -DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out) { TaskItem *task = ff_queue_peek_front(task_queue); diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h index 604c1d3bd7..78e62a94a2 100644 --- a/libavfilter/dnn/dnn_backend_common.h +++ b/libavfilter/dnn/dnn_backend_common.h @@ -29,7 +29,8 @@ #include "libavutil/thread.h" #define DNN_BACKEND_COMMON_OPTIONS \ - { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS }, + { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS }, \ + { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, FLAGS }, // one task for one function call from dnn interface typedef struct TaskItem { @@ -135,7 +136,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_ * @retval DAST_NOT_READY if inference not completed yet. 
* @retval DAST_SUCCESS if result successfully extracted */ -DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out); /** * Allocate input and output frames and fill the Task diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index 3b2a3aa55d..2d34b88f8a 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -34,6 +34,7 @@ #define FLAGS AV_OPT_FLAG_FILTERING_PARAM static const AVOption dnn_native_options[] = { { "conv2d_threads", "threads num for conv2d layer", OFFSET(options.conv2d_threads), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS }, + { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS }, { NULL }, }; @@ -189,6 +190,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; native_model->model = model; + if (native_model->ctx.options.async) { + av_log(&native_model->ctx, AV_LOG_WARNING, "Async not supported. Rolling back to sync\n"); + native_model->ctx.options.async = 0; + } + #if !HAVE_PTHREAD_CANCEL if (native_model->ctx.options.conv2d_threads > 1){ av_log(&native_model->ctx, AV_LOG_WARNING, "'conv2d_threads' option was set but it is not supported " @@ -212,6 +218,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; } + native_model->task_queue = ff_queue_create(); + if (!native_model->task_queue) { + goto fail; + } + native_model->inference_queue = ff_queue_create(); if (!native_model->inference_queue) { goto fail; @@ -425,17 +436,30 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara { NativeModel *native_model = model->model; NativeContext *ctx = &native_model->ctx; - TaskItem task; + TaskItem *task; if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) { return DNN_ERROR; } - if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) { + task = av_malloc(sizeof(*task)); + if (!task) { + av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n"); return DNN_ERROR; } - if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { + if (ff_dnn_fill_task(task, exec_params, native_model, ctx->options.async, 1) != DNN_SUCCESS) { + av_freep(&task); + return DNN_ERROR; + } + + if (ff_queue_push_back(native_model->task_queue, task) < 0) { + av_freep(&task); + av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n"); + return DNN_ERROR; + } + + if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); return DNN_ERROR; } @@ -443,6 +467,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara return execute_model_native(native_model->inference_queue); } +DNNReturnType ff_dnn_flush_native(const DNNModel *model) +{ + NativeModel *native_model = model->model; + + if (ff_queue_size(native_model->inference_queue) == 0) { + // no pending task need to flush + return DNN_SUCCESS; + } + + // for now, use sync node with flush operation + // Switch to async when it is supported + return execute_model_native(native_model->inference_queue); +} + +DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out) +{ + NativeModel *native_model = model->model; + return 
ff_dnn_get_result_common(native_model->task_queue, in, out); +} + int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd) { int32_t result = 1; @@ -497,6 +541,15 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&item); } ff_queue_destroy(native_model->inference_queue); + + while (ff_queue_size(native_model->task_queue) != 0) { + TaskItem *item = ff_queue_pop_front(native_model->task_queue); + av_frame_free(&item->in_frame); + av_frame_free(&item->out_frame); + av_freep(&item); + } + ff_queue_destroy(native_model->task_queue); + av_freep(&native_model); } av_freep(model); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index 1b9d5bdf2d..ca61bb353f 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -111,6 +111,7 @@ typedef struct InputParams{ } InputParams; typedef struct NativeOptions{ + uint8_t async; uint32_t conv2d_threads; } NativeOptions; @@ -127,6 +128,7 @@ typedef struct NativeModel{ int32_t layers_num; DnnOperand *operands; int32_t operands_num; + Queue *task_queue; Queue *inference_queue; } NativeModel; @@ -134,6 +136,10 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params); +DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out); + +DNNReturnType ff_dnn_flush_native(const DNNModel *model); + void ff_dnn_free_model_native(DNNModel **model); // NOTE: User must check for error (return value <= 0) to handle diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index c825e70c82..abf3f4b6b6 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -39,6 +39,7 @@ typedef struct OVOptions{ char *device_type; int nireq; + uint8_t async; int batch_size; int input_resizable; } OVOptions; @@ -758,55 +759,6 @@ err: } DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params) -{ - OVModel *ov_model = model->model; - OVContext *ctx = &ov_model->ctx; - TaskItem task; - OVRequestItem *request; - - if (ff_check_exec_params(ctx, DNN_OV, model->func_type, exec_params) != 0) { - return DNN_ERROR; - } - - if (model->func_type == DFT_ANALYTICS_CLASSIFY) { - // Once we add async support for tensorflow backend and native backend, - // we'll combine the two sync/async functions in dnn_interface.h to - // simplify the code in filter, and async will be an option within backends. - // so, do not support now, and classify filter will not call this function. 
- return DNN_ERROR; - } - - if (ctx->options.batch_size > 1) { - avpriv_report_missing_feature(ctx, "batch mode for sync execution"); - return DNN_ERROR; - } - - if (!ov_model->exe_network) { - if (init_model_ov(ov_model, exec_params->input_name, exec_params->output_names[0]) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n"); - return DNN_ERROR; - } - } - - if (ff_dnn_fill_task(&task, exec_params, ov_model, 0, 1) != DNN_SUCCESS) { - return DNN_ERROR; - } - - if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); - return DNN_ERROR; - } - - request = ff_safe_queue_pop_front(ov_model->request_queue); - if (!request) { - av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); - return DNN_ERROR; - } - - return execute_model_ov(request, ov_model->inference_queue); -} - -DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params) { OVModel *ov_model = model->model; OVContext *ctx = &ov_model->ctx; @@ -831,7 +783,8 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa return DNN_ERROR; } - if (ff_dnn_fill_task(task, exec_params, ov_model, 1, 1) != DNN_SUCCESS) { + if (ff_dnn_fill_task(task, exec_params, ov_model, ctx->options.async, 1) != DNN_SUCCESS) { + av_freep(&task); return DNN_ERROR; } @@ -846,26 +799,47 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa return DNN_ERROR; } - while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + if (ctx->options.async) { + while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + request = ff_safe_queue_pop_front(ov_model->request_queue); + if (!request) { + av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); + return DNN_ERROR; + } + + ret = execute_model_ov(request, ov_model->inference_queue); + if (ret != DNN_SUCCESS) { + return ret; + } + } + + return DNN_SUCCESS; + } + else { + if (model->func_type == DFT_ANALYTICS_CLASSIFY) { + // Classification filter has not been completely + // tested with the sync mode. So, do not support now. 
+ return DNN_ERROR; + } + + if (ctx->options.batch_size > 1) { + avpriv_report_missing_feature(ctx, "batch mode for sync execution"); + return DNN_ERROR; + } + request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - - ret = execute_model_ov(request, ov_model->inference_queue); - if (ret != DNN_SUCCESS) { - return ret; - } + return execute_model_ov(request, ov_model->inference_queue); } - - return DNN_SUCCESS; } -DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out) { OVModel *ov_model = model->model; - return ff_dnn_get_async_result_common(ov_model->task_queue, in, out); + return ff_dnn_get_result_common(ov_model->task_queue, in, out); } DNNReturnType ff_dnn_flush_ov(const DNNModel *model) diff --git a/libavfilter/dnn/dnn_backend_openvino.h b/libavfilter/dnn/dnn_backend_openvino.h index 046d0c5b5a..0bbca0c057 100644 --- a/libavfilter/dnn/dnn_backend_openvino.h +++ b/libavfilter/dnn/dnn_backend_openvino.h @@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out); DNNReturnType ff_dnn_flush_ov(const DNNModel *model); void ff_dnn_free_model_ov(DNNModel **model); diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index ffec1b1328..4a0b561f29 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -42,6 +42,7 @@ typedef struct TFOptions{ char *sess_config; + uint8_t async; uint32_t nireq; } TFOptions; @@ -1121,34 +1122,6 @@ err: DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params) { - TFModel *tf_model = model->model; - TFContext *ctx = &tf_model->ctx; - TaskItem task; - TFRequestItem *request; - - if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0) { - return DNN_ERROR; - } - - if (ff_dnn_fill_task(&task, exec_params, tf_model, 0, 1) != DNN_SUCCESS) { - return DNN_ERROR; - } - - if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); - return DNN_ERROR; - } - - request = ff_safe_queue_pop_front(tf_model->request_queue); - if (!request) { - av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); - return DNN_ERROR; - } - - return execute_model_tf(request, tf_model->inference_queue); -} - -DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params) { TFModel *tf_model = model->model; TFContext *ctx = &tf_model->ctx; TaskItem *task; @@ -1164,7 +1137,7 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa return DNN_ERROR; } - if (ff_dnn_fill_task(task, exec_params, tf_model, 1, 1) != DNN_SUCCESS) { + if (ff_dnn_fill_task(task, exec_params, tf_model, ctx->options.async, 1) != DNN_SUCCESS) { av_freep(&task); return DNN_ERROR; } @@ -1188,10 +1161,10 @@ DNNReturnType 
ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa return execute_model_tf(request, tf_model->inference_queue); } -DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) { TFModel *tf_model = model->model; - return ff_dnn_get_async_result_common(tf_model->task_queue, in, out); + return ff_dnn_get_result_common(tf_model->task_queue, in, out); } DNNReturnType ff_dnn_flush_tf(const DNNModel *model) diff --git a/libavfilter/dnn/dnn_backend_tf.h b/libavfilter/dnn/dnn_backend_tf.h index aec0fc2011..f14ea8c47a 100644 --- a/libavfilter/dnn/dnn_backend_tf.h +++ b/libavfilter/dnn/dnn_backend_tf.h @@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); DNNReturnType ff_dnn_flush_tf(const DNNModel *model); void ff_dnn_free_model_tf(DNNModel **model); diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c index 81af934dd5..554a36b0dc 100644 --- a/libavfilter/dnn/dnn_interface.c +++ b/libavfilter/dnn/dnn_interface.c @@ -42,14 +42,15 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) case DNN_NATIVE: dnn_module->load_model = &ff_dnn_load_model_native; dnn_module->execute_model = &ff_dnn_execute_model_native; + dnn_module->get_result = &ff_dnn_get_result_native; + dnn_module->flush = &ff_dnn_flush_native; dnn_module->free_model = &ff_dnn_free_model_native; break; case DNN_TF: #if (CONFIG_LIBTENSORFLOW == 1) dnn_module->load_model = &ff_dnn_load_model_tf; dnn_module->execute_model = &ff_dnn_execute_model_tf; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_tf; - dnn_module->get_async_result = &ff_dnn_get_async_result_tf; + dnn_module->get_result = &ff_dnn_get_result_tf; dnn_module->flush = &ff_dnn_flush_tf; dnn_module->free_model = &ff_dnn_free_model_tf; #else @@ -61,8 +62,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) #if (CONFIG_LIBOPENVINO == 1) dnn_module->load_model = &ff_dnn_load_model_ov; dnn_module->execute_model = &ff_dnn_execute_model_ov; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_ov; - dnn_module->get_async_result = &ff_dnn_get_async_result_ov; + dnn_module->get_result = &ff_dnn_get_result_ov; dnn_module->flush = &ff_dnn_flush_ov; dnn_module->free_model = &ff_dnn_free_model_ov; #else diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index fcee7482c3..455eaa37f4 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -84,11 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } - if (!ctx->dnn_module->execute_model_async && ctx->async) { - ctx->async = 0; - av_log(filter_ctx, AV_LOG_WARNING, "this backend does not support async execution, roll back to sync.\n"); - } - #if !HAVE_PTHREAD_CANCEL if (ctx->async) { ctx->async = 0; @@ -141,18 +136,6 @@ DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame * return 
(ctx->dnn_module->execute_model)(ctx->model, &exec_params); } -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame) -{ - DNNExecBaseParams exec_params = { - .input_name = ctx->model_inputname, - .output_names = (const char **)ctx->model_outputnames, - .nb_output = ctx->nb_outputs, - .in_frame = in_frame, - .out_frame = out_frame, - }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &exec_params); -} - DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target) { DNNExecClassificationParams class_params = { @@ -165,12 +148,12 @@ DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_f }, .target = target, }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &class_params.base); + return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base); } -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) { - return (ctx->dnn_module->get_async_result)(ctx->model, in_frame, out_frame); + return (ctx->dnn_module->get_result)(ctx->model, in_frame, out_frame); } DNNReturnType ff_dnn_flush(DnnContext *ctx) diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 8e43d8c8dc..4d92c1dc36 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -56,9 +56,8 @@ int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc); DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input); DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height); DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target); -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); DNNReturnType ff_dnn_flush(DnnContext *ctx); void ff_dnn_uninit(DnnContext *ctx); diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h index 5e9ffeb077..37e89d9789 100644 --- a/libavfilter/dnn_interface.h +++ b/libavfilter/dnn_interface.h @@ -114,10 +114,8 @@ typedef struct DNNModule{ DNNModel *(*load_model)(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); // Executes model with specified input and output. Returns DNN_ERROR otherwise. DNNReturnType (*execute_model)(const DNNModel *model, DNNExecBaseParams *exec_params); - // Executes model with specified input and output asynchronously. Returns DNN_ERROR otherwise. - DNNReturnType (*execute_model_async)(const DNNModel *model, DNNExecBaseParams *exec_params); // Retrieve inference result. - DNNAsyncStatusType (*get_async_result)(const DNNModel *model, AVFrame **in, AVFrame **out); + DNNAsyncStatusType (*get_result)(const DNNModel *model, AVFrame **in, AVFrame **out); // Flush all the pending tasks. DNNReturnType (*flush)(const DNNModel *model); // Frees memory allocated for model. 
diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c index 720cc70f5c..97bdb4e843 100644 --- a/libavfilter/vf_derain.c +++ b/libavfilter/vf_derain.c @@ -68,6 +68,7 @@ static int query_formats(AVFilterContext *ctx) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *ctx = inlink->dst; AVFilterLink *outlink = ctx->outputs[0]; DRContext *dr_context = ctx->priv; @@ -88,6 +89,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) av_frame_free(&in); return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&dr_context->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); av_frame_free(&in); diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c index 904067f5f6..d5ee65d403 100644 --- a/libavfilter/vf_dnn_classify.c +++ b/libavfilter/vf_dnn_classify.c @@ -224,7 +224,7 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -268,7 +268,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 200b5897c1..11de376753 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -424,7 +424,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -458,7 +458,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) if (ret < 0) return ret; if (ret > 0) { - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, in) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -468,7 +468,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -496,7 +496,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static int dnn_detect_activate(AVFilterContext *filter_ctx) +static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx) { DnnDetectContext *ctx = filter_ctx->priv; @@ -537,5 +537,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate, + .activate = dnn_detect_activate_async, }; diff --git a/libavfilter/vf_dnn_processing.c 
b/libavfilter/vf_dnn_processing.c index 5f24955263..7435dd4959 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -328,7 +328,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -370,7 +370,7 @@ static int activate_async(AVFilterContext *filter_ctx) return AVERROR(ENOMEM); } av_frame_copy_props(out, in); - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, out) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -380,7 +380,7 @@ static int activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -410,7 +410,7 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static int activate(AVFilterContext *filter_ctx) +static av_unused int activate(AVFilterContext *filter_ctx) { DnnProcessingContext *ctx = filter_ctx->priv; @@ -454,5 +454,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = activate, + .activate = activate_async, }; diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c index f009a868f8..f4fc84d251 100644 --- a/libavfilter/vf_sr.c +++ b/libavfilter/vf_sr.c @@ -119,6 +119,7 @@ static int config_output(AVFilterLink *outlink) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *context = inlink->dst; SRContext *ctx = context->priv; AVFilterLink *outlink = context->outputs[0]; @@ -148,6 +149,13 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&ctx->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); + if (ctx->sws_uv_scale) { sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), in->linesize + 1, 0, ctx->sws_uv_height, out->data + 1, out->linesize + 1);
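From the user's point of view, the unification in patch 2/7 means the execution mode is now selected with the 'async' flag inside backend_configs rather than by a separate code path in each filter. A usage sketch (assuming a build with the OpenVINO backend; the input file, model path and tensor names below are placeholders):

ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=openvino:model=model.xml:input=x:output=y:backend_configs=async=0 output.mp4

With async=0 the backend executes synchronously; leaving it at the default (1 for the OpenVINO and TensorFlow backends, 0 for the native backend, per the option tables in this patch) keeps the asynchronous path, and the filter-side code is identical either way.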
From patchwork Wed Aug 25 10:39:36 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29771
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:36 +0530
Message-Id: <20210825103940.28798-3-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 3/7] libavfilter: Remove synchronous functions from DNN filters

This commit removes the unused sync mode specific code from the DNN filters since the sync and async mode are now unified from the filters' perspective. Signed-off-by: Shubhanshu Saxena --- libavfilter/vf_dnn_detect.c | 71 +--------------------------- libavfilter/vf_dnn_processing.c | 84 +-------------------------------- 2 files changed, 4 insertions(+), 151 deletions(-) diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 11de376753..809d70b930 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -353,63 +353,6 @@ static int dnn_detect_query_formats(AVFilterContext *context) return ff_set_common_formats_from_list(context, pix_fmts); } -static int dnn_detect_filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnDetectContext *ctx = context->priv; - DNNReturnType dnn_result; - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, in); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - return AVERROR(EIO); - } - - return ff_filter_frame(outlink, in); -} - -static int dnn_detect_activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = dnn_detect_filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnDetectContext *ctx = outlink->src->priv;
@@ -439,7 +382,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o return 0; } -static int dnn_detect_activate_async(AVFilterContext *filter_ctx) +static int dnn_detect_activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -496,16 +439,6 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx) -{ - DnnDetectContext *ctx = filter_ctx->priv; - - if (ctx->dnnctx.async) - return dnn_detect_activate_async(filter_ctx); - else - return dnn_detect_activate_sync(filter_ctx); -} - static av_cold void dnn_detect_uninit(AVFilterContext *context) { DnnDetectContext *ctx = context->priv; @@ -537,5 +470,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate_async, + .activate = dnn_detect_activate, }; diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index 7435dd4959..55634efde5 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -244,76 +244,6 @@ static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const AVFrame return 0; } -static int filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnProcessingContext *ctx = context->priv; - DNNReturnType dnn_result; - AVFrame *out; - - out = ff_get_video_buffer(outlink, outlink->w, outlink->h); - if (!out) { - av_frame_free(&in); - return AVERROR(ENOMEM); - } - av_frame_copy_props(out, in); - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, out); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - av_frame_free(&out); - return AVERROR(EIO); - } - - if (isPlanarYUV(in->format)) - copy_uv_planes(ctx, out, in); - - av_frame_free(&in); - return ff_filter_frame(outlink, out); -} - -static int activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnProcessingContext *ctx = outlink->src->priv; @@ -345,7 +275,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) return 0; } -static int activate_async(AVFilterContext *filter_ctx) +static int activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -410,16 +340,6 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static av_unused int activate(AVFilterContext *filter_ctx) -{ - DnnProcessingContext *ctx 
= filter_ctx->priv; - - if (ctx->dnnctx.async) - return activate_async(filter_ctx); - else - return activate_sync(filter_ctx); -} - static av_cold void uninit(AVFilterContext *ctx) { DnnProcessingContext *context = ctx->priv; @@ -454,5 +374,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = activate_async, + .activate = activate, };
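For readers following the series, the pattern that remains after this removal is the activate()-based one shown in the diff above. A minimal sketch of that flow -- assuming the same DnnProcessingContext and ff_dnn_* helpers used in these filters, and omitting the EOF/flush handling the real filters perform -- looks roughly like this (illustrative only, not part of the patch):

static int activate_sketch(AVFilterContext *filter_ctx)
{
    AVFilterLink *inlink  = filter_ctx->inputs[0];
    AVFilterLink *outlink = filter_ctx->outputs[0];
    DnnProcessingContext *ctx = filter_ctx->priv;
    AVFrame *in = NULL;
    int ret;

    FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);

    /* queue every available input frame; the backend may run it asynchronously */
    while ((ret = ff_inlink_consume_frame(inlink, &in)) > 0) {
        AVFrame *out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
        if (!out) {
            av_frame_free(&in);
            return AVERROR(ENOMEM);
        }
        av_frame_copy_props(out, in);
        if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS)
            return AVERROR(EIO);
    }
    if (ret < 0)
        return ret;

    /* drain whatever inferences the backend has finished so far */
    for (;;) {
        AVFrame *in_frame = NULL, *out_frame = NULL;
        if (ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame) != DAST_SUCCESS)
            break;
        av_frame_free(&in_frame);
        ret = ff_filter_frame(outlink, out_frame);
        if (ret < 0)
            return ret;
    }

    FF_FILTER_FORWARD_WANTED(outlink, inlink);
    return FFERROR_NOT_READY;
}

The key point is that the filter no longer branches on an async flag: it always queues frames with ff_dnn_execute_model() and later collects whatever the backend has completed via ff_dnn_get_result().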
From patchwork Wed Aug 25 10:39:37 2021
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:37 +0530
Message-Id: <20210825103940.28798-4-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 4/7] libavfilter: Remove Async Flag from DNN Filter Side
Cc: Shubhanshu Saxena Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: DUhuElyi3CZ1 Remove async flag from filter's perspective after the unification of async and sync modes in the DNN backend. Signed-off-by: Shubhanshu Saxena --- doc/filters.texi | 14 ++++---------- libavfilter/dnn/dnn_backend_tf.c | 7 +++++++ libavfilter/dnn_filter_common.c | 7 ------- libavfilter/dnn_filter_common.h | 2 +- 4 files changed, 12 insertions(+), 18 deletions(-) diff --git a/doc/filters.texi b/doc/filters.texi index b902aca12d..d99368e64b 100644 --- a/doc/filters.texi +++ b/doc/filters.texi @@ -10283,11 +10283,8 @@ and the second line is the name of label id 1, etc. The label id is considered as name if the label file is not provided. @item backend_configs -Set the configs to be passed into backend - -@item async -use DNN async execution if set (default: set), -roll back to sync execution if the backend does not support async. +Set the configs to be passed into backend. To use async execution, set async (default: set). +Roll back to sync execution if the backend does not support async. @end table @@ -10339,15 +10336,12 @@ Set the input name of the dnn network. Set the output name of the dnn network. @item backend_configs -Set the configs to be passed into backend +Set the configs to be passed into backend. To use async execution, set async (default: set). +Roll back to sync execution if the backend does not support async. For tensorflow backend, you can set its configs with @option{sess_config} options, please use tools/python/tf_sess_config.py to get the configs of TensorFlow backend for your system. -@item async -use DNN async execution if set (default: set), -roll back to sync execution if the backend does not support async. - @end table @subsection Examples diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 4a0b561f29..906934d8c0 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -884,6 +884,13 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ ctx->options.nireq = av_cpu_count() / 2 + 1; } +#if !HAVE_PTHREAD_CANCEL + if (ctx->options.async) { + ctx->options.async = 0; + av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n"); + } +#endif + tf_model->request_queue = ff_safe_queue_create(); if (!tf_model->request_queue) { goto err; diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index 455eaa37f4..3045ce0131 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -84,13 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } -#if !HAVE_PTHREAD_CANCEL - if (ctx->async) { - ctx->async = 0; - av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n"); - } -#endif - return 0; } diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 4d92c1dc36..635ae631c1 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -46,7 +46,7 @@ typedef struct DnnContext { { "output", "output name of the model", OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ { "backend_configs", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ { "options", "backend configs (deprecated, use backend_configs)", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS | 
AV_OPT_FLAG_DEPRECATED},\ - { "async", "use DNN async inference", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS}, + { "async", "use DNN async inference (ignored, use backend_configs='async=1')", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS}, int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);
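With the filter-level flag now ignored, the execution mode is requested through backend_configs, as the updated option help text above suggests. A hypothetical pair of filtergraph strings -- the model file srcnn.pb and the tensor names x/y are placeholders, not part of this patch -- could look like:

/* illustrative only: async is selected via backend_configs, not the filter option */
static const char *graph_async =
    "dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y:"
    "backend_configs='async=1'";
static const char *graph_sync =
    "dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y:"
    "backend_configs='async=0'";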
From patchwork Wed Aug 25 10:39:38 2021
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:38 +0530
Message-Id: <20210825103940.28798-5-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 5/7] lavfi/dnn: Rename InferenceItem to LastLevelTaskItem
Cc:
Shubhanshu Saxena Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: fBDqvScd++QQ This patch renames the InferenceItem to LastLevelTaskItem in the three backends to avoid confusion among the meanings of these structs. The following are the renames done in this patch: 1. extract_inference_from_task -> extract_lltask_from_task 2. InferenceItem -> LastLevelTaskItem 3. inference_queue -> lltask_queue 4. inference -> lltask 5. inference_count -> lltask_count Signed-off-by: Shubhanshu Saxena --- libavfilter/dnn/dnn_backend_common.h | 4 +- libavfilter/dnn/dnn_backend_native.c | 58 ++++++------- libavfilter/dnn/dnn_backend_native.h | 2 +- libavfilter/dnn/dnn_backend_openvino.c | 110 ++++++++++++------------- libavfilter/dnn/dnn_backend_tf.c | 76 ++++++++--------- 5 files changed, 125 insertions(+), 125 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h index 78e62a94a2..6b6a5e21ae 100644 --- a/libavfilter/dnn/dnn_backend_common.h +++ b/libavfilter/dnn/dnn_backend_common.h @@ -47,10 +47,10 @@ typedef struct TaskItem { } TaskItem; // one task might have multiple inferences -typedef struct InferenceItem { +typedef struct LastLevelTaskItem { TaskItem *task; uint32_t bbox_index; -} InferenceItem; +} LastLevelTaskItem; /** * Common Async Execution Mechanism for the DNN Backends. diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index 2d34b88f8a..13436c0484 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -46,25 +46,25 @@ static const AVClass dnn_native_class = { .category = AV_CLASS_CATEGORY_FILTER, }; -static DNNReturnType execute_model_native(Queue *inference_queue); +static DNNReturnType execute_model_native(Queue *lltask_queue); -static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { NativeModel *native_model = task->model; NativeContext *ctx = &native_model->ctx; - InferenceItem *inference = av_malloc(sizeof(*inference)); + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); - if (!inference) { - av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + if (!lltask) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; + lltask->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); - av_freep(&inference); + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n"); + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -116,13 +116,13 @@ static DNNReturnType get_output_native(void *model, const char *input_name, int goto err; } - if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(&task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } - ret = execute_model_native(native_model->inference_queue); + ret = execute_model_native(native_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; @@ -223,8 +223,8 @@ DNNModel 
*ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; } - native_model->inference_queue = ff_queue_create(); - if (!native_model->inference_queue) { + native_model->lltask_queue = ff_queue_create(); + if (!native_model->lltask_queue) { goto fail; } @@ -297,24 +297,24 @@ fail: return NULL; } -static DNNReturnType execute_model_native(Queue *inference_queue) +static DNNReturnType execute_model_native(Queue *lltask_queue) { NativeModel *native_model = NULL; NativeContext *ctx = NULL; int32_t layer; DNNData input, output; DnnOperand *oprd = NULL; - InferenceItem *inference = NULL; + LastLevelTaskItem *lltask = NULL; TaskItem *task = NULL; DNNReturnType ret = 0; - inference = ff_queue_pop_front(inference_queue); - if (!inference) { - av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n"); + lltask = ff_queue_pop_front(lltask_queue); + if (!lltask) { + av_log(NULL, AV_LOG_ERROR, "Failed to get LastLevelTaskItem\n"); ret = DNN_ERROR; goto err; } - task = inference->task; + task = lltask->task; native_model = task->model; ctx = &native_model->ctx; @@ -428,7 +428,7 @@ static DNNReturnType execute_model_native(Queue *inference_queue) } task->inference_done++; err: - av_freep(&inference); + av_freep(&lltask); return ret; } @@ -459,26 +459,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara return DNN_ERROR; } - if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); return DNN_ERROR; } - return execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNReturnType ff_dnn_flush_native(const DNNModel *model) { NativeModel *native_model = model->model; - if (ff_queue_size(native_model->inference_queue) == 0) { + if (ff_queue_size(native_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } // for now, use sync node with flush operation // Switch to async when it is supported - return execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out) @@ -536,11 +536,11 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&native_model->operands); } - while (ff_queue_size(native_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(native_model->inference_queue); + while (ff_queue_size(native_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(native_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(native_model->inference_queue); + ff_queue_destroy(native_model->lltask_queue); while (ff_queue_size(native_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(native_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index ca61bb353f..e8017ee4b4 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -129,7 +129,7 @@ typedef struct NativeModel{ DnnOperand *operands; int32_t operands_num; Queue *task_queue; - Queue *inference_queue; + Queue *lltask_queue; } NativeModel; DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, 
AVFilterContext *filter_ctx); diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index abf3f4b6b6..76dc06c6d7 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -57,14 +57,14 @@ typedef struct OVModel{ ie_executable_network_t *exe_network; SafeQueue *request_queue; // holds OVRequestItem Queue *task_queue; // holds TaskItem - Queue *inference_queue; // holds InferenceItem + Queue *lltask_queue; // holds LastLevelTaskItem } OVModel; // one request for one call to openvino typedef struct OVRequestItem { ie_infer_request_t *infer_request; - InferenceItem **inferences; - uint32_t inference_count; + LastLevelTaskItem **lltasks; + uint32_t lltask_count; ie_complete_call_back_t callback; } OVRequestItem; @@ -121,12 +121,12 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque IEStatusCode status; DNNData input; ie_blob_t *input_blob = NULL; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; - inference = ff_queue_peek_front(ov_model->inference_queue); - av_assert0(inference); - task = inference->task; + lltask = ff_queue_peek_front(ov_model->lltask_queue); + av_assert0(lltask); + task = lltask->task; status = ie_infer_request_get_blob(request->infer_request, task->input_name, &input_blob); if (status != OK) { @@ -159,13 +159,13 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque input.order = DCO_BGR; for (int i = 0; i < ctx->options.batch_size; ++i) { - inference = ff_queue_pop_front(ov_model->inference_queue); - if (!inference) { + lltask = ff_queue_pop_front(ov_model->lltask_queue); + if (!lltask) { break; } - request->inferences[i] = inference; - request->inference_count = i + 1; - task = inference->task; + request->lltasks[i] = lltask; + request->lltask_count = i + 1; + task = lltask->task; switch (ov_model->model->func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { @@ -180,7 +180,7 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque ff_frame_to_dnn_detect(task->in_frame, &input, ctx); break; case DFT_ANALYTICS_CLASSIFY: - ff_frame_to_dnn_classify(task->in_frame, &input, inference->bbox_index, ctx); + ff_frame_to_dnn_classify(task->in_frame, &input, lltask->bbox_index, ctx); break; default: av_assert0(!"should not reach here"); @@ -200,8 +200,8 @@ static void infer_completion_callback(void *args) precision_e precision; IEStatusCode status; OVRequestItem *request = args; - InferenceItem *inference = request->inferences[0]; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltasks[0]; + TaskItem *task = lltask->task; OVModel *ov_model = task->model; SafeQueue *requestq = ov_model->request_queue; ie_blob_t *output_blob = NULL; @@ -248,10 +248,10 @@ static void infer_completion_callback(void *args) output.dt = precision_to_datatype(precision); output.data = blob_buffer.buffer; - av_assert0(request->inference_count <= dims.dims[0]); - av_assert0(request->inference_count >= 1); - for (int i = 0; i < request->inference_count; ++i) { - task = request->inferences[i]->task; + av_assert0(request->lltask_count <= dims.dims[0]); + av_assert0(request->lltask_count >= 1); + for (int i = 0; i < request->lltask_count; ++i) { + task = request->lltasks[i]->task; task->inference_done++; switch (ov_model->model->func_type) { @@ -279,20 +279,20 @@ static void infer_completion_callback(void *args) av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n"); return; 
} - ov_model->model->classify_post_proc(task->out_frame, &output, request->inferences[i]->bbox_index, ov_model->model->filter_ctx); + ov_model->model->classify_post_proc(task->out_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx); break; default: av_assert0(!"should not reach here"); break; } - av_freep(&request->inferences[i]); + av_freep(&request->lltasks[i]); output.data = (uint8_t *)output.data + output.width * output.height * output.channels * get_datatype_size(output.dt); } ie_blob_free(&output_blob); - request->inference_count = 0; + request->lltask_count = 0; if (ff_safe_queue_push_back(requestq, request) < 0) { ie_infer_request_free(&request->infer_request); av_freep(&request); @@ -399,11 +399,11 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - item->inferences = av_malloc_array(ctx->options.batch_size, sizeof(*item->inferences)); - if (!item->inferences) { + item->lltasks = av_malloc_array(ctx->options.batch_size, sizeof(*item->lltasks)); + if (!item->lltasks) { goto err; } - item->inference_count = 0; + item->lltask_count = 0; } ov_model->task_queue = ff_queue_create(); @@ -411,8 +411,8 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - ov_model->inference_queue = ff_queue_create(); - if (!ov_model->inference_queue) { + ov_model->lltask_queue = ff_queue_create(); + if (!ov_model->lltask_queue) { goto err; } @@ -427,7 +427,7 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) { IEStatusCode status; DNNReturnType ret; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; OVContext *ctx; OVModel *ov_model; @@ -438,8 +438,8 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) return DNN_SUCCESS; } - inference = ff_queue_peek_front(inferenceq); - task = inference->task; + lltask = ff_queue_peek_front(inferenceq); + task = lltask->task; ov_model = task->model; ctx = &ov_model->ctx; @@ -567,21 +567,21 @@ static int contain_valid_detection_bbox(AVFrame *frame) return 1; } -static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, TaskItem *task, Queue *inference_queue, DNNExecBaseParams *exec_params) +static DNNReturnType extract_lltask_from_task(DNNFunctionType func_type, TaskItem *task, Queue *lltask_queue, DNNExecBaseParams *exec_params) { switch (func_type) { case DFT_PROCESS_FRAME: case DFT_ANALYTICS_DETECT: { - InferenceItem *inference = av_malloc(sizeof(*inference)); - if (!inference) { + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -604,7 +604,7 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task header = (const AVDetectionBBoxHeader *)sd->data; for (uint32_t i = 0; i < header->nb_bboxes; i++) { - InferenceItem *inference; + LastLevelTaskItem *lltask; const AVDetectionBBox *bbox = av_get_detection_bbox(header, i); if (params->target) { @@ -613,15 +613,15 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task } } - inference = av_malloc(sizeof(*inference)); - if (!inference) { + lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } 
task->inference_todo++; - inference->task = task; - inference->bbox_index = i; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + lltask->bbox_index = i; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } } @@ -679,8 +679,8 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu return DNN_ERROR; } - if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, NULL) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(ov_model->model->func_type, &task, ov_model->lltask_queue, NULL) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } @@ -692,7 +692,7 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu goto err; } - ret = execute_model_ov(request, ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; err: @@ -794,20 +794,20 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * return DNN_ERROR; } - if (extract_inference_from_task(model->func_type, task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) { + if (extract_lltask_from_task(model->func_type, task, ov_model->lltask_queue, exec_params) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); return DNN_ERROR; } if (ctx->options.async) { - while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + while (ff_queue_size(ov_model->lltask_queue) >= ctx->options.batch_size) { request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - ret = execute_model_ov(request, ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); if (ret != DNN_SUCCESS) { return ret; } @@ -832,7 +832,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - return execute_model_ov(request, ov_model->inference_queue); + return execute_model_ov(request, ov_model->lltask_queue); } } @@ -850,7 +850,7 @@ DNNReturnType ff_dnn_flush_ov(const DNNModel *model) IEStatusCode status; DNNReturnType ret; - if (ff_queue_size(ov_model->inference_queue) == 0) { + if (ff_queue_size(ov_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } @@ -889,16 +889,16 @@ void ff_dnn_free_model_ov(DNNModel **model) if (item && item->infer_request) { ie_infer_request_free(&item->infer_request); } - av_freep(&item->inferences); + av_freep(&item->lltasks); av_freep(&item); } ff_safe_queue_destroy(ov_model->request_queue); - while (ff_queue_size(ov_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(ov_model->inference_queue); + while (ff_queue_size(ov_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(ov_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(ov_model->inference_queue); + ff_queue_destroy(ov_model->lltask_queue); while (ff_queue_size(ov_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(ov_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 906934d8c0..dfac58b357 
100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -58,7 +58,7 @@ typedef struct TFModel{ TF_Session *session; TF_Status *status; SafeQueue *request_queue; - Queue *inference_queue; + Queue *lltask_queue; Queue *task_queue; } TFModel; @@ -75,7 +75,7 @@ typedef struct TFInferRequest { typedef struct TFRequestItem { TFInferRequest *infer_request; - InferenceItem *inference; + LastLevelTaskItem *lltask; TF_Status *status; DNNAsyncExecModule exec_module; } TFRequestItem; @@ -90,7 +90,7 @@ static const AVOption dnn_tensorflow_options[] = { AVFILTER_DEFINE_CLASS(dnn_tensorflow); -static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue); +static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue); static void infer_completion_callback(void *args); static inline void destroy_request_item(TFRequestItem **arg); @@ -158,8 +158,8 @@ static DNNReturnType tf_start_inference(void *args) { TFRequestItem *request = args; TFInferRequest *infer_request = request->infer_request; - InferenceItem *inference = request->inference; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltask; + TaskItem *task = lltask->task; TFModel *tf_model = task->model; if (!request) { @@ -196,27 +196,27 @@ static inline void destroy_request_item(TFRequestItem **arg) { request = *arg; tf_free_request(request->infer_request); av_freep(&request->infer_request); - av_freep(&request->inference); + av_freep(&request->lltask); TF_DeleteStatus(request->status); ff_dnn_async_module_cleanup(&request->exec_module); av_freep(arg); } -static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { TFModel *tf_model = task->model; TFContext *ctx = &tf_model->ctx; - InferenceItem *inference = av_malloc(sizeof(*inference)); - if (!inference) { - av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); - av_freep(&inference); + lltask->task = task; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n"); + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -333,7 +333,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu goto err; } - if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) { + if (extract_lltask_from_task(&task, tf_model->lltask_queue) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); ret = DNN_ERROR; goto err; @@ -346,7 +346,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu goto err; } - ret = execute_model_tf(request, tf_model->inference_queue); + ret = execute_model_tf(request, tf_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; @@ -901,7 +901,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ if (!item) { goto err; } - item->inference = NULL; + item->lltask = NULL; item->infer_request = tf_create_inference_request(); 
if (!item->infer_request) { av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for TensorFlow inference request\n"); @@ -919,8 +919,8 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ } } - tf_model->inference_queue = ff_queue_create(); - if (!tf_model->inference_queue) { + tf_model->lltask_queue = ff_queue_create(); + if (!tf_model->lltask_queue) { goto err; } @@ -944,15 +944,15 @@ err: static DNNReturnType fill_model_input_tf(TFModel *tf_model, TFRequestItem *request) { DNNData input; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; TFInferRequest *infer_request; TFContext *ctx = &tf_model->ctx; - inference = ff_queue_pop_front(tf_model->inference_queue); - av_assert0(inference); - task = inference->task; - request->inference = inference; + lltask = ff_queue_pop_front(tf_model->lltask_queue); + av_assert0(lltask); + task = lltask->task; + request->lltask = lltask; if (get_input_tf(tf_model, &input, task->input_name) != DNN_SUCCESS) { goto err; @@ -1030,8 +1030,8 @@ err: static void infer_completion_callback(void *args) { TFRequestItem *request = args; - InferenceItem *inference = request->inference; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltask; + TaskItem *task = lltask->task; DNNData *outputs; TFInferRequest *infer_request = request->infer_request; TFModel *tf_model = task->model; @@ -1086,20 +1086,20 @@ err: } } -static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue) +static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue) { TFModel *tf_model; TFContext *ctx; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; - if (ff_queue_size(inference_queue) == 0) { + if (ff_queue_size(lltask_queue) == 0) { destroy_request_item(&request); return DNN_SUCCESS; } - inference = ff_queue_peek_front(inference_queue); - task = inference->task; + lltask = ff_queue_peek_front(lltask_queue); + task = lltask->task; tf_model = task->model; ctx = &tf_model->ctx; @@ -1155,8 +1155,8 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams * return DNN_ERROR; } - if (extract_inference_from_task(task, tf_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(task, tf_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); return DNN_ERROR; } @@ -1165,7 +1165,7 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams * av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - return execute_model_tf(request, tf_model->inference_queue); + return execute_model_tf(request, tf_model->lltask_queue); } DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) @@ -1181,7 +1181,7 @@ DNNReturnType ff_dnn_flush_tf(const DNNModel *model) TFRequestItem *request; DNNReturnType ret; - if (ff_queue_size(tf_model->inference_queue) == 0) { + if (ff_queue_size(tf_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } @@ -1216,11 +1216,11 @@ void ff_dnn_free_model_tf(DNNModel **model) } ff_safe_queue_destroy(tf_model->request_queue); - while (ff_queue_size(tf_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(tf_model->inference_queue); + while (ff_queue_size(tf_model->lltask_queue) != 0) { + LastLevelTaskItem *item = 
ff_queue_pop_front(tf_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(tf_model->inference_queue); + ff_queue_destroy(tf_model->lltask_queue); while (ff_queue_size(tf_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(tf_model->task_queue);
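As a rough illustration of the rename's intent: one TaskItem (a single filter-level request) may fan out into several LastLevelTaskItems, for example one per detected bounding box when classifying. A simplified sketch, not taken from the patch -- the nb_bboxes parameter is illustrative, the real code derives it from AVDetectionBBoxHeader as shown above -- might be:

static DNNReturnType queue_lltasks_sketch(TaskItem *task, Queue *lltask_queue,
                                          uint32_t nb_bboxes)
{
    task->inference_todo = nb_bboxes; /* completion is tracked per last-level task */
    task->inference_done = 0;
    for (uint32_t i = 0; i < nb_bboxes; i++) {
        LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask));
        if (!lltask)
            return DNN_ERROR;
        lltask->task       = task; /* many last-level tasks share one owning task */
        lltask->bbox_index = i;    /* which bbox this last-level task covers */
        if (ff_queue_push_back(lltask_queue, lltask) < 0) {
            av_freep(&lltask);
            return DNN_ERROR;
        }
    }
    return DNN_SUCCESS;
}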
From patchwork Wed Aug 25 10:39:39 2021
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:39 +0530
Message-Id: <20210825103940.28798-6-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 6/7] doc/filters.texi: Include dnn_processing in docs of sr and derain filter
Cc: Shubhanshu Saxena
Signed-off-by: Shubhanshu Saxena --- doc/filters.texi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/filters.texi b/doc/filters.texi index d99368e64b..112adc5d94 100644 --- a/doc/filters.texi +++ b/doc/filters.texi @@ -9982,7 +9982,7 @@ Note that different backends use different file formats. TensorFlow and native backend can load files for only its format. @end table -It can also be finished with @ref{dnn_processing} filter. +To get full functionality (such as async execution), please use the @ref{dnn_processing} filter. @section deshake @@ -19271,7 +19271,7 @@ Default value is @code{2}. Scale factor is necessary for SRCNN model, because it input upscaled using bicubic upscaling with proper scale factor. @end table -This feature can also be finished with @ref{dnn_processing} filter. +To get full functionality (such as async execution), please use the @ref{dnn_processing} filter. @section ssim
From patchwork Wed Aug 25 10:39:40 2021
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 25 Aug 2021 16:09:40 +0530
Message-Id: <20210825103940.28798-7-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 7/7] libavfilter: Send only input frame for DNN Detect and Classify
Cc: Shubhanshu Saxena Errors-To:
This commit updates the following two filters to send only the input
frame to the DNN backends, passing NULL as the output frame instead of
a second reference to the input frame.

1. vf_dnn_detect
2. vf_dnn_classify

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.c   |  2 +-
 libavfilter/dnn/dnn_backend_openvino.c |  5 +++--
 libavfilter/dnn/dnn_backend_tf.c       |  2 +-
 libavfilter/vf_dnn_classify.c          | 14 ++++++--------
 libavfilter/vf_dnn_detect.c            | 14 ++++++--------
 5 files changed, 17 insertions(+), 20 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c
index d2bc016fef..6a9c4cc87f 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -38,7 +38,7 @@ int ff_check_exec_params(void *ctx, DNNBackendType backend, DNNFunctionType func
         return AVERROR(EINVAL);
     }

-    if (!exec_params->out_frame) {
+    if (!exec_params->out_frame && func_type == DFT_PROCESS_FRAME) {
         av_log(ctx, AV_LOG_ERROR, "out frame is NULL when execute model.\n");
         return AVERROR(EINVAL);
     }
diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index 76dc06c6d7..f5b1454d21 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -272,14 +272,14 @@ static void infer_completion_callback(void *args)
             av_log(ctx, AV_LOG_ERROR, "detect filter needs to provide post proc\n");
             return;
         }
-        ov_model->model->detect_post_proc(task->out_frame, &output, 1, ov_model->model->filter_ctx);
+        ov_model->model->detect_post_proc(task->in_frame, &output, 1, ov_model->model->filter_ctx);
         break;
     case DFT_ANALYTICS_CLASSIFY:
         if (!ov_model->model->classify_post_proc) {
             av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n");
             return;
         }
-        ov_model->model->classify_post_proc(task->out_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx);
+        ov_model->model->classify_post_proc(task->in_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx);
         break;
     default:
         av_assert0(!"should not reach here");
@@ -819,6 +819,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *
     if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
         // Classification filter has not been completely
         // tested with the sync mode. So, do not support now.
+        avpriv_report_missing_feature(ctx, "classify for sync execution");
         return DNN_ERROR;
     }

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index dfac58b357..c95cad7944 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -1069,7 +1069,7 @@ static void infer_completion_callback(void *args) {
             av_log(ctx, AV_LOG_ERROR, "Detect filter needs provide post proc\n");
             return;
         }
-        tf_model->model->detect_post_proc(task->out_frame, outputs, task->nb_output, tf_model->model->filter_ctx);
+        tf_model->model->detect_post_proc(task->in_frame, outputs, task->nb_output, tf_model->model->filter_ctx);
         break;
     default:
         av_log(ctx, AV_LOG_ERROR, "Tensorflow backend does not support this kind of dnn filter now\n");
diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c
index d5ee65d403..d1ba8dffbc 100644
--- a/libavfilter/vf_dnn_classify.c
+++ b/libavfilter/vf_dnn_classify.c
@@ -225,13 +225,12 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
         async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
-        if (out_frame) {
-            av_assert0(in_frame == out_frame);
-            ret = ff_filter_frame(outlink, out_frame);
+        if (async_state == DAST_SUCCESS) {
+            ret = ff_filter_frame(outlink, in_frame);
             if (ret < 0)
                 return ret;
             if (out_pts)
-                *out_pts = out_frame->pts + pts;
+                *out_pts = in_frame->pts + pts;
         }
         av_usleep(5000);
     } while (async_state >= DAST_NOT_READY);
@@ -258,7 +257,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx)
         if (ret < 0)
             return ret;
         if (ret > 0) {
-            if (ff_dnn_execute_model_classification(&ctx->dnnctx, in, in, ctx->target) != DNN_SUCCESS) {
+            if (ff_dnn_execute_model_classification(&ctx->dnnctx, in, NULL, ctx->target) != DNN_SUCCESS) {
                 return AVERROR(EIO);
             }
         }
@@ -269,9 +268,8 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx)
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
         async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
-        if (out_frame) {
-            av_assert0(in_frame == out_frame);
-            ret = ff_filter_frame(outlink, out_frame);
+        if (async_state == DAST_SUCCESS) {
+            ret = ff_filter_frame(outlink, in_frame);
             if (ret < 0)
                 return ret;
             got_frame = 1;
diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
index 809d70b930..637874b2a1 100644
--- a/libavfilter/vf_dnn_detect.c
+++ b/libavfilter/vf_dnn_detect.c
@@ -368,13 +368,12 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
         async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
-        if (out_frame) {
-            av_assert0(in_frame == out_frame);
-            ret = ff_filter_frame(outlink, out_frame);
+        if (async_state == DAST_SUCCESS) {
+            ret = ff_filter_frame(outlink, in_frame);
             if (ret < 0)
                 return ret;
             if (out_pts)
-                *out_pts = out_frame->pts + pts;
+                *out_pts = in_frame->pts + pts;
         }
         av_usleep(5000);
     } while (async_state >= DAST_NOT_READY);
@@ -401,7 +400,7 @@ static int dnn_detect_activate(AVFilterContext *filter_ctx)
         if (ret < 0)
             return ret;
         if (ret > 0) {
-            if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) {
+            if (ff_dnn_execute_model(&ctx->dnnctx, in, NULL) != DNN_SUCCESS) {
                 return AVERROR(EIO);
             }
         }
@@ -412,9 +411,8 @@ static int dnn_detect_activate(AVFilterContext *filter_ctx)
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
         async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
-        if (out_frame) {
-            av_assert0(in_frame == out_frame);
-            ret = ff_filter_frame(outlink, out_frame);
+        if (async_state == DAST_SUCCESS) {
+            ret = ff_filter_frame(outlink, in_frame);
             if (ret < 0)
                 return ret;
             got_frame = 1;
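
The calling convention the hunks above establish for analytics filters can be summarized in a short C sketch. This sketch is illustrative and not part of the patch: the helper names (submit_detect_frame, drain_detect_results), the ExampleDetectContext struct and the include paths are assumptions made for the example; the real logic lives in dnn_detect_activate() and dnn_detect_flush_frame() as shown in the diff. The key point is that detect/classify filters now submit only the input frame (the output-frame argument is NULL), and on DAST_SUCCESS they forward that same input frame, which carries the analysis results attached by the backend's post-proc callback.

#include "libavutil/error.h"
#include "libavutil/frame.h"
#include "avfilter.h"
#include "dnn_filter_common.h"

/* Assumed minimal filter context, mirroring the dnnctx member used above. */
typedef struct ExampleDetectContext {
    DnnContext dnnctx;
} ExampleDetectContext;

/* Submission side: pass only the input frame; the output-frame argument
 * is NULL for analytics (detect/classify) function types. */
static int submit_detect_frame(ExampleDetectContext *ctx, AVFrame *in)
{
    if (ff_dnn_execute_model(&ctx->dnnctx, in, NULL) != DNN_SUCCESS)
        return AVERROR(EIO);
    return 0;
}

/* Result side: once the backend reports DAST_SUCCESS, forward the input
 * frame itself; out_frame remains NULL for these function types. */
static int drain_detect_results(ExampleDetectContext *ctx, AVFilterLink *outlink)
{
    DNNAsyncStatusType async_state;
    int ret;

    do {
        AVFrame *in_frame  = NULL;
        AVFrame *out_frame = NULL;

        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
        if (async_state == DAST_SUCCESS) {
            ret = ff_filter_frame(outlink, in_frame);
            if (ret < 0)
                return ret;
        }
    } while (async_state == DAST_SUCCESS);

    return 0;
}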