From patchwork Fri Aug 20 14:20:58 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29638
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Fri, 20 Aug 2021 19:50:58 +0530
Message-Id: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
X-Mailer: git-send-email 2.25.1
Subject: [FFmpeg-devel] [PATCH 1/6] lavfi/dnn: Task-based Inference in Native Backend

This commit rearranges the code in the Native backend to use the TaskItem
for inference.
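The task/inference split introduced by this patch can be sketched outside of FFmpeg as follows. This is an illustrative stand-in only: the fixed-size queue and the struct fields below are simplified substitutes for the real `ff_queue_*`, `TaskItem`, and `InferenceItem` APIs.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the patch's TaskItem/InferenceItem/Queue. */
typedef struct TaskItem {
    const char *input_name;
    int inference_todo;
    int inference_done;
} TaskItem;

typedef struct InferenceItem {
    TaskItem *task;
} InferenceItem;

typedef struct Queue {
    InferenceItem *items[16];
    int head, tail;
} Queue;

static int queue_push_back(Queue *q, InferenceItem *item)
{
    if (q->tail >= 16)
        return -1;
    q->items[q->tail++] = item;
    return 0;
}

static InferenceItem *queue_pop_front(Queue *q)
{
    return q->head < q->tail ? q->items[q->head++] : NULL;
}

/* Break one task into a single inference item and queue it,
 * mirroring what extract_inference_from_task() does in the patch. */
static int extract_inference_from_task(TaskItem *task, Queue *q)
{
    InferenceItem *inference = malloc(sizeof(*inference));
    if (!inference)
        return -1;
    task->inference_todo = 1;
    task->inference_done = 0;
    inference->task = task;
    if (queue_push_back(q, inference) < 0) {
        free(inference);
        return -1;
    }
    return 0;
}

/* Pop one inference item, "run" it, and record progress on its task,
 * as execute_model_native() now does at the end of a successful run. */
static int execute_one(Queue *q)
{
    InferenceItem *inference = queue_pop_front(q);
    if (!inference)
        return -1;
    inference->task->inference_done++;
    free(inference);
    return 0;
}
```

The point of the indirection is that a caller only ever hands over a TaskItem; how many InferenceItems it expands into (here always one) is a backend detail tracked by the todo/done counters.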
Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_native.c | 176 ++++++++++++++++++---------
 libavfilter/dnn/dnn_backend_native.h |   2 +
 2 files changed, 121 insertions(+), 57 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index a6be27f1fd..3b2a3aa55d 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -45,9 +45,29 @@ static const AVClass dnn_native_class = {
     .category = AV_CLASS_CATEGORY_FILTER,
 };
 
-static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame,
-                                          const char **output_names, uint32_t nb_output, AVFrame *out_frame,
-                                          int do_ioproc);
+static DNNReturnType execute_model_native(Queue *inference_queue);
+
+static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue)
+{
+    NativeModel *native_model = task->model;
+    NativeContext *ctx = &native_model->ctx;
+    InferenceItem *inference = av_malloc(sizeof(*inference));
+
+    if (!inference) {
+        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n");
+        return DNN_ERROR;
+    }
+    task->inference_todo = 1;
+    task->inference_done = 0;
+    inference->task = task;
+
+    if (ff_queue_push_back(inference_queue, inference) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n");
+        av_freep(&inference);
+        return DNN_ERROR;
+    }
+    return DNN_SUCCESS;
+}
 
 static DNNReturnType get_input_native(void *model, DNNData *input, const char *input_name)
 {
@@ -78,34 +98,36 @@
 static DNNReturnType get_output_native(void *model, const char *input_name, int input_width, int input_height,
                                        const char *output_name, int *output_width, int *output_height)
 {
-    DNNReturnType ret;
+    DNNReturnType ret = 0;
     NativeModel *native_model = model;
     NativeContext *ctx = &native_model->ctx;
-    AVFrame *in_frame = av_frame_alloc();
-    AVFrame *out_frame = NULL;
-
-    if (!in_frame) {
-        av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for input frame\n");
-        return DNN_ERROR;
+    TaskItem task;
+    DNNExecBaseParams exec_params = {
+        .input_name   = input_name,
+        .output_names = &output_name,
+        .nb_output    = 1,
+        .in_frame     = NULL,
+        .out_frame    = NULL,
+    };
+
+    if (ff_dnn_fill_gettingoutput_task(&task, &exec_params, native_model, input_height, input_width, ctx) != DNN_SUCCESS) {
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    out_frame = av_frame_alloc();
-
-    if (!out_frame) {
-        av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for output frame\n");
-        av_frame_free(&in_frame);
-        return DNN_ERROR;
+    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    in_frame->width = input_width;
-    in_frame->height = input_height;
-
-    ret = execute_model_native(native_model->model, input_name, in_frame, &output_name, 1, out_frame, 0);
-    *output_width = out_frame->width;
-    *output_height = out_frame->height;
+    ret = execute_model_native(native_model->inference_queue);
+    *output_width = task.out_frame->width;
+    *output_height = task.out_frame->height;
 
-    av_frame_free(&out_frame);
-    av_frame_free(&in_frame);
+err:
+    av_frame_free(&task.out_frame);
+    av_frame_free(&task.in_frame);
     return ret;
 }
 
@@ -190,6 +212,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     }
 
+    native_model->inference_queue = ff_queue_create();
+    if (!native_model->inference_queue) {
+        goto fail;
+    }
+
     for (layer = 0; layer < native_model->layers_num; ++layer){
         layer_type = (int32_t)avio_rl32(model_file_context);
         dnn_size += 4;
@@ -259,50 +286,66 @@ fail:
     return NULL;
 }
 
-static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame,
-                                          const char **output_names, uint32_t nb_output, AVFrame *out_frame,
-                                          int do_ioproc)
+static DNNReturnType execute_model_native(Queue *inference_queue)
 {
-    NativeModel *native_model = model->model;
-    NativeContext *ctx = &native_model->ctx;
+    NativeModel *native_model = NULL;
+    NativeContext *ctx = NULL;
     int32_t layer;
     DNNData input, output;
     DnnOperand *oprd = NULL;
+    InferenceItem *inference = NULL;
+    TaskItem *task = NULL;
+    DNNReturnType ret = 0;
+
+    inference = ff_queue_pop_front(inference_queue);
+    if (!inference) {
+        av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n");
+        ret = DNN_ERROR;
+        goto err;
+    }
+    task = inference->task;
+    native_model = task->model;
+    ctx = &native_model->ctx;
 
     if (native_model->layers_num <= 0 || native_model->operands_num <= 0) {
         av_log(ctx, AV_LOG_ERROR, "No operands or layers in model\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
    }
 
     for (int i = 0; i < native_model->operands_num; ++i) {
         oprd = &native_model->operands[i];
-        if (strcmp(oprd->name, input_name) == 0) {
+        if (strcmp(oprd->name, task->input_name) == 0) {
             if (oprd->type != DOT_INPUT) {
-                av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", input_name);
-                return DNN_ERROR;
+                av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", task->input_name);
+                ret = DNN_ERROR;
+                goto err;
             }
             break;
         }
         oprd = NULL;
     }
     if (!oprd) {
-        av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", input_name);
-        return DNN_ERROR;
+        av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", task->input_name);
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    oprd->dims[1] = in_frame->height;
-    oprd->dims[2] = in_frame->width;
+    oprd->dims[1] = task->in_frame->height;
+    oprd->dims[2] = task->in_frame->width;
 
     av_freep(&oprd->data);
     oprd->length = ff_calculate_operand_data_length(oprd);
     if (oprd->length <= 0) {
         av_log(ctx, AV_LOG_ERROR, "The input data length overflow\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
     oprd->data = av_malloc(oprd->length);
     if (!oprd->data) {
         av_log(ctx, AV_LOG_ERROR, "Failed to malloc memory for input data\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     input.height = oprd->dims[1];
@@ -310,19 +353,20 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
     input.channels = oprd->dims[3];
     input.data = oprd->data;
     input.dt = oprd->data_type;
-    if (do_ioproc) {
+    if (task->do_ioproc) {
         if (native_model->model->frame_pre_proc != NULL) {
-            native_model->model->frame_pre_proc(in_frame, &input, native_model->model->filter_ctx);
+            native_model->model->frame_pre_proc(task->in_frame, &input, native_model->model->filter_ctx);
         } else {
-            ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
+            ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
         }
     }
 
-    if (nb_output != 1) {
+    if (task->nb_output != 1) {
         // currently, the filter does not need multiple outputs,
         // so we just pending the support until we really need it.
         avpriv_report_missing_feature(ctx, "multiple outputs");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     for (layer = 0; layer < native_model->layers_num; ++layer){
@@ -333,13 +377,14 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
                                           native_model->layers[layer].params,
                                           &native_model->ctx) == DNN_ERROR) {
             av_log(ctx, AV_LOG_ERROR, "Failed to execute model\n");
-            return DNN_ERROR;
+            ret = DNN_ERROR;
+            goto err;
         }
     }
 
-    for (uint32_t i = 0; i < nb_output; ++i) {
+    for (uint32_t i = 0; i < task->nb_output; ++i) {
         DnnOperand *oprd = NULL;
-        const char *output_name = output_names[i];
+        const char *output_name = task->output_names[i];
         for (int j = 0; j < native_model->operands_num; ++j) {
             if (strcmp(native_model->operands[j].name, output_name) == 0) {
                 oprd = &native_model->operands[j];
@@ -349,7 +394,8 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
 
         if (oprd == NULL) {
             av_log(ctx, AV_LOG_ERROR, "Could not find output in model\n");
-            return DNN_ERROR;
+            ret = DNN_ERROR;
+            goto err;
         }
 
         output.data = oprd->data;
@@ -358,32 +404,43 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
         output.channels = oprd->dims[3];
         output.dt = oprd->data_type;
 
-        if (do_ioproc) {
+        if (task->do_ioproc) {
             if (native_model->model->frame_post_proc != NULL) {
-                native_model->model->frame_post_proc(out_frame, &output, native_model->model->filter_ctx);
+                native_model->model->frame_post_proc(task->out_frame, &output, native_model->model->filter_ctx);
             } else {
-                ff_proc_from_dnn_to_frame(out_frame, &output, ctx);
+                ff_proc_from_dnn_to_frame(task->out_frame, &output, ctx);
            }
         } else {
-            out_frame->width = output.width;
-            out_frame->height = output.height;
+            task->out_frame->width = output.width;
+            task->out_frame->height = output.height;
         }
     }
-
-    return DNN_SUCCESS;
+    task->inference_done++;
+err:
+    av_freep(&inference);
+    return ret;
 }
 
 DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     NativeModel *native_model = model->model;
     NativeContext *ctx = &native_model->ctx;
+    TaskItem task;
 
     if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) {
         return DNN_ERROR;
     }
 
-    return execute_model_native(model, exec_params->input_name, exec_params->in_frame,
-                                exec_params->output_names, exec_params->nb_output, exec_params->out_frame, 1);
+    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) {
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        return DNN_ERROR;
+    }
+
+    return execute_model_native(native_model->inference_queue);
 }
 
 int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd)
@@ -435,6 +492,11 @@ void ff_dnn_free_model_native(DNNModel **model)
             av_freep(&native_model->operands);
         }
 
+        while (ff_queue_size(native_model->inference_queue) != 0) {
+            InferenceItem *item = ff_queue_pop_front(native_model->inference_queue);
+            av_freep(&item);
+        }
+        ff_queue_destroy(native_model->inference_queue);
         av_freep(&native_model);
     }
 
     av_freep(model);
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 89bcb8e358..1b9d5bdf2d 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -30,6 +30,7 @@
 #include "../dnn_interface.h"
 #include "libavformat/avio.h"
 #include "libavutil/opt.h"
+#include "queue.h"
 
 /**
  * the enum value of DNNLayerType should not be changed,
@@ -126,6 +127,7 @@ typedef struct NativeModel{
     int32_t layers_num;
     DnnOperand *operands;
     int32_t operands_num;
+    Queue *inference_queue;
 } NativeModel;
 
 DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);

From patchwork Fri Aug 20 14:20:59 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29641
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Fri, 20 Aug 2021 19:50:59 +0530
Message-Id: <20210820142103.14953-2-shubhanshu.e01@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
References: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH 2/6] libavfilter: Unify Execution Modes in DNN Filters

This commit unifies the async and sync modes from the DNN filters'
perspective. As of this commit, the Native backend only supports
synchronous execution mode.

Now the user can switch between async and sync modes by using the
'async' option in the backend_configs. The value can be 1 for async
and 0 for sync mode of execution.

This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.c   |  2 +-
 libavfilter/dnn/dnn_backend_common.h   |  5 +-
 libavfilter/dnn/dnn_backend_native.c   | 59 +++++++++++++++-
 libavfilter/dnn/dnn_backend_native.h   |  6 ++
 libavfilter/dnn/dnn_backend_openvino.c | 94 ++++++++++----------------
 libavfilter/dnn/dnn_backend_openvino.h |  3 +-
 libavfilter/dnn/dnn_backend_tf.c       | 35 ++--------
 libavfilter/dnn/dnn_backend_tf.h       |  3 +-
 libavfilter/dnn/dnn_interface.c        |  8 +--
 libavfilter/dnn_filter_common.c        | 23 +------
 libavfilter/dnn_filter_common.h        |  3 +-
 libavfilter/dnn_interface.h            |  4 +-
 libavfilter/vf_derain.c                |  7 ++
 libavfilter/vf_dnn_classify.c          |  4 +-
 libavfilter/vf_dnn_detect.c            |  8 +--
 libavfilter/vf_dnn_processing.c        |  8 +--
 libavfilter/vf_sr.c                    |  8 +++
 17 files changed, 140 insertions(+), 140 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c
index 426683b73d..d2bc016fef 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -138,7 +138,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
     return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
 {
     TaskItem *task = ff_queue_peek_front(task_queue);
 
diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h
index 604c1d3bd7..78e62a94a2 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -29,7 +29,8 @@
 #include "libavutil/thread.h"
 
 #define DNN_BACKEND_COMMON_OPTIONS \
-    { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS },
+    { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
+    { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, FLAGS },
 
 // one task for one function call from dnn interface
 typedef struct TaskItem {
@@ -135,7 +136,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
 * @retval DAST_NOT_READY if inference not completed yet.
 * @retval DAST_SUCCESS if result successfully extracted
 */
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out);
 
 /**
  * Allocate input and output frames and fill the Task
diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index 3b2a3aa55d..2d34b88f8a 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -34,6 +34,7 @@
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
 static const AVOption dnn_native_options[] = {
     { "conv2d_threads", "threads num for conv2d layer", OFFSET(options.conv2d_threads), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS },
+    { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS },
     { NULL },
 };
 
@@ -189,6 +190,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     native_model->model = model;
 
+    if (native_model->ctx.options.async) {
+        av_log(&native_model->ctx, AV_LOG_WARNING, "Async not supported. Rolling back to sync\n");
+        native_model->ctx.options.async = 0;
+    }
+
 #if !HAVE_PTHREAD_CANCEL
     if (native_model->ctx.options.conv2d_threads > 1){
         av_log(&native_model->ctx, AV_LOG_WARNING, "'conv2d_threads' option was set but it is not supported "
@@ -212,6 +218,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     }
 
+    native_model->task_queue = ff_queue_create();
+    if (!native_model->task_queue) {
+        goto fail;
+    }
+
     native_model->inference_queue = ff_queue_create();
     if (!native_model->inference_queue) {
         goto fail;
@@ -425,17 +436,30 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara
 {
     NativeModel *native_model = model->model;
     NativeContext *ctx = &native_model->ctx;
-    TaskItem task;
+    TaskItem *task;
 
     if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) {
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) {
+    task = av_malloc(sizeof(*task));
+    if (!task) {
+        av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
         return DNN_ERROR;
     }
 
-    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, native_model, ctx->options.async, 1) != DNN_SUCCESS) {
+        av_freep(&task);
+        return DNN_ERROR;
+    }
+
+    if (ff_queue_push_back(native_model->task_queue, task) < 0) {
+        av_freep(&task);
+        av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) {
         av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
         return DNN_ERROR;
     }
@@ -443,6 +467,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara
     return execute_model_native(native_model->inference_queue);
 }
 
+DNNReturnType ff_dnn_flush_native(const DNNModel *model)
+{
+    NativeModel *native_model = model->model;
+
+    if (ff_queue_size(native_model->inference_queue) == 0) {
+        // no pending task need to flush
+        return DNN_SUCCESS;
+    }
+
+    // for now, use sync node with flush operation
+    // Switch to async when it is supported
+    return execute_model_native(native_model->inference_queue);
+}
+
+DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out)
+{
+    NativeModel *native_model = model->model;
+    return ff_dnn_get_result_common(native_model->task_queue, in, out);
+}
+
 int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd)
 {
     int32_t result = 1;
@@ -497,6 +541,15 @@ void ff_dnn_free_model_native(DNNModel **model)
             av_freep(&item);
         }
         ff_queue_destroy(native_model->inference_queue);
+
+        while (ff_queue_size(native_model->task_queue) != 0) {
+            TaskItem *item = ff_queue_pop_front(native_model->task_queue);
+            av_frame_free(&item->in_frame);
+            av_frame_free(&item->out_frame);
+            av_freep(&item);
+        }
+        ff_queue_destroy(native_model->task_queue);
+
         av_freep(&native_model);
     }
     av_freep(model);
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 1b9d5bdf2d..ca61bb353f 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -111,6 +111,7 @@ typedef struct InputParams{
 } InputParams;
 
 typedef struct NativeOptions{
+    uint8_t async;
     uint32_t conv2d_threads;
 } NativeOptions;
 
@@ -127,6 +128,7 @@ typedef struct NativeModel{
     int32_t layers_num;
     DnnOperand *operands;
     int32_t operands_num;
+    Queue *task_queue;
     Queue *inference_queue;
 } NativeModel;
 
@@ -134,6 +136,10 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
 
 DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params);
 
+DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out);
+
+DNNReturnType ff_dnn_flush_native(const DNNModel *model);
+
 void ff_dnn_free_model_native(DNNModel **model);
 
 // NOTE: User must check for error (return value <= 0) to handle
diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index c825e70c82..abf3f4b6b6 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -39,6 +39,7 @@
 typedef struct OVOptions{
     char *device_type;
     int nireq;
+    uint8_t async;
     int batch_size;
     int input_resizable;
 } OVOptions;
@@ -758,55 +759,6 @@ err:
 }
 
 DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
-{
-    OVModel *ov_model = model->model;
-    OVContext *ctx = &ov_model->ctx;
-    TaskItem task;
-    OVRequestItem *request;
-
-    if (ff_check_exec_params(ctx, DNN_OV, model->func_type, exec_params) != 0) {
-        return DNN_ERROR;
-    }
-
-    if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
-        // Once we add async support for tensorflow backend and native backend,
-        // we'll combine the two sync/async functions in dnn_interface.h to
-        // simplify the code in filter, and async will be an option within backends.
-        // so, do not support now, and classify filter will not call this function.
-        return DNN_ERROR;
-    }
-
-    if (ctx->options.batch_size > 1) {
-        avpriv_report_missing_feature(ctx, "batch mode for sync execution");
-        return DNN_ERROR;
-    }
-
-    if (!ov_model->exe_network) {
-        if (init_model_ov(ov_model, exec_params->input_name, exec_params->output_names[0]) != DNN_SUCCESS) {
-            av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n");
-            return DNN_ERROR;
-        }
-    }
-
-    if (ff_dnn_fill_task(&task, exec_params, ov_model, 0, 1) != DNN_SUCCESS) {
-        return DNN_ERROR;
-    }
-
-    if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
-    }
-
-    request = ff_safe_queue_pop_front(ov_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
-    }
-
-    return execute_model_ov(request, ov_model->inference_queue);
-}
-
-DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     OVModel *ov_model = model->model;
     OVContext *ctx = &ov_model->ctx;
@@ -831,7 +783,8 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(task, exec_params, ov_model, 1, 1) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, ov_model, ctx->options.async, 1) != DNN_SUCCESS) {
+        av_freep(&task);
         return DNN_ERROR;
     }
 
@@ -846,26 +799,47 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) {
+    if (ctx->options.async) {
+        while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) {
+            request = ff_safe_queue_pop_front(ov_model->request_queue);
+            if (!request) {
+                av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+                return DNN_ERROR;
+            }
+
+            ret = execute_model_ov(request, ov_model->inference_queue);
+            if (ret != DNN_SUCCESS) {
+                return ret;
+            }
+        }
+
+        return DNN_SUCCESS;
+    }
+    else {
+        if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
+            // Classification filter has not been completely
+            // tested with the sync mode. So, do not support now.
+            return DNN_ERROR;
+        }
+
+        if (ctx->options.batch_size > 1) {
+            avpriv_report_missing_feature(ctx, "batch mode for sync execution");
+            return DNN_ERROR;
+        }
+
         request = ff_safe_queue_pop_front(ov_model->request_queue);
         if (!request) {
             av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
             return DNN_ERROR;
         }
-
-        ret = execute_model_ov(request, ov_model->inference_queue);
-        if (ret != DNN_SUCCESS) {
-            return ret;
-        }
+        return execute_model_ov(request, ov_model->inference_queue);
     }
-
-    return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
     OVModel *ov_model = model->model;
-    return ff_dnn_get_async_result_common(ov_model->task_queue, in, out);
+    return ff_dnn_get_result_common(ov_model->task_queue, in, out);
 }
 
 DNNReturnType ff_dnn_flush_ov(const DNNModel *model)
diff --git a/libavfilter/dnn/dnn_backend_openvino.h b/libavfilter/dnn/dnn_backend_openvino.h
index 046d0c5b5a..0bbca0c057 100644
--- a/libavfilter/dnn/dnn_backend_openvino.h
+++ b/libavfilter/dnn/dnn_backend_openvino.h
@@ -32,8 +32,7 @@
 DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
 
 DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
 DNNReturnType ff_dnn_flush_ov(const DNNModel *model);
 
 void ff_dnn_free_model_ov(DNNModel **model);
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index ffec1b1328..4a0b561f29 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -42,6 +42,7 @@
 typedef struct TFOptions{
     char *sess_config;
+    uint8_t async;
     uint32_t nireq;
 } TFOptions;
 
@@ -1121,34 +1122,6 @@ err:
 
 DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
-    TFModel *tf_model = model->model;
-    TFContext *ctx = &tf_model->ctx;
-    TaskItem task;
-    TFRequestItem *request;
-
-    if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0) {
-        return DNN_ERROR;
-    }
-
-    if (ff_dnn_fill_task(&task, exec_params, tf_model, 0, 1) != DNN_SUCCESS) {
-        return DNN_ERROR;
-    }
-
-    if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
-    }
-
-    request = ff_safe_queue_pop_front(tf_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
-    }
-
-    return execute_model_tf(request, tf_model->inference_queue);
-}
-
-DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
-{
     TFModel *tf_model = model->model;
     TFContext *ctx = &tf_model->ctx;
     TaskItem *task;
@@ -1164,7 +1137,7 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(task, exec_params, tf_model, 1, 1) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, tf_model, ctx->options.async, 1) != DNN_SUCCESS) {
         av_freep(&task);
         return DNN_ERROR;
     }
@@ -1188,10 +1161,10 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa
     return execute_model_tf(request, tf_model->inference_queue);
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_tf(const
DNNModel *model, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) { TFModel *tf_model = model->model; - return ff_dnn_get_async_result_common(tf_model->task_queue, in, out); + return ff_dnn_get_result_common(tf_model->task_queue, in, out); } DNNReturnType ff_dnn_flush_tf(const DNNModel *model) diff --git a/libavfilter/dnn/dnn_backend_tf.h b/libavfilter/dnn/dnn_backend_tf.h index aec0fc2011..f14ea8c47a 100644 --- a/libavfilter/dnn/dnn_backend_tf.h +++ b/libavfilter/dnn/dnn_backend_tf.h @@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); DNNReturnType ff_dnn_flush_tf(const DNNModel *model); void ff_dnn_free_model_tf(DNNModel **model); diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c index 81af934dd5..554a36b0dc 100644 --- a/libavfilter/dnn/dnn_interface.c +++ b/libavfilter/dnn/dnn_interface.c @@ -42,14 +42,15 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) case DNN_NATIVE: dnn_module->load_model = &ff_dnn_load_model_native; dnn_module->execute_model = &ff_dnn_execute_model_native; + dnn_module->get_result = &ff_dnn_get_result_native; + dnn_module->flush = &ff_dnn_flush_native; dnn_module->free_model = &ff_dnn_free_model_native; break; case DNN_TF: #if (CONFIG_LIBTENSORFLOW == 1) dnn_module->load_model = &ff_dnn_load_model_tf; dnn_module->execute_model = &ff_dnn_execute_model_tf; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_tf; - 
dnn_module->get_async_result = &ff_dnn_get_async_result_tf; + dnn_module->get_result = &ff_dnn_get_result_tf; dnn_module->flush = &ff_dnn_flush_tf; dnn_module->free_model = &ff_dnn_free_model_tf; #else @@ -61,8 +62,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) #if (CONFIG_LIBOPENVINO == 1) dnn_module->load_model = &ff_dnn_load_model_ov; dnn_module->execute_model = &ff_dnn_execute_model_ov; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_ov; - dnn_module->get_async_result = &ff_dnn_get_async_result_ov; + dnn_module->get_result = &ff_dnn_get_result_ov; dnn_module->flush = &ff_dnn_flush_ov; dnn_module->free_model = &ff_dnn_free_model_ov; #else diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index fcee7482c3..455eaa37f4 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -84,11 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } - if (!ctx->dnn_module->execute_model_async && ctx->async) { - ctx->async = 0; - av_log(filter_ctx, AV_LOG_WARNING, "this backend does not support async execution, roll back to sync.\n"); - } - #if !HAVE_PTHREAD_CANCEL if (ctx->async) { ctx->async = 0; @@ -141,18 +136,6 @@ DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame * return (ctx->dnn_module->execute_model)(ctx->model, &exec_params); } -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame) -{ - DNNExecBaseParams exec_params = { - .input_name = ctx->model_inputname, - .output_names = (const char **)ctx->model_outputnames, - .nb_output = ctx->nb_outputs, - .in_frame = in_frame, - .out_frame = out_frame, - }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &exec_params); -} - DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target) { DNNExecClassificationParams class_params = { @@ 
-165,12 +148,12 @@ DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_f }, .target = target, }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &class_params.base); + return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base); } -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) { - return (ctx->dnn_module->get_async_result)(ctx->model, in_frame, out_frame); + return (ctx->dnn_module->get_result)(ctx->model, in_frame, out_frame); } DNNReturnType ff_dnn_flush(DnnContext *ctx) diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 8e43d8c8dc..4d92c1dc36 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -56,9 +56,8 @@ int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc); DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input); DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height); DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target); -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); DNNReturnType ff_dnn_flush(DnnContext *ctx); void ff_dnn_uninit(DnnContext *ctx); diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h index 5e9ffeb077..37e89d9789 100644 --- a/libavfilter/dnn_interface.h +++ b/libavfilter/dnn_interface.h @@ -114,10 +114,8 @@ typedef struct DNNModule{ DNNModel 
*(*load_model)(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); // Executes model with specified input and output. Returns DNN_ERROR otherwise. DNNReturnType (*execute_model)(const DNNModel *model, DNNExecBaseParams *exec_params); - // Executes model with specified input and output asynchronously. Returns DNN_ERROR otherwise. - DNNReturnType (*execute_model_async)(const DNNModel *model, DNNExecBaseParams *exec_params); // Retrieve inference result. - DNNAsyncStatusType (*get_async_result)(const DNNModel *model, AVFrame **in, AVFrame **out); + DNNAsyncStatusType (*get_result)(const DNNModel *model, AVFrame **in, AVFrame **out); // Flush all the pending tasks. DNNReturnType (*flush)(const DNNModel *model); // Frees memory allocated for model. diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c index 720cc70f5c..97bdb4e843 100644 --- a/libavfilter/vf_derain.c +++ b/libavfilter/vf_derain.c @@ -68,6 +68,7 @@ static int query_formats(AVFilterContext *ctx) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *ctx = inlink->dst; AVFilterLink *outlink = ctx->outputs[0]; DRContext *dr_context = ctx->priv; @@ -88,6 +89,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) av_frame_free(&in); return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&dr_context->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); av_frame_free(&in); diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c index 904067f5f6..d5ee65d403 100644 --- a/libavfilter/vf_dnn_classify.c +++ b/libavfilter/vf_dnn_classify.c @@ -224,7 +224,7 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = 
ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -268,7 +268,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 200b5897c1..2666abfcdc 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -424,7 +424,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -458,7 +458,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) if (ret < 0) return ret; if (ret > 0) { - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, in) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -468,7 +468,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -537,5 +537,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate, + 
.activate = dnn_detect_activate_async, }; diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index 5f24955263..410cc887dc 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -328,7 +328,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -370,7 +370,7 @@ static int activate_async(AVFilterContext *filter_ctx) return AVERROR(ENOMEM); } av_frame_copy_props(out, in); - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, out) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -380,7 +380,7 @@ static int activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -454,5 +454,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = activate, + .activate = activate_async, }; diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c index f009a868f8..f4fc84d251 100644 --- a/libavfilter/vf_sr.c +++ b/libavfilter/vf_sr.c @@ -119,6 +119,7 @@ static int config_output(AVFilterLink *outlink) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *context = inlink->dst; SRContext *ctx = context->priv; AVFilterLink *outlink = context->outputs[0]; @@ -148,6 +149,13 @@ static int 
filter_frame(AVFilterLink *inlink, AVFrame *in) return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&ctx->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); + if (ctx->sws_uv_scale) { sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), in->linesize + 1, 0, ctx->sws_uv_height, out->data + 1, out->linesize + 1);

From patchwork Fri Aug 20 14:21:00 2021 From: Shubhanshu Saxena To: ffmpeg-devel@ffmpeg.org Date: Fri, 20 Aug 2021 19:51:00 +0530 Message-Id: <20210820142103.14953-3-shubhanshu.e01@gmail.com> In-Reply-To: <20210820142103.14953-1-shubhanshu.e01@gmail.com> Subject: [FFmpeg-devel] [PATCH 3/6] libavfilter: Remove synchronous functions from DNN filters

This commit removes the unused sync mode specific code from the DNN filters since the sync and async mode are now unified from the filters' perspective. Signed-off-by: Shubhanshu Saxena --- libavfilter/vf_dnn_detect.c | 71 +--------------------------- libavfilter/vf_dnn_processing.c | 84 +-------------------------------- 2 files changed, 4 insertions(+), 151 deletions(-) diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 2666abfcdc..809d70b930 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -353,63 +353,6 @@ static int dnn_detect_query_formats(AVFilterContext *context) return ff_set_common_formats_from_list(context, pix_fmts); } -static int dnn_detect_filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnDetectContext *ctx = context->priv; - DNNReturnType dnn_result; - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, in); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - return AVERROR(EIO); - } - - return ff_filter_frame(outlink, in); -} - -static int dnn_detect_activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = dnn_detect_filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) -
return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnDetectContext *ctx = outlink->src->priv; @@ -439,7 +382,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o return 0; } -static int dnn_detect_activate_async(AVFilterContext *filter_ctx) +static int dnn_detect_activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -496,16 +439,6 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static int dnn_detect_activate(AVFilterContext *filter_ctx) -{ - DnnDetectContext *ctx = filter_ctx->priv; - - if (ctx->dnnctx.async) - return dnn_detect_activate_async(filter_ctx); - else - return dnn_detect_activate_sync(filter_ctx); -} - static av_cold void dnn_detect_uninit(AVFilterContext *context) { DnnDetectContext *ctx = context->priv; @@ -537,5 +470,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate_async, + .activate = dnn_detect_activate, }; diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index 410cc887dc..55634efde5 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -244,76 +244,6 @@ static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const AVFrame return 0; } -static int filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnProcessingContext *ctx = context->priv; - DNNReturnType dnn_result; - AVFrame *out; - - out = ff_get_video_buffer(outlink, 
outlink->w, outlink->h); - if (!out) { - av_frame_free(&in); - return AVERROR(ENOMEM); - } - av_frame_copy_props(out, in); - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, out); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - av_frame_free(&out); - return AVERROR(EIO); - } - - if (isPlanarYUV(in->format)) - copy_uv_planes(ctx, out, in); - - av_frame_free(&in); - return ff_filter_frame(outlink, out); -} - -static int activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnProcessingContext *ctx = outlink->src->priv; @@ -345,7 +275,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) return 0; } -static int activate_async(AVFilterContext *filter_ctx) +static int activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -410,16 +340,6 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static int activate(AVFilterContext *filter_ctx) -{ - DnnProcessingContext *ctx = filter_ctx->priv; - - if (ctx->dnnctx.async) - return 
activate_async(filter_ctx); - else - return activate_sync(filter_ctx); -} - static av_cold void uninit(AVFilterContext *ctx) { DnnProcessingContext *context = ctx->priv; @@ -454,5 +374,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = activate_async, + .activate = activate, }; From patchwork Fri Aug 20 14:21:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shubhanshu Saxena X-Patchwork-Id: 29637 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6602:2a4a:0:0:0:0 with SMTP id k10csp1309692iov; Fri, 20 Aug 2021 07:23:04 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwU8nufxN8GduJXzhVC6Nl7eflCgp5ECnLKht37HbT3vbvZanbhVEFjnS4G0Agx67+bYj3n X-Received: by 2002:a17:906:e57:: with SMTP id q23mr21904062eji.483.1629469383941; Fri, 20 Aug 2021 07:23:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1629469383; cv=none; d=google.com; s=arc-20160816; b=yhgFfKwWTglC9ZgUJsQFgzqLh08KqPHJsGN0zace+8662ntWr/RyiQ+kSTYBrrLXtC SzPYTNwetWDOSjgUyyDb86szz8Mdys7eJuf5pSQumfhlnKmcoAKlhaOccsJCh+0x36QD Ov3KkjEQo+8d40TmFlvlfeHnixLPw3IAw5ZinPoqNUZEbxkjC3ytnccQ9nzBzmWuMrnE NzB8YGX11wAAZptfCdD6dMrw5D37DX4PmfIm3t12gvCl1pY8+fa0iS4JIoRgpyI0mdhb J8fu1zJyLH9AtCZQolwAohjF18iJXQaU3O2qenT8whXHwKKeHMST19kyKZb617TBxSIL AFBA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to :message-id:date:to:from:dkim-signature:delivered-to; bh=czpwim1ICXE88eV+XWmWrfusKgA9yKOOCddKShbVjD0=; b=BRQA2ZSGECxYAOnEmpYpRzqCPdDYYSIpnd/kEfyVLLDjGvaHFLyHnl39IVQTaAS0XM q483r/XIqQ83541lefLQkr44RSaRo3KD2JGH+SIFzGZr9pc1aMwpwLOhrw/an8P+KflG WD7le+GKQrZke9dYcHavOCj+Z0AXR5PWCZOmcQlqRhB3v4/8FfqviUUInsvtPJZRslId 
jRMeFe15lSfNSwBRiceaCSD0ZDAux9vw+cErfBF3tM4I9mLzfhaxKbrZDvpkBcKUbkqT qfc5mmNJKVfD1ps1KkYr08KqGWwQy98Yv9fgModspp2K36hVA7sYG/NsQvLebMD/00A4 ynEg== ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@gmail.com header.s=20161025 header.b="V02L/KPO"; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org; dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. [79.124.17.100]) by mx.google.com with ESMTP id k12si6442009eds.497.2021.08.20.07.23.03; Fri, 20 Aug 2021 07:23:03 -0700 (PDT) Received-SPF: pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) client-ip=79.124.17.100; Authentication-Results: mx.google.com; dkim=neutral (body hash did not verify) header.i=@gmail.com header.s=20161025 header.b="V02L/KPO"; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org; dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id A091868A3B8; Fri, 20 Aug 2021 17:22:30 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from mail-pl1-f170.google.com (mail-pl1-f170.google.com [209.85.214.170]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id 0201768A26E for ; Fri, 20 Aug 2021 17:22:23 +0300 (EEST) Received: by mail-pl1-f170.google.com with SMTP id w6so6028429plg.9 for ; Fri, 20 Aug 2021 07:22:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eBa4KlC4Qt5tD1Y/Q0cyABhMDrL5cq9JDHN7xa/kbkA=; 
b=V02L/KPO6UdQiw7WfdIb2Olp5DMfC/39cGPtrlLEwzWCHa1pbR1GgJRSJS6Yp1ZBdE /xF36pbz2cmjgVDt180ezvK2uOxWr2//Vfx1Em+cm7yTWDEXA2myy5wGDosRE5iFCbjW ZR142PN1i/pg5+1GvtYXc44u51dL08vSAKm+LrQX7+h5eH3B/zHrUM4T2CuN1hXu+qc0 Rkn8DDsevLRGTd+FzYtx5W62I4NnGlnzYH3YcPVxn6w5HJQ5OyyLTPnCVYAADhldSUtx Ni1wRZnQoGgSHfXFYA5f3ABycyAVK933tPfMZlgdNZkxvlsyKs8veHQ/tOzBIMyHCTQf j1pw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eBa4KlC4Qt5tD1Y/Q0cyABhMDrL5cq9JDHN7xa/kbkA=; b=oc3YB+NGt1+6fBQOfzPj+hsIms8mR5itsnHShBi3iCspN60hVk8PhDkjNNQdg/1Bqu Ei+Iu/c3TSH2vXTcLG30OerhRWeQQ9otVSxHa+VtodT4Na+zybvlSPyAhR5o/6UJbmXu dM/POb9Div5Bn/pnj6QiLqMYX8xtVjpFvhO/KLM19EHdkoKF3rLZvz8QYCgQazMENG1T NpxCxJjmwS1uzsKhcnj74n5No36Bu5Inpv396eQW4J+bVwyz1tQofrWO3tXfsB4hJpuN 2FJLrRGhOFqLNmKOIuvTvmFPdyVaVP49VpD3xq3KkY0ddpWd7y/qukIsJu2r6NcGPvtF G46A== X-Gm-Message-State: AOAM532LjuYWieHA5HIhN0maII48CRtz3J/l3c6Pj1zPsMqTmklf1hRz R9gE0CxGF9gFfPiuEa8rbPqD3YBU3N5abQ== X-Received: by 2002:a17:90a:c58b:: with SMTP id l11mr4970575pjt.212.1629469342137; Fri, 20 Aug 2021 07:22:22 -0700 (PDT) Received: from Pavilion-x360.bbrouter ([103.157.220.241]) by smtp.googlemail.com with ESMTPSA id x19sm8276043pgk.37.2021.08.20.07.22.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Aug 2021 07:22:21 -0700 (PDT) From: Shubhanshu Saxena To: ffmpeg-devel@ffmpeg.org Date: Fri, 20 Aug 2021 19:51:01 +0530 Message-Id: <20210820142103.14953-4-shubhanshu.e01@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210820142103.14953-1-shubhanshu.e01@gmail.com> References: <20210820142103.14953-1-shubhanshu.e01@gmail.com> MIME-Version: 1.0 Subject: [FFmpeg-devel] [PATCH 4/6] libavfilter: Remove Async Flag from DNN Filter Side X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , 
Remove async flag from filter's perspective after the unification
of async and sync modes in the DNN backend.

Signed-off-by: Shubhanshu Saxena
---
 doc/filters.texi                 | 14 ++++----------
 libavfilter/dnn/dnn_backend_tf.c |  7 +++++++
 libavfilter/dnn_filter_common.c  |  7 -------
 libavfilter/dnn_filter_common.h  |  2 +-
 4 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 480cab706c..fac19ac413 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -10275,11 +10275,8 @@ and the second line is the name of label id 1, etc.
 The label id is considered as name if the label file is not provided.
 
 @item backend_configs
-Set the configs to be passed into backend
-
-@item async
-use DNN async execution if set (default: set),
-roll back to sync execution if the backend does not support async.
+Set the configs to be passed into backend. To use async execution, set async (default: set).
+Roll back to sync execution if the backend does not support async.
 
 @end table
 
@@ -10331,15 +10328,12 @@ Set the input name of the dnn network.
 Set the output name of the dnn network.
 
 @item backend_configs
-Set the configs to be passed into backend
+Set the configs to be passed into backend. To use async execution, set async (default: set).
+Roll back to sync execution if the backend does not support async.
 
 For tensorflow backend, you can set its configs with @option{sess_config} options,
 please use tools/python/tf_sess_config.py to get the configs of
 TensorFlow backend for your system.
 
-@item async
-use DNN async execution if set (default: set),
-roll back to sync execution if the backend does not support async.
-
 @end table
 
 @subsection Examples

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 4a0b561f29..906934d8c0 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -884,6 +884,13 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         ctx->options.nireq = av_cpu_count() / 2 + 1;
     }
 
+#if !HAVE_PTHREAD_CANCEL
+    if (ctx->options.async) {
+        ctx->options.async = 0;
+        av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n");
+    }
+#endif
+
     tf_model->request_queue = ff_safe_queue_create();
     if (!tf_model->request_queue) {
         goto err;

diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c
index 455eaa37f4..3045ce0131 100644
--- a/libavfilter/dnn_filter_common.c
+++ b/libavfilter/dnn_filter_common.c
@@ -84,13 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil
         return AVERROR(EINVAL);
     }
 
-#if !HAVE_PTHREAD_CANCEL
-    if (ctx->async) {
-        ctx->async = 0;
-        av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n");
-    }
-#endif
-
     return 0;
 }

diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h
index 4d92c1dc36..635ae631c1 100644
--- a/libavfilter/dnn_filter_common.h
+++ b/libavfilter/dnn_filter_common.h
@@ -46,7 +46,7 @@ typedef struct DnnContext {
     { "output", "output name of the model", OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
     { "backend_configs", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
     { "options", "backend configs (deprecated, use backend_configs)", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS | AV_OPT_FLAG_DEPRECATED},\
-    { "async", "use DNN async inference", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS},
+    { "async", "use DNN async inference (ignored, use backend_configs='async=1')", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS},
 
 int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);

From patchwork Fri Aug 20 14:21:02 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29639
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Fri, 20 Aug 2021 19:51:02 +0530
Message-Id: <20210820142103.14953-5-shubhanshu.e01@gmail.com>
In-Reply-To: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
References: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH 5/6] lavfi/dnn: Rename InferenceItem to LastLevelTaskItem

This patch renames InferenceItem to LastLevelTaskItem in the three
backends to avoid confusion between the meanings of these structs.

The following renames are done in this patch:

1. extract_inference_from_task -> extract_lltask_from_task
2. InferenceItem -> LastLevelTaskItem
3. inference_queue -> lltask_queue
4. inference -> lltask
5. inference_count -> lltask_count

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.h   |   4 +-
 libavfilter/dnn/dnn_backend_native.c   |  58 ++++++------
 libavfilter/dnn/dnn_backend_native.h   |   2 +-
 libavfilter/dnn/dnn_backend_openvino.c | 110 ++++++++++++-------------
 libavfilter/dnn/dnn_backend_tf.c       |  76 ++++++++---------
 5 files changed, 125 insertions(+), 125 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h
index 78e62a94a2..6b6a5e21ae 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -47,10 +47,10 @@ typedef struct TaskItem {
 } TaskItem;
 
 // one task might have multiple inferences
-typedef struct InferenceItem {
+typedef struct LastLevelTaskItem {
     TaskItem *task;
     uint32_t bbox_index;
-} InferenceItem;
+} LastLevelTaskItem;
 
 /**
  * Common Async Execution Mechanism for the DNN Backends.
diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index 2d34b88f8a..13436c0484 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -46,25 +46,25 @@ static const AVClass dnn_native_class = { .category = AV_CLASS_CATEGORY_FILTER, }; -static DNNReturnType execute_model_native(Queue *inference_queue); +static DNNReturnType execute_model_native(Queue *lltask_queue); -static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { NativeModel *native_model = task->model; NativeContext *ctx = &native_model->ctx; - InferenceItem *inference = av_malloc(sizeof(*inference)); + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); - if (!inference) { - av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + if (!lltask) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; + lltask->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); - av_freep(&inference); + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n"); + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -116,13 +116,13 @@ static DNNReturnType get_output_native(void *model, const char *input_name, int goto err; } - if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(&task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } - ret = 
execute_model_native(native_model->inference_queue); + ret = execute_model_native(native_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; @@ -223,8 +223,8 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; } - native_model->inference_queue = ff_queue_create(); - if (!native_model->inference_queue) { + native_model->lltask_queue = ff_queue_create(); + if (!native_model->lltask_queue) { goto fail; } @@ -297,24 +297,24 @@ fail: return NULL; } -static DNNReturnType execute_model_native(Queue *inference_queue) +static DNNReturnType execute_model_native(Queue *lltask_queue) { NativeModel *native_model = NULL; NativeContext *ctx = NULL; int32_t layer; DNNData input, output; DnnOperand *oprd = NULL; - InferenceItem *inference = NULL; + LastLevelTaskItem *lltask = NULL; TaskItem *task = NULL; DNNReturnType ret = 0; - inference = ff_queue_pop_front(inference_queue); - if (!inference) { - av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n"); + lltask = ff_queue_pop_front(lltask_queue); + if (!lltask) { + av_log(NULL, AV_LOG_ERROR, "Failed to get LastLevelTaskItem\n"); ret = DNN_ERROR; goto err; } - task = inference->task; + task = lltask->task; native_model = task->model; ctx = &native_model->ctx; @@ -428,7 +428,7 @@ static DNNReturnType execute_model_native(Queue *inference_queue) } task->inference_done++; err: - av_freep(&inference); + av_freep(&lltask); return ret; } @@ -459,26 +459,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara return DNN_ERROR; } - if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); return DNN_ERROR; } - return 
execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNReturnType ff_dnn_flush_native(const DNNModel *model) { NativeModel *native_model = model->model; - if (ff_queue_size(native_model->inference_queue) == 0) { + if (ff_queue_size(native_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } // for now, use sync node with flush operation // Switch to async when it is supported - return execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out) @@ -536,11 +536,11 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&native_model->operands); } - while (ff_queue_size(native_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(native_model->inference_queue); + while (ff_queue_size(native_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(native_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(native_model->inference_queue); + ff_queue_destroy(native_model->lltask_queue); while (ff_queue_size(native_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(native_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index ca61bb353f..e8017ee4b4 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -129,7 +129,7 @@ typedef struct NativeModel{ DnnOperand *operands; int32_t operands_num; Queue *task_queue; - Queue *inference_queue; + Queue *lltask_queue; } NativeModel; DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index abf3f4b6b6..76dc06c6d7 100644 --- 
a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -57,14 +57,14 @@ typedef struct OVModel{ ie_executable_network_t *exe_network; SafeQueue *request_queue; // holds OVRequestItem Queue *task_queue; // holds TaskItem - Queue *inference_queue; // holds InferenceItem + Queue *lltask_queue; // holds LastLevelTaskItem } OVModel; // one request for one call to openvino typedef struct OVRequestItem { ie_infer_request_t *infer_request; - InferenceItem **inferences; - uint32_t inference_count; + LastLevelTaskItem **lltasks; + uint32_t lltask_count; ie_complete_call_back_t callback; } OVRequestItem; @@ -121,12 +121,12 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque IEStatusCode status; DNNData input; ie_blob_t *input_blob = NULL; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; - inference = ff_queue_peek_front(ov_model->inference_queue); - av_assert0(inference); - task = inference->task; + lltask = ff_queue_peek_front(ov_model->lltask_queue); + av_assert0(lltask); + task = lltask->task; status = ie_infer_request_get_blob(request->infer_request, task->input_name, &input_blob); if (status != OK) { @@ -159,13 +159,13 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque input.order = DCO_BGR; for (int i = 0; i < ctx->options.batch_size; ++i) { - inference = ff_queue_pop_front(ov_model->inference_queue); - if (!inference) { + lltask = ff_queue_pop_front(ov_model->lltask_queue); + if (!lltask) { break; } - request->inferences[i] = inference; - request->inference_count = i + 1; - task = inference->task; + request->lltasks[i] = lltask; + request->lltask_count = i + 1; + task = lltask->task; switch (ov_model->model->func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { @@ -180,7 +180,7 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque ff_frame_to_dnn_detect(task->in_frame, &input, ctx); break; case 
DFT_ANALYTICS_CLASSIFY: - ff_frame_to_dnn_classify(task->in_frame, &input, inference->bbox_index, ctx); + ff_frame_to_dnn_classify(task->in_frame, &input, lltask->bbox_index, ctx); break; default: av_assert0(!"should not reach here"); @@ -200,8 +200,8 @@ static void infer_completion_callback(void *args) precision_e precision; IEStatusCode status; OVRequestItem *request = args; - InferenceItem *inference = request->inferences[0]; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltasks[0]; + TaskItem *task = lltask->task; OVModel *ov_model = task->model; SafeQueue *requestq = ov_model->request_queue; ie_blob_t *output_blob = NULL; @@ -248,10 +248,10 @@ static void infer_completion_callback(void *args) output.dt = precision_to_datatype(precision); output.data = blob_buffer.buffer; - av_assert0(request->inference_count <= dims.dims[0]); - av_assert0(request->inference_count >= 1); - for (int i = 0; i < request->inference_count; ++i) { - task = request->inferences[i]->task; + av_assert0(request->lltask_count <= dims.dims[0]); + av_assert0(request->lltask_count >= 1); + for (int i = 0; i < request->lltask_count; ++i) { + task = request->lltasks[i]->task; task->inference_done++; switch (ov_model->model->func_type) { @@ -279,20 +279,20 @@ static void infer_completion_callback(void *args) av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n"); return; } - ov_model->model->classify_post_proc(task->out_frame, &output, request->inferences[i]->bbox_index, ov_model->model->filter_ctx); + ov_model->model->classify_post_proc(task->out_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx); break; default: av_assert0(!"should not reach here"); break; } - av_freep(&request->inferences[i]); + av_freep(&request->lltasks[i]); output.data = (uint8_t *)output.data + output.width * output.height * output.channels * get_datatype_size(output.dt); } ie_blob_free(&output_blob); - request->inference_count = 0; + 
request->lltask_count = 0; if (ff_safe_queue_push_back(requestq, request) < 0) { ie_infer_request_free(&request->infer_request); av_freep(&request); @@ -399,11 +399,11 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - item->inferences = av_malloc_array(ctx->options.batch_size, sizeof(*item->inferences)); - if (!item->inferences) { + item->lltasks = av_malloc_array(ctx->options.batch_size, sizeof(*item->lltasks)); + if (!item->lltasks) { goto err; } - item->inference_count = 0; + item->lltask_count = 0; } ov_model->task_queue = ff_queue_create(); @@ -411,8 +411,8 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - ov_model->inference_queue = ff_queue_create(); - if (!ov_model->inference_queue) { + ov_model->lltask_queue = ff_queue_create(); + if (!ov_model->lltask_queue) { goto err; } @@ -427,7 +427,7 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) { IEStatusCode status; DNNReturnType ret; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; OVContext *ctx; OVModel *ov_model; @@ -438,8 +438,8 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) return DNN_SUCCESS; } - inference = ff_queue_peek_front(inferenceq); - task = inference->task; + lltask = ff_queue_peek_front(inferenceq); + task = lltask->task; ov_model = task->model; ctx = &ov_model->ctx; @@ -567,21 +567,21 @@ static int contain_valid_detection_bbox(AVFrame *frame) return 1; } -static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, TaskItem *task, Queue *inference_queue, DNNExecBaseParams *exec_params) +static DNNReturnType extract_lltask_from_task(DNNFunctionType func_type, TaskItem *task, Queue *lltask_queue, DNNExecBaseParams *exec_params) { switch (func_type) { case DFT_PROCESS_FRAME: case DFT_ANALYTICS_DETECT: { - InferenceItem *inference = av_malloc(sizeof(*inference)); - if (!inference) { + 
LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -604,7 +604,7 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task header = (const AVDetectionBBoxHeader *)sd->data; for (uint32_t i = 0; i < header->nb_bboxes; i++) { - InferenceItem *inference; + LastLevelTaskItem *lltask; const AVDetectionBBox *bbox = av_get_detection_bbox(header, i); if (params->target) { @@ -613,15 +613,15 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task } } - inference = av_malloc(sizeof(*inference)); - if (!inference) { + lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } task->inference_todo++; - inference->task = task; - inference->bbox_index = i; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + lltask->bbox_index = i; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } } @@ -679,8 +679,8 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu return DNN_ERROR; } - if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, NULL) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(ov_model->model->func_type, &task, ov_model->lltask_queue, NULL) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } @@ -692,7 +692,7 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu goto err; } - ret = execute_model_ov(request, 
ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; err: @@ -794,20 +794,20 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * return DNN_ERROR; } - if (extract_inference_from_task(model->func_type, task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) { + if (extract_lltask_from_task(model->func_type, task, ov_model->lltask_queue, exec_params) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); return DNN_ERROR; } if (ctx->options.async) { - while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + while (ff_queue_size(ov_model->lltask_queue) >= ctx->options.batch_size) { request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - ret = execute_model_ov(request, ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); if (ret != DNN_SUCCESS) { return ret; } @@ -832,7 +832,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - return execute_model_ov(request, ov_model->inference_queue); + return execute_model_ov(request, ov_model->lltask_queue); } } @@ -850,7 +850,7 @@ DNNReturnType ff_dnn_flush_ov(const DNNModel *model) IEStatusCode status; DNNReturnType ret; - if (ff_queue_size(ov_model->inference_queue) == 0) { + if (ff_queue_size(ov_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } @@ -889,16 +889,16 @@ void ff_dnn_free_model_ov(DNNModel **model) if (item && item->infer_request) { ie_infer_request_free(&item->infer_request); } - av_freep(&item->inferences); + av_freep(&item->lltasks); av_freep(&item); } ff_safe_queue_destroy(ov_model->request_queue); - while 
(ff_queue_size(ov_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(ov_model->inference_queue); + while (ff_queue_size(ov_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(ov_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(ov_model->inference_queue); + ff_queue_destroy(ov_model->lltask_queue); while (ff_queue_size(ov_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(ov_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 906934d8c0..dfac58b357 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -58,7 +58,7 @@ typedef struct TFModel{ TF_Session *session; TF_Status *status; SafeQueue *request_queue; - Queue *inference_queue; + Queue *lltask_queue; Queue *task_queue; } TFModel; @@ -75,7 +75,7 @@ typedef struct TFInferRequest { typedef struct TFRequestItem { TFInferRequest *infer_request; - InferenceItem *inference; + LastLevelTaskItem *lltask; TF_Status *status; DNNAsyncExecModule exec_module; } TFRequestItem; @@ -90,7 +90,7 @@ static const AVOption dnn_tensorflow_options[] = { AVFILTER_DEFINE_CLASS(dnn_tensorflow); -static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue); +static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue); static void infer_completion_callback(void *args); static inline void destroy_request_item(TFRequestItem **arg); @@ -158,8 +158,8 @@ static DNNReturnType tf_start_inference(void *args) { TFRequestItem *request = args; TFInferRequest *infer_request = request->infer_request; - InferenceItem *inference = request->inference; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltask; + TaskItem *task = lltask->task; TFModel *tf_model = task->model; if (!request) { @@ -196,27 +196,27 @@ static inline void destroy_request_item(TFRequestItem **arg) { request = *arg; tf_free_request(request->infer_request); 
     av_freep(&request->infer_request);
-    av_freep(&request->inference);
+    av_freep(&request->lltask);
     TF_DeleteStatus(request->status);
     ff_dnn_async_module_cleanup(&request->exec_module);
     av_freep(arg);
 }
 
-static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue)
+static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue)
 {
     TFModel *tf_model = task->model;
     TFContext *ctx = &tf_model->ctx;
-    InferenceItem *inference = av_malloc(sizeof(*inference));
-    if (!inference) {
-        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n");
+    LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask));
+    if (!lltask) {
+        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n");
         return DNN_ERROR;
     }
     task->inference_todo = 1;
     task->inference_done = 0;
-    inference->task = task;
-    if (ff_queue_push_back(inference_queue, inference) < 0) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n");
-        av_freep(&inference);
+    lltask->task = task;
+    if (ff_queue_push_back(lltask_queue, lltask) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n");
+        av_freep(&lltask);
         return DNN_ERROR;
     }
     return DNN_SUCCESS;
@@ -333,7 +333,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu
         goto err;
     }
 
-    if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) {
+    if (extract_lltask_from_task(&task, tf_model->lltask_queue) != DNN_SUCCESS) {
         av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
         ret = DNN_ERROR;
         goto err;
@@ -346,7 +346,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu
         goto err;
     }
 
-    ret = execute_model_tf(request, tf_model->inference_queue);
+    ret = execute_model_tf(request, tf_model->lltask_queue);
     *output_width = task.out_frame->width;
     *output_height = task.out_frame->height;
 
@@ -901,7 +901,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         if (!item) {
             goto err;
         }
-        item->inference = NULL;
+        item->lltask = NULL;
         item->infer_request = tf_create_inference_request();
         if (!item->infer_request) {
             av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for TensorFlow inference request\n");
@@ -919,8 +919,8 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         }
     }
 
-    tf_model->inference_queue = ff_queue_create();
-    if (!tf_model->inference_queue) {
+    tf_model->lltask_queue = ff_queue_create();
+    if (!tf_model->lltask_queue) {
         goto err;
     }
 
@@ -944,15 +944,15 @@ err:
 static DNNReturnType fill_model_input_tf(TFModel *tf_model, TFRequestItem *request)
 {
     DNNData input;
-    InferenceItem *inference;
+    LastLevelTaskItem *lltask;
     TaskItem *task;
     TFInferRequest *infer_request;
     TFContext *ctx = &tf_model->ctx;
 
-    inference = ff_queue_pop_front(tf_model->inference_queue);
-    av_assert0(inference);
-    task = inference->task;
-    request->inference = inference;
+    lltask = ff_queue_pop_front(tf_model->lltask_queue);
+    av_assert0(lltask);
+    task = lltask->task;
+    request->lltask = lltask;
 
     if (get_input_tf(tf_model, &input, task->input_name) != DNN_SUCCESS) {
         goto err;
@@ -1030,8 +1030,8 @@ err:
 static void infer_completion_callback(void *args)
 {
     TFRequestItem *request = args;
-    InferenceItem *inference = request->inference;
-    TaskItem *task = inference->task;
+    LastLevelTaskItem *lltask = request->lltask;
+    TaskItem *task = lltask->task;
     DNNData *outputs;
     TFInferRequest *infer_request = request->infer_request;
     TFModel *tf_model = task->model;
@@ -1086,20 +1086,20 @@ err:
     }
 }
 
-static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue)
+static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue)
 {
     TFModel *tf_model;
     TFContext *ctx;
-    InferenceItem *inference;
+    LastLevelTaskItem *lltask;
     TaskItem *task;
 
-    if (ff_queue_size(inference_queue) == 0) {
+    if (ff_queue_size(lltask_queue) == 0) {
         destroy_request_item(&request);
         return DNN_SUCCESS;
     }
 
-    inference = ff_queue_peek_front(inference_queue);
-    task = inference->task;
+    lltask = ff_queue_peek_front(lltask_queue);
+    task = lltask->task;
     tf_model = task->model;
     ctx = &tf_model->ctx;
 
@@ -1155,8 +1155,8 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *
         return DNN_ERROR;
     }
 
-    if (extract_inference_from_task(task, tf_model->inference_queue) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+    if (extract_lltask_from_task(task, tf_model->lltask_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
         return DNN_ERROR;
     }
 
@@ -1165,7 +1165,7 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *
         av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
         return DNN_ERROR;
     }
-    return execute_model_tf(request, tf_model->inference_queue);
+    return execute_model_tf(request, tf_model->lltask_queue);
 }
 
 DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out)
@@ -1181,7 +1181,7 @@ DNNReturnType ff_dnn_flush_tf(const DNNModel *model)
     TFRequestItem *request;
     DNNReturnType ret;
 
-    if (ff_queue_size(tf_model->inference_queue) == 0) {
+    if (ff_queue_size(tf_model->lltask_queue) == 0) {
         // no pending task need to flush
         return DNN_SUCCESS;
     }
@@ -1216,11 +1216,11 @@ void ff_dnn_free_model_tf(DNNModel **model)
     }
     ff_safe_queue_destroy(tf_model->request_queue);
 
-    while (ff_queue_size(tf_model->inference_queue) != 0) {
-        InferenceItem *item = ff_queue_pop_front(tf_model->inference_queue);
+    while (ff_queue_size(tf_model->lltask_queue) != 0) {
+        LastLevelTaskItem *item = ff_queue_pop_front(tf_model->lltask_queue);
         av_freep(&item);
     }
-    ff_queue_destroy(tf_model->inference_queue);
+    ff_queue_destroy(tf_model->lltask_queue);
 
     while (ff_queue_size(tf_model->task_queue) != 0) {
         TaskItem *item = ff_queue_pop_front(tf_model->task_queue);
From patchwork Fri Aug 20 14:21:03 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29640
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Fri, 20 Aug 2021 19:51:03 +0530
Message-Id: <20210820142103.14953-6-shubhanshu.e01@gmail.com>
In-Reply-To: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
References: <20210820142103.14953-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH 6/6] doc/filters.texi: Include dnn_processing in docs of sr and derain filter

Signed-off-by: Shubhanshu Saxena
---
 doc/filters.texi | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index fac19ac413..03a355a982 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9974,7 +9974,7 @@ Note that different backends use different file formats. TensorFlow and native
 backend can load files for only its format.
 @end table
 
-It can also be finished with @ref{dnn_processing} filter.
+To get full functionality (such as async execution), please use the @ref{dnn_processing} filter.
 
 @section deshake
 
@@ -19263,7 +19263,7 @@ Default value is @code{2}. Scale factor is necessary for SRCNN model, because it
 input upscaled using bicubic upscaling with proper scale factor.
 @end table
 
-This feature can also be finished with @ref{dnn_processing} filter.
+To get full functionality (such as async execution), please use the @ref{dnn_processing} filter.
 
 @section ssim
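As a usage sketch of the recommendation the doc change makes: the same super-resolution work the sr filter does can be expressed through dnn_processing. The model file name and tensor names below are assumptions; they depend on the model you actually export, so adjust them to your graph.

```shell
# Hypothetical invocation: run an SRCNN-style TensorFlow model via
# dnn_processing instead of the sr filter (model path and the x/y
# tensor names are placeholders for your own exported model).
ffmpeg -i input.jpg \
       -vf "dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y" \
       output.jpg
```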