From patchwork Sat Aug 21 07:59:00 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29662
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Sat, 21 Aug 2021 13:29:00 +0530
Message-Id: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2 1/6] lavfi/dnn: Task-based Inference in Native Backend

This commit rearranges the code in Native Backend to use the TaskItem for inference.
Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_native.c | 176 ++++++++++++++++++---------
 libavfilter/dnn/dnn_backend_native.h |   2 +
 2 files changed, 121 insertions(+), 57 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index a6be27f1fd..3b2a3aa55d 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -45,9 +45,29 @@ static const AVClass dnn_native_class = {
     .category   = AV_CLASS_CATEGORY_FILTER,
 };
 
-static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame,
-                                          const char **output_names, uint32_t nb_output, AVFrame *out_frame,
-                                          int do_ioproc);
+static DNNReturnType execute_model_native(Queue *inference_queue);
+
+static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue)
+{
+    NativeModel *native_model = task->model;
+    NativeContext *ctx = &native_model->ctx;
+    InferenceItem *inference = av_malloc(sizeof(*inference));
+
+    if (!inference) {
+        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n");
+        return DNN_ERROR;
+    }
+    task->inference_todo = 1;
+    task->inference_done = 0;
+    inference->task = task;
+
+    if (ff_queue_push_back(inference_queue, inference) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n");
+        av_freep(&inference);
+        return DNN_ERROR;
+    }
+    return DNN_SUCCESS;
+}
 
 static DNNReturnType get_input_native(void *model, DNNData *input, const char *input_name)
 {
@@ -78,34 +98,36 @@ static DNNReturnType get_input_native(void *model, DNNData *input, const char *i
 static DNNReturnType get_output_native(void *model, const char *input_name, int input_width, int input_height,
                                        const char *output_name, int *output_width, int *output_height)
 {
-    DNNReturnType ret;
+    DNNReturnType ret = 0;
     NativeModel *native_model = model;
     NativeContext *ctx = &native_model->ctx;
-    AVFrame *in_frame = av_frame_alloc();
-    AVFrame *out_frame = NULL;
-
-    if (!in_frame) {
-        av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for input frame\n");
-        return DNN_ERROR;
+    TaskItem task;
+    DNNExecBaseParams exec_params = {
+        .input_name     = input_name,
+        .output_names   = &output_name,
+        .nb_output      = 1,
+        .in_frame       = NULL,
+        .out_frame      = NULL,
+    };
+
+    if (ff_dnn_fill_gettingoutput_task(&task, &exec_params, native_model, input_height, input_width, ctx) != DNN_SUCCESS) {
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    out_frame = av_frame_alloc();
-
-    if (!out_frame) {
-        av_log(ctx, AV_LOG_ERROR, "Could not allocate memory for output frame\n");
-        av_frame_free(&in_frame);
-        return DNN_ERROR;
+    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    in_frame->width = input_width;
-    in_frame->height = input_height;
-
-    ret = execute_model_native(native_model->model, input_name, in_frame, &output_name, 1, out_frame, 0);
-    *output_width = out_frame->width;
-    *output_height = out_frame->height;
+    ret = execute_model_native(native_model->inference_queue);
+    *output_width = task.out_frame->width;
+    *output_height = task.out_frame->height;
 
-    av_frame_free(&out_frame);
-    av_frame_free(&in_frame);
+err:
+    av_frame_free(&task.out_frame);
+    av_frame_free(&task.in_frame);
     return ret;
 }
 
@@ -190,6 +212,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     }
 
+    native_model->inference_queue = ff_queue_create();
+    if (!native_model->inference_queue) {
+        goto fail;
+    }
+
     for (layer = 0; layer < native_model->layers_num; ++layer){
         layer_type = (int32_t)avio_rl32(model_file_context);
         dnn_size += 4;
@@ -259,50 +286,66 @@ fail:
     return NULL;
 }
 
-static DNNReturnType execute_model_native(const DNNModel *model, const char *input_name, AVFrame *in_frame,
-                                          const char **output_names, uint32_t nb_output, AVFrame *out_frame,
-                                          int do_ioproc)
+static DNNReturnType execute_model_native(Queue *inference_queue)
 {
-    NativeModel *native_model = model->model;
-    NativeContext *ctx = &native_model->ctx;
+    NativeModel *native_model = NULL;
+    NativeContext *ctx = NULL;
     int32_t layer;
     DNNData input, output;
     DnnOperand *oprd = NULL;
+    InferenceItem *inference = NULL;
+    TaskItem *task = NULL;
+    DNNReturnType ret = 0;
+
+    inference = ff_queue_pop_front(inference_queue);
+    if (!inference) {
+        av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n");
+        ret = DNN_ERROR;
+        goto err;
+    }
+    task = inference->task;
+    native_model = task->model;
+    ctx = &native_model->ctx;
 
     if (native_model->layers_num <= 0 || native_model->operands_num <= 0) {
         av_log(ctx, AV_LOG_ERROR, "No operands or layers in model\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     for (int i = 0; i < native_model->operands_num; ++i) {
         oprd = &native_model->operands[i];
-        if (strcmp(oprd->name, input_name) == 0) {
+        if (strcmp(oprd->name, task->input_name) == 0) {
             if (oprd->type != DOT_INPUT) {
-                av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", input_name);
-                return DNN_ERROR;
+                av_log(ctx, AV_LOG_ERROR, "Found \"%s\" in model, but it is not input node\n", task->input_name);
+                ret = DNN_ERROR;
+                goto err;
             }
             break;
         }
         oprd = NULL;
     }
     if (!oprd) {
-        av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", input_name);
-        return DNN_ERROR;
+        av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", task->input_name);
+        ret = DNN_ERROR;
+        goto err;
     }
 
-    oprd->dims[1] = in_frame->height;
-    oprd->dims[2] = in_frame->width;
+    oprd->dims[1] = task->in_frame->height;
+    oprd->dims[2] = task->in_frame->width;
 
     av_freep(&oprd->data);
     oprd->length = ff_calculate_operand_data_length(oprd);
     if (oprd->length <= 0) {
         av_log(ctx, AV_LOG_ERROR, "The input data length overflow\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
     oprd->data = av_malloc(oprd->length);
     if (!oprd->data) {
         av_log(ctx, AV_LOG_ERROR, "Failed to malloc memory for input data\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     input.height = oprd->dims[1];
@@ -310,19 +353,20 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
     input.channels = oprd->dims[3];
     input.data = oprd->data;
     input.dt = oprd->data_type;
-    if (do_ioproc) {
+    if (task->do_ioproc) {
         if (native_model->model->frame_pre_proc != NULL) {
-            native_model->model->frame_pre_proc(in_frame, &input, native_model->model->filter_ctx);
+            native_model->model->frame_pre_proc(task->in_frame, &input, native_model->model->filter_ctx);
         } else {
-            ff_proc_from_frame_to_dnn(in_frame, &input, ctx);
+            ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
         }
     }
 
-    if (nb_output != 1) {
+    if (task->nb_output != 1) {
         // currently, the filter does not need multiple outputs,
         // so we just pending the support until we really need it.
         avpriv_report_missing_feature(ctx, "multiple outputs");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto err;
     }
 
     for (layer = 0; layer < native_model->layers_num; ++layer){
@@ -333,13 +377,14 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
                                            native_model->layers[layer].params,
                                            &native_model->ctx) == DNN_ERROR) {
             av_log(ctx, AV_LOG_ERROR, "Failed to execute model\n");
-            return DNN_ERROR;
+            ret = DNN_ERROR;
+            goto err;
         }
     }
 
-    for (uint32_t i = 0; i < nb_output; ++i) {
+    for (uint32_t i = 0; i < task->nb_output; ++i) {
         DnnOperand *oprd = NULL;
-        const char *output_name = output_names[i];
+        const char *output_name = task->output_names[i];
         for (int j = 0; j < native_model->operands_num; ++j) {
             if (strcmp(native_model->operands[j].name, output_name) == 0) {
                 oprd = &native_model->operands[j];
@@ -349,7 +394,8 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
 
         if (oprd == NULL) {
             av_log(ctx, AV_LOG_ERROR, "Could not find output in model\n");
-            return DNN_ERROR;
+            ret = DNN_ERROR;
+            goto err;
         }
 
         output.data = oprd->data;
@@ -358,32 +404,43 @@ static DNNReturnType execute_model_native(const DNNModel *model, const char *inp
         output.channels = oprd->dims[3];
         output.dt = oprd->data_type;
 
-        if (do_ioproc) {
+        if (task->do_ioproc) {
             if (native_model->model->frame_post_proc != NULL) {
-                native_model->model->frame_post_proc(out_frame, &output, native_model->model->filter_ctx);
+                native_model->model->frame_post_proc(task->out_frame, &output, native_model->model->filter_ctx);
             } else {
-                ff_proc_from_dnn_to_frame(out_frame, &output, ctx);
+                ff_proc_from_dnn_to_frame(task->out_frame, &output, ctx);
             }
         } else {
-            out_frame->width = output.width;
-            out_frame->height = output.height;
+            task->out_frame->width = output.width;
+            task->out_frame->height = output.height;
         }
     }
-
-    return DNN_SUCCESS;
+    task->inference_done++;
+err:
+    av_freep(&inference);
+    return ret;
 }
 
 DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     NativeModel *native_model = model->model;
     NativeContext *ctx = &native_model->ctx;
+    TaskItem task;
 
     if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) {
         return DNN_ERROR;
     }
 
-    return execute_model_native(model, exec_params->input_name, exec_params->in_frame,
-                                exec_params->output_names, exec_params->nb_output, exec_params->out_frame, 1);
+    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) {
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        return DNN_ERROR;
+    }
+
+    return execute_model_native(native_model->inference_queue);
 }
 
 int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd)
@@ -435,6 +492,11 @@ void ff_dnn_free_model_native(DNNModel **model)
             av_freep(&native_model->operands);
         }
 
+        while (ff_queue_size(native_model->inference_queue) != 0) {
+            InferenceItem *item = ff_queue_pop_front(native_model->inference_queue);
+            av_freep(&item);
+        }
+
         ff_queue_destroy(native_model->inference_queue);
         av_freep(&native_model);
     }
     av_freep(model);
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 89bcb8e358..1b9d5bdf2d 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -30,6 +30,7 @@
 #include "../dnn_interface.h"
 #include "libavformat/avio.h"
 #include "libavutil/opt.h"
+#include "queue.h"
 
 /**
  * the enum value of DNNLayerType should not be changed,
@@ -126,6 +127,7 @@ typedef struct NativeModel{
     int32_t layers_num;
     DnnOperand *operands;
     int32_t operands_num;
+    Queue *inference_queue;
 } NativeModel;
 
 DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);

From patchwork Sat Aug 21 07:59:01 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29660
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Sat, 21 Aug 2021 13:29:01 +0530
Message-Id: <20210821075905.12850-2-shubhanshu.e01@gmail.com>
In-Reply-To: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
References: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2 2/6] libavfilter: Unify Execution Modes in DNN Filters

This commit unifies the async and sync mode from the DNN filters' perspective. As of this commit, the Native backend only supports synchronous execution mode. Now the user can switch between async and sync mode by using the 'async' option in the backend_configs. The values can be 1 for async and 0 for sync mode of execution.

This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5.
vf_derain

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.c   |  2 +-
 libavfilter/dnn/dnn_backend_common.h   |  5 +-
 libavfilter/dnn/dnn_backend_native.c   | 59 +++++++++++++++-
 libavfilter/dnn/dnn_backend_native.h   |  6 ++
 libavfilter/dnn/dnn_backend_openvino.c | 94 ++++++++++----------------
 libavfilter/dnn/dnn_backend_openvino.h |  3 +-
 libavfilter/dnn/dnn_backend_tf.c       | 35 ++--------
 libavfilter/dnn/dnn_backend_tf.h       |  3 +-
 libavfilter/dnn/dnn_interface.c        |  8 +--
 libavfilter/dnn_filter_common.c        | 23 +------
 libavfilter/dnn_filter_common.h        |  3 +-
 libavfilter/dnn_interface.h            |  4 +-
 libavfilter/vf_derain.c                |  7 ++
 libavfilter/vf_dnn_classify.c          |  4 +-
 libavfilter/vf_dnn_detect.c            | 10 +--
 libavfilter/vf_dnn_processing.c        | 10 +--
 libavfilter/vf_sr.c                    |  8 +++
 17 files changed, 142 insertions(+), 142 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c
index 426683b73d..d2bc016fef 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -138,7 +138,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
     return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
 {
     TaskItem *task = ff_queue_peek_front(task_queue);
 
diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h
index 604c1d3bd7..78e62a94a2 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -29,7 +29,8 @@
 #include "libavutil/thread.h"
 
 #define DNN_BACKEND_COMMON_OPTIONS \
-    { "nireq",  "number of request",       OFFSET(options.nireq),  AV_OPT_TYPE_INT,  { .i64 = 0 }, 0, INT_MAX, FLAGS },
+    { "nireq",  "number of request",       OFFSET(options.nireq),  AV_OPT_TYPE_INT,  { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
+    { "async",  "use DNN async inference", OFFSET(options.async),  AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1,       FLAGS },
 
 // one task for one function call from dnn interface
 typedef struct TaskItem {
@@ -135,7 +136,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
  * @retval DAST_NOT_READY if inference not completed yet.
 * @retval DAST_SUCCESS if result successfully extracted
 */
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out);
 
 /**
  * Allocate input and output frames and fill the Task
diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index 3b2a3aa55d..2d34b88f8a 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -34,6 +34,7 @@
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
 static const AVOption dnn_native_options[] = {
     { "conv2d_threads", "threads num for conv2d layer", OFFSET(options.conv2d_threads), AV_OPT_TYPE_INT,  { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS },
+    { "async",          "use DNN async inference",      OFFSET(options.async),          AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0,       1,       FLAGS },
     { NULL },
 };
 
@@ -189,6 +190,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     native_model->model = model;
 
+    if (native_model->ctx.options.async) {
+        av_log(&native_model->ctx, AV_LOG_WARNING, "Async not supported. Rolling back to sync\n");
+        native_model->ctx.options.async = 0;
+    }
+
#if !HAVE_PTHREAD_CANCEL
     if (native_model->ctx.options.conv2d_threads > 1){
         av_log(&native_model->ctx, AV_LOG_WARNING, "'conv2d_threads' option was set but it is not supported "
@@ -212,6 +218,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     }
 
+    native_model->task_queue = ff_queue_create();
+    if (!native_model->task_queue) {
+        goto fail;
+    }
+
     native_model->inference_queue = ff_queue_create();
     if (!native_model->inference_queue) {
         goto fail;
@@ -425,17 +436,30 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara
 {
     NativeModel *native_model = model->model;
     NativeContext *ctx = &native_model->ctx;
-    TaskItem task;
+    TaskItem *task;
 
     if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) {
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) {
+    task = av_malloc(sizeof(*task));
+    if (!task) {
+        av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
         return DNN_ERROR;
     }
 
-    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, native_model, ctx->options.async, 1) != DNN_SUCCESS) {
+        av_freep(&task);
+        return DNN_ERROR;
+    }
+
+    if (ff_queue_push_back(native_model->task_queue, task) < 0) {
+        av_freep(&task);
+        av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) {
         av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
         return DNN_ERROR;
     }
@@ -443,6 +467,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara
     return execute_model_native(native_model->inference_queue);
 }
 
+DNNReturnType ff_dnn_flush_native(const DNNModel *model)
+{
+    NativeModel *native_model = model->model;
+
+    if (ff_queue_size(native_model->inference_queue) == 0) {
+        // no pending task need to flush
+        return DNN_SUCCESS;
+    }
+
+    // for now, use sync node with flush operation
+    // Switch to async when it is supported
+    return execute_model_native(native_model->inference_queue);
+}
+
+DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out)
+{
+    NativeModel *native_model = model->model;
+    return ff_dnn_get_result_common(native_model->task_queue, in, out);
+}
+
 int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd)
 {
     int32_t result = 1;
@@ -497,6 +541,15 @@ void ff_dnn_free_model_native(DNNModel **model)
             av_freep(&item);
         }
         ff_queue_destroy(native_model->inference_queue);
+
+        while (ff_queue_size(native_model->task_queue) != 0) {
+            TaskItem *item = ff_queue_pop_front(native_model->task_queue);
+            av_frame_free(&item->in_frame);
+            av_frame_free(&item->out_frame);
+            av_freep(&item);
+        }
+        ff_queue_destroy(native_model->task_queue);
+
         av_freep(&native_model);
     }
     av_freep(model);
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 1b9d5bdf2d..ca61bb353f 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -111,6 +111,7 @@ typedef struct InputParams{
 } InputParams;
 
 typedef struct NativeOptions{
+    uint8_t async;
     uint32_t conv2d_threads;
 } NativeOptions;
 
@@ -127,6 +128,7 @@ typedef struct NativeModel{
     int32_t layers_num;
     DnnOperand *operands;
     int32_t operands_num;
+    Queue *task_queue;
     Queue *inference_queue;
 } NativeModel;
 
@@ -134,6 +136,10 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
 
 DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params);
 
+DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out);
+
+DNNReturnType ff_dnn_flush_native(const DNNModel *model);
+
 void ff_dnn_free_model_native(DNNModel **model);
 
 // NOTE: User must check for error (return value <= 0) to handle
diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index c825e70c82..abf3f4b6b6 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -39,6 +39,7 @@
 typedef struct OVOptions{
     char *device_type;
     int nireq;
+    uint8_t async;
     int batch_size;
     int input_resizable;
 } OVOptions;
@@ -758,55 +759,6 @@ err:
 }
 
 DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
-{
-    OVModel *ov_model = model->model;
-    OVContext *ctx = &ov_model->ctx;
-    TaskItem task;
-    OVRequestItem *request;
-
-    if (ff_check_exec_params(ctx, DNN_OV, model->func_type, exec_params) != 0) {
-        return DNN_ERROR;
-    }
-
-    if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
-        // Once we add async support for tensorflow backend and native backend,
-        // we'll combine the two sync/async functions in dnn_interface.h to
-        // simplify the code in filter, and async will be an option within backends.
-        // so, do not support now, and classify filter will not call this function.
-        return DNN_ERROR;
-    }
-
-    if (ctx->options.batch_size > 1) {
-        avpriv_report_missing_feature(ctx, "batch mode for sync execution");
-        return DNN_ERROR;
-    }
-
-    if (!ov_model->exe_network) {
-        if (init_model_ov(ov_model, exec_params->input_name, exec_params->output_names[0]) != DNN_SUCCESS) {
-            av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n");
-            return DNN_ERROR;
-        }
-    }
-
-    if (ff_dnn_fill_task(&task, exec_params, ov_model, 0, 1) != DNN_SUCCESS) {
-        return DNN_ERROR;
-    }
-
-    if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
-    }
-
-    request = ff_safe_queue_pop_front(ov_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
-    }
-
-    return execute_model_ov(request, ov_model->inference_queue);
-}
-
-DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     OVModel *ov_model = model->model;
     OVContext *ctx = &ov_model->ctx;
@@ -831,7 +783,8 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(task, exec_params, ov_model, 1, 1) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, ov_model, ctx->options.async, 1) != DNN_SUCCESS) {
+        av_freep(&task);
         return DNN_ERROR;
     }
 
@@ -846,26 +799,47 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) {
+    if (ctx->options.async) {
+        while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) {
+            request = ff_safe_queue_pop_front(ov_model->request_queue);
+            if (!request) {
+                av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+                return DNN_ERROR;
+            }
+
+            ret = execute_model_ov(request, ov_model->inference_queue);
+            if (ret != DNN_SUCCESS) {
+                return ret;
+            }
+        }
+
+        return DNN_SUCCESS;
+    }
+    else {
+        if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
+            // Classification filter has not been completely
+            // tested with the sync mode. So, do not support now.
+            return DNN_ERROR;
+        }
+
+        if (ctx->options.batch_size > 1) {
+            avpriv_report_missing_feature(ctx, "batch mode for sync execution");
+            return DNN_ERROR;
+        }
+
         request = ff_safe_queue_pop_front(ov_model->request_queue);
         if (!request) {
             av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
             return DNN_ERROR;
         }
-
-        ret = execute_model_ov(request, ov_model->inference_queue);
-        if (ret != DNN_SUCCESS) {
-            return ret;
-        }
+        return execute_model_ov(request, ov_model->inference_queue);
     }
-
-    return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
     OVModel *ov_model = model->model;
-    return ff_dnn_get_async_result_common(ov_model->task_queue, in, out);
+    return ff_dnn_get_result_common(ov_model->task_queue, in, out);
 }
 
 DNNReturnType ff_dnn_flush_ov(const DNNModel *model)
diff --git a/libavfilter/dnn/dnn_backend_openvino.h b/libavfilter/dnn/dnn_backend_openvino.h
index 046d0c5b5a..0bbca0c057 100644
--- a/libavfilter/dnn/dnn_backend_openvino.h
+++ b/libavfilter/dnn/dnn_backend_openvino.h
@@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
 
 DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
 DNNReturnType ff_dnn_flush_ov(const DNNModel *model);
 
 void ff_dnn_free_model_ov(DNNModel **model);
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index ffec1b1328..4a0b561f29 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -42,6 +42,7 @@
 typedef struct TFOptions{
     char *sess_config;
+    uint8_t async;
     uint32_t nireq;
 } TFOptions;
 
@@ -1121,34 +1122,6 @@ err:
 
 DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
-    TFModel *tf_model = model->model;
-    TFContext *ctx = &tf_model->ctx;
-    TaskItem task;
-    TFRequestItem *request;
-
-    if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0) {
-        return DNN_ERROR;
-    }
-
-    if (ff_dnn_fill_task(&task, exec_params, tf_model, 0, 1) != DNN_SUCCESS) {
-        return DNN_ERROR;
-    }
-
-    if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
-    }
-
-    request = ff_safe_queue_pop_front(tf_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
-    }
-
-    return execute_model_tf(request, tf_model->inference_queue);
-}
-
-DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
-{
     TFModel *tf_model = model->model;
     TFContext *ctx = &tf_model->ctx;
     TaskItem *task;
@@ -1164,7 +1137,7 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(task, exec_params, tf_model, 1, 1) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, tf_model, ctx->options.async, 1) != DNN_SUCCESS) {
         av_freep(&task);
         return DNN_ERROR;
     }
@@ -1188,10 +1161,10 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa
     return execute_model_tf(request, tf_model->inference_queue);
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_tf(const
DNNModel *model, AVFrame **in, AVFrame **out) +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out) { TFModel *tf_model = model->model; - return ff_dnn_get_async_result_common(tf_model->task_queue, in, out); + return ff_dnn_get_result_common(tf_model->task_queue, in, out); } DNNReturnType ff_dnn_flush_tf(const DNNModel *model) diff --git a/libavfilter/dnn/dnn_backend_tf.h b/libavfilter/dnn/dnn_backend_tf.h index aec0fc2011..f14ea8c47a 100644 --- a/libavfilter/dnn/dnn_backend_tf.h +++ b/libavfilter/dnn/dnn_backend_tf.h @@ -32,8 +32,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params); -DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); +DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out); DNNReturnType ff_dnn_flush_tf(const DNNModel *model); void ff_dnn_free_model_tf(DNNModel **model); diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c index 81af934dd5..554a36b0dc 100644 --- a/libavfilter/dnn/dnn_interface.c +++ b/libavfilter/dnn/dnn_interface.c @@ -42,14 +42,15 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) case DNN_NATIVE: dnn_module->load_model = &ff_dnn_load_model_native; dnn_module->execute_model = &ff_dnn_execute_model_native; + dnn_module->get_result = &ff_dnn_get_result_native; + dnn_module->flush = &ff_dnn_flush_native; dnn_module->free_model = &ff_dnn_free_model_native; break; case DNN_TF: #if (CONFIG_LIBTENSORFLOW == 1) dnn_module->load_model = &ff_dnn_load_model_tf; dnn_module->execute_model = &ff_dnn_execute_model_tf; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_tf; - 
dnn_module->get_async_result = &ff_dnn_get_async_result_tf; + dnn_module->get_result = &ff_dnn_get_result_tf; dnn_module->flush = &ff_dnn_flush_tf; dnn_module->free_model = &ff_dnn_free_model_tf; #else @@ -61,8 +62,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type) #if (CONFIG_LIBOPENVINO == 1) dnn_module->load_model = &ff_dnn_load_model_ov; dnn_module->execute_model = &ff_dnn_execute_model_ov; - dnn_module->execute_model_async = &ff_dnn_execute_model_async_ov; - dnn_module->get_async_result = &ff_dnn_get_async_result_ov; + dnn_module->get_result = &ff_dnn_get_result_ov; dnn_module->flush = &ff_dnn_flush_ov; dnn_module->free_model = &ff_dnn_free_model_ov; #else diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index fcee7482c3..455eaa37f4 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -84,11 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } - if (!ctx->dnn_module->execute_model_async && ctx->async) { - ctx->async = 0; - av_log(filter_ctx, AV_LOG_WARNING, "this backend does not support async execution, roll back to sync.\n"); - } - #if !HAVE_PTHREAD_CANCEL if (ctx->async) { ctx->async = 0; @@ -141,18 +136,6 @@ DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame * return (ctx->dnn_module->execute_model)(ctx->model, &exec_params); } -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame) -{ - DNNExecBaseParams exec_params = { - .input_name = ctx->model_inputname, - .output_names = (const char **)ctx->model_outputnames, - .nb_output = ctx->nb_outputs, - .in_frame = in_frame, - .out_frame = out_frame, - }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &exec_params); -} - DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target) { DNNExecClassificationParams class_params = { @@ 
-165,12 +148,12 @@ DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_f }, .target = target, }; - return (ctx->dnn_module->execute_model_async)(ctx->model, &class_params.base); + return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base); } -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame) { - return (ctx->dnn_module->get_async_result)(ctx->model, in_frame, out_frame); + return (ctx->dnn_module->get_result)(ctx->model, in_frame, out_frame); } DNNReturnType ff_dnn_flush(DnnContext *ctx) diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 8e43d8c8dc..4d92c1dc36 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -56,9 +56,8 @@ int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc); DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input); DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height); DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); -DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame); DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target); -DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); +DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame); DNNReturnType ff_dnn_flush(DnnContext *ctx); void ff_dnn_uninit(DnnContext *ctx); diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h index 5e9ffeb077..37e89d9789 100644 --- a/libavfilter/dnn_interface.h +++ b/libavfilter/dnn_interface.h @@ -114,10 +114,8 @@ typedef struct DNNModule{ DNNModel 
*(*load_model)(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); // Executes model with specified input and output. Returns DNN_ERROR otherwise. DNNReturnType (*execute_model)(const DNNModel *model, DNNExecBaseParams *exec_params); - // Executes model with specified input and output asynchronously. Returns DNN_ERROR otherwise. - DNNReturnType (*execute_model_async)(const DNNModel *model, DNNExecBaseParams *exec_params); // Retrieve inference result. - DNNAsyncStatusType (*get_async_result)(const DNNModel *model, AVFrame **in, AVFrame **out); + DNNAsyncStatusType (*get_result)(const DNNModel *model, AVFrame **in, AVFrame **out); // Flush all the pending tasks. DNNReturnType (*flush)(const DNNModel *model); // Frees memory allocated for model. diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c index 720cc70f5c..97bdb4e843 100644 --- a/libavfilter/vf_derain.c +++ b/libavfilter/vf_derain.c @@ -68,6 +68,7 @@ static int query_formats(AVFilterContext *ctx) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *ctx = inlink->dst; AVFilterLink *outlink = ctx->outputs[0]; DRContext *dr_context = ctx->priv; @@ -88,6 +89,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) av_frame_free(&in); return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&dr_context->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); av_frame_free(&in); diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c index 904067f5f6..d5ee65d403 100644 --- a/libavfilter/vf_dnn_classify.c +++ b/libavfilter/vf_dnn_classify.c @@ -224,7 +224,7 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = 
ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -268,7 +268,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 200b5897c1..11de376753 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -424,7 +424,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -458,7 +458,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) if (ret < 0) return ret; if (ret > 0) { - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, in) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -468,7 +468,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { av_assert0(in_frame == out_frame); ret = ff_filter_frame(outlink, out_frame); @@ -496,7 +496,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static int dnn_detect_activate(AVFilterContext *filter_ctx) +static av_unused int 
dnn_detect_activate(AVFilterContext *filter_ctx) { DnnDetectContext *ctx = filter_ctx->priv; @@ -537,5 +537,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate, + .activate = dnn_detect_activate_async, }; diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index 5f24955263..7435dd4959 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -328,7 +328,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -370,7 +370,7 @@ static int activate_async(AVFilterContext *filter_ctx) return AVERROR(ENOMEM); } av_frame_copy_props(out, in); - if (ff_dnn_execute_model_async(&ctx->dnnctx, in, out) != DNN_SUCCESS) { + if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS) { return AVERROR(EIO); } } @@ -380,7 +380,7 @@ static int activate_async(AVFilterContext *filter_ctx) do { AVFrame *in_frame = NULL; AVFrame *out_frame = NULL; - async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame); + async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame); if (out_frame) { if (isPlanarYUV(in_frame->format)) copy_uv_planes(ctx, out_frame, in_frame); @@ -410,7 +410,7 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static int activate(AVFilterContext *filter_ctx) +static av_unused int activate(AVFilterContext *filter_ctx) { DnnProcessingContext *ctx = filter_ctx->priv; @@ -454,5 +454,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = 
&dnn_processing_class, - .activate = activate, + .activate = activate_async, }; diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c index f009a868f8..f4fc84d251 100644 --- a/libavfilter/vf_sr.c +++ b/libavfilter/vf_sr.c @@ -119,6 +119,7 @@ static int config_output(AVFilterLink *outlink) static int filter_frame(AVFilterLink *inlink, AVFrame *in) { + DNNAsyncStatusType async_state = 0; AVFilterContext *context = inlink->dst; SRContext *ctx = context->priv; AVFilterLink *outlink = context->outputs[0]; @@ -148,6 +149,13 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in) return AVERROR(EIO); } + do { + async_state = ff_dnn_get_result(&ctx->dnnctx, &in, &out); + } while (async_state == DAST_NOT_READY); + + if (async_state != DAST_SUCCESS) + return AVERROR(EINVAL); + if (ctx->sws_uv_scale) { sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), in->linesize + 1, 0, ctx->sws_uv_height, out->data + 1, out->linesize + 1);

From patchwork Sat Aug 21 07:59:02 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29659
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Sat, 21 Aug 2021 13:29:02 +0530
Message-Id: <20210821075905.12850-3-shubhanshu.e01@gmail.com>
In-Reply-To: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2 3/6] libavfilter: Remove synchronous functions from DNN filters

This commit removes the unused sync mode specific code from the DNN filters since the sync and async mode are now unified from the filters' perspective.
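The filter-side pattern this series settles on (visible in the vf_sr.c and vf_derain.c hunks of patch 2/6) is: submit a frame for inference, then poll `ff_dnn_get_result()` until the output frame is ready. A minimal self-contained model of that loop is sketched below; `FakeDnnContext`, `fake_get_result`, and `get_output_model` are illustrative stand-ins for the FFmpeg types, not the real API, though the `DNNAsyncStatusType` values mirror dnn_interface.h.

```c
#include <assert.h>

/* Mirrors the status enum in libavfilter/dnn_interface.h. */
typedef enum {
    DAST_FAIL,        /* something wrong */
    DAST_EMPTY_QUEUE, /* no more inference result to get */
    DAST_NOT_READY,   /* all queued inferences are not finished */
    DAST_SUCCESS      /* got a result frame successfully */
} DNNAsyncStatusType;

/* Hypothetical stand-in for the filter's DnnContext. */
typedef struct FakeDnnContext {
    int polls_until_ready; /* simulates pending asynchronous work */
    int result;            /* stands in for the output AVFrame */
} FakeDnnContext;

/* Models ff_dnn_get_result(): NOT_READY until pending work drains. */
static DNNAsyncStatusType fake_get_result(FakeDnnContext *ctx, int *out)
{
    if (ctx->polls_until_ready > 0) {
        ctx->polls_until_ready--;
        return DAST_NOT_READY;
    }
    *out = ctx->result;
    return DAST_SUCCESS;
}

/* The loop the frame-based filters gain: busy-poll until a result is
 * available, and fail on any terminal state other than SUCCESS. */
static int get_output_model(FakeDnnContext *ctx, int *out)
{
    DNNAsyncStatusType async_state;
    do {
        async_state = fake_get_result(ctx, out);
    } while (async_state == DAST_NOT_READY);
    return async_state == DAST_SUCCESS ? 0 : -1;
}
```

The busy-poll is acceptable for the simple `filter_frame()` filters because they request exactly one result per submitted frame; the activate-based filters instead drain the result queue opportunistically.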
Signed-off-by: Shubhanshu Saxena --- libavfilter/vf_dnn_detect.c | 71 +--------------------------- libavfilter/vf_dnn_processing.c | 84 +-------------------------------- 2 files changed, 4 insertions(+), 151 deletions(-) diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index 11de376753..809d70b930 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -353,63 +353,6 @@ static int dnn_detect_query_formats(AVFilterContext *context) return ff_set_common_formats_from_list(context, pix_fmts); } -static int dnn_detect_filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnDetectContext *ctx = context->priv; - DNNReturnType dnn_result; - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, in); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - return AVERROR(EIO); - } - - return ff_filter_frame(outlink, in); -} - -static int dnn_detect_activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = dnn_detect_filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnDetectContext *ctx = 
outlink->src->priv; @@ -439,7 +382,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o return 0; } -static int dnn_detect_activate_async(AVFilterContext *filter_ctx) +static int dnn_detect_activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -496,16 +439,6 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx) return 0; } -static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx) -{ - DnnDetectContext *ctx = filter_ctx->priv; - - if (ctx->dnnctx.async) - return dnn_detect_activate_async(filter_ctx); - else - return dnn_detect_activate_sync(filter_ctx); -} - static av_cold void dnn_detect_uninit(AVFilterContext *context) { DnnDetectContext *ctx = context->priv; @@ -537,5 +470,5 @@ const AVFilter ff_vf_dnn_detect = { FILTER_INPUTS(dnn_detect_inputs), FILTER_OUTPUTS(dnn_detect_outputs), .priv_class = &dnn_detect_class, - .activate = dnn_detect_activate_async, + .activate = dnn_detect_activate, }; diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index 7435dd4959..55634efde5 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -244,76 +244,6 @@ static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const AVFrame return 0; } -static int filter_frame(AVFilterLink *inlink, AVFrame *in) -{ - AVFilterContext *context = inlink->dst; - AVFilterLink *outlink = context->outputs[0]; - DnnProcessingContext *ctx = context->priv; - DNNReturnType dnn_result; - AVFrame *out; - - out = ff_get_video_buffer(outlink, outlink->w, outlink->h); - if (!out) { - av_frame_free(&in); - return AVERROR(ENOMEM); - } - av_frame_copy_props(out, in); - - dnn_result = ff_dnn_execute_model(&ctx->dnnctx, in, out); - if (dnn_result != DNN_SUCCESS){ - av_log(ctx, AV_LOG_ERROR, "failed to execute model\n"); - av_frame_free(&in); - av_frame_free(&out); - return AVERROR(EIO); - } - - if 
(isPlanarYUV(in->format)) - copy_uv_planes(ctx, out, in); - - av_frame_free(&in); - return ff_filter_frame(outlink, out); -} - -static int activate_sync(AVFilterContext *filter_ctx) -{ - AVFilterLink *inlink = filter_ctx->inputs[0]; - AVFilterLink *outlink = filter_ctx->outputs[0]; - AVFrame *in = NULL; - int64_t pts; - int ret, status; - int got_frame = 0; - - FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink); - - do { - // drain all input frames - ret = ff_inlink_consume_frame(inlink, &in); - if (ret < 0) - return ret; - if (ret > 0) { - ret = filter_frame(inlink, in); - if (ret < 0) - return ret; - got_frame = 1; - } - } while (ret > 0); - - // if frame got, schedule to next filter - if (got_frame) - return 0; - - if (ff_inlink_acknowledge_status(inlink, &status, &pts)) { - if (status == AVERROR_EOF) { - ff_outlink_set_status(outlink, status, pts); - return ret; - } - } - - FF_FILTER_FORWARD_WANTED(outlink, inlink); - - return FFERROR_NOT_READY; -} - static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) { DnnProcessingContext *ctx = outlink->src->priv; @@ -345,7 +275,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts) return 0; } -static int activate_async(AVFilterContext *filter_ctx) +static int activate(AVFilterContext *filter_ctx) { AVFilterLink *inlink = filter_ctx->inputs[0]; AVFilterLink *outlink = filter_ctx->outputs[0]; @@ -410,16 +340,6 @@ static int activate_async(AVFilterContext *filter_ctx) return 0; } -static av_unused int activate(AVFilterContext *filter_ctx) -{ - DnnProcessingContext *ctx = filter_ctx->priv; - - if (ctx->dnnctx.async) - return activate_async(filter_ctx); - else - return activate_sync(filter_ctx); -} - static av_cold void uninit(AVFilterContext *ctx) { DnnProcessingContext *context = ctx->priv; @@ -454,5 +374,5 @@ const AVFilter ff_vf_dnn_processing = { FILTER_INPUTS(dnn_processing_inputs), FILTER_OUTPUTS(dnn_processing_outputs), .priv_class = &dnn_processing_class, - .activate = 
activate_async, + .activate = activate, };

From patchwork Sat Aug 21 07:59:03 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29661
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Sat, 21 Aug 2021 13:29:03 +0530
Message-Id: <20210821075905.12850-4-shubhanshu.e01@gmail.com>
In-Reply-To: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2 4/6] libavfilter: Remove Async Flag from DNN Filter Side

Remove async flag from filter's perspective after the unification of
async and sync modes in the DNN backend.
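With the filter-side flag removed, async execution is now requested through the generic backend_configs option instead of a dedicated AVOption. As a hypothetical illustration only (the model path and tensor names are placeholders, not taken from this patch set), a dnn_processing invocation could look like:

```
ffmpeg -i input.mp4 \
    -vf "dnn_processing=dnn_backend=tensorflow:model=model.pb:input=x:output=y:backend_configs=async=1" \
    output.mp4
```

If the backend cannot run asynchronously (for example, when built without pthread support), it logs a warning and rolls back to sync execution, as the updated documentation below states.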
Signed-off-by: Shubhanshu Saxena
---
 doc/filters.texi                 | 14 ++++----------
 libavfilter/dnn/dnn_backend_tf.c |  7 +++++++
 libavfilter/dnn_filter_common.c  |  7 -------
 libavfilter/dnn_filter_common.h  |  2 +-
 4 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index a5752c4d6e..f7b6b61f4c 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -10275,11 +10275,8 @@ and the second line is the name of label id 1, etc.
 The label id is considered as name if the label file is not provided.
 
 @item backend_configs
-Set the configs to be passed into backend
-
-@item async
-use DNN async execution if set (default: set),
-roll back to sync execution if the backend does not support async.
+Set the configs to be passed into backend. To use async execution, set async (default: set).
+Roll back to sync execution if the backend does not support async.
 
 @end table
 
@@ -10331,15 +10328,12 @@ Set the input name of the dnn network.
 Set the output name of the dnn network.
 
 @item backend_configs
-Set the configs to be passed into backend
+Set the configs to be passed into backend. To use async execution, set async (default: set).
+Roll back to sync execution if the backend does not support async.
 
 For tensorflow backend, you can set its configs with @option{sess_config} options,
 please use tools/python/tf_sess_config.py to get the configs of
 TensorFlow backend for your system.
-
-@item async
-use DNN async execution if set (default: set),
-roll back to sync execution if the backend does not support async.
 
 @end table
 
 @subsection Examples

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 4a0b561f29..906934d8c0 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -884,6 +884,13 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         ctx->options.nireq = av_cpu_count() / 2 + 1;
     }
 
+#if !HAVE_PTHREAD_CANCEL
+    if (ctx->options.async) {
+        ctx->options.async = 0;
+        av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n");
+    }
+#endif
+
     tf_model->request_queue = ff_safe_queue_create();
     if (!tf_model->request_queue) {
         goto err;

diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c
index 455eaa37f4..3045ce0131 100644
--- a/libavfilter/dnn_filter_common.c
+++ b/libavfilter/dnn_filter_common.c
@@ -84,13 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil
         return AVERROR(EINVAL);
     }
 
-#if !HAVE_PTHREAD_CANCEL
-    if (ctx->async) {
-        ctx->async = 0;
-        av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, roll back to sync.\n");
-    }
-#endif
-
     return 0;
 }

diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h
index 4d92c1dc36..635ae631c1 100644
--- a/libavfilter/dnn_filter_common.h
+++ b/libavfilter/dnn_filter_common.h
@@ -46,7 +46,7 @@ typedef struct DnnContext {
     { "output", "output name of the model", OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
     { "backend_configs", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
     { "options", "backend configs (deprecated, use backend_configs)", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS | AV_OPT_FLAG_DEPRECATED},\
-    { "async", "use DNN async inference", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS},
+    { "async", "use DNN async inference (ignored, use backend_configs='async=1')", OFFSET(async),
AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS},
 
 int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);

From patchwork Sat Aug 21 07:59:04 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29664
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Sat, 21 Aug 2021 13:29:04 +0530
Message-Id: <20210821075905.12850-5-shubhanshu.e01@gmail.com>
In-Reply-To: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2 5/6] lavfi/dnn: Rename InferenceItem to LastLevelTaskItem

This patch renames the InferenceItem to LastLevelTaskItem
in the three backends to avoid confusion among the meanings of these structs.

The following are the renames done in this patch:
1. extract_inference_from_task -> extract_lltask_from_task
2. InferenceItem -> LastLevelTaskItem
3. inference_queue -> lltask_queue
4. inference -> lltask
5. inference_count -> lltask_count

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.h   |   4 +-
 libavfilter/dnn/dnn_backend_native.c   |  58 ++++++-------
 libavfilter/dnn/dnn_backend_native.h   |   2 +-
 libavfilter/dnn/dnn_backend_openvino.c | 110 ++++++++++++-------------
 libavfilter/dnn/dnn_backend_tf.c       |  76 ++++++++---------
 5 files changed, 125 insertions(+), 125 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h
index 78e62a94a2..6b6a5e21ae 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -47,10 +47,10 @@ typedef struct TaskItem {
 } TaskItem;
 
 // one task might have multiple inferences
-typedef struct InferenceItem {
+typedef struct LastLevelTaskItem {
     TaskItem *task;
     uint32_t bbox_index;
-} InferenceItem;
+} LastLevelTaskItem;
 
 /**
  * Common Async Execution Mechanism for the DNN Backends.
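The relationship the new name is meant to convey — one TaskItem fanning out into several last-level tasks, one per detected bounding box — can be sketched outside FFmpeg as follows. This is a simplified, hypothetical model of the structs in dnn_backend_common.h: the field sets are trimmed, a plain array stands in for FFmpeg's Queue, and extract_lltasks is a stand-in for extract_lltask_from_task in the classify case.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified sketch of the structs from dnn_backend_common.h. */
typedef struct TaskItem {
    uint32_t inference_todo;  /* how many lltasks this task fans out into */
    uint32_t inference_done;  /* incremented by completion callbacks */
} TaskItem;

typedef struct LastLevelTaskItem {
    TaskItem *task;       /* back-pointer: many lltasks share one task */
    uint32_t bbox_index;  /* which detected bounding box this lltask covers */
} LastLevelTaskItem;

/* Hypothetical stand-in for extract_lltask_from_task() in the
 * DFT_ANALYTICS_CLASSIFY case: one last-level task per bounding box. */
static int extract_lltasks(TaskItem *task, uint32_t nb_bboxes,
                           LastLevelTaskItem **queue)
{
    task->inference_todo = 0;
    task->inference_done = 0;
    for (uint32_t i = 0; i < nb_bboxes; i++) {
        LastLevelTaskItem *lltask = malloc(sizeof(*lltask));
        if (!lltask)
            return -1;        /* DNN_ERROR in the real code */
        lltask->task = task;
        lltask->bbox_index = i;
        queue[i] = lltask;
        task->inference_todo++;
    }
    return 0;                 /* DNN_SUCCESS in the real code */
}
```

Each LastLevelTaskItem keeps a back-pointer to its TaskItem, so a completion callback can increment task->inference_done and detect completion when inference_done reaches inference_todo — which is why the item is a "last-level task" rather than an "inference".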
diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c index 2d34b88f8a..13436c0484 100644 --- a/libavfilter/dnn/dnn_backend_native.c +++ b/libavfilter/dnn/dnn_backend_native.c @@ -46,25 +46,25 @@ static const AVClass dnn_native_class = { .category = AV_CLASS_CATEGORY_FILTER, }; -static DNNReturnType execute_model_native(Queue *inference_queue); +static DNNReturnType execute_model_native(Queue *lltask_queue); -static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue) +static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { NativeModel *native_model = task->model; NativeContext *ctx = &native_model->ctx; - InferenceItem *inference = av_malloc(sizeof(*inference)); + LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); - if (!inference) { - av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n"); + if (!lltask) { + av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; + lltask->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n"); - av_freep(&inference); + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n"); + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -116,13 +116,13 @@ static DNNReturnType get_output_native(void *model, const char *input_name, int goto err; } - if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(&task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } - ret = 
execute_model_native(native_model->inference_queue); + ret = execute_model_native(native_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; @@ -223,8 +223,8 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f goto fail; } - native_model->inference_queue = ff_queue_create(); - if (!native_model->inference_queue) { + native_model->lltask_queue = ff_queue_create(); + if (!native_model->lltask_queue) { goto fail; } @@ -297,24 +297,24 @@ fail: return NULL; } -static DNNReturnType execute_model_native(Queue *inference_queue) +static DNNReturnType execute_model_native(Queue *lltask_queue) { NativeModel *native_model = NULL; NativeContext *ctx = NULL; int32_t layer; DNNData input, output; DnnOperand *oprd = NULL; - InferenceItem *inference = NULL; + LastLevelTaskItem *lltask = NULL; TaskItem *task = NULL; DNNReturnType ret = 0; - inference = ff_queue_pop_front(inference_queue); - if (!inference) { - av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n"); + lltask = ff_queue_pop_front(lltask_queue); + if (!lltask) { + av_log(NULL, AV_LOG_ERROR, "Failed to get LastLevelTaskItem\n"); ret = DNN_ERROR; goto err; } - task = inference->task; + task = lltask->task; native_model = task->model; ctx = &native_model->ctx; @@ -428,7 +428,7 @@ static DNNReturnType execute_model_native(Queue *inference_queue) } task->inference_done++; err: - av_freep(&inference); + av_freep(&lltask); return ret; } @@ -459,26 +459,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara return DNN_ERROR; } - if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(task, native_model->lltask_queue) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); return DNN_ERROR; } - return 
execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNReturnType ff_dnn_flush_native(const DNNModel *model) { NativeModel *native_model = model->model; - if (ff_queue_size(native_model->inference_queue) == 0) { + if (ff_queue_size(native_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } // for now, use sync node with flush operation // Switch to async when it is supported - return execute_model_native(native_model->inference_queue); + return execute_model_native(native_model->lltask_queue); } DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out) @@ -536,11 +536,11 @@ void ff_dnn_free_model_native(DNNModel **model) av_freep(&native_model->operands); } - while (ff_queue_size(native_model->inference_queue) != 0) { - InferenceItem *item = ff_queue_pop_front(native_model->inference_queue); + while (ff_queue_size(native_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(native_model->lltask_queue); av_freep(&item); } - ff_queue_destroy(native_model->inference_queue); + ff_queue_destroy(native_model->lltask_queue); while (ff_queue_size(native_model->task_queue) != 0) { TaskItem *item = ff_queue_pop_front(native_model->task_queue); diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h index ca61bb353f..e8017ee4b4 100644 --- a/libavfilter/dnn/dnn_backend_native.h +++ b/libavfilter/dnn/dnn_backend_native.h @@ -129,7 +129,7 @@ typedef struct NativeModel{ DnnOperand *operands; int32_t operands_num; Queue *task_queue; - Queue *inference_queue; + Queue *lltask_queue; } NativeModel; DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index abf3f4b6b6..76dc06c6d7 100644 --- 
a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -57,14 +57,14 @@ typedef struct OVModel{ ie_executable_network_t *exe_network; SafeQueue *request_queue; // holds OVRequestItem Queue *task_queue; // holds TaskItem - Queue *inference_queue; // holds InferenceItem + Queue *lltask_queue; // holds LastLevelTaskItem } OVModel; // one request for one call to openvino typedef struct OVRequestItem { ie_infer_request_t *infer_request; - InferenceItem **inferences; - uint32_t inference_count; + LastLevelTaskItem **lltasks; + uint32_t lltask_count; ie_complete_call_back_t callback; } OVRequestItem; @@ -121,12 +121,12 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque IEStatusCode status; DNNData input; ie_blob_t *input_blob = NULL; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; - inference = ff_queue_peek_front(ov_model->inference_queue); - av_assert0(inference); - task = inference->task; + lltask = ff_queue_peek_front(ov_model->lltask_queue); + av_assert0(lltask); + task = lltask->task; status = ie_infer_request_get_blob(request->infer_request, task->input_name, &input_blob); if (status != OK) { @@ -159,13 +159,13 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque input.order = DCO_BGR; for (int i = 0; i < ctx->options.batch_size; ++i) { - inference = ff_queue_pop_front(ov_model->inference_queue); - if (!inference) { + lltask = ff_queue_pop_front(ov_model->lltask_queue); + if (!lltask) { break; } - request->inferences[i] = inference; - request->inference_count = i + 1; - task = inference->task; + request->lltasks[i] = lltask; + request->lltask_count = i + 1; + task = lltask->task; switch (ov_model->model->func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { @@ -180,7 +180,7 @@ static DNNReturnType fill_model_input_ov(OVModel *ov_model, OVRequestItem *reque ff_frame_to_dnn_detect(task->in_frame, &input, ctx); break; case 
DFT_ANALYTICS_CLASSIFY: - ff_frame_to_dnn_classify(task->in_frame, &input, inference->bbox_index, ctx); + ff_frame_to_dnn_classify(task->in_frame, &input, lltask->bbox_index, ctx); break; default: av_assert0(!"should not reach here"); @@ -200,8 +200,8 @@ static void infer_completion_callback(void *args) precision_e precision; IEStatusCode status; OVRequestItem *request = args; - InferenceItem *inference = request->inferences[0]; - TaskItem *task = inference->task; + LastLevelTaskItem *lltask = request->lltasks[0]; + TaskItem *task = lltask->task; OVModel *ov_model = task->model; SafeQueue *requestq = ov_model->request_queue; ie_blob_t *output_blob = NULL; @@ -248,10 +248,10 @@ static void infer_completion_callback(void *args) output.dt = precision_to_datatype(precision); output.data = blob_buffer.buffer; - av_assert0(request->inference_count <= dims.dims[0]); - av_assert0(request->inference_count >= 1); - for (int i = 0; i < request->inference_count; ++i) { - task = request->inferences[i]->task; + av_assert0(request->lltask_count <= dims.dims[0]); + av_assert0(request->lltask_count >= 1); + for (int i = 0; i < request->lltask_count; ++i) { + task = request->lltasks[i]->task; task->inference_done++; switch (ov_model->model->func_type) { @@ -279,20 +279,20 @@ static void infer_completion_callback(void *args) av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n"); return; } - ov_model->model->classify_post_proc(task->out_frame, &output, request->inferences[i]->bbox_index, ov_model->model->filter_ctx); + ov_model->model->classify_post_proc(task->out_frame, &output, request->lltasks[i]->bbox_index, ov_model->model->filter_ctx); break; default: av_assert0(!"should not reach here"); break; } - av_freep(&request->inferences[i]); + av_freep(&request->lltasks[i]); output.data = (uint8_t *)output.data + output.width * output.height * output.channels * get_datatype_size(output.dt); } ie_blob_free(&output_blob); - request->inference_count = 0; + 
request->lltask_count = 0; if (ff_safe_queue_push_back(requestq, request) < 0) { ie_infer_request_free(&request->infer_request); av_freep(&request); @@ -399,11 +399,11 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - item->inferences = av_malloc_array(ctx->options.batch_size, sizeof(*item->inferences)); - if (!item->inferences) { + item->lltasks = av_malloc_array(ctx->options.batch_size, sizeof(*item->lltasks)); + if (!item->lltasks) { goto err; } - item->inference_count = 0; + item->lltask_count = 0; } ov_model->task_queue = ff_queue_create(); @@ -411,8 +411,8 @@ static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, co goto err; } - ov_model->inference_queue = ff_queue_create(); - if (!ov_model->inference_queue) { + ov_model->lltask_queue = ff_queue_create(); + if (!ov_model->lltask_queue) { goto err; } @@ -427,7 +427,7 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) { IEStatusCode status; DNNReturnType ret; - InferenceItem *inference; + LastLevelTaskItem *lltask; TaskItem *task; OVContext *ctx; OVModel *ov_model; @@ -438,8 +438,8 @@ static DNNReturnType execute_model_ov(OVRequestItem *request, Queue *inferenceq) return DNN_SUCCESS; } - inference = ff_queue_peek_front(inferenceq); - task = inference->task; + lltask = ff_queue_peek_front(inferenceq); + task = lltask->task; ov_model = task->model; ctx = &ov_model->ctx; @@ -567,21 +567,21 @@ static int contain_valid_detection_bbox(AVFrame *frame) return 1; } -static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, TaskItem *task, Queue *inference_queue, DNNExecBaseParams *exec_params) +static DNNReturnType extract_lltask_from_task(DNNFunctionType func_type, TaskItem *task, Queue *lltask_queue, DNNExecBaseParams *exec_params) { switch (func_type) { case DFT_PROCESS_FRAME: case DFT_ANALYTICS_DETECT: { - InferenceItem *inference = av_malloc(sizeof(*inference)); - if (!inference) { + 
LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } task->inference_todo = 1; task->inference_done = 0; - inference->task = task; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } return DNN_SUCCESS; @@ -604,7 +604,7 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task header = (const AVDetectionBBoxHeader *)sd->data; for (uint32_t i = 0; i < header->nb_bboxes; i++) { - InferenceItem *inference; + LastLevelTaskItem *lltask; const AVDetectionBBox *bbox = av_get_detection_bbox(header, i); if (params->target) { @@ -613,15 +613,15 @@ static DNNReturnType extract_inference_from_task(DNNFunctionType func_type, Task } } - inference = av_malloc(sizeof(*inference)); - if (!inference) { + lltask = av_malloc(sizeof(*lltask)); + if (!lltask) { return DNN_ERROR; } task->inference_todo++; - inference->task = task; - inference->bbox_index = i; - if (ff_queue_push_back(inference_queue, inference) < 0) { - av_freep(&inference); + lltask->task = task; + lltask->bbox_index = i; + if (ff_queue_push_back(lltask_queue, lltask) < 0) { + av_freep(&lltask); return DNN_ERROR; } } @@ -679,8 +679,8 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu return DNN_ERROR; } - if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, NULL) != DNN_SUCCESS) { - av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); + if (extract_lltask_from_task(ov_model->model->func_type, &task, ov_model->lltask_queue, NULL) != DNN_SUCCESS) { + av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n"); ret = DNN_ERROR; goto err; } @@ -692,7 +692,7 @@ static DNNReturnType get_output_ov(void *model, const char *input_name, int inpu goto err; } - ret = execute_model_ov(request, 
ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); *output_width = task.out_frame->width; *output_height = task.out_frame->height; err: @@ -794,20 +794,20 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * return DNN_ERROR; } - if (extract_inference_from_task(model->func_type, task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) { + if (extract_lltask_from_task(model->func_type, task, ov_model->lltask_queue, exec_params) != DNN_SUCCESS) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); return DNN_ERROR; } if (ctx->options.async) { - while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) { + while (ff_queue_size(ov_model->lltask_queue) >= ctx->options.batch_size) { request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - ret = execute_model_ov(request, ov_model->inference_queue); + ret = execute_model_ov(request, ov_model->lltask_queue); if (ret != DNN_SUCCESS) { return ret; } @@ -832,7 +832,7 @@ DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams * av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return DNN_ERROR; } - return execute_model_ov(request, ov_model->inference_queue); + return execute_model_ov(request, ov_model->lltask_queue); } } @@ -850,7 +850,7 @@ DNNReturnType ff_dnn_flush_ov(const DNNModel *model) IEStatusCode status; DNNReturnType ret; - if (ff_queue_size(ov_model->inference_queue) == 0) { + if (ff_queue_size(ov_model->lltask_queue) == 0) { // no pending task need to flush return DNN_SUCCESS; } @@ -889,16 +889,16 @@ void ff_dnn_free_model_ov(DNNModel **model) if (item && item->infer_request) { ie_infer_request_free(&item->infer_request); } - av_freep(&item->inferences); + av_freep(&item->lltasks); av_freep(&item); } ff_safe_queue_destroy(ov_model->request_queue); - while 
(ff_queue_size(ov_model->inference_queue) != 0) {
-        InferenceItem *item = ff_queue_pop_front(ov_model->inference_queue);
+    while (ff_queue_size(ov_model->lltask_queue) != 0) {
+        LastLevelTaskItem *item = ff_queue_pop_front(ov_model->lltask_queue);
         av_freep(&item);
     }
-    ff_queue_destroy(ov_model->inference_queue);
+    ff_queue_destroy(ov_model->lltask_queue);

     while (ff_queue_size(ov_model->task_queue) != 0) {
         TaskItem *item = ff_queue_pop_front(ov_model->task_queue);
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 906934d8c0..dfac58b357 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -58,7 +58,7 @@ typedef struct TFModel{
     TF_Session *session;
     TF_Status *status;
     SafeQueue *request_queue;
-    Queue *inference_queue;
+    Queue *lltask_queue;
     Queue *task_queue;
 } TFModel;
@@ -75,7 +75,7 @@ typedef struct TFInferRequest {

 typedef struct TFRequestItem {
     TFInferRequest *infer_request;
-    InferenceItem *inference;
+    LastLevelTaskItem *lltask;
     TF_Status *status;
     DNNAsyncExecModule exec_module;
 } TFRequestItem;
@@ -90,7 +90,7 @@ static const AVOption dnn_tensorflow_options[] = {

 AVFILTER_DEFINE_CLASS(dnn_tensorflow);

-static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue);
+static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue);
 static void infer_completion_callback(void *args);
 static inline void destroy_request_item(TFRequestItem **arg);
@@ -158,8 +158,8 @@ static DNNReturnType tf_start_inference(void *args)
 {
     TFRequestItem *request = args;
     TFInferRequest *infer_request = request->infer_request;
-    InferenceItem *inference = request->inference;
-    TaskItem *task = inference->task;
+    LastLevelTaskItem *lltask = request->lltask;
+    TaskItem *task = lltask->task;
     TFModel *tf_model = task->model;

     if (!request) {
@@ -196,27 +196,27 @@ static inline void destroy_request_item(TFRequestItem **arg) {
     request = *arg;
     tf_free_request(request->infer_request);
     av_freep(&request->infer_request);
-    av_freep(&request->inference);
+    av_freep(&request->lltask);
     TF_DeleteStatus(request->status);
     ff_dnn_async_module_cleanup(&request->exec_module);
     av_freep(arg);
 }

-static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue)
+static DNNReturnType extract_lltask_from_task(TaskItem *task, Queue *lltask_queue)
 {
     TFModel *tf_model = task->model;
     TFContext *ctx = &tf_model->ctx;
-    InferenceItem *inference = av_malloc(sizeof(*inference));
-    if (!inference) {
-        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n");
+    LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask));
+    if (!lltask) {
+        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n");
         return DNN_ERROR;
     }
     task->inference_todo = 1;
     task->inference_done = 0;
-    inference->task = task;
-    if (ff_queue_push_back(inference_queue, inference) < 0) {
-        av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n");
-        av_freep(&inference);
+    lltask->task = task;
+    if (ff_queue_push_back(lltask_queue, lltask) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back lltask_queue.\n");
+        av_freep(&lltask);
         return DNN_ERROR;
     }
     return DNN_SUCCESS;
@@ -333,7 +333,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu
         goto err;
     }

-    if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) {
+    if (extract_lltask_from_task(&task, tf_model->lltask_queue) != DNN_SUCCESS) {
         av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
         ret = DNN_ERROR;
         goto err;
@@ -346,7 +346,7 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu
         goto err;
     }

-    ret = execute_model_tf(request, tf_model->inference_queue);
+    ret = execute_model_tf(request, tf_model->lltask_queue);
     *output_width = task.out_frame->width;
     *output_height = task.out_frame->height;
@@ -901,7 +901,7 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         if (!item) {
             goto err;
         }
-        item->inference = NULL;
+        item->lltask = NULL;
         item->infer_request = tf_create_inference_request();
         if (!item->infer_request) {
             av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for TensorFlow inference request\n");
@@ -919,8 +919,8 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         }
     }

-    tf_model->inference_queue = ff_queue_create();
-    if (!tf_model->inference_queue) {
+    tf_model->lltask_queue = ff_queue_create();
+    if (!tf_model->lltask_queue) {
         goto err;
     }
@@ -944,15 +944,15 @@ err:
 static DNNReturnType fill_model_input_tf(TFModel *tf_model, TFRequestItem *request)
 {
     DNNData input;
-    InferenceItem *inference;
+    LastLevelTaskItem *lltask;
     TaskItem *task;
     TFInferRequest *infer_request;
     TFContext *ctx = &tf_model->ctx;

-    inference = ff_queue_pop_front(tf_model->inference_queue);
-    av_assert0(inference);
-    task = inference->task;
-    request->inference = inference;
+    lltask = ff_queue_pop_front(tf_model->lltask_queue);
+    av_assert0(lltask);
+    task = lltask->task;
+    request->lltask = lltask;

     if (get_input_tf(tf_model, &input, task->input_name) != DNN_SUCCESS) {
         goto err;
@@ -1030,8 +1030,8 @@ err:
 static void infer_completion_callback(void *args)
 {
     TFRequestItem *request = args;
-    InferenceItem *inference = request->inference;
-    TaskItem *task = inference->task;
+    LastLevelTaskItem *lltask = request->lltask;
+    TaskItem *task = lltask->task;
     DNNData *outputs;
     TFInferRequest *infer_request = request->infer_request;
     TFModel *tf_model = task->model;
@@ -1086,20 +1086,20 @@ err:
     }
 }

-static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue)
+static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *lltask_queue)
 {
     TFModel *tf_model;
     TFContext *ctx;
-    InferenceItem *inference;
+    LastLevelTaskItem *lltask;
     TaskItem *task;

-    if (ff_queue_size(inference_queue) == 0) {
+    if (ff_queue_size(lltask_queue) == 0) {
         destroy_request_item(&request);
         return DNN_SUCCESS;
     }

-    inference = ff_queue_peek_front(inference_queue);
-    task = inference->task;
+    lltask = ff_queue_peek_front(lltask_queue);
+    task = lltask->task;
     tf_model = task->model;
     ctx = &tf_model->ctx;
@@ -1155,8 +1155,8 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *
         return DNN_ERROR;
     }

-    if (extract_inference_from_task(task, tf_model->inference_queue) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+    if (extract_lltask_from_task(task, tf_model->lltask_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
         return DNN_ERROR;
     }
@@ -1165,7 +1165,7 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *
         av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
         return DNN_ERROR;
     }
-    return execute_model_tf(request, tf_model->inference_queue);
+    return execute_model_tf(request, tf_model->lltask_queue);
 }

 DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out)
@@ -1181,7 +1181,7 @@ DNNReturnType ff_dnn_flush_tf(const DNNModel *model)
     TFRequestItem *request;
     DNNReturnType ret;

-    if (ff_queue_size(tf_model->inference_queue) == 0) {
+    if (ff_queue_size(tf_model->lltask_queue) == 0) {
         // no pending task need to flush
         return DNN_SUCCESS;
     }
@@ -1216,11 +1216,11 @@ void ff_dnn_free_model_tf(DNNModel **model)
     }
     ff_safe_queue_destroy(tf_model->request_queue);

-    while (ff_queue_size(tf_model->inference_queue) != 0) {
-        InferenceItem *item = ff_queue_pop_front(tf_model->inference_queue);
+    while (ff_queue_size(tf_model->lltask_queue) != 0) {
+        LastLevelTaskItem *item = ff_queue_pop_front(tf_model->lltask_queue);
         av_freep(&item);
     }
-    ff_queue_destroy(tf_model->inference_queue);
+    ff_queue_destroy(tf_model->lltask_queue);

     while (ff_queue_size(tf_model->task_queue) != 0) {
         TaskItem *item = ff_queue_pop_front(tf_model->task_queue);

From patchwork Sat Aug 21 07:59:05 2021
Content-Type:
text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29663
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Date: Sat, 21 Aug 2021 13:29:05 +0530
Message-Id: <20210821075905.12850-6-shubhanshu.e01@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
References: <20210821075905.12850-1-shubhanshu.e01@gmail.com>
MIME-Version: 1.0
Subject: [FFmpeg-devel] [PATCH v2 6/6] doc/filters.texi: Include dnn_processing in docs of sr and derain filter
Cc: Shubhanshu Saxena

Signed-off-by: Shubhanshu Saxena
---
 doc/filters.texi | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index f7b6b61f4c..0b2da7c71f 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9974,7 +9974,7 @@ Note that different backends use different file formats. TensorFlow and native
 backend can load files for only its format.
 @end table

-It can also be finished with @ref{dnn_processing} filter.
+To get full functionality (such as async execution), please use the @ref{dnn_processing} filter.

 @section deshake
@@ -19263,7 +19263,7 @@ Default value is @code{2}.
 Scale factor is necessary for SRCNN model, because it input
 upscaled using bicubic upscaling with proper scale factor.
 @end table

-This feature can also be finished with @ref{dnn_processing} filter.
+To get full functionality (such as async execution), please use the @ref{dnn_processing} filter.

 @section ssim