From patchwork Wed Aug 25 10:39:35 2021
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 29773
From: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
To: ffmpeg-devel@ffmpeg.org
Cc: Shubhanshu Saxena
Date: Wed, 25 Aug 2021 16:09:35 +0530
Message-Id: <20210825103940.28798-2-shubhanshu.e01@gmail.com>
In-Reply-To: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
References: <20210825103940.28798-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH v4 2/7] libavfilter: Unify Execution Modes in DNN Filters
This commit unifies the async and sync modes from the DNN filters'
perspective. As of this commit, the Native backend only supports
synchronous execution.

The user can now switch between async and sync execution with the
'async' option in backend_configs: a value of 1 selects async and a
value of 0 selects sync execution.

This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain

Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_common.c   |  2 +-
 libavfilter/dnn/dnn_backend_common.h   |  5 +-
 libavfilter/dnn/dnn_backend_native.c   | 59 +++++++++++++++-
 libavfilter/dnn/dnn_backend_native.h   |  6 ++
 libavfilter/dnn/dnn_backend_openvino.c | 94 ++++++++++----------------
 libavfilter/dnn/dnn_backend_openvino.h |  3 +-
 libavfilter/dnn/dnn_backend_tf.c       | 35 ++--------
 libavfilter/dnn/dnn_backend_tf.h       |  3 +-
 libavfilter/dnn/dnn_interface.c        |  8 +--
 libavfilter/dnn_filter_common.c        | 23 +------
 libavfilter/dnn_filter_common.h        |  3 +-
 libavfilter/dnn_interface.h            |  4 +-
 libavfilter/vf_derain.c                |  7 ++
 libavfilter/vf_dnn_classify.c          |  4 +-
 libavfilter/vf_dnn_detect.c            | 10 +--
 libavfilter/vf_dnn_processing.c        | 10 +--
 libavfilter/vf_sr.c                    |  8 +++
 17 files changed, 142 insertions(+), 142 deletions(-)
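A quick way for reviewers to exercise both execution paths from the command
line is shown below. The TensorFlow backend and the model/tensor names are
only placeholders for whatever model is locally available; they are not part
of this patch:

  ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y:backend_configs=async=1 out_async.mp4
  ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y:backend_configs=async=0 out_sync.mp4

With this patch both invocations go through the same
ff_dnn_execute_model()/ff_dnn_get_result() path in the filter; async=1 stays
the default for the TensorFlow and OpenVINO backends, while the Native
backend rolls back to sync.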
diff --git a/libavfilter/dnn/dnn_backend_common.c b/libavfilter/dnn/dnn_backend_common.c
index 426683b73d..d2bc016fef 100644
--- a/libavfilter/dnn/dnn_backend_common.c
+++ b/libavfilter/dnn/dnn_backend_common.c
@@ -138,7 +138,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
     return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
 {
     TaskItem *task = ff_queue_peek_front(task_queue);
 
diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h
index 604c1d3bd7..78e62a94a2 100644
--- a/libavfilter/dnn/dnn_backend_common.h
+++ b/libavfilter/dnn/dnn_backend_common.h
@@ -29,7 +29,8 @@
 #include "libavutil/thread.h"
 
 #define DNN_BACKEND_COMMON_OPTIONS \
-    { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS },
+    { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
+    { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, FLAGS },
 
 // one task for one function call from dnn interface
 typedef struct TaskItem {
@@ -135,7 +136,7 @@ DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_
  * @retval DAST_NOT_READY if inference not completed yet.
  * @retval DAST_SUCCESS if result successfully extracted
  */
-DNNAsyncStatusType ff_dnn_get_async_result_common(Queue *task_queue, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out);
 
 /**
  * Allocate input and output frames and fill the Task
diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index 3b2a3aa55d..2d34b88f8a 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -34,6 +34,7 @@
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
 
 static const AVOption dnn_native_options[] = {
     { "conv2d_threads", "threads num for conv2d layer", OFFSET(options.conv2d_threads), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, FLAGS },
+    { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS },
     { NULL },
 };
@@ -189,6 +190,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
         goto fail;
     native_model->model = model;
 
+    if (native_model->ctx.options.async) {
+        av_log(&native_model->ctx, AV_LOG_WARNING, "Async not supported. Rolling back to sync\n");
+        native_model->ctx.options.async = 0;
+    }
+
 #if !HAVE_PTHREAD_CANCEL
     if (native_model->ctx.options.conv2d_threads > 1){
         av_log(&native_model->ctx, AV_LOG_WARNING, "'conv2d_threads' option was set but it is not supported "
@@ -212,6 +218,11 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
             goto fail;
     }
 
+    native_model->task_queue = ff_queue_create();
+    if (!native_model->task_queue) {
+        goto fail;
+    }
+
     native_model->inference_queue = ff_queue_create();
     if (!native_model->inference_queue) {
         goto fail;
@@ -425,17 +436,30 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara
 {
     NativeModel *native_model = model->model;
     NativeContext *ctx = &native_model->ctx;
-    TaskItem task;
+    TaskItem *task;
 
     if (ff_check_exec_params(ctx, DNN_NATIVE, model->func_type, exec_params) != 0) {
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(&task, exec_params, native_model, 0, 1) != DNN_SUCCESS) {
+    task = av_malloc(sizeof(*task));
+    if (!task) {
+        av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
         return DNN_ERROR;
     }
 
-    if (extract_inference_from_task(&task, native_model->inference_queue) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, native_model, ctx->options.async, 1) != DNN_SUCCESS) {
+        av_freep(&task);
+        return DNN_ERROR;
+    }
+
+    if (ff_queue_push_back(native_model->task_queue, task) < 0) {
+        av_freep(&task);
+        av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(task, native_model->inference_queue) != DNN_SUCCESS) {
         av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
         return DNN_ERROR;
     }
@@ -443,6 +467,26 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBasePara
     return execute_model_native(native_model->inference_queue);
 }
 
+DNNReturnType ff_dnn_flush_native(const DNNModel *model)
+{
+    NativeModel *native_model = model->model;
+
+    if (ff_queue_size(native_model->inference_queue) == 0) {
+        // no pending task need to flush
+        return DNN_SUCCESS;
+    }
+
+    // for now, use sync node with flush operation
+    // Switch to async when it is supported
+    return execute_model_native(native_model->inference_queue);
+}
+
+DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out)
+{
+    NativeModel *native_model = model->model;
+    return ff_dnn_get_result_common(native_model->task_queue, in, out);
+}
+
 int32_t ff_calculate_operand_dims_count(const DnnOperand *oprd)
 {
     int32_t result = 1;
@@ -497,6 +541,15 @@ void ff_dnn_free_model_native(DNNModel **model)
                 av_freep(&item);
             }
             ff_queue_destroy(native_model->inference_queue);
+
+            while (ff_queue_size(native_model->task_queue) != 0) {
+                TaskItem *item = ff_queue_pop_front(native_model->task_queue);
+                av_frame_free(&item->in_frame);
+                av_frame_free(&item->out_frame);
+                av_freep(&item);
+            }
+            ff_queue_destroy(native_model->task_queue);
+
             av_freep(&native_model);
         }
         av_freep(model);
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 1b9d5bdf2d..ca61bb353f 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -111,6 +111,7 @@ typedef struct InputParams{
 } InputParams;
 
 typedef struct NativeOptions{
+    uint8_t async;
     uint32_t conv2d_threads;
 } NativeOptions;
 
@@ -127,6 +128,7 @@ typedef struct NativeModel{
     int32_t layers_num;
     DnnOperand *operands;
     int32_t operands_num;
+    Queue *task_queue;
     Queue *inference_queue;
 } NativeModel;
 
@@ -134,6 +136,10 @@ DNNModel *ff_dnn_load_model_native(const char *model_filename, DNNFunctionType f
 
 DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNExecBaseParams *exec_params);
 
+DNNAsyncStatusType ff_dnn_get_result_native(const DNNModel *model, AVFrame **in, AVFrame **out);
+
+DNNReturnType ff_dnn_flush_native(const DNNModel *model);
+
 void ff_dnn_free_model_native(DNNModel **model);
 
 // NOTE: User must check for error (return value <= 0) to handle
diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index c825e70c82..abf3f4b6b6 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -39,6 +39,7 @@
 typedef struct OVOptions{
     char *device_type;
     int nireq;
+    uint8_t async;
     int batch_size;
     int input_resizable;
 } OVOptions;
@@ -758,55 +759,6 @@ err:
 }
 
 DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
-{
-    OVModel *ov_model = model->model;
-    OVContext *ctx = &ov_model->ctx;
-    TaskItem task;
-    OVRequestItem *request;
-
-    if (ff_check_exec_params(ctx, DNN_OV, model->func_type, exec_params) != 0) {
-        return DNN_ERROR;
-    }
-
-    if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
-        // Once we add async support for tensorflow backend and native backend,
-        // we'll combine the two sync/async functions in dnn_interface.h to
-        // simplify the code in filter, and async will be an option within backends.
-        // so, do not support now, and classify filter will not call this function.
-        return DNN_ERROR;
-    }
-
-    if (ctx->options.batch_size > 1) {
-        avpriv_report_missing_feature(ctx, "batch mode for sync execution");
-        return DNN_ERROR;
-    }
-
-    if (!ov_model->exe_network) {
-        if (init_model_ov(ov_model, exec_params->input_name, exec_params->output_names[0]) != DNN_SUCCESS) {
-            av_log(ctx, AV_LOG_ERROR, "Failed init OpenVINO exectuable network or inference request\n");
-            return DNN_ERROR;
-        }
-    }
-
-    if (ff_dnn_fill_task(&task, exec_params, ov_model, 0, 1) != DNN_SUCCESS) {
-        return DNN_ERROR;
-    }
-
-    if (extract_inference_from_task(ov_model->model->func_type, &task, ov_model->inference_queue, exec_params) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
-    }
-
-    request = ff_safe_queue_pop_front(ov_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
-    }
-
-    return execute_model_ov(request, ov_model->inference_queue);
-}
-
-DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     OVModel *ov_model = model->model;
     OVContext *ctx = &ov_model->ctx;
@@ -831,7 +783,8 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(task, exec_params, ov_model, 1, 1) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, ov_model, ctx->options.async, 1) != DNN_SUCCESS) {
+        av_freep(&task);
         return DNN_ERROR;
     }
 
@@ -846,26 +799,47 @@ DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) {
+    if (ctx->options.async) {
+        while (ff_queue_size(ov_model->inference_queue) >= ctx->options.batch_size) {
+            request = ff_safe_queue_pop_front(ov_model->request_queue);
+            if (!request) {
+                av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+                return DNN_ERROR;
+            }
+
+            ret = execute_model_ov(request, ov_model->inference_queue);
+            if (ret != DNN_SUCCESS) {
+                return ret;
+            }
+        }
+
+        return DNN_SUCCESS;
+    }
+    else {
+        if (model->func_type == DFT_ANALYTICS_CLASSIFY) {
+            // Classification filter has not been completely
+            // tested with the sync mode. So, do not support now.
+            return DNN_ERROR;
+        }
+
+        if (ctx->options.batch_size > 1) {
+            avpriv_report_missing_feature(ctx, "batch mode for sync execution");
+            return DNN_ERROR;
+        }
+
         request = ff_safe_queue_pop_front(ov_model->request_queue);
         if (!request) {
             av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
             return DNN_ERROR;
         }
-
-        ret = execute_model_ov(request, ov_model->inference_queue);
-        if (ret != DNN_SUCCESS) {
-            return ret;
-        }
+        return execute_model_ov(request, ov_model->inference_queue);
     }
-
-    return DNN_SUCCESS;
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
     OVModel *ov_model = model->model;
-    return ff_dnn_get_async_result_common(ov_model->task_queue, in, out);
+    return ff_dnn_get_result_common(ov_model->task_queue, in, out);
 }
 
 DNNReturnType ff_dnn_flush_ov(const DNNModel *model)
diff --git a/libavfilter/dnn/dnn_backend_openvino.h b/libavfilter/dnn/dnn_backend_openvino.h
index 046d0c5b5a..0bbca0c057 100644
--- a/libavfilter/dnn/dnn_backend_openvino.h
+++ b/libavfilter/dnn/dnn_backend_openvino.h
@@ -32,8 +32,7 @@
 DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
 
 DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
 DNNReturnType ff_dnn_flush_ov(const DNNModel *model);
 
 void ff_dnn_free_model_ov(DNNModel **model);
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index ffec1b1328..4a0b561f29 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -42,6 +42,7 @@
 
 typedef struct TFOptions{
     char *sess_config;
+    uint8_t async;
     uint32_t nireq;
 } TFOptions;
 
@@ -1121,34 +1122,6 @@ err:
 
 DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
-    TFModel *tf_model = model->model;
-    TFContext *ctx = &tf_model->ctx;
-    TaskItem task;
-    TFRequestItem *request;
-
-    if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0) {
-        return DNN_ERROR;
-    }
-
-    if (ff_dnn_fill_task(&task, exec_params, tf_model, 0, 1) != DNN_SUCCESS) {
-        return DNN_ERROR;
-    }
-
-    if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
-    }
-
-    request = ff_safe_queue_pop_front(tf_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
-    }
-
-    return execute_model_tf(request, tf_model->inference_queue);
-}
-
-DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
-{
     TFModel *tf_model = model->model;
     TFContext *ctx = &tf_model->ctx;
     TaskItem *task;
@@ -1164,7 +1137,7 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa
         return DNN_ERROR;
     }
 
-    if (ff_dnn_fill_task(task, exec_params, tf_model, 1, 1) != DNN_SUCCESS) {
+    if (ff_dnn_fill_task(task, exec_params, tf_model, ctx->options.async, 1) != DNN_SUCCESS) {
        av_freep(&task);
        return DNN_ERROR;
     }
@@ -1188,10 +1161,10 @@ DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBasePa
     return execute_model_tf(request, tf_model->inference_queue);
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out)
+DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
     TFModel *tf_model = model->model;
-    return ff_dnn_get_async_result_common(tf_model->task_queue, in, out);
+    return ff_dnn_get_result_common(tf_model->task_queue, in, out);
 }
 
 DNNReturnType ff_dnn_flush_tf(const DNNModel *model)
diff --git a/libavfilter/dnn/dnn_backend_tf.h b/libavfilter/dnn/dnn_backend_tf.h
index aec0fc2011..f14ea8c47a 100644
--- a/libavfilter/dnn/dnn_backend_tf.h
+++ b/libavfilter/dnn/dnn_backend_tf.h
@@ -32,8 +32,7 @@
 DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
 
 DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params);
-DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out);
+DNNAsyncStatusType ff_dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out);
 DNNReturnType ff_dnn_flush_tf(const DNNModel *model);
 
 void ff_dnn_free_model_tf(DNNModel **model);
diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c
index 81af934dd5..554a36b0dc 100644
--- a/libavfilter/dnn/dnn_interface.c
+++ b/libavfilter/dnn/dnn_interface.c
@@ -42,14 +42,15 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type)
     case DNN_NATIVE:
         dnn_module->load_model = &ff_dnn_load_model_native;
         dnn_module->execute_model = &ff_dnn_execute_model_native;
+        dnn_module->get_result = &ff_dnn_get_result_native;
+        dnn_module->flush = &ff_dnn_flush_native;
         dnn_module->free_model = &ff_dnn_free_model_native;
         break;
     case DNN_TF:
     #if (CONFIG_LIBTENSORFLOW == 1)
         dnn_module->load_model = &ff_dnn_load_model_tf;
         dnn_module->execute_model = &ff_dnn_execute_model_tf;
-        dnn_module->execute_model_async = &ff_dnn_execute_model_async_tf;
-        dnn_module->get_async_result = &ff_dnn_get_async_result_tf;
+        dnn_module->get_result = &ff_dnn_get_result_tf;
         dnn_module->flush = &ff_dnn_flush_tf;
         dnn_module->free_model = &ff_dnn_free_model_tf;
     #else
@@ -61,8 +62,7 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type)
     #if (CONFIG_LIBOPENVINO == 1)
         dnn_module->load_model = &ff_dnn_load_model_ov;
         dnn_module->execute_model = &ff_dnn_execute_model_ov;
-        dnn_module->execute_model_async = &ff_dnn_execute_model_async_ov;
-        dnn_module->get_async_result = &ff_dnn_get_async_result_ov;
+        dnn_module->get_result = &ff_dnn_get_result_ov;
         dnn_module->flush = &ff_dnn_flush_ov;
         dnn_module->free_model = &ff_dnn_free_model_ov;
     #else
diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c
index fcee7482c3..455eaa37f4 100644
--- a/libavfilter/dnn_filter_common.c
+++ b/libavfilter/dnn_filter_common.c
@@ -84,11 +84,6 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil
         return AVERROR(EINVAL);
     }
 
-    if (!ctx->dnn_module->execute_model_async && ctx->async) {
-        ctx->async = 0;
-        av_log(filter_ctx, AV_LOG_WARNING, "this backend does not support async execution, roll back to sync.\n");
-    }
-
 #if !HAVE_PTHREAD_CANCEL
     if (ctx->async) {
         ctx->async = 0;
@@ -141,18 +136,6 @@ DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *
     return (ctx->dnn_module->execute_model)(ctx->model, &exec_params);
 }
 
-DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame)
-{
-    DNNExecBaseParams exec_params = {
-        .input_name = ctx->model_inputname,
-        .output_names = (const char **)ctx->model_outputnames,
-        .nb_output = ctx->nb_outputs,
-        .in_frame = in_frame,
-        .out_frame = out_frame,
-    };
-    return (ctx->dnn_module->execute_model_async)(ctx->model, &exec_params);
-}
-
 DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target)
 {
     DNNExecClassificationParams class_params = {
@@ -165,12 +148,12 @@ DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_f
         },
         .target = target,
     };
-    return (ctx->dnn_module->execute_model_async)(ctx->model, &class_params.base);
+    return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base);
 }
 
-DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame)
+DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame)
 {
-    return (ctx->dnn_module->get_async_result)(ctx->model, in_frame, out_frame);
+    return (ctx->dnn_module->get_result)(ctx->model, in_frame, out_frame);
 }
 
 DNNReturnType ff_dnn_flush(DnnContext *ctx)
diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h
index 8e43d8c8dc..4d92c1dc36 100644
--- a/libavfilter/dnn_filter_common.h
+++ b/libavfilter/dnn_filter_common.h
@@ -56,9 +56,8 @@
 int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc);
 DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input);
 DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height);
 DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame);
-DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame);
 DNNReturnType ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target);
-DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame);
+DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame);
 DNNReturnType ff_dnn_flush(DnnContext *ctx);
 void ff_dnn_uninit(DnnContext *ctx);
diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h
index 5e9ffeb077..37e89d9789 100644
--- a/libavfilter/dnn_interface.h
+++ b/libavfilter/dnn_interface.h
@@ -114,10 +114,8 @@ typedef struct DNNModule{
     DNNModel *(*load_model)(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
     // Executes model with specified input and output. Returns DNN_ERROR otherwise.
     DNNReturnType (*execute_model)(const DNNModel *model, DNNExecBaseParams *exec_params);
-    // Executes model with specified input and output asynchronously. Returns DNN_ERROR otherwise.
-    DNNReturnType (*execute_model_async)(const DNNModel *model, DNNExecBaseParams *exec_params);
     // Retrieve inference result.
-    DNNAsyncStatusType (*get_async_result)(const DNNModel *model, AVFrame **in, AVFrame **out);
+    DNNAsyncStatusType (*get_result)(const DNNModel *model, AVFrame **in, AVFrame **out);
     // Flush all the pending tasks.
     DNNReturnType (*flush)(const DNNModel *model);
     // Frees memory allocated for model.
diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c
index 720cc70f5c..97bdb4e843 100644
--- a/libavfilter/vf_derain.c
+++ b/libavfilter/vf_derain.c
@@ -68,6 +68,7 @@ static int query_formats(AVFilterContext *ctx)
 
 static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 {
+    DNNAsyncStatusType async_state = 0;
     AVFilterContext *ctx = inlink->dst;
     AVFilterLink *outlink = ctx->outputs[0];
     DRContext *dr_context = ctx->priv;
@@ -88,6 +89,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
         av_frame_free(&in);
         return AVERROR(EIO);
     }
+    do {
+        async_state = ff_dnn_get_result(&dr_context->dnnctx, &in, &out);
+    } while (async_state == DAST_NOT_READY);
+
+    if (async_state != DAST_SUCCESS)
+        return AVERROR(EINVAL);
 
     av_frame_free(&in);
 
diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c
index 904067f5f6..d5ee65d403 100644
--- a/libavfilter/vf_dnn_classify.c
+++ b/libavfilter/vf_dnn_classify.c
@@ -224,7 +224,7 @@ static int dnn_classify_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t
     do {
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
-        async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame);
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
         if (out_frame) {
             av_assert0(in_frame == out_frame);
             ret = ff_filter_frame(outlink, out_frame);
@@ -268,7 +268,7 @@ static int dnn_classify_activate(AVFilterContext *filter_ctx)
     do {
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
-        async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame);
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
         if (out_frame) {
             av_assert0(in_frame == out_frame);
             ret = ff_filter_frame(outlink, out_frame);
diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c
index 200b5897c1..11de376753 100644
--- a/libavfilter/vf_dnn_detect.c
+++ b/libavfilter/vf_dnn_detect.c
@@ -424,7 +424,7 @@ static int dnn_detect_flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *o
     do {
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
-        async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame);
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
         if (out_frame) {
             av_assert0(in_frame == out_frame);
             ret = ff_filter_frame(outlink, out_frame);
@@ -458,7 +458,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx)
         if (ret < 0)
             return ret;
         if (ret > 0) {
-            if (ff_dnn_execute_model_async(&ctx->dnnctx, in, in) != DNN_SUCCESS) {
+            if (ff_dnn_execute_model(&ctx->dnnctx, in, in) != DNN_SUCCESS) {
                 return AVERROR(EIO);
             }
         }
@@ -468,7 +468,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx)
     do {
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
-        async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame);
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
         if (out_frame) {
             av_assert0(in_frame == out_frame);
             ret = ff_filter_frame(outlink, out_frame);
@@ -496,7 +496,7 @@ static int dnn_detect_activate_async(AVFilterContext *filter_ctx)
     return 0;
 }
 
-static int dnn_detect_activate(AVFilterContext *filter_ctx)
+static av_unused int dnn_detect_activate(AVFilterContext *filter_ctx)
 {
     DnnDetectContext *ctx = filter_ctx->priv;
 
@@ -537,5 +537,5 @@ const AVFilter ff_vf_dnn_detect = {
     FILTER_INPUTS(dnn_detect_inputs),
     FILTER_OUTPUTS(dnn_detect_outputs),
     .priv_class     = &dnn_detect_class,
-    .activate       = dnn_detect_activate,
+    .activate       = dnn_detect_activate_async,
 };
diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 5f24955263..7435dd4959 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -328,7 +328,7 @@ static int flush_frame(AVFilterLink *outlink, int64_t pts, int64_t *out_pts)
     do {
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
-        async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame);
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
         if (out_frame) {
             if (isPlanarYUV(in_frame->format))
                 copy_uv_planes(ctx, out_frame, in_frame);
@@ -370,7 +370,7 @@ static int activate_async(AVFilterContext *filter_ctx)
                 return AVERROR(ENOMEM);
             }
             av_frame_copy_props(out, in);
-            if (ff_dnn_execute_model_async(&ctx->dnnctx, in, out) != DNN_SUCCESS) {
+            if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS) {
                 return AVERROR(EIO);
             }
         }
@@ -380,7 +380,7 @@ static int activate_async(AVFilterContext *filter_ctx)
     do {
         AVFrame *in_frame = NULL;
         AVFrame *out_frame = NULL;
-        async_state = ff_dnn_get_async_result(&ctx->dnnctx, &in_frame, &out_frame);
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
         if (out_frame) {
             if (isPlanarYUV(in_frame->format))
                 copy_uv_planes(ctx, out_frame, in_frame);
@@ -410,7 +410,7 @@ static int activate_async(AVFilterContext *filter_ctx)
     return 0;
 }
 
-static int activate(AVFilterContext *filter_ctx)
+static av_unused int activate(AVFilterContext *filter_ctx)
 {
     DnnProcessingContext *ctx = filter_ctx->priv;
 
@@ -454,5 +454,5 @@ const AVFilter ff_vf_dnn_processing = {
     FILTER_INPUTS(dnn_processing_inputs),
     FILTER_OUTPUTS(dnn_processing_outputs),
     .priv_class     = &dnn_processing_class,
-    .activate       = activate,
+    .activate       = activate_async,
 };
diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
index f009a868f8..f4fc84d251 100644
--- a/libavfilter/vf_sr.c
+++ b/libavfilter/vf_sr.c
@@ -119,6 +119,7 @@ static int config_output(AVFilterLink *outlink)
 
 static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 {
+    DNNAsyncStatusType async_state = 0;
     AVFilterContext *context = inlink->dst;
     SRContext *ctx = context->priv;
     AVFilterLink *outlink = context->outputs[0];
@@ -148,6 +149,13 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
         return AVERROR(EIO);
     }
 
+    do {
+        async_state = ff_dnn_get_result(&ctx->dnnctx, &in, &out);
+    } while (async_state == DAST_NOT_READY);
+
+    if (async_state != DAST_SUCCESS)
+        return AVERROR(EINVAL);
+
     if (ctx->sws_uv_scale) {
         sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), in->linesize + 1,
                   0, ctx->sws_uv_height, out->data + 1, out->linesize + 1);