From patchwork Sat Jul 31 05:35:12 2021
X-Patchwork-Submitter: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
X-Patchwork-Id: 29149
From: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
To: ffmpeg-devel@ffmpeg.org
Cc: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Date: Sat, 31 Jul 2021 11:05:12 +0530
Message-Id: <20210731053516.7700-4-shubhanshu.e01@gmail.com>
In-Reply-To: <20210731053516.7700-1-shubhanshu.e01@gmail.com>
References: <20210731053516.7700-1-shubhanshu.e01@gmail.com>
X-Mailer: git-send-email 2.25.1
Subject: [FFmpeg-devel] [PATCH 4/8] lavfi/dnn: Async Support for TensorFlow Backend
"ffmpeg-devel" X-TUID: lTN//XZGYCLi This commit enables async execution in the TensorFlow backend and adds function to flush extra frames. The async execution mechanism executes the TFInferRequests on a detached thread. The following is the comparison of this mechanism with the existing sync mechanism on TensorFlow C API 2.5 GPU variant. Async Mode: 0m57.064s Sync Mode: 1m1.959s The above was performed on super resolution filter using ESPCN model. Signed-off-by: Shubhanshu Saxena --- libavfilter/dnn/dnn_backend_tf.c | 121 ++++++++++++++++++++++++++----- libavfilter/dnn/dnn_backend_tf.h | 3 + libavfilter/dnn/dnn_interface.c | 3 + 3 files changed, 109 insertions(+), 18 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 805a0328b6..d3658c3308 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -37,7 +37,6 @@ #include "dnn_io_proc.h" #include "dnn_backend_common.h" #include "safe_queue.h" -#include "queue.h" #include typedef struct TFOptions{ @@ -58,6 +57,7 @@ typedef struct TFModel{ TF_Status *status; SafeQueue *request_queue; Queue *inference_queue; + Queue *task_queue; } TFModel; /** @@ -74,7 +74,7 @@ typedef struct TFInferRequest { typedef struct TFRequestItem { TFInferRequest *infer_request; InferenceItem *inference; - // further properties will be added later for async + DNNAsyncExecModule exec_module; } TFRequestItem; #define OFFSET(x) offsetof(TFContext, x) @@ -88,6 +88,7 @@ static const AVOption dnn_tensorflow_options[] = { AVFILTER_DEFINE_CLASS(dnn_tensorflow); static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue); +static void infer_completion_callback(void *args); static void free_buffer(void *data, size_t length) { @@ -885,6 +886,9 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ av_freep(&item); goto err; } + item->exec_module.start_inference = &tf_start_inference; + item->exec_module.callback = &infer_completion_callback; + item->exec_module.args = item; if (ff_safe_queue_push_back(tf_model->request_queue, item) < 0) { av_freep(&item->infer_request); @@ -898,6 +902,11 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_ goto err; } + tf_model->task_queue = ff_queue_create(); + if (!tf_model->task_queue) { + goto err; + } + model->model = tf_model; model->get_input = &get_input_tf; model->get_output = &get_output_tf; @@ -1060,7 +1069,6 @@ static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_q { TFModel *tf_model; TFContext *ctx; - TFInferRequest *infer_request; InferenceItem *inference; TaskItem *task; @@ -1073,23 +1081,14 @@ static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_q tf_model = task->model; ctx = &tf_model->ctx; - if (task->async) { - avpriv_report_missing_feature(ctx, "Async execution not supported"); + if (fill_model_input_tf(tf_model, request) != DNN_SUCCESS) { return DNN_ERROR; - } else { - if (fill_model_input_tf(tf_model, request) != DNN_SUCCESS) { - return DNN_ERROR; - } + } - infer_request = request->infer_request; - TF_SessionRun(tf_model->session, NULL, - infer_request->tf_input, &infer_request->input_tensor, 1, - infer_request->tf_outputs, infer_request->output_tensors, - task->nb_output, NULL, 0, NULL, - tf_model->status); - if (TF_GetCode(tf_model->status) != TF_OK) { - tf_free_request(infer_request); - av_log(ctx, AV_LOG_ERROR, "Failed to run session when executing model\n"); + if (task->async) { + return 
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 805a0328b6..d3658c3308 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -37,7 +37,6 @@
 #include "dnn_io_proc.h"
 #include "dnn_backend_common.h"
 #include "safe_queue.h"
-#include "queue.h"
 #include <tensorflow/c/c_api.h>
 
 typedef struct TFOptions{
@@ -58,6 +57,7 @@ typedef struct TFModel{
     TF_Status *status;
     SafeQueue *request_queue;
     Queue *inference_queue;
+    Queue *task_queue;
 } TFModel;
 
 /**
@@ -74,7 +74,7 @@ typedef struct TFInferRequest {
 typedef struct TFRequestItem {
     TFInferRequest *infer_request;
     InferenceItem *inference;
-    // further properties will be added later for async
+    DNNAsyncExecModule exec_module;
 } TFRequestItem;
 
 #define OFFSET(x) offsetof(TFContext, x)
@@ -88,6 +88,7 @@ static const AVOption dnn_tensorflow_options[] = {
 AVFILTER_DEFINE_CLASS(dnn_tensorflow);
 
 static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_queue);
+static void infer_completion_callback(void *args);
 
 static void free_buffer(void *data, size_t length)
 {
@@ -885,6 +886,9 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         av_freep(&item);
         goto err;
     }
+    item->exec_module.start_inference = &tf_start_inference;
+    item->exec_module.callback = &infer_completion_callback;
+    item->exec_module.args = item;
 
     if (ff_safe_queue_push_back(tf_model->request_queue, item) < 0) {
         av_freep(&item->infer_request);
@@ -898,6 +902,11 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
         goto err;
     }
 
+    tf_model->task_queue = ff_queue_create();
+    if (!tf_model->task_queue) {
+        goto err;
+    }
+
     model->model = tf_model;
     model->get_input = &get_input_tf;
     model->get_output = &get_output_tf;
@@ -1060,7 +1069,6 @@ static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_q
 {
     TFModel *tf_model;
     TFContext *ctx;
-    TFInferRequest *infer_request;
     InferenceItem *inference;
     TaskItem *task;
 
@@ -1073,23 +1081,14 @@ static DNNReturnType execute_model_tf(TFRequestItem *request, Queue *inference_q
     tf_model = task->model;
     ctx = &tf_model->ctx;
 
-    if (task->async) {
-        avpriv_report_missing_feature(ctx, "Async execution not supported");
+    if (fill_model_input_tf(tf_model, request) != DNN_SUCCESS) {
         return DNN_ERROR;
-    } else {
-        if (fill_model_input_tf(tf_model, request) != DNN_SUCCESS) {
-            return DNN_ERROR;
-        }
+    }
 
-        infer_request = request->infer_request;
-        TF_SessionRun(tf_model->session, NULL,
-                      infer_request->tf_input, &infer_request->input_tensor, 1,
-                      infer_request->tf_outputs, infer_request->output_tensors,
-                      task->nb_output, NULL, 0, NULL,
-                      tf_model->status);
-        if (TF_GetCode(tf_model->status) != TF_OK) {
-            tf_free_request(infer_request);
-            av_log(ctx, AV_LOG_ERROR, "Failed to run session when executing model\n");
+    if (task->async) {
+        return ff_dnn_start_inference_async(ctx, &request->exec_module);
+    } else {
+        if (tf_start_inference(request) != DNN_SUCCESS) {
             return DNN_ERROR;
         }
         infer_completion_callback(request);
@@ -1126,6 +1125,83 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *
     return execute_model_tf(request, tf_model->inference_queue);
 }
 
+DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params) {
+    TFModel *tf_model = model->model;
+    TFContext *ctx = &tf_model->ctx;
+    TaskItem *task;
+    TFRequestItem *request;
+
+    if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0) {
+        return DNN_ERROR;
+    }
+
+    task = av_malloc(sizeof(*task));
+    if (!task) {
+        av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
+        return DNN_ERROR;
+    }
+
+    if (ff_dnn_fill_task(task, exec_params, tf_model, 1, 1) != DNN_SUCCESS) {
+        av_freep(&task);
+        return DNN_ERROR;
+    }
+
+    if (ff_queue_push_back(tf_model->task_queue, task) < 0) {
+        av_freep(&task);
+        av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
+        return DNN_ERROR;
+    }
+
+    if (extract_inference_from_task(task, tf_model->inference_queue) != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
+        return DNN_ERROR;
+    }
+
+    request = ff_safe_queue_pop_front(tf_model->request_queue);
+    if (!request) {
+        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+        return DNN_ERROR;
+    }
+    return execute_model_tf(request, tf_model->inference_queue);
+}
+
+DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out)
+{
+    TFModel *tf_model = model->model;
+    return ff_dnn_get_async_result_common(tf_model->task_queue, in, out);
+}
+
+DNNReturnType ff_dnn_flush_tf(const DNNModel *model)
+{
+    TFModel *tf_model = model->model;
+    TFContext *ctx = &tf_model->ctx;
+    TFRequestItem *request;
+    DNNReturnType ret;
+
+    if (ff_queue_size(tf_model->inference_queue) == 0) {
+        // no pending task need to flush
+        return DNN_SUCCESS;
+    }
+
+    request = ff_safe_queue_pop_front(tf_model->request_queue);
+    if (!request) {
+        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+        return DNN_ERROR;
+    }
+
+    ret = fill_model_input_tf(tf_model, request);
+    if (ret != DNN_SUCCESS) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to fill model input.\n");
+        if (ff_safe_queue_push_back(tf_model->request_queue, request) < 0) {
+            av_freep(&request->infer_request);
+            av_freep(&request);
+        }
+        return ret;
+    }
+
+    return ff_dnn_start_inference_async(ctx, &request->exec_module);
+}
+
 void ff_dnn_free_model_tf(DNNModel **model)
 {
     TFModel *tf_model;
@@ -1134,6 +1210,7 @@ void ff_dnn_free_model_tf(DNNModel **model)
         tf_model = (*model)->model;
         while (ff_safe_queue_size(tf_model->request_queue) != 0) {
             TFRequestItem *item = ff_safe_queue_pop_front(tf_model->request_queue);
+            ff_dnn_async_module_cleanup(&item->exec_module);
             tf_free_request(item->infer_request);
             av_freep(&item->infer_request);
             av_freep(&item);
@@ -1146,6 +1223,14 @@ void ff_dnn_free_model_tf(DNNModel **model)
         }
         ff_queue_destroy(tf_model->inference_queue);
 
+        while (ff_queue_size(tf_model->task_queue) != 0) {
+            TaskItem *item = ff_queue_pop_front(tf_model->task_queue);
+            av_frame_free(&item->in_frame);
+            av_frame_free(&item->out_frame);
+            av_freep(&item);
+        }
+        ff_queue_destroy(tf_model->task_queue);
+
         if (tf_model->graph){
             TF_DeleteGraph(tf_model->graph);
         }
diff --git a/libavfilter/dnn/dnn_backend_tf.h b/libavfilter/dnn/dnn_backend_tf.h
index 3dfd6e4280..aec0fc2011 100644
--- a/libavfilter/dnn/dnn_backend_tf.h
+++ b/libavfilter/dnn/dnn_backend_tf.h
@@ -32,6 +32,9 @@
 DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
 
 DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params);
+DNNReturnType ff_dnn_execute_model_async_tf(const DNNModel *model, DNNExecBaseParams *exec_params);
+DNNAsyncStatusType ff_dnn_get_async_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out);
+DNNReturnType ff_dnn_flush_tf(const DNNModel *model);
 
 void ff_dnn_free_model_tf(DNNModel **model);
 
diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c
index 02e532fc1b..81af934dd5 100644
--- a/libavfilter/dnn/dnn_interface.c
+++ b/libavfilter/dnn/dnn_interface.c
@@ -48,6 +48,9 @@ DNNModule *ff_get_dnn_module(DNNBackendType backend_type)
     #if (CONFIG_LIBTENSORFLOW == 1)
         dnn_module->load_model = &ff_dnn_load_model_tf;
         dnn_module->execute_model = &ff_dnn_execute_model_tf;
+        dnn_module->execute_model_async = &ff_dnn_execute_model_async_tf;
+        dnn_module->get_async_result = &ff_dnn_get_async_result_tf;
+        dnn_module->flush = &ff_dnn_flush_tf;
         dnn_module->free_model = &ff_dnn_free_model_tf;
     #else
         av_freep(&dnn_module);
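For context, the following caller-side sketch (illustrative only, not part of
the patch) shows how a filter might drive the three entry points registered
above through the generic DNNModule function pointers. The tensor names "x"
and "y", the helper run_async_and_drain(), and the surrounding setup are
hypothetical; DNNExecBaseParams, DNN_SUCCESS and DAST_SUCCESS are the types
and values declared in dnn_interface.h at this point in the series.

/* Illustrative sketch only -- not part of the patch.  Assumes compilation
 * inside libavfilter; tensor names and the helper are hypothetical. */
#include "dnn_interface.h"

static int run_async_and_drain(DNNModule *dnn_module, DNNModel *model,
                               AVFrame *in_frame, AVFrame *out_frame)
{
    const char *output_names[] = { "y" };
    DNNExecBaseParams exec_params = {
        .input_name   = "x",
        .output_names = output_names,
        .nb_output    = 1,
        .in_frame     = in_frame,
        .out_frame    = out_frame,
    };
    AVFrame *in = NULL, *out = NULL;

    /* Queue the frame; the async path returns once the request has been
     * handed to the detached inference thread. */
    if (dnn_module->execute_model_async(model, &exec_params) != DNN_SUCCESS)
        return AVERROR(EIO);

    /* Drain whatever has completed so far. */
    while (dnn_module->get_async_result(model, &in, &out) == DAST_SUCCESS) {
        /* hand `out` downstream; release or reuse `in` as required */
    }
    return 0;
}

At EOF a filter would additionally call dnn_module->flush(model) once and keep
polling get_async_result() until the task queue reports empty.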