From patchwork Fri May 28 09:24:52 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shubhanshu Saxena
X-Patchwork-Id: 27968
From: Shubhanshu Saxena
To: ffmpeg-devel@ffmpeg.org
Cc: Shubhanshu Saxena
Date: Fri, 28 May 2021 14:54:52 +0530
Message-Id: <20210528092454.31874-8-shubhanshu.e01@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210528092454.31874-1-shubhanshu.e01@gmail.com>
References: <20210528092454.31874-1-shubhanshu.e01@gmail.com>
Subject: [FFmpeg-devel] [PATCH 08/10] lavfi/dnn_backend_tf: Error Handling
List-Id: FFmpeg development discussions and patches
Reply-To: FFmpeg development discussions and patches
Sender: "ffmpeg-devel"

This commit adds handling for cases where an error may occur, clearing
the allocated memory resources.
Signed-off-by: Shubhanshu Saxena
---
 libavfilter/dnn/dnn_backend_tf.c | 100 +++++++++++++++++++++++--------
 1 file changed, 74 insertions(+), 26 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 5d34da5db1..31746deef4 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -114,14 +114,18 @@ static tf_infer_request* tf_create_inference_request(void)
 
 static DNNReturnType extract_inference_from_task(TaskItem *task, Queue *inference_queue)
 {
+    TFModel *tf_model = task->model;
+    TFContext *ctx = &tf_model->ctx;
     InferenceItem *inference = av_malloc(sizeof(*inference));
     if (!inference) {
+        av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for InferenceItem\n");
         return DNN_ERROR;
     }
     task->inference_todo = 1;
     task->inference_done = 0;
     inference->task = task;
     if (ff_queue_push_back(inference_queue, inference) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back inference_queue.\n");
         av_freep(&inference);
         return DNN_ERROR;
     }
@@ -232,14 +236,15 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu
 
     if (!in_frame) {
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for input frame\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto final;
     }
 
     out_frame = av_frame_alloc();
     if (!out_frame) {
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for output frame\n");
-        av_frame_free(&in_frame);
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto final;
     }
 
     in_frame->width = input_width;
@@ -256,19 +261,22 @@ static DNNReturnType get_output_tf(void *model, const char *input_name, int inpu
 
     if (extract_inference_from_task(&task, tf_model->inference_queue) != DNN_SUCCESS) {
         av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto final;
     }
 
     request = ff_safe_queue_pop_front(tf_model->request_queue);
     if (!request) {
         av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
-        return DNN_ERROR;
+        ret = DNN_ERROR;
+        goto final;
     }
 
     ret = execute_model_tf(request, tf_model->inference_queue);
     *output_width = out_frame->width;
     *output_height = out_frame->height;
 
+final:
     av_frame_free(&out_frame);
     av_frame_free(&in_frame);
     return ret;
@@ -788,18 +796,13 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
     //parse options
     av_opt_set_defaults(&ctx);
     if (av_opt_set_from_string(&ctx, options, NULL, "=", "&") < 0) {
-        av_log(&tf_model->ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options);
-        av_freep(&tf_model);
-        av_freep(&model);
-        return NULL;
+        av_log(&ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options);
+        goto err;
     }
 
     if (load_tf_model(tf_model, model_filename) != DNN_SUCCESS){
         if (load_native_model(tf_model, model_filename) != DNN_SUCCESS){
-            av_freep(&tf_model);
-            av_freep(&model);
-
-            return NULL;
+            goto err;
         }
     }
 
@@ -808,14 +811,34 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
     }
 
     tf_model->request_queue = ff_safe_queue_create();
+    if (!tf_model->request_queue) {
+        goto err;
+    }
 
     for (int i = 0; i < ctx->options.nireq; i++) {
         RequestItem *item = av_mallocz(sizeof(*item));
+        if (!item) {
+            goto err;
+        }
         item->infer_request = tf_create_inference_request();
-        ff_safe_queue_push_back(tf_model->request_queue, item);
+        if (!item->infer_request) {
+            av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for TensorFlow inference request\n");
+            av_freep(&item);
+            goto err;
+        }
+
+        if (ff_safe_queue_push_back(tf_model->request_queue, item) < 0) {
+            av_freep(&item->infer_request);
+            av_freep(&item);
+            goto err;
+        }
     }
 
     tf_model->inference_queue = ff_queue_create();
+    if (!tf_model->inference_queue) {
+        goto err;
+    }
+
     model->model = tf_model;
     model->get_input = &get_input_tf;
     model->get_output = &get_output_tf;
@@ -824,6 +847,9 @@ DNNModel *ff_dnn_load_model_tf(const char *model_filename, DNNFunctionType func_
     model->func_type = func_type;
 
     return model;
+err:
+    ff_dnn_free_model_tf(&model);
+    return NULL;
 }
 
 static DNNReturnType fill_model_input_tf(TFModel *tf_model, RequestItem *request)
 {
@@ -838,24 +864,31 @@ static DNNReturnType fill_model_input_tf(TFModel *tf_model, RequestItem *request
     task = inference->task;
     request->inference = inference;
 
-    if (get_input_tf(tf_model, &input, task->input_name) != DNN_SUCCESS)
-        return DNN_ERROR;
+    if (get_input_tf(tf_model, &input, task->input_name) != DNN_SUCCESS) {
+        goto err;
+    }
 
     infer_request = request->infer_request;
     input.height = task->in_frame->height;
     input.width = task->in_frame->width;
 
     infer_request->tf_input = av_malloc(sizeof(TF_Output));
+    if (!infer_request->tf_input) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for input tensor\n");
+        goto err;
+    }
+
     infer_request->tf_input->oper = TF_GraphOperationByName(tf_model->graph, task->input_name);
     if (!infer_request->tf_input->oper){
         av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model\n", task->input_name);
-        return DNN_ERROR;
+        goto err;
     }
     infer_request->tf_input->index = 0;
+
     infer_request->input_tensor = allocate_input_tensor(&input);
     if (!infer_request->input_tensor){
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for input tensor\n");
-        return DNN_ERROR;
+        goto err;
     }
     input.data = (float *)TF_TensorData(infer_request->input_tensor);
 
@@ -880,27 +913,35 @@ static DNNReturnType fill_model_input_tf(TFModel *tf_model, RequestItem *request
     infer_request->tf_outputs = av_malloc_array(task->nb_output, sizeof(TF_Output));
     if (infer_request->tf_outputs == NULL) {
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for *tf_outputs\n");
-        return DNN_ERROR;
+        goto err;
     }
 
     infer_request->output_tensors = av_mallocz_array(task->nb_output, sizeof(*infer_request->output_tensors));
     if (!infer_request->output_tensors) {
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for output tensor\n");
-        return DNN_ERROR;
+        goto err;
     }
-
     for (int i = 0; i < task->nb_output; ++i) {
         infer_request->tf_outputs[i].oper = TF_GraphOperationByName(tf_model->graph, task->output_names[i]);
         if (!infer_request->tf_outputs[i].oper) {
             av_log(ctx, AV_LOG_ERROR, "Could not find output \"%s\" in model\n", task->output_names[i]);
-            return DNN_ERROR;
+            goto err;
         }
         infer_request->tf_outputs[i].index = 0;
     }
 
     return DNN_SUCCESS;
+err:
+    for (uint32_t i = 0; i < task->nb_output; ++i) {
+        if (infer_request->output_tensors[i]) {
+            TF_DeleteTensor(infer_request->output_tensors[i]);
+        }
+    }
+    tf_free_request(infer_request);
+    return DNN_ERROR;
 }
+
 static void infer_completion_callback(void *args) {
     RequestItem *request = args;
     InferenceItem *inference = request->inference;
@@ -912,9 +953,8 @@ static void infer_completion_callback(void *args) {
 
     outputs = av_malloc_array(task->nb_output, sizeof(*outputs));
     if (!outputs) {
-        tf_free_request(infer_request);
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for *outputs\n");
-        return;
+        goto final;
     }
 
     for (uint32_t i = 0; i < task->nb_output; ++i) {
@@ -952,7 +992,7 @@ static void infer_completion_callback(void *args) {
             }
         }
         av_log(ctx, AV_LOG_ERROR, "Tensorflow backend does not support this kind of dnn filter now\n");
-        return;
+        goto final;
     }
     for (uint32_t i = 0; i < task->nb_output; ++i) {
         if (infer_request->output_tensors[i]) {
@@ -960,9 +1000,13 @@ static void infer_completion_callback(void *args) {
         }
     }
     task->inference_done++;
+final:
     tf_free_request(infer_request);
     av_freep(&outputs);
-    ff_safe_queue_push_back(tf_model->request_queue, request);
+
+    if (ff_safe_queue_push_back(tf_model->request_queue, request) < 0) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
+    }
 }
 
 static DNNReturnType execute_model_tf(RequestItem *request, Queue *inference_queue)
@@ -974,6 +1018,10 @@ static DNNReturnType execute_model_tf(RequestItem *request, Queue *inference_que
     TaskItem *task;
 
     inference = ff_queue_peek_front(inference_queue);
+    if (!inference) {
+        av_log(NULL, AV_LOG_ERROR, "Failed to get inference item\n");
+        return DNN_ERROR;
+    }
     task = inference->task;
     tf_model = task->model;
     ctx = &tf_model->ctx;