From patchwork Wed Mar 24 07:39:26 2021
X-Patchwork-Submitter: "Fu, Ting"
X-Patchwork-Id: 26592
From: Ting Fu
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 24 Mar 2021 15:39:26 +0800
Message-Id: <20210324073928.3570-1-ting.fu@intel.com>
Subject: [FFmpeg-devel] [PATCH 1/3] lavfi/dnn_backend_tensorflow.c: fix mem leak in load_tf_model

Signed-off-by: Ting Fu
---
 libavfilter/dnn/dnn_backend_tf.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 750a476726..e016571304 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -282,6 +282,9 @@ static DNNReturnType load_tf_model(TFModel *tf_model, const char *model_filename
     TF_SetConfig(sess_opts, sess_config, sess_config_length,tf_model->status);
     av_freep(&sess_config);
     if (TF_GetCode(tf_model->status) != TF_OK) {
+        TF_DeleteGraph(tf_model->graph);
+        TF_DeleteStatus(tf_model->status);
+        TF_DeleteSessionOptions(sess_opts);
         av_log(ctx, AV_LOG_ERROR, "Failed to set config for sess options with %s\n",
                tf_model->ctx.options.sess_config);
         return DNN_ERROR;
@@ -292,6 +295,8 @@ static DNNReturnType load_tf_model(TFModel *tf_model, const char *model_filename
     TF_DeleteSessionOptions(sess_opts);
     if (TF_GetCode(tf_model->status) != TF_OK)
     {
+        TF_DeleteGraph(tf_model->graph);
+        TF_DeleteStatus(tf_model->status);
         av_log(ctx, AV_LOG_ERROR, "Failed to create new session with model graph\n");
         return DNN_ERROR;
     }
@@ -304,6 +309,9 @@ static DNNReturnType load_tf_model(TFModel *tf_model, const char *model_filename
                       &init_op, 1, NULL, tf_model->status);
         if (TF_GetCode(tf_model->status) != TF_OK)
         {
+            TF_DeleteSession(tf_model->session, tf_model->status);
+            TF_DeleteGraph(tf_model->graph);
+            TF_DeleteStatus(tf_model->status);
            av_log(ctx, AV_LOG_ERROR, "Failed to run session when initializing\n");
            return DNN_ERROR;
        }
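
The ownership rule these hunks enforce can be seen in isolation in the minimal sketch below. It is not FFmpeg code; the helper name and parameters are illustrative. It follows the same convention as load_tf_model(): once the TF_Graph and TF_Status exist, any failure while configuring or creating the session must release them together with whatever session state has been created so far.

    #include <stddef.h>
    #include <tensorflow/c/c_api.h>

    /* Illustrative helper: create a session for an existing graph, releasing
     * graph/status/options on every failure path.  On failure the caller's
     * graph and status are torn down here, mirroring load_tf_model(). */
    static TF_Session *open_session_or_cleanup(TF_Graph *graph, TF_Status *status,
                                               const void *config, size_t config_len)
    {
        TF_SessionOptions *opts = TF_NewSessionOptions();
        TF_Session *session;

        if (config && config_len) {
            TF_SetConfig(opts, config, config_len, status);
            if (TF_GetCode(status) != TF_OK) {
                /* config rejected: free everything created so far */
                TF_DeleteSessionOptions(opts);
                TF_DeleteGraph(graph);
                TF_DeleteStatus(status);
                return NULL;
            }
        }

        session = TF_NewSession(graph, opts, status);
        TF_DeleteSessionOptions(opts);      /* options are not needed after this point */
        if (TF_GetCode(status) != TF_OK) {
            TF_DeleteGraph(graph);
            TF_DeleteStatus(status);
            return NULL;
        }
        return session;
    }
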
From patchwork Wed Mar 24 07:39:27 2021
X-Patchwork-Submitter: "Fu, Ting"
X-Patchwork-Id: 26593
From: Ting Fu
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 24 Mar 2021 15:39:27 +0800
Message-Id: <20210324073928.3570-2-ting.fu@intel.com>
In-Reply-To: <20210324073928.3570-1-ting.fu@intel.com>
References: <20210324073928.3570-1-ting.fu@intel.com>
Subject: [FFmpeg-devel] [PATCH 2/3] lavfi/dnn_backend_tensorflow.c: fix mem leak in load_native_model

Signed-off-by: Ting Fu
---
 libavfilter/dnn/dnn_backend_tf.c | 55 ++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index e016571304..c18cb4063f 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -330,7 +330,7 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     TF_OperationDescription *op_desc;
     TF_Output input;
     int64_t strides[] = {1, 1, 1, 1};
-    TF_Tensor *tensor;
+    TF_Tensor *kernel_tensor = NULL, *biases_tensor = NULL;
     int64_t dims[4];
     int dims_len;
     char name_buffer[NAME_BUFFER_SIZE];
@@ -347,17 +347,15 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     dims[2] = params->kernel_size;
     dims[3] = params->input_num;
     dims_len = 4;
-    tensor = TF_AllocateTensor(TF_FLOAT, dims, dims_len, size * sizeof(float));
-    memcpy(TF_TensorData(tensor), params->kernel, size * sizeof(float));
-    TF_SetAttrTensor(op_desc, "value", tensor, tf_model->status);
+    kernel_tensor = TF_AllocateTensor(TF_FLOAT, dims, dims_len, size * sizeof(float));
+    memcpy(TF_TensorData(kernel_tensor), params->kernel, size * sizeof(float));
+    TF_SetAttrTensor(op_desc, "value", kernel_tensor, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to set value for kernel of conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
     op = TF_FinishOperation(op_desc, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to add kernel to conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
 
     snprintf(name_buffer, NAME_BUFFER_SIZE, "transpose%d", layer);
@@ -370,8 +368,7 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     TF_SetAttrType(op_desc, "Tperm", TF_INT32);
     op = TF_FinishOperation(op_desc, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to add transpose to conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
 
     snprintf(name_buffer, NAME_BUFFER_SIZE, "conv2d%d", layer);
@@ -385,8 +382,7 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     TF_SetAttrString(op_desc, "padding", "VALID", 5);
     *cur_op = TF_FinishOperation(op_desc, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to add conv2d to conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
 
     snprintf(name_buffer, NAME_BUFFER_SIZE, "conv_biases%d", layer);
@@ -394,17 +390,15 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     TF_SetAttrType(op_desc, "dtype", TF_FLOAT);
     dims[0] = params->output_num;
     dims_len = 1;
-    tensor = TF_AllocateTensor(TF_FLOAT, dims, dims_len, params->output_num * sizeof(float));
-    memcpy(TF_TensorData(tensor), params->biases, params->output_num * sizeof(float));
-    TF_SetAttrTensor(op_desc, "value", tensor, tf_model->status);
+    biases_tensor = TF_AllocateTensor(TF_FLOAT, dims, dims_len, params->output_num * sizeof(float));
+    memcpy(TF_TensorData(biases_tensor), params->biases, params->output_num * sizeof(float));
+    TF_SetAttrTensor(op_desc, "value", biases_tensor, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to set value for conv_biases of conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
     op = TF_FinishOperation(op_desc, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to add conv_biases to conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
 
     snprintf(name_buffer, NAME_BUFFER_SIZE, "bias_add%d", layer);
@@ -416,8 +410,7 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     TF_SetAttrType(op_desc, "T", TF_FLOAT);
     *cur_op = TF_FinishOperation(op_desc, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to add bias_add to conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
 
     snprintf(name_buffer, NAME_BUFFER_SIZE, "activation%d", layer);
@@ -440,11 +433,15 @@ static DNNReturnType add_conv_layer(TFModel *tf_model, TF_Operation *transpose_o
     TF_SetAttrType(op_desc, "T", TF_FLOAT);
     *cur_op = TF_FinishOperation(op_desc, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK){
-        av_log(ctx, AV_LOG_ERROR, "Failed to add activation function to conv layer %d\n", layer);
-        return DNN_ERROR;
+        goto err;
     }
 
     return DNN_SUCCESS;
+err:
+    TF_DeleteTensor(kernel_tensor);
+    TF_DeleteTensor(biases_tensor);
av_log(ctx, AV_LOG_ERROR, "Failed to add conv layer %d\n", layer); + return DNN_ERROR; } static DNNReturnType add_depth_to_space_layer(TFModel *tf_model, TF_Operation **cur_op, @@ -499,11 +496,13 @@ static DNNReturnType add_pad_layer(TFModel *tf_model, TF_Operation **cur_op, pads[7] = params->paddings[3][1]; TF_SetAttrTensor(op_desc, "value", tensor, tf_model->status); if (TF_GetCode(tf_model->status) != TF_OK){ + TF_DeleteTensor(tensor); av_log(ctx, AV_LOG_ERROR, "Failed to set value for pad of layer %d\n", layer); return DNN_ERROR; } op = TF_FinishOperation(op_desc, tf_model->status); if (TF_GetCode(tf_model->status) != TF_OK){ + TF_DeleteTensor(tensor); av_log(ctx, AV_LOG_ERROR, "Failed to add pad to layer %d\n", layer); return DNN_ERROR; } @@ -519,6 +518,7 @@ static DNNReturnType add_pad_layer(TFModel *tf_model, TF_Operation **cur_op, TF_SetAttrString(op_desc, "mode", "SYMMETRIC", 9); *cur_op = TF_FinishOperation(op_desc, tf_model->status); if (TF_GetCode(tf_model->status) != TF_OK){ + TF_DeleteTensor(tensor); av_log(ctx, AV_LOG_ERROR, "Failed to add mirror_pad to layer %d\n", layer); return DNN_ERROR; } @@ -546,11 +546,13 @@ static DNNReturnType add_maximum_layer(TFModel *tf_model, TF_Operation **cur_op, *y = params->val.y; TF_SetAttrTensor(op_desc, "value", tensor, tf_model->status); if (TF_GetCode(tf_model->status) != TF_OK){ + TF_DeleteTensor(tensor); av_log(ctx, AV_LOG_ERROR, "Failed to set value for maximum/y of layer %d", layer); return DNN_ERROR; } op = TF_FinishOperation(op_desc, tf_model->status); if (TF_GetCode(tf_model->status) != TF_OK){ + TF_DeleteTensor(tensor); av_log(ctx, AV_LOG_ERROR, "Failed to add maximum/y to layer %d\n", layer); return DNN_ERROR; } @@ -565,6 +567,7 @@ static DNNReturnType add_maximum_layer(TFModel *tf_model, TF_Operation **cur_op, TF_SetAttrType(op_desc, "T", TF_FLOAT); *cur_op = TF_FinishOperation(op_desc, tf_model->status); if (TF_GetCode(tf_model->status) != TF_OK){ + TF_DeleteTensor(tensor); av_log(ctx, AV_LOG_ERROR, "Failed to add maximum to layer %d\n", layer); return DNN_ERROR; } @@ -579,7 +582,7 @@ static DNNReturnType load_native_model(TFModel *tf_model, const char *model_file TF_OperationDescription *op_desc; TF_Operation *op; TF_Operation *transpose_op; - TF_Tensor *tensor; + TF_Tensor *tensor = NULL; TF_Output input; int32_t *transpose_perm; int64_t transpose_perm_shape[] = {4}; @@ -600,6 +603,7 @@ static DNNReturnType load_native_model(TFModel *tf_model, const char *model_file #define CLEANUP_ON_ERROR(tf_model) \ { \ + TF_DeleteTensor(tensor); \ TF_DeleteGraph(tf_model->graph); \ TF_DeleteStatus(tf_model->status); \ av_log(ctx, AV_LOG_ERROR, "Failed to set value or add operator to layer\n"); \ @@ -627,6 +631,9 @@ static DNNReturnType load_native_model(TFModel *tf_model, const char *model_file CLEANUP_ON_ERROR(tf_model); } transpose_op = TF_FinishOperation(op_desc, tf_model->status); + if (TF_GetCode(tf_model->status) != TF_OK){ + CLEANUP_ON_ERROR(tf_model); + } for (layer = 0; layer < native_model->layers_num; ++layer){ switch (native_model->layers[layer].type){ From patchwork Wed Mar 24 07:39:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Fu, Ting" X-Patchwork-Id: 26594 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 01DD3449F23 for ; Wed, 24 Mar 2021 09:48:11 +0200 (EET) Received: 
From patchwork Wed Mar 24 07:39:28 2021
X-Patchwork-Submitter: "Fu, Ting"
X-Patchwork-Id: 26594
From: Ting Fu
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 24 Mar 2021 15:39:28 +0800
Message-Id: <20210324073928.3570-3-ting.fu@intel.com>
In-Reply-To: <20210324073928.3570-1-ting.fu@intel.com>
References: <20210324073928.3570-1-ting.fu@intel.com>
Subject: [FFmpeg-devel] [PATCH 3/3] lavfi/dnn_backend_tensorflow.c: fix mem leak in execute_model_tf

Signed-off-by: Ting Fu
---
 libavfilter/dnn/dnn_backend_tf.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index c18cb4063f..c0aa510630 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -766,18 +766,21 @@ static DNNReturnType execute_model_tf(const DNNModel *model, const char *input_n
     if (nb_output != 1) {
         // currently, the filter does not need multiple outputs,
         // so we just pending the support until we really need it.
+        TF_DeleteTensor(input_tensor);
         avpriv_report_missing_feature(ctx, "multiple outputs");
         return DNN_ERROR;
     }
 
     tf_outputs = av_malloc_array(nb_output, sizeof(*tf_outputs));
     if (tf_outputs == NULL) {
+        TF_DeleteTensor(input_tensor);
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for *tf_outputs\n"); \
         return DNN_ERROR;
     }
 
     output_tensors = av_mallocz_array(nb_output, sizeof(*output_tensors));
     if (!output_tensors) {
+        TF_DeleteTensor(input_tensor);
         av_freep(&tf_outputs);
         av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for output tensor\n"); \
         return DNN_ERROR;
@@ -786,6 +789,7 @@ static DNNReturnType execute_model_tf(const DNNModel *model, const char *input_n
     for (int i = 0; i < nb_output; ++i) {
         tf_outputs[i].oper = TF_GraphOperationByName(tf_model->graph, output_names[i]);
         if (!tf_outputs[i].oper) {
+            TF_DeleteTensor(input_tensor);
             av_freep(&tf_outputs);
             av_freep(&output_tensors);
             av_log(ctx, AV_LOG_ERROR, "Could not find output \"%s\" in model\n", output_names[i]); \
@@ -799,6 +803,7 @@ static DNNReturnType execute_model_tf(const DNNModel *model, const char *input_n
                   tf_outputs, output_tensors, nb_output,
                   NULL, 0, NULL, tf_model->status);
     if (TF_GetCode(tf_model->status) != TF_OK) {
+        TF_DeleteTensor(input_tensor);
         av_freep(&tf_outputs);
         av_freep(&output_tensors);
         av_log(ctx, AV_LOG_ERROR, "Failed to run session when executing model\n");
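
The rule applied throughout this patch is that once the input TF_Tensor has been created in execute_model_tf(), every early return must delete it along with any output bookkeeping allocated so far. A reduced sketch of that discipline is shown below; it is standalone and illustrative, and run_single_output() is not part of the FFmpeg code.

    #include <tensorflow/c/c_api.h>

    /* Illustrative helper: run one input / one output and make sure the input
     * tensor never outlives the call, whether the run succeeds or fails. */
    static int run_single_output(TF_Session *session, TF_Status *status,
                                 TF_Output input, TF_Tensor *input_tensor,
                                 TF_Output output, TF_Tensor **output_tensor)
    {
        TF_SessionRun(session, NULL,
                      &input, &input_tensor, 1,
                      &output, output_tensor, 1,
                      NULL, 0, NULL, status);
        TF_DeleteTensor(input_tensor);   /* caller keeps ownership of inputs, so free here */
        if (TF_GetCode(status) != TF_OK)
            return -1;
        return 0;                        /* caller must later free *output_tensor */
    }
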