[FFmpeg-devel,2/3] lavfi/dnn: fix mem leak in TF backend error handle

Message ID 20230316030016.4096-2-ting.fu@intel.com
State New
Series [FFmpeg-devel,1/3] lavfi/dnn: fix corruption when TF backend infer failed

Checks

Context Check Description
yinshiyou/make_loongarch64 success Make finished
yinshiyou/make_fate_loongarch64 success Make fate finished
andriy/make_x86 success Make finished
andriy/make_fate_x86 success Make fate finished

Commit Message

Ting Fu March 16, 2023, 3 a.m. UTC
Signed-off-by: Ting Fu <ting.fu@intel.com>
---
 libavfilter/dnn/dnn_backend_tf.c | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Guo, Yejun March 24, 2023, 3:48 a.m. UTC | #1
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Ting
> Fu
> Sent: Thursday, March 16, 2023 11:00 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH 2/3] lavfi/dnn: fix mem leak in TF backend
> error handle
> 
> Signed-off-by: Ting Fu <ting.fu@intel.com>
> ---
>  libavfilter/dnn/dnn_backend_tf.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/libavfilter/dnn/dnn_backend_tf.c
> b/libavfilter/dnn/dnn_backend_tf.c
> index fb1a5f1350..5d809a8694 100644
> --- a/libavfilter/dnn/dnn_backend_tf.c
> +++ b/libavfilter/dnn/dnn_backend_tf.c
> @@ -176,6 +176,7 @@ static int tf_start_inference(void *args)
>      if (TF_GetCode(request->status) != TF_OK) {
>          av_log(&tf_model->ctx, AV_LOG_ERROR, "%s", TF_Message(request->status));
>          tf_free_request(infer_request);
> +        av_freep(&request);

Will request be freed in the queue management code?

The others in this patch look fine.
Ting Fu March 24, 2023, 7:54 a.m. UTC | #2
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Guo,
> Yejun
> Sent: Friday, March 24, 2023 11:49 AM
> To: FFmpeg development discussions and patches <ffmpeg-
> devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 2/3] lavfi/dnn: fix mem leak in TF
> backend error handle
> 
> 
> 
> > -----Original Message-----
> > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> Ting
> > Fu
> > Sent: Thursday, March 16, 2023 11:00 AM
> > To: ffmpeg-devel@ffmpeg.org
> > Subject: [FFmpeg-devel] [PATCH 2/3] lavfi/dnn: fix mem leak in TF
> > backend error handle
> >
> > Signed-off-by: Ting Fu <ting.fu@intel.com>
> > ---
> >  libavfilter/dnn/dnn_backend_tf.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/libavfilter/dnn/dnn_backend_tf.c
> > b/libavfilter/dnn/dnn_backend_tf.c
> > index fb1a5f1350..5d809a8694 100644
> > --- a/libavfilter/dnn/dnn_backend_tf.c
> > +++ b/libavfilter/dnn/dnn_backend_tf.c
> > @@ -176,6 +176,7 @@ static int tf_start_inference(void *args)
> >      if (TF_GetCode(request->status) != TF_OK) {
> >          av_log(&tf_model->ctx, AV_LOG_ERROR, "%s", TF_Message(request->status));
> >          tf_free_request(infer_request);
> > +        av_freep(&request);
> 
> Will request be freed in the queue management code?
> 
> The others in this patch look fine.
As I checked, the queue management code does not free such variables; they are freed in the ff_dnn_free_model_tf function instead. So I deleted this line in PATCH V2, and added extra code at line 1130 that also frees the memory held by 'model' when executing the model fails. Hope this makes sense to you.
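
To make the ownership question above concrete, here is a minimal sketch of the pattern the thread settles on: request items are allocated once, handed out through a queue, and reclaimed in a single teardown function, so an error path should only release the per-inference payload, not the item itself. This is not FFmpeg's actual code; RequestItem, start_inference, and model_free are hypothetical names.

#include <stdlib.h>

typedef struct RequestItem {
    void *infer_request;        /* per-inference payload */
} RequestItem;

typedef struct Model {
    RequestItem **requests;     /* every item ever allocated */
    int nb_requests;
} Model;

/* Error path inside inference: release only the payload. The
 * RequestItem itself is still owned by model_free() below, so
 * freeing it here would cause a double free at teardown. */
static int start_inference(RequestItem *req)
{
    /* ... inference failed ... */
    free(req->infer_request);
    req->infer_request = NULL;
    return -1;
}

/* Central teardown: the one place where RequestItems are freed,
 * analogous to what ff_dnn_free_model_tf does for the TF backend. */
static void model_free(Model *m)
{
    for (int i = 0; i < m->nb_requests; i++) {
        if (m->requests[i])
            free(m->requests[i]->infer_request);
        free(m->requests[i]);
    }
    free(m->requests);
    m->requests = NULL;
}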

Patch

diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index fb1a5f1350..5d809a8694 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -176,6 +176,7 @@  static int tf_start_inference(void *args)
     if (TF_GetCode(request->status) != TF_OK) {
         av_log(&tf_model->ctx, AV_LOG_ERROR, "%s", TF_Message(request->status));
         tf_free_request(infer_request);
+        av_freep(&request);
         return DNN_GENERIC_ERROR;
     }
     return 0;
@@ -466,6 +467,7 @@  static int load_tf_model(TFModel *tf_model, const char *model_filename)
     {
         TF_DeleteGraph(tf_model->graph);
         tf_model->graph = NULL;
+        av_freep(&sess_config);
         av_log(ctx, AV_LOG_ERROR, "Failed to create new session with model graph\n");
         return DNN_GENERIC_ERROR;
     }
@@ -484,6 +486,7 @@  static int load_tf_model(TFModel *tf_model, const char *model_filename)
             tf_model->graph = NULL;
             TF_DeleteStatus(tf_model->status);
             tf_model->status = NULL;
+            av_freep(&sess_config);
             av_log(ctx, AV_LOG_ERROR, "Failed to run session when initializing\n");
             return DNN_GENERIC_ERROR;
         }
@@ -1177,12 +1180,14 @@  int ff_dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_param
 
     ret = extract_lltask_from_task(task, tf_model->lltask_queue);
     if (ret != 0) {
+        av_freep(&task);
         av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
         return ret;
     }
 
     request = ff_safe_queue_pop_front(tf_model->request_queue);
     if (!request) {
+        av_freep(&task);
         av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
         return AVERROR(EINVAL);
     }
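
For context, the hunks above plug the leaks by calling av_freep() separately on each early-return path. A common alternative in C is a single cleanup label, sketched below with hypothetical helpers (create_session, run_init_session) standing in for the TF session setup; this is only an illustration of the idiom, not a proposed rewrite of load_tf_model.

#include <stdlib.h>

/* Hypothetical stand-ins for the TF session setup steps. */
static int create_session(const void *cfg) { return cfg ? 0 : -1; }
static int run_init_session(void)          { return 0; }

static int load_model(void)
{
    int ret = 0;
    void *sess_config = malloc(64);    /* stands in for the parsed session config */
    if (!sess_config)
        return -1;

    if (create_session(sess_config) < 0 || run_init_session() < 0) {
        ret = -1;                      /* would be DNN_GENERIC_ERROR in the backend */
        goto cleanup;
    }
    /* ... further setup ... */

cleanup:
    free(sess_config);                 /* freed exactly once, on every path */
    return ret;
}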