From patchwork Sun Apr 28 06:46:47 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhao Zhili
X-Patchwork-Id: 48325
Delivered-To: ffmpegpatchwork2@gmail.com
From: Zhao Zhili
To: ffmpeg-devel@ffmpeg.org
Cc: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:47 +0800
X-OQ-MSGID: <20240428064655.106853-1-quinkblack@foxmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Subject: [FFmpeg-devel] [PATCH WIP v2 1/9] avfilter/dnn: Refactor DNN parameter configuration system

From: Zhao Zhili

This patch tries to resolve multiple issues related to parameter
configuration:

Firstly, each DNN filter duplicates DNN_COMMON_OPTIONS, which should be
the common options of the backend.

Secondly, backend options are hidden behind the scenes. The user passes a
single AV_OPT_TYPE_STRING backend_configs option, which each backend
parses on its own, so the help message cannot tell us which options each
backend supports.

Thirdly, DNN backends duplicate DNN_BACKEND_COMMON_OPTIONS.

Last but not least, passing backend options via AV_OPT_TYPE_STRING makes
it hard, if not impossible, to pass AV_OPT_TYPE_BINARY options to a
backend.

This patch puts the backend common options and each backend's own options
inside DnnContext, to reduce code duplication, make the options user
friendly, and make them easy to extend for future use cases.

There is a known issue: for a filter which only supports one or two of
the backends, the help message still shows the options of all three
backends. Each DNN filter should be able to run on any backend; the
current limitation is mostly due to incomplete implementations (e.g.,
libtorch only supports DFT_PROCESS_FRAME) and lack of maintenance of the
filters.

For example, ./ffmpeg -h filter=dnn_processing

dnn_processing AVOptions:
  dnn_backend        ..FV....... DNN backend (from INT_MIN to INT_MAX) (default tensorflow)
     tensorflow 1    ..FV....... tensorflow backend flag
     openvino 2      ..FV....... openvino backend flag
     torch 3         ..FV....... torch backend flag

dnn_base AVOptions:
  model              ..F........ path to model file
  input              ..F........ input name of the model
  output             ..F........ output name of the model
  backend_configs    ..F.......P backend configs (deprecated)
  options            ..F.......P backend configs (deprecated)
  nireq              ..F........ number of request (from 0 to INT_MAX) (default 0)
  async              ..F........ use DNN async inference (default true)
  device             ..F........ device to run model

dnn_tensorflow AVOptions:
  sess_config        ..F........ config for SessionOptions

dnn_openvino AVOptions:
  batch_size         ..F........ batch size per request (from 1 to 1000) (default 1)
  input_resizable    ..F........ can input be resizable or not (default false)
  layout             ..F........ input layout of model (from 0 to 2) (default none)
     none 0          ..F........ none
     nchw 1          ..F........ nchw
     nhwc 2          ..F........ nhwc
  scale              ..F........ Add scale preprocess operation. Divide each element of input by specified value. (from INT_MIN to INT_MAX) (default 0)
  mean               ..F........ Add mean preprocess operation. Subtract specified value from each element of input. (from INT_MIN to INT_MAX) (default 0)

dnn_th AVOptions:
  optimize           ..F........
turn on graph executor optimization (from 0 to 1) (default 0) --- libavfilter/dnn/dnn_backend_common.h | 13 ++- libavfilter/dnn/dnn_backend_openvino.c | 146 ++++++++++--------------- libavfilter/dnn/dnn_backend_tf.c | 82 +++++--------- libavfilter/dnn/dnn_backend_torch.cpp | 67 ++++-------- libavfilter/dnn/dnn_interface.c | 81 ++++++++++++++ libavfilter/dnn_filter_common.c | 32 +++++- libavfilter/dnn_filter_common.h | 37 +++---- libavfilter/dnn_interface.h | 66 ++++++++++- libavfilter/vf_derain.c | 5 +- libavfilter/vf_dnn_classify.c | 3 +- libavfilter/vf_dnn_detect.c | 3 +- libavfilter/vf_dnn_processing.c | 3 +- libavfilter/vf_sr.c | 5 +- 13 files changed, 314 insertions(+), 229 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_common.h b/libavfilter/dnn/dnn_backend_common.h index 42c67c7040..9f5d37b3e0 100644 --- a/libavfilter/dnn/dnn_backend_common.h +++ b/libavfilter/dnn/dnn_backend_common.h @@ -28,9 +28,16 @@ #include "../dnn_interface.h" #include "libavutil/thread.h" -#define DNN_BACKEND_COMMON_OPTIONS \ - { "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS }, \ - { "async", "use DNN async inference", OFFSET(options.async), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, FLAGS }, +#define DNN_DEFINE_CLASS_EXT(name, desc, options) \ + { \ + .class_name = desc, \ + .item_name = av_default_item_name, \ + .option = options, \ + .version = LIBAVUTIL_VERSION_INT, \ + .category = AV_CLASS_CATEGORY_FILTER, \ + } +#define DNN_DEFINE_CLASS(fname) \ + DNN_DEFINE_CLASS_EXT(fname, #fname, fname##_options) // one task for one function call from dnn interface typedef struct TaskItem { diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index 374f21b7a1..c4b0682f11 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -40,24 +40,8 @@ #endif #include "dnn_backend_common.h" -typedef struct OVOptions{ - char *device_type; - int nireq; - uint8_t async; - int batch_size; - int input_resizable; - DNNLayout layout; - float scale; - float mean; -} OVOptions; - -typedef struct OVContext { - const AVClass *class; - OVOptions options; -} OVContext; - typedef struct OVModel{ - OVContext ctx; + DnnContext *ctx; DNNModel *model; #if HAVE_OPENVINO2 ov_core_t *core; @@ -98,24 +82,20 @@ typedef struct OVRequestItem { generated_string = generated_string ? 
av_asprintf("%s %s", generated_string, iterate_string) : \ av_asprintf("%s", iterate_string); -#define OFFSET(x) offsetof(OVContext, x) +#define OFFSET(x) offsetof(OVOptions, x) #define FLAGS AV_OPT_FLAG_FILTERING_PARAM static const AVOption dnn_openvino_options[] = { - { "device", "device to run model", OFFSET(options.device_type), AV_OPT_TYPE_STRING, { .str = "CPU" }, 0, 0, FLAGS }, - DNN_BACKEND_COMMON_OPTIONS - { "batch_size", "batch size per request", OFFSET(options.batch_size), AV_OPT_TYPE_INT, { .i64 = 1 }, 1, 1000, FLAGS}, - { "input_resizable", "can input be resizable or not", OFFSET(options.input_resizable), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS }, - { "layout", "input layout of model", OFFSET(options.layout), AV_OPT_TYPE_INT, { .i64 = DL_NONE}, DL_NONE, DL_NHWC, FLAGS, .unit = "layout" }, + { "batch_size", "batch size per request", OFFSET(batch_size), AV_OPT_TYPE_INT, { .i64 = 1 }, 1, 1000, FLAGS}, + { "input_resizable", "can input be resizable or not", OFFSET(input_resizable), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS }, + { "layout", "input layout of model", OFFSET(layout), AV_OPT_TYPE_INT, { .i64 = DL_NONE}, DL_NONE, DL_NHWC, FLAGS, .unit = "layout" }, { "none", "none", 0, AV_OPT_TYPE_CONST, { .i64 = DL_NONE }, 0, 0, FLAGS, .unit = "layout"}, { "nchw", "nchw", 0, AV_OPT_TYPE_CONST, { .i64 = DL_NCHW }, 0, 0, FLAGS, .unit = "layout"}, { "nhwc", "nhwc", 0, AV_OPT_TYPE_CONST, { .i64 = DL_NHWC }, 0, 0, FLAGS, .unit = "layout"}, - { "scale", "Add scale preprocess operation. Divide each element of input by specified value.", OFFSET(options.scale), AV_OPT_TYPE_FLOAT, { .dbl = 0 }, INT_MIN, INT_MAX, FLAGS}, - { "mean", "Add mean preprocess operation. Subtract specified value from each element of input.", OFFSET(options.mean), AV_OPT_TYPE_FLOAT, { .dbl = 0 }, INT_MIN, INT_MAX, FLAGS}, + { "scale", "Add scale preprocess operation. Divide each element of input by specified value.", OFFSET(scale), AV_OPT_TYPE_FLOAT, { .dbl = 0 }, INT_MIN, INT_MAX, FLAGS}, + { "mean", "Add mean preprocess operation. Subtract specified value from each element of input.", OFFSET(mean), AV_OPT_TYPE_FLOAT, { .dbl = 0 }, INT_MIN, INT_MAX, FLAGS}, { NULL } }; -AVFILTER_DEFINE_CLASS(dnn_openvino); - #if HAVE_OPENVINO2 static const struct { ov_status_e status; @@ -199,7 +179,7 @@ static int fill_model_input_ov(OVModel *ov_model, OVRequestItem *request) DNNData input; LastLevelTaskItem *lltask; TaskItem *task; - OVContext *ctx = &ov_model->ctx; + DnnContext *ctx = ov_model->ctx; #if HAVE_OPENVINO2 int64_t* dims; ov_status_e status; @@ -292,7 +272,7 @@ static int fill_model_input_ov(OVModel *ov_model, OVRequestItem *request) input.scale = 1; input.mean = 0; - for (int i = 0; i < ctx->options.batch_size; ++i) { + for (int i = 0; i < ctx->ov_option.batch_size; ++i) { lltask = ff_queue_pop_front(ov_model->lltask_queue); if (!lltask) { break; @@ -360,7 +340,7 @@ static void infer_completion_callback(void *args) OVModel *ov_model = task->model; SafeQueue *requestq = ov_model->request_queue; DNNData *outputs; - OVContext *ctx = &ov_model->ctx; + DnnContext *ctx = ov_model->ctx; #if HAVE_OPENVINO2 size_t* dims; ov_status_e status; @@ -410,9 +390,9 @@ static void infer_completion_callback(void *args) outputs[i].dims[2] = output_shape.rank > 1 ? dims[output_shape.rank - 2] : 1; outputs[i].dims[3] = output_shape.rank > 0 ? 
dims[output_shape.rank - 1] : 1; av_assert0(request->lltask_count <= dims[0]); - outputs[i].layout = ctx->options.layout; - outputs[i].scale = ctx->options.scale; - outputs[i].mean = ctx->options.mean; + outputs[i].layout = ctx->ov_option.layout; + outputs[i].scale = ctx->ov_option.scale; + outputs[i].mean = ctx->ov_option.mean; ov_shape_free(&output_shape); ov_tensor_free(output_tensor); output_tensor = NULL; @@ -452,9 +432,9 @@ static void infer_completion_callback(void *args) output.dims[i] = dims.dims[i]; av_assert0(request->lltask_count <= dims.dims[0]); output.dt = precision_to_datatype(precision); - output.layout = ctx->options.layout; - output.scale = ctx->options.scale; - output.mean = ctx->options.mean; + output.layout = ctx->ov_option.layout; + output.scale = ctx->ov_option.scale; + output.mean = ctx->ov_option.mean; outputs = &output; #endif @@ -590,7 +570,6 @@ static void dnn_free_model_ov(DNNModel **model) av_free(ov_model->all_output_names); av_free(ov_model->all_input_names); #endif - av_opt_free(&ov_model->ctx); av_freep(&ov_model); av_freep(model); } @@ -599,7 +578,7 @@ static void dnn_free_model_ov(DNNModel **model) static int init_model_ov(OVModel *ov_model, const char *input_name, const char **output_names, int nb_outputs) { int ret = 0; - OVContext *ctx = &ov_model->ctx; + DnnContext *ctx = ov_model->ctx; #if HAVE_OPENVINO2 ov_status_e status; ov_preprocess_input_tensor_info_t* input_tensor_info = NULL; @@ -610,7 +589,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * ov_layout_t* NCHW_layout = NULL; const char* NHWC_desc = "NHWC"; const char* NCHW_desc = "NCHW"; - const char* device = ctx->options.device_type; + const char* device = ctx->device ? ctx->device : "CPU"; #else IEStatusCode status; ie_available_devices_t a_dev; @@ -618,17 +597,17 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * char *all_dev_names = NULL; #endif // We scale pixel by default when do frame processing. - if (fabsf(ctx->options.scale) < 1e-6f) - ctx->options.scale = ov_model->model->func_type == DFT_PROCESS_FRAME ? 255 : 1; + if (fabsf(ctx->ov_option.scale) < 1e-6f) + ctx->ov_option.scale = ov_model->model->func_type == DFT_PROCESS_FRAME ? 
255 : 1; // batch size - if (ctx->options.batch_size <= 0) { - ctx->options.batch_size = 1; + if (ctx->ov_option.batch_size <= 0) { + ctx->ov_option.batch_size = 1; } #if HAVE_OPENVINO2 - if (ctx->options.batch_size > 1) { + if (ctx->ov_option.batch_size > 1) { avpriv_report_missing_feature(ctx, "Do not support batch_size > 1 for now," "change batch_size to 1.\n"); - ctx->options.batch_size = 1; + ctx->ov_option.batch_size = 1; } status = ov_preprocess_prepostprocessor_create(ov_model->ov_model, &ov_model->preprocess); @@ -677,9 +656,9 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * ret = ov2_map_error(status, NULL); goto err; } - if (ctx->options.layout == DL_NCHW) + if (ctx->ov_option.layout == DL_NCHW) status = ov_preprocess_input_model_info_set_layout(input_model_info, NCHW_layout); - else if (ctx->options.layout == DL_NHWC) + else if (ctx->ov_option.layout == DL_NHWC) status = ov_preprocess_input_model_info_set_layout(input_model_info, NHWC_layout); if (status != OK) { av_log(ctx, AV_LOG_ERROR, "Failed to get set input model layout\n"); @@ -725,7 +704,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * } if (ov_model->model->func_type != DFT_PROCESS_FRAME) status |= ov_preprocess_output_set_element_type(output_tensor_info, F32); - else if (fabsf(ctx->options.scale - 1) > 1e-6f || fabsf(ctx->options.mean) > 1e-6f) + else if (fabsf(ctx->ov_option.scale - 1) > 1e-6f || fabsf(ctx->ov_option.mean) > 1e-6f) status |= ov_preprocess_output_set_element_type(output_tensor_info, F32); else status |= ov_preprocess_output_set_element_type(output_tensor_info, U8); @@ -740,7 +719,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * ov_model->output_info = NULL; } // set preprocess steps. 
- if (fabsf(ctx->options.scale - 1) > 1e-6f || fabsf(ctx->options.mean) > 1e-6f) { + if (fabsf(ctx->ov_option.scale - 1) > 1e-6f || fabsf(ctx->ov_option.mean) > 1e-6f) { ov_preprocess_preprocess_steps_t* input_process_steps = NULL; status = ov_preprocess_input_info_get_preprocess_steps(ov_model->input_info, &input_process_steps); if (status != OK) { @@ -749,8 +728,8 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * goto err; } status = ov_preprocess_preprocess_steps_convert_element_type(input_process_steps, F32); - status |= ov_preprocess_preprocess_steps_mean(input_process_steps, ctx->options.mean); - status |= ov_preprocess_preprocess_steps_scale(input_process_steps, ctx->options.scale); + status |= ov_preprocess_preprocess_steps_mean(input_process_steps, ctx->ov_option.mean); + status |= ov_preprocess_preprocess_steps_scale(input_process_steps, ctx->ov_option.scale); if (status != OK) { av_log(ctx, AV_LOG_ERROR, "Failed to set preprocess steps\n"); ov_preprocess_preprocess_steps_free(input_process_steps); @@ -824,7 +803,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * ov_layout_free(NCHW_layout); ov_layout_free(NHWC_layout); #else - if (ctx->options.batch_size > 1) { + if (ctx->ov_option.batch_size > 1) { input_shapes_t input_shapes; status = ie_network_get_input_shapes(ov_model->network, &input_shapes); if (status != OK) { @@ -832,7 +811,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * goto err; } for (int i = 0; i < input_shapes.shape_num; i++) - input_shapes.shapes[i].shape.dims[0] = ctx->options.batch_size; + input_shapes.shapes[i].shape.dims[0] = ctx->ov_option.batch_size; status = ie_network_reshape(ov_model->network, input_shapes); ie_network_input_shapes_free(&input_shapes); if (status != OK) { @@ -882,7 +861,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * } } - status = ie_core_load_network(ov_model->core, ov_model->network, ctx->options.device_type, &config, &ov_model->exe_network); + status = ie_core_load_network(ov_model->core, ov_model->network, ctx->device, &config, &ov_model->exe_network); if (status != OK) { av_log(ctx, AV_LOG_ERROR, "Failed to load OpenVINO model network\n"); status = ie_core_get_available_devices(ov_model->core, &a_dev); @@ -895,15 +874,15 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * APPEND_STRING(all_dev_names, a_dev.devices[i]) } av_log(ctx, AV_LOG_ERROR,"device %s may not be supported, all available devices are: \"%s\"\n", - ctx->options.device_type, all_dev_names); + ctx->device, all_dev_names); ret = AVERROR(ENODEV); goto err; } #endif // create infer_requests for async execution - if (ctx->options.nireq <= 0) { + if (ctx->nireq <= 0) { // the default value is a rough estimation - ctx->options.nireq = av_cpu_count() / 2 + 1; + ctx->nireq = av_cpu_count() / 2 + 1; } ov_model->request_queue = ff_safe_queue_create(); @@ -912,7 +891,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * goto err; } - for (int i = 0; i < ctx->options.nireq; i++) { + for (int i = 0; i < ctx->nireq; i++) { OVRequestItem *item = av_mallocz(sizeof(*item)); if (!item) { ret = AVERROR(ENOMEM); @@ -945,7 +924,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * } #endif - item->lltasks = av_malloc_array(ctx->options.batch_size, sizeof(*item->lltasks)); + item->lltasks = av_malloc_array(ctx->ov_option.batch_size, 
sizeof(*item->lltasks)); if (!item->lltasks) { ret = AVERROR(ENOMEM); goto err; @@ -994,7 +973,7 @@ static int execute_model_ov(OVRequestItem *request, Queue *inferenceq) LastLevelTaskItem *lltask; int ret = 0; TaskItem *task; - OVContext *ctx; + DnnContext *ctx; OVModel *ov_model; if (ff_queue_size(inferenceq) == 0) { @@ -1010,7 +989,7 @@ static int execute_model_ov(OVRequestItem *request, Queue *inferenceq) lltask = ff_queue_peek_front(inferenceq); task = lltask->task; ov_model = task->model; - ctx = &ov_model->ctx; + ctx = ov_model->ctx; ret = fill_model_input_ov(ov_model, request); if (ret != 0) { @@ -1084,8 +1063,8 @@ err: static int get_input_ov(void *model, DNNData *input, const char *input_name) { OVModel *ov_model = model; - OVContext *ctx = &ov_model->ctx; - int input_resizable = ctx->options.input_resizable; + DnnContext *ctx = ov_model->ctx; + int input_resizable = ctx->ov_option.input_resizable; #if HAVE_OPENVINO2 ov_shape_t input_shape = {0}; @@ -1291,7 +1270,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i #endif int ret; OVModel *ov_model = model; - OVContext *ctx = &ov_model->ctx; + DnnContext *ctx = ov_model->ctx; TaskItem task; OVRequestItem *request; DNNExecBaseParams exec_params = { @@ -1308,7 +1287,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i } #if HAVE_OPENVINO2 - if (ctx->options.input_resizable) { + if (ctx->ov_option.input_resizable) { status = ov_partial_shape_create(4, dims, &partial_shape); if (status != OK) { av_log(ctx, AV_LOG_ERROR, "Failed to create partial shape.\n"); @@ -1339,7 +1318,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i if (!ov_model->compiled_model) { #else - if (ctx->options.input_resizable) { + if (ctx->ov_option.input_resizable) { status = ie_network_get_input_shapes(ov_model->network, &input_shapes); input_shapes.shapes->shape.dims[2] = input_height; input_shapes.shapes->shape.dims[3] = input_width; @@ -1386,11 +1365,10 @@ err: return ret; } -static DNNModel *dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx) +static DNNModel *dnn_load_model_ov(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx) { DNNModel *model = NULL; OVModel *ov_model = NULL; - OVContext *ctx = NULL; #if HAVE_OPENVINO2 ov_core_t* core = NULL; ov_model_t* ovmodel = NULL; @@ -1411,17 +1389,9 @@ static DNNModel *dnn_load_model_ov(const char *model_filename, DNNFunctionType f av_freep(&model); return NULL; } + ov_model->ctx = ctx; model->model = ov_model; ov_model->model = model; - ov_model->ctx.class = &dnn_openvino_class; - ctx = &ov_model->ctx; - - //parse options - av_opt_set_defaults(ctx); - if (av_opt_set_from_string(ctx, options, NULL, "=", "&") < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options); - goto err; - } #if HAVE_OPENVINO2 status = ov_core_create(&core); @@ -1430,13 +1400,13 @@ static DNNModel *dnn_load_model_ov(const char *model_filename, DNNFunctionType f } ov_model->core = core; - status = ov_core_read_model(core, model_filename, NULL, &ovmodel); + status = ov_core_read_model(core, ctx->model_filename, NULL, &ovmodel); if (status != OK) { ov_version_t ver; status = ov_get_openvino_version(&ver); av_log(NULL, AV_LOG_ERROR, "Failed to read the network from model file %s,\n" "Please check if the model version matches the runtime OpenVINO Version:\n", - model_filename); + ctx->model_filename); if (status == OK) { av_log(NULL, 
AV_LOG_ERROR, "BuildNumber: %s\n", ver.buildNumber); } @@ -1452,13 +1422,13 @@ static DNNModel *dnn_load_model_ov(const char *model_filename, DNNFunctionType f if (status != OK) goto err; - status = ie_core_read_network(ov_model->core, model_filename, NULL, &ov_model->network); + status = ie_core_read_network(ov_model->core, ctx->model_filename, NULL, &ov_model->network); if (status != OK) { ie_version_t ver; ver = ie_c_api_version(); av_log(ctx, AV_LOG_ERROR, "Failed to read the network from model file %s,\n" "Please check if the model version matches the runtime OpenVINO %s\n", - model_filename, ver.api_version); + ctx->model_filename, ver.api_version); ie_version_free(&ver); goto err; } @@ -1496,7 +1466,6 @@ static DNNModel *dnn_load_model_ov(const char *model_filename, DNNFunctionType f model->get_input = &get_input_ov; model->get_output = &get_output_ov; - model->options = options; model->filter_ctx = filter_ctx; model->func_type = func_type; @@ -1510,7 +1479,7 @@ err: static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params) { OVModel *ov_model = model->model; - OVContext *ctx = &ov_model->ctx; + DnnContext *ctx = ov_model->ctx; OVRequestItem *request; TaskItem *task; int ret; @@ -1539,7 +1508,7 @@ static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_p return AVERROR(ENOMEM); } - ret = ff_dnn_fill_task(task, exec_params, ov_model, ctx->options.async, 1); + ret = ff_dnn_fill_task(task, exec_params, ov_model, ctx->async, 1); if (ret != 0) { av_freep(&task); return ret; @@ -1557,8 +1526,8 @@ static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_p return ret; } - if (ctx->options.async) { - while (ff_queue_size(ov_model->lltask_queue) >= ctx->options.batch_size) { + if (ctx->async) { + while (ff_queue_size(ov_model->lltask_queue) >= ctx->ov_option.batch_size) { request = ff_safe_queue_pop_front(ov_model->request_queue); if (!request) { av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n"); @@ -1581,7 +1550,7 @@ static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_p return AVERROR(ENOSYS); } - if (ctx->options.batch_size > 1) { + if (ctx->ov_option.batch_size > 1) { avpriv_report_missing_feature(ctx, "batch mode for sync execution"); return AVERROR(ENOSYS); } @@ -1604,7 +1573,7 @@ static DNNAsyncStatusType dnn_get_result_ov(const DNNModel *model, AVFrame **in, static int dnn_flush_ov(const DNNModel *model) { OVModel *ov_model = model->model; - OVContext *ctx = &ov_model->ctx; + DnnContext *ctx = ov_model->ctx; OVRequestItem *request; #if HAVE_OPENVINO2 ov_status_e status; @@ -1652,6 +1621,7 @@ static int dnn_flush_ov(const DNNModel *model) } const DNNModule ff_dnn_backend_openvino = { + .clazz = DNN_DEFINE_CLASS(dnn_openvino), .load_model = dnn_load_model_ov, .execute_model = dnn_execute_model_ov, .get_result = dnn_get_result_ov, diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 2ed17c3c87..d24591b90b 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -36,19 +36,8 @@ #include "safe_queue.h" #include -typedef struct TFOptions{ - char *sess_config; - uint8_t async; - uint32_t nireq; -} TFOptions; - -typedef struct TFContext { - const AVClass *class; - TFOptions options; -} TFContext; - -typedef struct TFModel{ - TFContext ctx; +typedef struct TFModel { + DnnContext *ctx; DNNModel *model; TF_Graph *graph; TF_Session *session; @@ -76,15 +65,13 @@ typedef struct TFRequestItem { DNNAsyncExecModule exec_module; } 
TFRequestItem; -#define OFFSET(x) offsetof(TFContext, x) +#define OFFSET(x) offsetof(TFOptions, x) #define FLAGS AV_OPT_FLAG_FILTERING_PARAM static const AVOption dnn_tensorflow_options[] = { - { "sess_config", "config for SessionOptions", OFFSET(options.sess_config), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS }, - DNN_BACKEND_COMMON_OPTIONS + { "sess_config", "config for SessionOptions", OFFSET(sess_config), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS }, { NULL } }; -AVFILTER_DEFINE_CLASS(dnn_tensorflow); static int execute_model_tf(TFRequestItem *request, Queue *lltask_queue); static void infer_completion_callback(void *args); @@ -160,7 +147,7 @@ static int tf_start_inference(void *args) TFModel *tf_model = task->model; if (!request) { - av_log(&tf_model->ctx, AV_LOG_ERROR, "TFRequestItem is NULL\n"); + av_log(tf_model->ctx, AV_LOG_ERROR, "TFRequestItem is NULL\n"); return AVERROR(EINVAL); } @@ -170,7 +157,7 @@ static int tf_start_inference(void *args) task->nb_output, NULL, 0, NULL, request->status); if (TF_GetCode(request->status) != TF_OK) { - av_log(&tf_model->ctx, AV_LOG_ERROR, "%s", TF_Message(request->status)); + av_log(tf_model->ctx, AV_LOG_ERROR, "%s", TF_Message(request->status)); return DNN_GENERIC_ERROR; } return 0; @@ -198,7 +185,7 @@ static inline void destroy_request_item(TFRequestItem **arg) { static int extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { TFModel *tf_model = task->model; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; LastLevelTaskItem *lltask = av_malloc(sizeof(*lltask)); if (!lltask) { av_log(ctx, AV_LOG_ERROR, "Unable to allocate space for LastLevelTaskItem\n"); @@ -278,7 +265,7 @@ static TF_Tensor *allocate_input_tensor(const DNNData *input) static int get_input_tf(void *model, DNNData *input, const char *input_name) { TFModel *tf_model = model; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; TF_Status *status; TF_DataType dt; int64_t dims[4]; @@ -328,7 +315,7 @@ static int get_output_tf(void *model, const char *input_name, int input_width, i { int ret; TFModel *tf_model = model; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; TaskItem task; TFRequestItem *request; DNNExecBaseParams exec_params = { @@ -399,7 +386,7 @@ static int hex_to_data(uint8_t *data, const char *p) static int load_tf_model(TFModel *tf_model, const char *model_filename) { - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; TF_Buffer *graph_def; TF_ImportGraphDefOptions *graph_opts; TF_SessionOptions *sess_opts; @@ -408,7 +395,7 @@ static int load_tf_model(TFModel *tf_model, const char *model_filename) int sess_config_length = 0; // prepare the sess config data - if (tf_model->ctx.options.sess_config != NULL) { + if (ctx->tf_option.sess_config != NULL) { const char *config; /* tf_model->ctx.options.sess_config is hex to present the serialized proto @@ -416,11 +403,11 @@ static int load_tf_model(TFModel *tf_model, const char *model_filename) proto in a python script, tools/python/tf_sess_config.py is a script example to generate the configs of sess_config. 
*/ - if (strncmp(tf_model->ctx.options.sess_config, "0x", 2) != 0) { + if (strncmp(ctx->tf_option.sess_config, "0x", 2) != 0) { av_log(ctx, AV_LOG_ERROR, "sess_config should start with '0x'\n"); return AVERROR(EINVAL); } - config = tf_model->ctx.options.sess_config + 2; + config = ctx->tf_option.sess_config + 2; sess_config_length = hex_to_data(NULL, config); sess_config = av_mallocz(sess_config_length + AV_INPUT_BUFFER_PADDING_SIZE); @@ -461,7 +448,7 @@ static int load_tf_model(TFModel *tf_model, const char *model_filename) if (TF_GetCode(tf_model->status) != TF_OK) { TF_DeleteSessionOptions(sess_opts); av_log(ctx, AV_LOG_ERROR, "Failed to set config for sess options with %s\n", - tf_model->ctx.options.sess_config); + ctx->tf_option.sess_config); return DNN_GENERIC_ERROR; } } @@ -529,15 +516,14 @@ static void dnn_free_model_tf(DNNModel **model) TF_DeleteStatus(tf_model->status); } av_freep(&tf_model); - av_freep(model); + av_freep(&model); } } -static DNNModel *dnn_load_model_tf(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx) +static DNNModel *dnn_load_model_tf(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx) { DNNModel *model = NULL; TFModel *tf_model = NULL; - TFContext *ctx = NULL; model = av_mallocz(sizeof(DNNModel)); if (!model){ @@ -551,23 +537,15 @@ static DNNModel *dnn_load_model_tf(const char *model_filename, DNNFunctionType f } model->model = tf_model; tf_model->model = model; - ctx = &tf_model->ctx; - ctx->class = &dnn_tensorflow_class; - - //parse options - av_opt_set_defaults(ctx); - if (av_opt_set_from_string(ctx, options, NULL, "=", "&") < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options); - goto err; - } + tf_model->ctx = ctx; - if (load_tf_model(tf_model, model_filename) != 0){ - av_log(ctx, AV_LOG_ERROR, "Failed to load TensorFlow model: \"%s\"\n", model_filename); + if (load_tf_model(tf_model, ctx->model_filename) != 0){ + av_log(ctx, AV_LOG_ERROR, "Failed to load TensorFlow model: \"%s\"\n", ctx->model_filename); goto err; } - if (ctx->options.nireq <= 0) { - ctx->options.nireq = av_cpu_count() / 2 + 1; + if (ctx->nireq <= 0) { + ctx->nireq = av_cpu_count() / 2 + 1; } #if !HAVE_PTHREAD_CANCEL @@ -582,7 +560,7 @@ static DNNModel *dnn_load_model_tf(const char *model_filename, DNNFunctionType f goto err; } - for (int i = 0; i < ctx->options.nireq; i++) { + for (int i = 0; i < ctx->nireq; i++) { TFRequestItem *item = av_mallocz(sizeof(*item)); if (!item) { goto err; @@ -617,7 +595,6 @@ static DNNModel *dnn_load_model_tf(const char *model_filename, DNNFunctionType f model->get_input = &get_input_tf; model->get_output = &get_output_tf; - model->options = options; model->filter_ctx = filter_ctx; model->func_type = func_type; @@ -632,7 +609,7 @@ static int fill_model_input_tf(TFModel *tf_model, TFRequestItem *request) { LastLevelTaskItem *lltask; TaskItem *task; TFInferRequest *infer_request = NULL; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; int ret = 0; lltask = ff_queue_pop_front(tf_model->lltask_queue); @@ -728,7 +705,7 @@ static void infer_completion_callback(void *args) { DNNData *outputs; TFInferRequest *infer_request = request->infer_request; TFModel *tf_model = task->model; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; outputs = av_calloc(task->nb_output, sizeof(*outputs)); if (!outputs) { @@ -787,7 +764,7 @@ err: static int execute_model_tf(TFRequestItem *request, Queue *lltask_queue) { TFModel *tf_model; - 
TFContext *ctx; + DnnContext *ctx; LastLevelTaskItem *lltask; TaskItem *task; int ret = 0; @@ -800,7 +777,7 @@ static int execute_model_tf(TFRequestItem *request, Queue *lltask_queue) lltask = ff_queue_peek_front(lltask_queue); task = lltask->task; tf_model = task->model; - ctx = &tf_model->ctx; + ctx = tf_model->ctx; ret = fill_model_input_tf(tf_model, request); if (ret != 0) { @@ -833,7 +810,7 @@ err: static int dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params) { TFModel *tf_model = model->model; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; TaskItem *task; TFRequestItem *request; int ret = 0; @@ -849,7 +826,7 @@ static int dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_p return AVERROR(ENOMEM); } - ret = ff_dnn_fill_task(task, exec_params, tf_model, ctx->options.async, 1); + ret = ff_dnn_fill_task(task, exec_params, tf_model, ctx->async, 1); if (ret != 0) { av_log(ctx, AV_LOG_ERROR, "Fill task with invalid parameter(s).\n"); av_freep(&task); @@ -887,7 +864,7 @@ static DNNAsyncStatusType dnn_get_result_tf(const DNNModel *model, AVFrame **in, static int dnn_flush_tf(const DNNModel *model) { TFModel *tf_model = model->model; - TFContext *ctx = &tf_model->ctx; + DnnContext *ctx = tf_model->ctx; TFRequestItem *request; int ret; @@ -915,6 +892,7 @@ static int dnn_flush_tf(const DNNModel *model) } const DNNModule ff_dnn_backend_tf = { + .clazz = DNN_DEFINE_CLASS(dnn_tensorflow), .load_model = dnn_load_model_tf, .execute_model = dnn_execute_model_tf, .get_result = dnn_get_result_tf, diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp index ae55893a50..abdef1f178 100644 --- a/libavfilter/dnn/dnn_backend_torch.cpp +++ b/libavfilter/dnn/dnn_backend_torch.cpp @@ -36,18 +36,8 @@ extern "C" { #include "safe_queue.h" } -typedef struct THOptions{ - char *device_name; - int optimize; -} THOptions; - -typedef struct THContext { - const AVClass *c_class; - THOptions options; -} THContext; - typedef struct THModel { - THContext ctx; + DnnContext *ctx; DNNModel *model; torch::jit::Module *jit_model; SafeQueue *request_queue; @@ -67,20 +57,17 @@ typedef struct THRequestItem { } THRequestItem; -#define OFFSET(x) offsetof(THContext, x) +#define OFFSET(x) offsetof(THOptions, x) #define FLAGS AV_OPT_FLAG_FILTERING_PARAM static const AVOption dnn_th_options[] = { - { "device", "device to run model", OFFSET(options.device_name), AV_OPT_TYPE_STRING, { .str = "cpu" }, 0, 0, FLAGS }, - { "optimize", "turn on graph executor optimization", OFFSET(options.optimize), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS}, + { "optimize", "turn on graph executor optimization", OFFSET(optimize), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS}, { NULL } }; -AVFILTER_DEFINE_CLASS(dnn_th); - static int extract_lltask_from_task(TaskItem *task, Queue *lltask_queue) { THModel *th_model = (THModel *)task->model; - THContext *ctx = &th_model->ctx; + DnnContext *ctx = th_model->ctx; LastLevelTaskItem *lltask = (LastLevelTaskItem *)av_malloc(sizeof(*lltask)); if (!lltask) { av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for LastLevelTaskItem\n"); @@ -153,7 +140,6 @@ static void dnn_free_model_th(DNNModel **model) } ff_queue_destroy(th_model->task_queue); delete th_model->jit_model; - av_opt_free(&th_model->ctx); av_freep(&th_model); av_freep(model); } @@ -181,7 +167,7 @@ static int fill_model_input_th(THModel *th_model, THRequestItem *request) TaskItem *task = NULL; THInferRequest *infer_request = NULL; DNNData input = { 0 }; - 
THContext *ctx = &th_model->ctx; + DnnContext *ctx = th_model->ctx; int ret, width_idx, height_idx, channel_idx; lltask = (LastLevelTaskItem *)ff_queue_pop_front(th_model->lltask_queue); @@ -241,7 +227,7 @@ static int th_start_inference(void *args) LastLevelTaskItem *lltask = NULL; TaskItem *task = NULL; THModel *th_model = NULL; - THContext *ctx = NULL; + DnnContext *ctx = NULL; std::vector inputs; torch::NoGradGuard no_grad; @@ -253,9 +239,9 @@ static int th_start_inference(void *args) lltask = request->lltask; task = lltask->task; th_model = (THModel *)task->model; - ctx = &th_model->ctx; + ctx = th_model->ctx; - if (ctx->options.optimize) + if (ctx->torch_option.optimize) torch::jit::setGraphExecutorOptimize(true); else torch::jit::setGraphExecutorOptimize(false); @@ -292,7 +278,7 @@ static void infer_completion_callback(void *args) { outputs.dims[2] = sizes.at(2); // H outputs.dims[3] = sizes.at(3); // W } else { - avpriv_report_missing_feature(&th_model->ctx, "Support of this kind of model"); + avpriv_report_missing_feature(th_model->ctx, "Support of this kind of model"); goto err; } @@ -304,7 +290,7 @@ static void infer_completion_callback(void *args) { if (th_model->model->frame_post_proc != NULL) { th_model->model->frame_post_proc(task->out_frame, &outputs, th_model->model->filter_ctx); } else { - ff_proc_from_dnn_to_frame(task->out_frame, &outputs, &th_model->ctx); + ff_proc_from_dnn_to_frame(task->out_frame, &outputs, th_model->ctx); } } else { task->out_frame->width = outputs.dims[dnn_get_width_idx_by_layout(outputs.layout)]; @@ -312,7 +298,7 @@ static void infer_completion_callback(void *args) { } break; default: - avpriv_report_missing_feature(&th_model->ctx, "model function type %d", th_model->model->func_type); + avpriv_report_missing_feature(th_model->ctx, "model function type %d", th_model->model->func_type); goto err; } task->inference_done++; @@ -322,7 +308,7 @@ err: if (ff_safe_queue_push_back(th_model->request_queue, request) < 0) { destroy_request_item(&request); - av_log(&th_model->ctx, AV_LOG_ERROR, "Unable to push back request_queue when failed to start inference.\n"); + av_log(th_model->ctx, AV_LOG_ERROR, "Unable to push back request_queue when failed to start inference.\n"); } } @@ -352,7 +338,7 @@ static int execute_model_th(THRequestItem *request, Queue *lltask_queue) goto err; } if (task->async) { - avpriv_report_missing_feature(&th_model->ctx, "LibTorch async"); + avpriv_report_missing_feature(th_model->ctx, "LibTorch async"); } else { ret = th_start_inference((void *)(request)); if (ret != 0) { @@ -375,7 +361,7 @@ static int get_output_th(void *model, const char *input_name, int input_width, i { int ret = 0; THModel *th_model = (THModel*) model; - THContext *ctx = &th_model->ctx; + DnnContext *ctx = th_model->ctx; TaskItem task = { 0 }; THRequestItem *request = NULL; DNNExecBaseParams exec_params = { @@ -424,12 +410,12 @@ static THInferRequest *th_create_inference_request(void) return request; } -static DNNModel *dnn_load_model_th(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx) +static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx) { DNNModel *model = NULL; THModel *th_model = NULL; THRequestItem *item = NULL; - THContext *ctx; + const char *device_name = ctx->device ? 
ctx->device : "cpu"; model = (DNNModel *)av_mallocz(sizeof(DNNModel)); if (!model) { @@ -443,24 +429,17 @@ static DNNModel *dnn_load_model_th(const char *model_filename, DNNFunctionType f } th_model->model = model; model->model = th_model; - th_model->ctx.c_class = &dnn_th_class; - ctx = &th_model->ctx; - //parse options - av_opt_set_defaults(ctx); - if (av_opt_set_from_string(ctx, options, NULL, "=", "&") < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options); - return NULL; - } + th_model->ctx = ctx; - c10::Device device = c10::Device(ctx->options.device_name); + c10::Device device = c10::Device(device_name); if (!device.is_cpu()) { - av_log(ctx, AV_LOG_ERROR, "Not supported device:\"%s\"\n", ctx->options.device_name); + av_log(ctx, AV_LOG_ERROR, "Not supported device:\"%s\"\n", device_name); goto fail; } try { th_model->jit_model = new torch::jit::Module; - (*th_model->jit_model) = torch::jit::load(model_filename); + (*th_model->jit_model) = torch::jit::load(ctx->model_filename); } catch (const c10::Error& e) { av_log(ctx, AV_LOG_ERROR, "Failed to load torch model\n"); goto fail; @@ -502,7 +481,6 @@ static DNNModel *dnn_load_model_th(const char *model_filename, DNNFunctionType f model->get_input = &get_input_th; model->get_output = &get_output_th; - model->options = NULL; model->filter_ctx = filter_ctx; model->func_type = func_type; return model; @@ -519,7 +497,7 @@ fail: static int dnn_execute_model_th(const DNNModel *model, DNNExecBaseParams *exec_params) { THModel *th_model = (THModel *)model->model; - THContext *ctx = &th_model->ctx; + DnnContext *ctx = th_model->ctx; TaskItem *task; THRequestItem *request; int ret = 0; @@ -582,7 +560,7 @@ static int dnn_flush_th(const DNNModel *model) request = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue); if (!request) { - av_log(&th_model->ctx, AV_LOG_ERROR, "unable to get infer request.\n"); + av_log(th_model->ctx, AV_LOG_ERROR, "unable to get infer request.\n"); return AVERROR(EINVAL); } @@ -590,6 +568,7 @@ static int dnn_flush_th(const DNNModel *model) } extern const DNNModule ff_dnn_backend_torch = { + .clazz = DNN_DEFINE_CLASS(dnn_th), .load_model = dnn_load_model_th, .execute_model = dnn_execute_model_th, .get_result = dnn_get_result_th, diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c index b9f71aea53..ebd308cd84 100644 --- a/libavfilter/dnn/dnn_interface.c +++ b/libavfilter/dnn/dnn_interface.c @@ -25,11 +25,59 @@ #include "../dnn_interface.h" #include "libavutil/mem.h" +#include "libavutil/opt.h" +#include "libavfilter/internal.h" extern const DNNModule ff_dnn_backend_openvino; extern const DNNModule ff_dnn_backend_tf; extern const DNNModule ff_dnn_backend_torch; +#define OFFSET(x) offsetof(DnnContext, x) +#define FLAGS AV_OPT_FLAG_FILTERING_PARAM +static const AVOption dnn_base_options[] = { + {"model", "path to model file", + OFFSET(model_filename), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, FLAGS}, + {"input", "input name of the model", + OFFSET(model_inputname), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, FLAGS}, + {"output", "output name of the model", + OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, FLAGS}, + {"backend_configs", "backend configs (deprecated)", + OFFSET(backend_options), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, FLAGS | AV_OPT_FLAG_DEPRECATED}, + {"options", "backend configs (deprecated)", + OFFSET(backend_options), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, FLAGS | AV_OPT_FLAG_DEPRECATED}, + {"nireq", "number of request", 
+ OFFSET(nireq), AV_OPT_TYPE_INT, {.i64 = 0}, 0, INT_MAX, FLAGS}, + {"async", "use DNN async inference", + OFFSET(async), AV_OPT_TYPE_BOOL, {.i64 = 1}, 0, 1, FLAGS}, + {"device", "device to run model", + OFFSET(device), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, FLAGS}, + {NULL} +}; + +AVFILTER_DEFINE_CLASS(dnn_base); + +typedef struct DnnBackendInfo { + const size_t offset; + union { + const AVClass *class; + const DNNModule *module; + }; +} DnnBackendInfo; + +static const DnnBackendInfo dnn_backend_info_list[] = { + {0, .class = &dnn_base_class}, + // Must keep the same order as in DNNOptions, so offset value in incremental order +#if CONFIG_LIBTENSORFLOW + {offsetof(DnnContext, tf_option), .module = &ff_dnn_backend_tf}, +#endif +#if CONFIG_LIBOPENVINO + {offsetof(DnnContext, ov_option), .module = &ff_dnn_backend_openvino}, +#endif +#if CONFIG_LIBTORCH + {offsetof(DnnContext, torch_option), .module = &ff_dnn_backend_torch}, +#endif +}; + const DNNModule *ff_get_dnn_module(DNNBackendType backend_type, void *log_ctx) { switch(backend_type){ @@ -52,3 +100,36 @@ const DNNModule *ff_get_dnn_module(DNNBackendType backend_type, void *log_ctx) return NULL; } } + +void *ff_dnn_child_next(DnnContext *obj, void *prev) { + size_t pre_offset; + char *ptr; + + if (!prev) { + obj->clazz = &dnn_base_class; + return obj; + } + + pre_offset = (char *)prev - (char *)obj; + for (int i = 0; i < FF_ARRAY_ELEMS(dnn_backend_info_list) - 1; i++) { + if (dnn_backend_info_list[i].offset == pre_offset) { + ptr = (char *)obj + dnn_backend_info_list[i + 1].offset; + *(const AVClass **) ptr = dnn_backend_info_list[i + 1].class; + return ptr; + } + } + + return NULL; +} + +const AVClass *ff_dnn_child_class_iterate(void **iter) +{ + uintptr_t i = (uintptr_t) *iter; + + if (i < FF_ARRAY_ELEMS(dnn_backend_info_list)) { + *iter = (void *)(i + 1); + return dnn_backend_info_list[i].class; + } + + return NULL; +} \ No newline at end of file diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c index 5e76b9ba45..3dd51badf6 100644 --- a/libavfilter/dnn_filter_common.c +++ b/libavfilter/dnn_filter_common.c @@ -19,6 +19,7 @@ #include "dnn_filter_common.h" #include "libavutil/avstring.h" #include "libavutil/mem.h" +#include "libavutil/opt.h" #define MAX_SUPPORTED_OUTPUTS_NB 4 @@ -52,6 +53,17 @@ static char **separate_output_names(const char *expr, const char *val_sep, int * return parsed_vals; } +typedef struct DnnFilterBase { + const AVClass *class; + DnnContext dnnctx; +} DnnFilterBase; + +void *ff_dnn_filter_child_next(void *obj, void *prev) +{ + DnnFilterBase *base = obj; + return ff_dnn_child_next(&base->dnnctx, prev); +} + int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx) { DNNBackendType backend = ctx->backend_type; @@ -91,7 +103,25 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil return AVERROR(EINVAL); } - ctx->model = (ctx->dnn_module->load_model)(ctx->model_filename, func_type, ctx->backend_options, filter_ctx); + if (ctx->backend_options) { + void *child = NULL; + + av_log(filter_ctx, AV_LOG_WARNING, + "backend_configs is deprecated, please set backend options directly\n"); + while (child = ff_dnn_child_next(ctx, child)) { + if (*(const AVClass **)child == &ctx->dnn_module->clazz) { + int ret = av_opt_set_from_string(child, ctx->backend_options, + NULL, "=", "&"); + if (ret < 0) { + av_log(filter_ctx, AV_LOG_ERROR, "failed to parse options \"%s\"\n", + ctx->backend_options); + return ret; + } + } + } + } + + ctx->model = 
(ctx->dnn_module->load_model)(ctx, func_type, filter_ctx); if (!ctx->model) { av_log(filter_ctx, AV_LOG_ERROR, "could not load DNN model\n"); return AVERROR(EINVAL); diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h index 30871ee381..854b80fd53 100644 --- a/libavfilter/dnn_filter_common.h +++ b/libavfilter/dnn_filter_common.h @@ -26,28 +26,21 @@ #include "dnn_interface.h" -typedef struct DnnContext { - char *model_filename; - DNNBackendType backend_type; - char *model_inputname; - char *model_outputnames_string; - char *backend_options; - int async; - - char **model_outputnames; - uint32_t nb_outputs; - const DNNModule *dnn_module; - DNNModel *model; -} DnnContext; - -#define DNN_COMMON_OPTIONS \ - { "model", "path to model file", OFFSET(model_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ - { "input", "input name of the model", OFFSET(model_inputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ - { "output", "output name of the model", OFFSET(model_outputnames_string), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ - { "backend_configs", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\ - { "options", "backend configs (deprecated, use backend_configs)", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS | AV_OPT_FLAG_DEPRECATED},\ - { "async", "use DNN async inference (ignored, use backend_configs='async=1')", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS}, - +#define AVFILTER_DNN_DEFINE_CLASS_EXT(name, desc, options) \ + static const AVClass name##_class = { \ + .class_name = desc, \ + .item_name = av_default_item_name, \ + .option = options, \ + .version = LIBAVUTIL_VERSION_INT, \ + .category = AV_CLASS_CATEGORY_FILTER, \ + .child_next = ff_dnn_filter_child_next, \ + .child_class_iterate = ff_dnn_child_class_iterate, \ + } + +#define AVFILTER_DNN_DEFINE_CLASS(fname) \ + AVFILTER_DNN_DEFINE_CLASS_EXT(fname, #fname, fname##_options) + +void *ff_dnn_filter_child_next(void *obj, void *prev); int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx); int ff_dnn_set_frame_proc(DnnContext *ctx, FramePrePostProc pre_proc, FramePrePostProc post_proc); diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h index 63f492e690..a58001bab2 100644 --- a/libavfilter/dnn_interface.h +++ b/libavfilter/dnn_interface.h @@ -93,8 +93,6 @@ typedef int (*ClassifyPostProc)(AVFrame *frame, DNNData *output, uint32_t bbox_i typedef struct DNNModel{ // Stores model that can be different for different backends. 
void *model; - // Stores options when the model is executed by the backend - const char *options; // Stores FilterContext used for the interaction between AVFrame and DNNData AVFilterContext *filter_ctx; // Stores function type of the model @@ -117,10 +115,65 @@ typedef struct DNNModel{ ClassifyPostProc classify_post_proc; } DNNModel; +typedef struct TFOptions{ + const AVClass *clazz; + + char *sess_config; +} TFOptions; + +typedef struct OVOptions { + const AVClass *clazz; + + int batch_size; + int input_resizable; + DNNLayout layout; + float scale; + float mean; +} OVOptions; + +typedef struct THOptions { + const AVClass *clazz; + int optimize; +} THOptions; + +typedef struct DNNModule DNNModule; + +typedef struct DnnContext { + const AVClass *clazz; + + DNNModel *model; + + char *model_filename; + DNNBackendType backend_type; + char *model_inputname; + char *model_outputnames_string; + char *backend_options; + int async; + + char **model_outputnames; + uint32_t nb_outputs; + const DNNModule *dnn_module; + + int nireq; + char *device; + +#if CONFIG_LIBTENSORFLOW + TFOptions tf_option; +#endif + +#if CONFIG_LIBOPENVINO + OVOptions ov_option; +#endif +#if CONFIG_LIBTORCH + THOptions torch_option; +#endif +} DnnContext; + // Stores pointers to functions for loading, executing, freeing DNN models for one of the backends. -typedef struct DNNModule{ +struct DNNModule { + const AVClass clazz; // Loads model and parameters from given file. Returns NULL if it is not possible. - DNNModel *(*load_model)(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx); + DNNModel *(*load_model)(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx); // Executes model with specified input and output. Returns the error code otherwise. int (*execute_model)(const DNNModel *model, DNNExecBaseParams *exec_params); // Retrieve inference result. @@ -129,11 +182,14 @@ typedef struct DNNModule{ int (*flush)(const DNNModel *model); // Frees memory allocated for model. void (*free_model)(DNNModel **model); -} DNNModule; +}; // Initializes DNNModule depending on chosen backend. const DNNModule *ff_get_dnn_module(DNNBackendType backend_type, void *log_ctx); +void *ff_dnn_child_next(DnnContext *obj, void *prev); +const AVClass *ff_dnn_child_class_iterate(void **iter); + static inline int dnn_get_width_idx_by_layout(DNNLayout layout) { return layout == DL_NHWC ? 
2 : 3; diff --git a/libavfilter/vf_derain.c b/libavfilter/vf_derain.c index c8848dd7ba..f47019a3b4 100644 --- a/libavfilter/vf_derain.c +++ b/libavfilter/vf_derain.c @@ -46,13 +46,10 @@ static const AVOption derain_options[] = { #if (CONFIG_LIBTENSORFLOW == 1) { "tensorflow", "tensorflow backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, 0, 0, FLAGS, .unit = "backend" }, #endif - { "model", "path to model file", OFFSET(dnnctx.model_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS }, - { "input", "input name of the model", OFFSET(dnnctx.model_inputname), AV_OPT_TYPE_STRING, { .str = "x" }, 0, 0, FLAGS }, - { "output", "output name of the model", OFFSET(dnnctx.model_outputnames_string), AV_OPT_TYPE_STRING, { .str = "y" }, 0, 0, FLAGS }, { NULL } }; -AVFILTER_DEFINE_CLASS(derain); +AVFILTER_DNN_DEFINE_CLASS(derain); static int filter_frame(AVFilterLink *inlink, AVFrame *in) { diff --git a/libavfilter/vf_dnn_classify.c b/libavfilter/vf_dnn_classify.c index 1f8f227e3a..f863f7fcff 100644 --- a/libavfilter/vf_dnn_classify.c +++ b/libavfilter/vf_dnn_classify.c @@ -50,14 +50,13 @@ static const AVOption dnn_classify_options[] = { #if (CONFIG_LIBOPENVINO == 1) { "openvino", "openvino backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = DNN_OV }, 0, 0, FLAGS, .unit = "backend" }, #endif - DNN_COMMON_OPTIONS { "confidence", "threshold of confidence", OFFSET2(confidence), AV_OPT_TYPE_FLOAT, { .dbl = 0.5 }, 0, 1, FLAGS}, { "labels", "path to labels file", OFFSET2(labels_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS }, { "target", "which one to be classified", OFFSET2(target), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS }, { NULL } }; -AVFILTER_DEFINE_CLASS(dnn_classify); +AVFILTER_DNN_DEFINE_CLASS(dnn_classify); static int dnn_classify_post_proc(AVFrame *frame, DNNData *output, uint32_t bbox_index, AVFilterContext *filter_ctx) { diff --git a/libavfilter/vf_dnn_detect.c b/libavfilter/vf_dnn_detect.c index bacea3ef29..cdfa355ef0 100644 --- a/libavfilter/vf_dnn_detect.c +++ b/libavfilter/vf_dnn_detect.c @@ -70,7 +70,6 @@ static const AVOption dnn_detect_options[] = { #if (CONFIG_LIBOPENVINO == 1) { "openvino", "openvino backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = DNN_OV }, 0, 0, FLAGS, .unit = "backend" }, #endif - DNN_COMMON_OPTIONS { "confidence", "threshold of confidence", OFFSET2(confidence), AV_OPT_TYPE_FLOAT, { .dbl = 0.5 }, 0, 1, FLAGS}, { "labels", "path to labels file", OFFSET2(labels_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS }, { "model_type", "DNN detection model type", OFFSET2(model_type), AV_OPT_TYPE_INT, { .i64 = DDMT_SSD }, INT_MIN, INT_MAX, FLAGS, .unit = "model_type" }, @@ -85,7 +84,7 @@ static const AVOption dnn_detect_options[] = { { NULL } }; -AVFILTER_DEFINE_CLASS(dnn_detect); +AVFILTER_DNN_DEFINE_CLASS(dnn_detect); static inline float sigmoid(float x) { return 1.f / (1.f + exp(-x)); diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c index fdac31665e..ed6ad1f959 100644 --- a/libavfilter/vf_dnn_processing.c +++ b/libavfilter/vf_dnn_processing.c @@ -54,11 +54,10 @@ static const AVOption dnn_processing_options[] = { #if (CONFIG_LIBTORCH == 1) { "torch", "torch backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = DNN_TH }, 0, 0, FLAGS, "backend" }, #endif - DNN_COMMON_OPTIONS { NULL } }; -AVFILTER_DEFINE_CLASS(dnn_processing); +AVFILTER_DNN_DEFINE_CLASS(dnn_processing); static av_cold int init(AVFilterContext *context) { diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c index 60683b5209..e33a3b4641 100644 --- 
a/libavfilter/vf_sr.c +++ b/libavfilter/vf_sr.c @@ -50,13 +50,10 @@ static const AVOption sr_options[] = { { "tensorflow", "tensorflow backend flag", 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, 0, 0, FLAGS, .unit = "backend" }, #endif { "scale_factor", "scale factor for SRCNN model", OFFSET(scale_factor), AV_OPT_TYPE_INT, { .i64 = 2 }, 2, 4, FLAGS }, - { "model", "path to model file specifying network architecture and its parameters", OFFSET(dnnctx.model_filename), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS }, - { "input", "input name of the model", OFFSET(dnnctx.model_inputname), AV_OPT_TYPE_STRING, { .str = "x" }, 0, 0, FLAGS }, - { "output", "output name of the model", OFFSET(dnnctx.model_outputnames_string), AV_OPT_TYPE_STRING, { .str = "y" }, 0, 0, FLAGS }, { NULL } }; -AVFILTER_DEFINE_CLASS(sr); +AVFILTER_DNN_DEFINE_CLASS(sr); static av_cold int init(AVFilterContext *context) {
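The DnnContext introduced in this patch embeds one option struct per enabled backend, each starting with an AVClass pointer, and declares ff_dnn_child_next()/ff_dnn_child_class_iterate() so the generic AVOption machinery can reach those structs instead of every filter duplicating "model"/"input"/"output" options. The iterator itself is not shown in this excerpt; the following is only a minimal sketch of what such a child walker could look like, using nothing beyond the fields visible in the header diff above.

/* Sketch only, not the patch's actual implementation: walk the per-backend
 * option structs embedded in DnnContext so that
 * av_opt_set(obj, name, val, AV_OPT_SEARCH_CHILDREN) can reach fields such
 * as OVOptions.batch_size without per-filter duplication. */
void *ff_dnn_child_next(DnnContext *obj, void *prev)
{
    void *children[3];
    int nb = 0;

#if CONFIG_LIBTENSORFLOW
    children[nb++] = &obj->tf_option;
#endif
#if CONFIG_LIBOPENVINO
    children[nb++] = &obj->ov_option;
#endif
#if CONFIG_LIBTORCH
    children[nb++] = &obj->torch_option;
#endif

    if (!prev)
        return nb ? children[0] : NULL;      /* first child */
    for (int i = 0; i < nb - 1; i++)
        if (children[i] == prev)
            return children[i + 1];          /* next sibling */
    return NULL;                             /* no more children */
}

With a child iterator wired up like this, a caller could set a backend option generically, e.g. av_opt_set(&dnnctx, "batch_size", "4", AV_OPT_SEARCH_CHILDREN), assuming the OpenVINO options table exposes that field under the name "batch_size".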
From patchwork Sun Apr 28 06:46:48 2024
X-Patchwork-Id: 48324
From: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:48 +0800
Subject: [FFmpeg-devel] [PATCH WIP v2 2/9] avfilter/dnn_backend_openvino: Fix free context at random place
From: Zhao Zhili

It will be freed again by ff_dnn_uninit.
---
 libavfilter/dnn/dnn_backend_openvino.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index c4b0682f11..769ba0a54b 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -959,7 +959,6 @@ err:
     if (input_model_info)
         ov_preprocess_input_model_info_free(input_model_info);
 #endif
-    dnn_free_model_ov(&ov_model->model);
     return ret;
 }
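The removed call follows the ownership rule this series settles on: when loading fails, the backend reports the error and leaves the model for the filter's ff_dnn_uninit() to free exactly once. A simplified filter-side sketch of that pattern is below; the ExampleDnnFilterContext type is illustrative, not taken from the patch.

/* Illustrative only: error paths bubble the error up without freeing the
 * model, so uninit stays the single owner of the DNNModel and a second
 * free (the bug fixed above) cannot happen. */
static av_cold int example_init(AVFilterContext *filter_ctx)
{
    ExampleDnnFilterContext *s = filter_ctx->priv;   /* hypothetical priv struct holding a DnnContext */
    return ff_dnn_init(&s->dnnctx, DFT_PROCESS_FRAME, filter_ctx);
}

static av_cold void example_uninit(AVFilterContext *filter_ctx)
{
    ExampleDnnFilterContext *s = filter_ctx->priv;
    ff_dnn_uninit(&s->dnnctx);                       /* the only place the model is freed */
}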
From patchwork Sun Apr 28 06:46:49 2024
X-Patchwork-Id: 48321
From: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:49 +0800
Subject: [FFmpeg-devel] [PATCH WIP v2 3/9] avfilter/dnn_backend_openvino: simplify memory allocation
From: Zhao Zhili --- libavfilter/dnn/dnn_backend_openvino.c | 47 +++++++++++--------------- 1 file changed, 20 insertions(+), 27 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c index 769ba0a54b..1acc54b791 100644 --- a/libavfilter/dnn/dnn_backend_openvino.c +++ b/libavfilter/dnn/dnn_backend_openvino.c @@ -41,8 +41,8 @@ #include "dnn_backend_common.h" typedef struct OVModel{ + DNNModel model; DnnContext *ctx; - DNNModel *model; #if HAVE_OPENVINO2 ov_core_t *core; ov_model_t *ov_model; @@ -300,11 +300,11 @@ static int fill_model_input_ov(OVModel *ov_model, OVRequestItem *request) return ov2_map_error(status, NULL); } #endif - switch (ov_model->model->func_type) { + switch (ov_model->model.func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { - if (ov_model->model->frame_pre_proc != NULL) { - ov_model->model->frame_pre_proc(task->in_frame, &input, ov_model->model->filter_ctx); + if (ov_model->model.frame_pre_proc != NULL) { + ov_model->model.frame_pre_proc(task->in_frame, &input, ov_model->model.filter_ctx); } else { ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx); } @@ -442,11 +442,11 @@ static void infer_completion_callback(void *args) for (int i = 0; i < request->lltask_count; ++i) { task = request->lltasks[i]->task; - switch (ov_model->model->func_type) { + switch (ov_model->model.func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { - if (ov_model->model->frame_post_proc != NULL) { - ov_model->model->frame_post_proc(task->out_frame, outputs, ov_model->model->filter_ctx); + if (ov_model->model.frame_post_proc != NULL) { + ov_model->model.frame_post_proc(task->out_frame, outputs, ov_model->model.filter_ctx); } else { ff_proc_from_dnn_to_frame(task->out_frame, outputs, ctx); } @@ -458,23 +458,23 @@ static void infer_completion_callback(void *args) } break; case DFT_ANALYTICS_DETECT: - if (!ov_model->model->detect_post_proc) { + if (!ov_model->model.detect_post_proc) { av_log(ctx, AV_LOG_ERROR, "detect filter needs to provide post proc\n"); goto end; } - ov_model->model->detect_post_proc(task->in_frame, outputs, + ov_model->model.detect_post_proc(task->in_frame, outputs, ov_model->nb_outputs, - ov_model->model->filter_ctx); + ov_model->model.filter_ctx); break; case DFT_ANALYTICS_CLASSIFY: - if (!ov_model->model->classify_post_proc) { + if (!ov_model->model.classify_post_proc) { av_log(ctx, AV_LOG_ERROR, "classify filter needs to provide post proc\n"); goto end; } for (int output_i = 0; output_i < ov_model->nb_outputs; output_i++) - ov_model->model->classify_post_proc(task->in_frame, outputs, + ov_model->model.classify_post_proc(task->in_frame, outputs, request->lltasks[i]->bbox_index, - ov_model->model->filter_ctx); + ov_model->model.filter_ctx); break; default: av_assert0(!"should not reach here"); @@ -571,7 +571,7 @@ static void dnn_free_model_ov(DNNModel **model) av_free(ov_model->all_input_names); #endif av_freep(&ov_model); - av_freep(model); + *model = NULL; } @@ -598,7 +598,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * #endif // We scale pixel by default when do frame processing. if (fabsf(ctx->ov_option.scale) < 1e-6f) - ctx->ov_option.scale = ov_model->model->func_type == DFT_PROCESS_FRAME ? 255 : 1; + ctx->ov_option.scale = ov_model->model.func_type == DFT_PROCESS_FRAME ? 
255 : 1; // batch size if (ctx->ov_option.batch_size <= 0) { ctx->ov_option.batch_size = 1; @@ -702,7 +702,7 @@ static int init_model_ov(OVModel *ov_model, const char *input_name, const char * ret = ov2_map_error(status, NULL); goto err; } - if (ov_model->model->func_type != DFT_PROCESS_FRAME) + if (ov_model->model.func_type != DFT_PROCESS_FRAME) status |= ov_preprocess_output_set_element_type(output_tensor_info, F32); else if (fabsf(ctx->ov_option.scale - 1) > 1e-6f || fabsf(ctx->ov_option.mean) > 1e-6f) status |= ov_preprocess_output_set_element_type(output_tensor_info, F32); @@ -1280,7 +1280,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i .out_frame = NULL, }; - if (ov_model->model->func_type != DFT_PROCESS_FRAME) { + if (ov_model->model.func_type != DFT_PROCESS_FRAME) { av_log(ctx, AV_LOG_ERROR, "Get output dim only when processing frame.\n"); return AVERROR(EINVAL); } @@ -1342,7 +1342,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i goto err; } - ret = extract_lltask_from_task(ov_model->model->func_type, &task, ov_model->lltask_queue, NULL); + ret = extract_lltask_from_task(ov_model->model.func_type, &task, ov_model->lltask_queue, NULL); if (ret != 0) { av_log(ctx, AV_LOG_ERROR, "unable to extract inference from task.\n"); goto err; @@ -1378,19 +1378,12 @@ static DNNModel *dnn_load_model_ov(DnnContext *ctx, DNNFunctionType func_type, A IEStatusCode status; #endif - model = av_mallocz(sizeof(DNNModel)); - if (!model){ - return NULL; - } - ov_model = av_mallocz(sizeof(OVModel)); - if (!ov_model) { - av_freep(&model); + if (!ov_model) return NULL; - } ov_model->ctx = ctx; + model = &ov_model->model; model->model = ov_model; - ov_model->model = model; #if HAVE_OPENVINO2 status = ov_core_create(&core); From patchwork Sun Apr 28 06:46:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhao Zhili X-Patchwork-Id: 48322 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6a21:1509:b0:1a9:af23:56c1 with SMTP id nq9csp1323644pzb; Sat, 27 Apr 2024 23:47:53 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCVvOiqKMGgok6iFYyVDRciMvQrbOK5656RU3gf5NRMQ+u5qDpPZQwLDsixLFQ93cJplfhh0x09SxkxlpmaJKJXS47stJWGebrf7Dg== X-Google-Smtp-Source: AGHT+IHtPsZRSVLAGgxtOKVTjUC+NUr7qYRACYqp//oF8Ko3oCJZU5RNmwKyRsbmsqpzs4zkXDVA X-Received: by 2002:a05:6512:3d20:b0:51d:b9a:f4b0 with SMTP id d32-20020a0565123d2000b0051d0b9af4b0mr3860755lfv.43.1714286872787; Sat, 27 Apr 2024 23:47:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1714286872; cv=none; d=google.com; s=arc-20160816; b=vka9uxPlD18bIkzy+sL+Qmk13fdotR7spjhb8cv8PaAc6MH1C2jVgT7ud9kmfEGFYM E9OYwNka/K2UR0TgUbwNKToTel+PT3qblqcHoGETI5YFri2yU43pSV9NiiHI+MGHU2p6 7pbpHDqSzS0OWl3IxGiYiPTRw5d3rNykmqbMfSPyZXUW1qhyexGf+Ie23E39kbNXaqKl t4sJzmqnNjfNttqknd6/qAwk0H023fgbnXq4q3HHiBaIz83Zz3MwooW9rJRGJm+hcIXs a6uj9ZnODctezHZVID8QKo166bgN4QkI33eJQ5d2ell4OPFXBcTSlpwQDIUN70fjVzhg zyrg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to:date :to:from:message-id:dkim-signature:delivered-to; bh=4J0uSrESCcvYgXCjegzZhhawMSin0wKS9wlbV007N1c=; fh=HnHYuZ9XgUo86ZRXTLWWmQxhslYEI9B9taZ5X1DLFfc=; b=Ua9fk9zew4YK/SkBnrkwrfLodWMie+E3YNQWh07cSfpAT/T1A+i6/fmlY4wa8MpuAp 
From: Zhao Zhili To:
ffmpeg-devel@ffmpeg.org Date: Sun, 28 Apr 2024 14:46:50 +0800 X-OQ-MSGID: <20240428064655.106853-4-quinkblack@foxmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240428064655.106853-1-quinkblack@foxmail.com> References: <20240428064655.106853-1-quinkblack@foxmail.com> MIME-Version: 1.0 Subject: [FFmpeg-devel] [PATCH WIP v2 4/9] avfilter/dnn_backend_tf: Remove one level of indentation X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Cc: Zhao Zhili Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: uCrgLqZ2jf3C From: Zhao Zhili --- libavfilter/dnn/dnn_backend_tf.c | 63 ++++++++++++++++---------------- 1 file changed, 32 insertions(+), 31 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index d24591b90b..60f9e57fb7 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -483,41 +483,42 @@ static void dnn_free_model_tf(DNNModel **model) { TFModel *tf_model; - if (*model){ - tf_model = (*model)->model; - while (ff_safe_queue_size(tf_model->request_queue) != 0) { - TFRequestItem *item = ff_safe_queue_pop_front(tf_model->request_queue); - destroy_request_item(&item); - } - ff_safe_queue_destroy(tf_model->request_queue); + if (!model || !*model) + return; - while (ff_queue_size(tf_model->lltask_queue) != 0) { - LastLevelTaskItem *item = ff_queue_pop_front(tf_model->lltask_queue); - av_freep(&item); - } - ff_queue_destroy(tf_model->lltask_queue); + tf_model = (*model)->model; + while (ff_safe_queue_size(tf_model->request_queue) != 0) { + TFRequestItem *item = ff_safe_queue_pop_front(tf_model->request_queue); + destroy_request_item(&item); + } + ff_safe_queue_destroy(tf_model->request_queue); - while (ff_queue_size(tf_model->task_queue) != 0) { - TaskItem *item = ff_queue_pop_front(tf_model->task_queue); - av_frame_free(&item->in_frame); - av_frame_free(&item->out_frame); - av_freep(&item); - } - ff_queue_destroy(tf_model->task_queue); + while (ff_queue_size(tf_model->lltask_queue) != 0) { + LastLevelTaskItem *item = ff_queue_pop_front(tf_model->lltask_queue); + av_freep(&item); + } + ff_queue_destroy(tf_model->lltask_queue); - if (tf_model->graph){ - TF_DeleteGraph(tf_model->graph); - } - if (tf_model->session){ - TF_CloseSession(tf_model->session, tf_model->status); - TF_DeleteSession(tf_model->session, tf_model->status); - } - if (tf_model->status){ - TF_DeleteStatus(tf_model->status); - } - av_freep(&tf_model); - av_freep(&model); + while (ff_queue_size(tf_model->task_queue) != 0) { + TaskItem *item = ff_queue_pop_front(tf_model->task_queue); + av_frame_free(&item->in_frame); + av_frame_free(&item->out_frame); + av_freep(&item); + } + ff_queue_destroy(tf_model->task_queue); + + if (tf_model->graph){ + TF_DeleteGraph(tf_model->graph); + } + if (tf_model->session){ + TF_CloseSession(tf_model->session, tf_model->status); + TF_DeleteSession(tf_model->session, tf_model->status); + } + if (tf_model->status){ + TF_DeleteStatus(tf_model->status); } + av_freep(&tf_model); + av_freep(&model); } static DNNModel *dnn_load_model_tf(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx) From patchwork Sun Apr 28 06:46:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhao Zhili X-Patchwork-Id: 48326 
From: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:51 +0800
Subject: [FFmpeg-devel] [PATCH WIP v2 5/9] avfilter/dnn_backend_tf: Fix free context at random place
Sender: "ffmpeg-devel" X-TUID: hj/AyO7fai5Q From: Zhao Zhili It will be freed again by ff_dnn_uninit. --- libavfilter/dnn/dnn_backend_tf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 60f9e57fb7..3b4de6d13f 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -804,7 +804,7 @@ err: if (ff_safe_queue_push_back(tf_model->request_queue, request) < 0) { destroy_request_item(&request); } - dnn_free_model_tf(&tf_model->model); + return ret; } From patchwork Sun Apr 28 06:46:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhao Zhili X-Patchwork-Id: 48327 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6a21:1509:b0:1a9:af23:56c1 with SMTP id nq9csp1323821pzb; Sat, 27 Apr 2024 23:48:38 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCWs7VV2J7y0wZw7YZsKy9I/yNZst/wXSopf4Kpf7dmrXU7XlOPC+dtl4X27zYWYopsK5vaKRYIxYCDO62B++p7KWDe3CgWWnVl/8A== X-Google-Smtp-Source: AGHT+IFWGiyFCiakUPeTr2481TP80Gvg2yND2P7UxZN+6cJ0Fi7a5wKSiKv4P1vBP7mJvPzYvzT+ X-Received: by 2002:a05:6512:3609:b0:51b:7c36:da61 with SMTP id f9-20020a056512360900b0051b7c36da61mr3905511lfs.56.1714286917904; Sat, 27 Apr 2024 23:48:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1714286917; cv=none; d=google.com; s=arc-20160816; b=TBQH+8rk3GnUKHajwbsbH1ldrea2CwXgrqFJTB+ksrC7in8KvE55voUPyGj2+6upoP p8W0L6KZoTuztFroE5MkAQuBGtFbgtigEHUKODovDoSu/rdz9ibHV/wUSNq8GMkM0kFN dOtZ4uY5/yGfmf+d1qeX9qkXlBv01fpH0Rh7m8nyQFsWmeLyc8HI1W+AfEocLvL66glU 591xti9blrZbcXCV3Q9C2IP4p6LQ5IF7tmIfC9RCHTMiUdBzHVuvq3g6r3x9VCghVr8C U+xo8umUCooiuOTdmjOErc+bLN8UMhmsNkpR3EeIB8QFaKCjETJxm4r+Cu3yLNrSM4OT 322Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to:date :to:from:message-id:dkim-signature:delivered-to; bh=qw7VzAp/hTieMGJntDLmrgnHKyE7G0rdV5FEBhIhEPo=; fh=HnHYuZ9XgUo86ZRXTLWWmQxhslYEI9B9taZ5X1DLFfc=; b=hHcLmgKrd0ClXIczctQciYBoxL8Cf92adWLHelJqlfV0Cf6XoEdQw/ApHquM/pTEZT B46K1xAqBG+zfejVrEkqrRU5fLshoJcJ0gLttcvSDyZV2w/a5RYq4NFaudFZAW7XoGua xdIeJhH5v0qcvuRyBAEdcxGc93Tf9Jg4k5pTjG4mo4YetUzTIjrsvJnuJOMa9UQCL7JU wHIAWnSdCpu3BR9pi5qlZvQqkdqrWogVGQ/QY1hMP2lLJWkRA9eP8dV6js+flkuBx+Jy 3N875HM8m1ZC+yoeBTcPx4r2ZeRfG5Ko9DNzlYRHQfpJot7E5nl9Jpan2jN4fJIqY1aj zAeg==; dara=google.com ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@foxmail.com header.s=s201512 header.b=fNbI10as; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=foxmail.com Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
From patchwork Sun Apr 28 06:46:52 2024
X-Patchwork-Id: 48327
From: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:52 +0800
Subject: [FFmpeg-devel] [PATCH WIP v2 6/9] avfilter/dnn_backend_tf: Simplify memory allocation
Sender: "ffmpeg-devel" X-TUID: 0+OO4gr2qHM0 From: Zhao Zhili --- libavfilter/dnn/dnn_backend_tf.c | 33 +++++++++++++------------------- 1 file changed, 13 insertions(+), 20 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c index 3b4de6d13f..c7716e696d 100644 --- a/libavfilter/dnn/dnn_backend_tf.c +++ b/libavfilter/dnn/dnn_backend_tf.c @@ -37,8 +37,8 @@ #include typedef struct TFModel { + DNNModel model; DnnContext *ctx; - DNNModel *model; TF_Graph *graph; TF_Session *session; TF_Status *status; @@ -518,7 +518,7 @@ static void dnn_free_model_tf(DNNModel **model) TF_DeleteStatus(tf_model->status); } av_freep(&tf_model); - av_freep(&model); + *model = NULL; } static DNNModel *dnn_load_model_tf(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx) @@ -526,18 +526,11 @@ static DNNModel *dnn_load_model_tf(DnnContext *ctx, DNNFunctionType func_type, A DNNModel *model = NULL; TFModel *tf_model = NULL; - model = av_mallocz(sizeof(DNNModel)); - if (!model){ - return NULL; - } - tf_model = av_mallocz(sizeof(TFModel)); - if (!tf_model){ - av_freep(&model); + if (!tf_model) return NULL; - } + model = &tf_model->model; model->model = tf_model; - tf_model->model = model; tf_model->ctx = ctx; if (load_tf_model(tf_model, ctx->model_filename) != 0){ @@ -650,11 +643,11 @@ static int fill_model_input_tf(TFModel *tf_model, TFRequestItem *request) { } input.data = (float *)TF_TensorData(infer_request->input_tensor); - switch (tf_model->model->func_type) { + switch (tf_model->model.func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { - if (tf_model->model->frame_pre_proc != NULL) { - tf_model->model->frame_pre_proc(task->in_frame, &input, tf_model->model->filter_ctx); + if (tf_model->model.frame_pre_proc != NULL) { + tf_model->model.frame_pre_proc(task->in_frame, &input, tf_model->model.filter_ctx); } else { ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx); } @@ -664,7 +657,7 @@ static int fill_model_input_tf(TFModel *tf_model, TFRequestItem *request) { ff_frame_to_dnn_detect(task->in_frame, &input, ctx); break; default: - avpriv_report_missing_feature(ctx, "model function type %d", tf_model->model->func_type); + avpriv_report_missing_feature(ctx, "model function type %d", tf_model->model.func_type); break; } @@ -724,12 +717,12 @@ static void infer_completion_callback(void *args) { outputs[i].data = TF_TensorData(infer_request->output_tensors[i]); outputs[i].dt = (DNNDataType)TF_TensorType(infer_request->output_tensors[i]); } - switch (tf_model->model->func_type) { + switch (tf_model->model.func_type) { case DFT_PROCESS_FRAME: //it only support 1 output if it's frame in & frame out if (task->do_ioproc) { - if (tf_model->model->frame_post_proc != NULL) { - tf_model->model->frame_post_proc(task->out_frame, outputs, tf_model->model->filter_ctx); + if (tf_model->model.frame_post_proc != NULL) { + tf_model->model.frame_post_proc(task->out_frame, outputs, tf_model->model.filter_ctx); } else { ff_proc_from_dnn_to_frame(task->out_frame, outputs, ctx); } @@ -741,11 +734,11 @@ static void infer_completion_callback(void *args) { } break; case DFT_ANALYTICS_DETECT: - if (!tf_model->model->detect_post_proc) { + if (!tf_model->model.detect_post_proc) { av_log(ctx, AV_LOG_ERROR, "Detect filter needs provide post proc\n"); return; } - tf_model->model->detect_post_proc(task->in_frame, outputs, task->nb_output, tf_model->model->filter_ctx); + tf_model->model.detect_post_proc(task->in_frame, outputs, task->nb_output, tf_model->model.filter_ctx); break; 
default: av_log(ctx, AV_LOG_ERROR, "Tensorflow backend does not support this kind of dnn filter now\n"); From patchwork Sun Apr 28 06:46:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhao Zhili X-Patchwork-Id: 48320 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6a21:1509:b0:1a9:af23:56c1 with SMTP id nq9csp1323581pzb; Sat, 27 Apr 2024 23:47:36 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCVFuL1t1tTchLIJnN6W9zaIqb9N+rVIAPj/iOWU1x33rsJRTtwTvlcOISNvj9NUcBWiWFRT82N1nO748Pq9uXtgk5tIfK72RtgDAQ== X-Google-Smtp-Source: AGHT+IExU5NmfYuV6HPvei3UVBsnjNnwY0ybEdTsA+a0QUDw2uVxcmDeZXSrM7SDNcKPdMFcozND X-Received: by 2002:a05:6402:1caf:b0:572:5f28:1f25 with SMTP id cz15-20020a0564021caf00b005725f281f25mr2984027edb.7.1714286856146; Sat, 27 Apr 2024 23:47:36 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1714286856; cv=none; d=google.com; s=arc-20160816; b=saeYV5UsmX0JGMOyy+IT3NZQs/9PHC+jgmgsr6yoDuxJYLbelryVSK+5uYD6eDQtPj Y1uTqnrn/Wp6MP3fl7hAAt500+W+idblpaXTxz5iRObTzQxU3mnwD/NXO8dW5KKemVkm N8Fzz7wAjvB8dLPoU0wKM3OO6Hg1nxIN7vyjUemV1KMFk2VmX7bomNWkas4h8kPzpYsK XGWQ/Se7qO5qGuas0Y8IBrpPT7Q9bJheW+dcUIChjYzmrd/0w3R4C1OeYG/8HBCxOo6+ JA4Lo648tL44O26oyWnENfoSRNLtX8SKWQdf8L4KqNiwG8bNQd1HlNtbf3QlRtAG3MR/ vwMg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to:date :to:from:message-id:dkim-signature:delivered-to; bh=6YKkislD9HrmFXCzt+kV4QkjjuLxomhieYfWvKnx0cI=; fh=HnHYuZ9XgUo86ZRXTLWWmQxhslYEI9B9taZ5X1DLFfc=; b=OK55vsOZyaek1CVTaQ92yjZdVu1yI6c3mqtPhtYZB2wQphW7M3ZhZ6qkbtlL08JaTt yR0QJWvxPHMLbvN7VMp0wztJcYEwhraHkAQsT0MTeYF/B8lgYZBc/SY1CXI6rBvxPCEG GFBGHF//yYtH84rRpoiSuY0nbC+6Worw37EkfNr7Zixtqow0UR4s6NAGt4z68cOfTLcB DAGUOtNmeMfU67mD+gl3+DspzmpiKHLR0ar/8JxAoIWSabaioTLH78V3Xq38tv/jUw5x SsfpJca74cDZmVpSKg8jvh3H1ZggY2f2hpD/TjdoMjRkG52gB2MVsUBPLjlm3vl5wJf1 J8zQ==; dara=google.com ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@foxmail.com header.s=s201512 header.b=ciFurZqm; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=foxmail.com Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
From: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:53 +0800
Subject: [FFmpeg-devel] [PATCH WIP v2 7/9] avfilter/dnn_backend_torch: Simplify memory allocation
"ffmpeg-devel" X-TUID: 24OiuR2cslBs From: Zhao Zhili --- libavfilter/dnn/dnn_backend_torch.cpp | 31 +++++++++++---------------- 1 file changed, 12 insertions(+), 19 deletions(-) diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp index abdef1f178..818ec5b713 100644 --- a/libavfilter/dnn/dnn_backend_torch.cpp +++ b/libavfilter/dnn/dnn_backend_torch.cpp @@ -37,8 +37,8 @@ extern "C" { } typedef struct THModel { + DNNModel model; DnnContext *ctx; - DNNModel *model; torch::jit::Module *jit_model; SafeQueue *request_queue; Queue *task_queue; @@ -141,7 +141,7 @@ static void dnn_free_model_th(DNNModel **model) ff_queue_destroy(th_model->task_queue); delete th_model->jit_model; av_freep(&th_model); - av_freep(model); + *model = NULL; } static int get_input_th(void *model, DNNData *input, const char *input_name) @@ -195,19 +195,19 @@ static int fill_model_input_th(THModel *th_model, THRequestItem *request) infer_request->input_tensor = new torch::Tensor(); infer_request->output = new torch::Tensor(); - switch (th_model->model->func_type) { + switch (th_model->model.func_type) { case DFT_PROCESS_FRAME: input.scale = 255; if (task->do_ioproc) { - if (th_model->model->frame_pre_proc != NULL) { - th_model->model->frame_pre_proc(task->in_frame, &input, th_model->model->filter_ctx); + if (th_model->model.frame_pre_proc != NULL) { + th_model->model.frame_pre_proc(task->in_frame, &input, th_model->model.filter_ctx); } else { ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx); } } break; default: - avpriv_report_missing_feature(NULL, "model function type %d", th_model->model->func_type); + avpriv_report_missing_feature(NULL, "model function type %d", th_model->model.func_type); break; } *infer_request->input_tensor = torch::from_blob(input.data, @@ -282,13 +282,13 @@ static void infer_completion_callback(void *args) { goto err; } - switch (th_model->model->func_type) { + switch (th_model->model.func_type) { case DFT_PROCESS_FRAME: if (task->do_ioproc) { outputs.scale = 255; outputs.data = output->data_ptr(); - if (th_model->model->frame_post_proc != NULL) { - th_model->model->frame_post_proc(task->out_frame, &outputs, th_model->model->filter_ctx); + if (th_model->model.frame_post_proc != NULL) { + th_model->model.frame_post_proc(task->out_frame, &outputs, th_model->model.filter_ctx); } else { ff_proc_from_dnn_to_frame(task->out_frame, &outputs, th_model->ctx); } @@ -298,7 +298,7 @@ static void infer_completion_callback(void *args) { } break; default: - avpriv_report_missing_feature(th_model->ctx, "model function type %d", th_model->model->func_type); + avpriv_report_missing_feature(th_model->ctx, "model function type %d", th_model->model.func_type); goto err; } task->inference_done++; @@ -417,17 +417,10 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A THRequestItem *item = NULL; const char *device_name = ctx->device ? 
ctx->device : "cpu"; - model = (DNNModel *)av_mallocz(sizeof(DNNModel)); - if (!model) { - return NULL; - } - th_model = (THModel *)av_mallocz(sizeof(THModel)); - if (!th_model) { - av_freep(&model); + if (!th_model) return NULL; - } - th_model->model = model; + model = &th_model->model; model->model = th_model; th_model->ctx = ctx; From patchwork Sun Apr 28 06:46:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhao Zhili X-Patchwork-Id: 48319 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a05:6a21:1509:b0:1a9:af23:56c1 with SMTP id nq9csp1323527pzb; Sat, 27 Apr 2024 23:47:24 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCW6/vkXyaPm16wqEeWeHasY0caA6O1DVeOCfnSILi11wmdsgewRyCCe6VRlO4KMwI05PCwFKguH9vV7wB0lk8HB3y8tm1rr/17EJg== X-Google-Smtp-Source: AGHT+IEYOI+C31PxUKeSpeoLZP937IAdcEpW5+VlJsWm5JjVSf2x9Yc/wobiyHHPidsgQam+jhZ6 X-Received: by 2002:a17:906:fcc2:b0:a58:e75e:b059 with SMTP id qx2-20020a170906fcc200b00a58e75eb059mr1814278ejb.0.1714286844299; Sat, 27 Apr 2024 23:47:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1714286844; cv=none; d=google.com; s=arc-20160816; b=ZxctI8c6mrIRkTt+fjWhWG0dSgJzfFwQJKvXrb6mLYBEE6asWsN54nTc3O9fgc9txk hAUnSJrUfiQR1tHFsoe5oxgnlEusiaCybAbikygkkYA3kyvUM84S9ZYlNIJZaHDEro54 xhRjEVkXn5KQG/V4tXJBBm6J79fiGSGlG7CtoL42IoBwO69X7sLeRIwUXAFH4YpyD+z3 zNs32kYKUJkVxw+QlHdmoEqiaMOMUwjDJNkWbnL0KViV7fA2KLKhHzVe1vUQg1sUXW+p 4e+XEsfS1rOmZFOXPgntLBZy4GYcl+Td4w/S7UxQzaNKkYTnWqvMcO+1OKCXkkvieJa6 lXGg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to:date :to:from:message-id:dkim-signature:delivered-to; bh=zTca2QLIDYI9vhsGKs5muBQUEJCu+/Y/EpGeeqPbq9c=; fh=HnHYuZ9XgUo86ZRXTLWWmQxhslYEI9B9taZ5X1DLFfc=; b=KW7XUSVswweeqIUH9NH5SnbeNfDK+6a5GrVhWzUFwqqJAmrDPBxXEckECdQaROjDqV G0Wp3eJ9+C069Yt9qg/jSJdXdXe33/liydnEfc/Xi9r2vPazr8sEZpZSX1WGOAaL0RNy IyBlFywWOCOtEczIS30ah3unTsMsRPD2ZOxJIT9KZDQDBlJhH2iQOmyHP1GQEKh5Prx2 R9n0Zfhxn3ifltnullL1tx+8uTJXpGUnPg8hyYbCTerxCS1ouSOk5FPfvv1ZMdKx1N0f Ww5mpbIpCjz9nCCl/T7TYj0yrrkEyg7pYUkn9fhpjzRbClOwBSfOPMkmHK2HkHGQlt99 LziQ==; dara=google.com ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@foxmail.com header.s=s201512 header.b=YU7jwTSK; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=foxmail.com Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
From: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:54 +0800
Subject: [FFmpeg-devel] [PATCH WIP v2 8/9] avfilter/dnn: Remove a level of dereference
From: Zhao Zhili

Code such as 'model->model = ov_model' is confusing. We can just drop
the member variable and use a cast to get the subclass.
---
 libavfilter/dnn/dnn_backend_openvino.c | 17 ++++++++---------
 libavfilter/dnn/dnn_backend_tf.c       | 19 +++++++++----------
 libavfilter/dnn/dnn_backend_torch.cpp  | 15 +++++++--------
 libavfilter/dnn_filter_common.c        |  6 +++---
 libavfilter/dnn_interface.h            |  6 ++----
 5 files changed, 29 insertions(+), 34 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index 1acc54b791..d8a6820dc2 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -517,7 +517,7 @@ static void dnn_free_model_ov(DNNModel **model)
     if (!model || !*model)
         return;
 
-    ov_model = (*model)->model;
+    ov_model = (OVModel *)(*model);
     while (ff_safe_queue_size(ov_model->request_queue) != 0) {
         OVRequestItem *item = ff_safe_queue_pop_front(ov_model->request_queue);
         if (item && item->infer_request) {
@@ -1059,9 +1059,9 @@ err:
     return ret;
 }
 
-static int get_input_ov(void *model, DNNData *input, const char *input_name)
+static int get_input_ov(DNNModel *model, DNNData *input, const char *input_name)
 {
-    OVModel *ov_model = model;
+    OVModel *ov_model = (OVModel *)model;
     DnnContext *ctx = ov_model->ctx;
     int input_resizable = ctx->ov_option.input_resizable;
 
@@ -1255,7 +1255,7 @@ static int extract_lltask_from_task(DNNFunctionType func_type, TaskItem *task, Q
     }
 }
 
-static int get_output_ov(void *model, const char *input_name, int input_width, int input_height,
+static int get_output_ov(DNNModel *model, const char *input_name, int input_width, int input_height,
                          const char *output_name, int *output_width, int *output_height)
 {
 #if HAVE_OPENVINO2
@@ -1268,7 +1268,7 @@ static int get_output_ov(void *model, const char *input_name, int input_width, i
     input_shapes_t input_shapes;
 #endif
     int ret;
-    OVModel *ov_model = model;
+    OVModel *ov_model = (OVModel *)model;
     DnnContext *ctx = ov_model->ctx;
     TaskItem task;
     OVRequestItem *request;
@@ -1383,7 +1383,6 @@ static DNNModel *dnn_load_model_ov(DnnContext *ctx, DNNFunctionType func_type, A
         return NULL;
     ov_model->ctx = ctx;
     model = &ov_model->model;
-    model->model = ov_model;
 
 #if HAVE_OPENVINO2
     status = ov_core_create(&core);
@@ -1470,7 +1469,7 @@ err:
 
 static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
-    OVModel *ov_model = model->model;
+    OVModel *ov_model = (OVModel *)model;
     DnnContext *ctx = ov_model->ctx;
     OVRequestItem *request;
     TaskItem *task;
@@ -1558,13 +1557,13 @@ static int dnn_execute_model_ov(const DNNModel *model, DNNExecBaseParams *exec_p
 
 static DNNAsyncStatusType dnn_get_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
-    OVModel *ov_model = model->model;
+    OVModel *ov_model = (OVModel *)model;
     return ff_dnn_get_result_common(ov_model->task_queue, in, out);
 }
 
 static int dnn_flush_ov(const DNNModel *model)
 {
-    OVModel *ov_model = model->model;
+    OVModel *ov_model = (OVModel *)model;
     DnnContext *ctx = ov_model->ctx;
     OVRequestItem *request;
 #if HAVE_OPENVINO2
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index c7716e696d..06ea6cbb8c 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -262,9 +262,9 @@ static TF_Tensor *allocate_input_tensor(const DNNData *input)
                        input_dims[1] * input_dims[2] * input_dims[3] * size);
 }
 
-static int get_input_tf(void *model, DNNData *input, const char *input_name)
+static int get_input_tf(DNNModel *model, DNNData *input, const char *input_name)
 {
-    TFModel *tf_model = model;
+    TFModel *tf_model = (TFModel *)model;
     DnnContext *ctx = tf_model->ctx;
     TF_Status *status;
     TF_DataType dt;
@@ -310,11 +310,11 @@ static int get_input_tf(void *model, DNNData *input, const char *input_name)
     return 0;
 }
 
-static int get_output_tf(void *model, const char *input_name, int input_width, int input_height,
+static int get_output_tf(DNNModel *model, const char *input_name, int input_width, int input_height,
                          const char *output_name, int *output_width, int *output_height)
 {
     int ret;
-    TFModel *tf_model = model;
+    TFModel *tf_model = (TFModel *)model;
     DnnContext *ctx = tf_model->ctx;
     TaskItem task;
     TFRequestItem *request;
@@ -486,7 +486,7 @@ static void dnn_free_model_tf(DNNModel **model)
     if (!model || !*model)
         return;
 
-    tf_model = (*model)->model;
+    tf_model = (TFModel *)(*model);
     while (ff_safe_queue_size(tf_model->request_queue) != 0) {
         TFRequestItem *item = ff_safe_queue_pop_front(tf_model->request_queue);
         destroy_request_item(&item);
@@ -530,7 +530,6 @@ static DNNModel *dnn_load_model_tf(DnnContext *ctx, DNNFunctionType func_type, A
     if (!tf_model)
         return NULL;
     model = &tf_model->model;
-    model->model = tf_model;
     tf_model->ctx = ctx;
 
     if (load_tf_model(tf_model, ctx->model_filename) != 0){
@@ -611,7 +610,7 @@ static int fill_model_input_tf(TFModel *tf_model, TFRequestItem *request) {
     task = lltask->task;
     request->lltask = lltask;
 
-    ret = get_input_tf(tf_model, &input, task->input_name);
+    ret = get_input_tf(&tf_model->model, &input, task->input_name);
     if (ret != 0) {
         goto err;
     }
@@ -803,7 +802,7 @@ err:
 
 static int dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
-    TFModel *tf_model = model->model;
+    TFModel *tf_model = (TFModel *)model;
     DnnContext *ctx = tf_model->ctx;
     TaskItem *task;
     TFRequestItem *request;
@@ -851,13 +850,13 @@ static int dnn_execute_model_tf(const DNNModel *model, DNNExecBaseParams *exec_p
 
 static DNNAsyncStatusType dnn_get_result_tf(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
-    TFModel *tf_model = model->model;
+    TFModel *tf_model = (TFModel *)model;
     return ff_dnn_get_result_common(tf_model->task_queue, in, out);
 }
 
 static int dnn_flush_tf(const DNNModel *model)
 {
-    TFModel *tf_model = model->model;
+    TFModel *tf_model = (TFModel *)model;
     DnnContext *ctx = tf_model->ctx;
     TFRequestItem *request;
     int ret;
diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp
index 818ec5b713..24e9f2c8e2 100644
--- a/libavfilter/dnn/dnn_backend_torch.cpp
+++ b/libavfilter/dnn/dnn_backend_torch.cpp
@@ -119,7 +119,7 @@ static void dnn_free_model_th(DNNModel **model)
     if (!model || !*model)
         return;
 
-    th_model = (THModel *) (*model)->model;
+    th_model = (THModel *) (*model);
     while (ff_safe_queue_size(th_model->request_queue) != 0) {
         THRequestItem *item = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue);
         destroy_request_item(&item);
@@ -144,7 +144,7 @@ static void dnn_free_model_th(DNNModel **model)
     *model = NULL;
 }
 
-static int get_input_th(void *model, DNNData *input, const char *input_name)
+static int get_input_th(DNNModel *model, DNNData *input, const char *input_name)
 {
     input->dt = DNN_FLOAT;
     input->order = DCO_RGB;
@@ -179,7 +179,7 @@ static int fill_model_input_th(THModel *th_model, THRequestItem *request)
     task = lltask->task;
     infer_request = request->infer_request;
 
-    ret = get_input_th(th_model, &input, NULL);
+    ret = get_input_th(&th_model->model, &input, NULL);
     if ( ret != 0) {
         goto err;
     }
@@ -356,7 +356,7 @@ err:
     return ret;
 }
 
-static int get_output_th(void *model, const char *input_name, int input_width, int input_height,
+static int get_output_th(DNNModel *model, const char *input_name, int input_width, int input_height,
                          const char *output_name, int *output_width, int *output_height)
 {
     int ret = 0;
@@ -421,7 +421,6 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
     if (!th_model)
         return NULL;
     model = &th_model->model;
-    model->model = th_model;
     th_model->ctx = ctx;
 
     c10::Device device = c10::Device(device_name);
@@ -489,7 +488,7 @@ fail:
 
 static int dnn_execute_model_th(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
-    THModel *th_model = (THModel *)model->model;
+    THModel *th_model = (THModel *)model;
     DnnContext *ctx = th_model->ctx;
     TaskItem *task;
     THRequestItem *request;
@@ -538,13 +537,13 @@ static int dnn_execute_model_th(const DNNModel *model, DNNExecBaseParams *exec_p
 
 static DNNAsyncStatusType dnn_get_result_th(const DNNModel *model, AVFrame **in, AVFrame **out)
 {
-    THModel *th_model = (THModel *)model->model;
+    THModel *th_model = (THModel *)model;
     return ff_dnn_get_result_common(th_model->task_queue, in, out);
 }
 
 static int dnn_flush_th(const DNNModel *model)
 {
-    THModel *th_model = (THModel *)model->model;
+    THModel *th_model = (THModel *)model;
     THRequestItem *request;
 
     if (ff_queue_size(th_model->lltask_queue) == 0)
diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c
index 3dd51badf6..132dd75550 100644
--- a/libavfilter/dnn_filter_common.c
+++ b/libavfilter/dnn_filter_common.c
@@ -151,15 +151,15 @@ int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc)
 
 int ff_dnn_get_input(DnnContext *ctx, DNNData *input)
 {
-    return ctx->model->get_input(ctx->model->model, input, ctx->model_inputname);
+    return ctx->model->get_input(ctx->model, input, ctx->model_inputname);
 }
 
 int ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height)
 {
     char * output_name = ctx->model_outputnames && ctx->backend_type != DNN_TH ?
                          ctx->model_outputnames[0] : NULL;
-    return ctx->model->get_output(ctx->model->model, ctx->model_inputname, input_width, input_height,
-                                  (const char *)output_name, output_width, output_height);
+    return ctx->model->get_output(ctx->model, ctx->model_inputname, input_width, input_height,
+                                  (const char *)output_name, output_width, output_height);
 }
 
 int ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame)
diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h
index a58001bab2..4e14a42d00 100644
--- a/libavfilter/dnn_interface.h
+++ b/libavfilter/dnn_interface.h
@@ -91,17 +91,15 @@ typedef int (*DetectPostProc)(AVFrame *frame, DNNData *output, uint32_t nb, AVFi
 typedef int (*ClassifyPostProc)(AVFrame *frame, DNNData *output, uint32_t bbox_index, AVFilterContext *filter_ctx);
 
 typedef struct DNNModel{
-    // Stores model that can be different for different backends.
-    void *model;
     // Stores FilterContext used for the interaction between AVFrame and DNNData
     AVFilterContext *filter_ctx;
     // Stores function type of the model
     DNNFunctionType func_type;
     // Gets model input information
     // Just reuse struct DNNData here, actually the DNNData.data field is not needed.
-    int (*get_input)(void *model, DNNData *input, const char *input_name);
+    int (*get_input)(struct DNNModel *model, DNNData *input, const char *input_name);
     // Gets model output width/height with given input w/h
-    int (*get_output)(void *model, const char *input_name, int input_width, int input_height,
+    int (*get_output)(struct DNNModel *model, const char *input_name, int input_width, int input_height,
                       const char *output_name, int *output_width, int *output_height);
     // set the pre process to transfer data from AVFrame to DNNData
     // the default implementation within DNN is used if it is not provided by the filter
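Note for reviewers: the casts above are safe because every backend context
embeds DNNModel as its first member (the loaders keep doing
model = &ov_model->model), so the DNNModel pointer and the backend pointer
share the same address. A minimal, self-contained sketch of the idiom, with
stand-in fields rather than the real backend state:

#include <stdio.h>

typedef struct DNNModel {
    int func_type;                 /* stand-in for the real members */
} DNNModel;

typedef struct OVModel {
    DNNModel model;                /* must remain the first member */
    int nb_requests;               /* stand-in for backend state */
} OVModel;

static void print_requests(DNNModel *model)
{
    /* Valid: a pointer to a struct, suitably converted, points to its
     * first member and vice versa (C11 6.7.2.1), so the embedded model
     * and the containing OVModel share one address. */
    OVModel *ov = (OVModel *)model;
    printf("requests: %d\n", ov->nb_requests);
}

int main(void)
{
    OVModel ov = { .model = { .func_type = 0 }, .nb_requests = 3 };
    print_requests(&ov.model);     /* prints "requests: 3" */
    return 0;
}

The same layout argument applies to TFModel and THModel, which is why the
per-backend void *model back-pointer can simply go away.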
From patchwork Sun Apr 28 06:46:55 2024
X-Patchwork-Submitter: Zhao Zhili
X-Patchwork-Id: 48323
From: Zhao Zhili
To: ffmpeg-devel@ffmpeg.org
Cc: Zhao Zhili
Date: Sun, 28 Apr 2024 14:46:55 +0800
X-OQ-MSGID: <20240428064655.106853-9-quinkblack@foxmail.com>
In-Reply-To: <20240428064655.106853-1-quinkblack@foxmail.com>
References: <20240428064655.106853-1-quinkblack@foxmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Subject: [FFmpeg-devel] [PATCH WIP v2 9/9] avfilter/dnn: Use dnn_backend_info_list to search for dnn module
List-Id: FFmpeg development discussions and patches
From: Zhao Zhili

---
 libavfilter/dnn/dnn_backend_openvino.c |  1 +
 libavfilter/dnn/dnn_backend_tf.c       |  1 +
 libavfilter/dnn/dnn_backend_torch.cpp  |  1 +
 libavfilter/dnn/dnn_interface.c        | 26 ++++++++------------------
 libavfilter/dnn_interface.h            |  1 +
 5 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_openvino.c b/libavfilter/dnn/dnn_backend_openvino.c
index d8a6820dc2..9c699cdc8c 100644
--- a/libavfilter/dnn/dnn_backend_openvino.c
+++ b/libavfilter/dnn/dnn_backend_openvino.c
@@ -1613,6 +1613,7 @@ static int dnn_flush_ov(const DNNModel *model)
 
 const DNNModule ff_dnn_backend_openvino = {
     .clazz = DNN_DEFINE_CLASS(dnn_openvino),
+    .type = DNN_OV,
     .load_model = dnn_load_model_ov,
     .execute_model = dnn_execute_model_ov,
     .get_result = dnn_get_result_ov,
diff --git a/libavfilter/dnn/dnn_backend_tf.c b/libavfilter/dnn/dnn_backend_tf.c
index 06ea6cbb8c..6afefe8115 100644
--- a/libavfilter/dnn/dnn_backend_tf.c
+++ b/libavfilter/dnn/dnn_backend_tf.c
@@ -886,6 +886,7 @@ static int dnn_flush_tf(const DNNModel *model)
 
 const DNNModule ff_dnn_backend_tf = {
     .clazz = DNN_DEFINE_CLASS(dnn_tensorflow),
+    .type = DNN_TF,
     .load_model = dnn_load_model_tf,
     .execute_model = dnn_execute_model_tf,
     .get_result = dnn_get_result_tf,
diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp
index 24e9f2c8e2..2557264713 100644
--- a/libavfilter/dnn/dnn_backend_torch.cpp
+++ b/libavfilter/dnn/dnn_backend_torch.cpp
@@ -561,6 +561,7 @@ static int dnn_flush_th(const DNNModel *model)
 
 extern const DNNModule ff_dnn_backend_torch = {
     .clazz = DNN_DEFINE_CLASS(dnn_th),
+    .type = DNN_TH,
     .load_model = dnn_load_model_th,
     .execute_model = dnn_execute_model_th,
     .get_result = dnn_get_result_th,
diff --git a/libavfilter/dnn/dnn_interface.c b/libavfilter/dnn/dnn_interface.c
index ebd308cd84..cce3c45856 100644
--- a/libavfilter/dnn/dnn_interface.c
+++ b/libavfilter/dnn/dnn_interface.c
@@ -80,25 +80,15 @@ static const DnnBackendInfo dnn_backend_info_list[] = {
 
 const DNNModule *ff_get_dnn_module(DNNBackendType backend_type, void *log_ctx)
 {
-    switch(backend_type){
-    #if (CONFIG_LIBTENSORFLOW == 1)
-    case DNN_TF:
-        return &ff_dnn_backend_tf;
-    #endif
-    #if (CONFIG_LIBOPENVINO == 1)
-    case DNN_OV:
-        return &ff_dnn_backend_openvino;
-    #endif
-    #if (CONFIG_LIBTORCH == 1)
-    case DNN_TH:
-        return &ff_dnn_backend_torch;
-    #endif
-    default:
-        av_log(log_ctx, AV_LOG_ERROR,
-                "Module backend_type %d is not supported or enabled.\n",
-                backend_type);
-        return NULL;
+    for (int i = 1; i < FF_ARRAY_ELEMS(dnn_backend_info_list); i++) {
+        if (dnn_backend_info_list[i].module->type == backend_type)
+            return dnn_backend_info_list[i].module;
     }
+
+    av_log(log_ctx, AV_LOG_ERROR,
+           "Module backend_type %d is not supported or enabled.\n",
+           backend_type);
+    return NULL;
 }
 
 void *ff_dnn_child_next(DnnContext *obj, void *prev) {
diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h
index 4e14a42d00..4b25ac2b84 100644
--- a/libavfilter/dnn_interface.h
+++ b/libavfilter/dnn_interface.h
@@ -170,6 +170,7 @@ typedef struct DnnContext {
 // Stores pointers to functions for loading, executing, freeing DNN models for one of the backends.
 struct DNNModule {
     const AVClass clazz;
+    DNNBackendType type;
     // Loads model and parameters from given file. Returns NULL if it is not possible.
     DNNModel *(*load_model)(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);
     // Executes model with specified input and output. Returns the error code otherwise.
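For context, the lookup this patch switches to is plain table-driven dispatch:
each DNNModule now carries a type tag, and ff_get_dnn_module() scans the
registration table instead of a hard-coded switch. A self-contained sketch of
the idea (the struct layouts below are stand-ins; the real
dnn_backend_info_list is introduced earlier in this series and not shown here):

#include <stddef.h>
#include <stdio.h>

typedef enum { DNN_TF = 1, DNN_OV, DNN_TH } DNNBackendType;

typedef struct DNNModule {
    DNNBackendType type;
    const char *name;              /* stand-in for the load/execute hooks */
} DNNModule;

static const DNNModule backend_tf = { DNN_TF, "tensorflow" };
static const DNNModule backend_ov = { DNN_OV, "openvino" };

typedef struct BackendInfo {
    const DNNModule *module;
} BackendInfo;

static const BackendInfo backend_list[] = {
    { NULL },                      /* slot 0 skipped, like the loop in the patch */
    { &backend_tf },
    { &backend_ov },
};

static const DNNModule *get_module(DNNBackendType type)
{
    for (size_t i = 1; i < sizeof(backend_list) / sizeof(backend_list[0]); i++)
        if (backend_list[i].module->type == type)
            return backend_list[i].module;
    return NULL;                   /* not supported or not enabled */
}

int main(void)
{
    const DNNModule *m = get_module(DNN_OV);
    printf("%s\n", m ? m->name : "not found");   /* prints "openvino" */
    return 0;
}

Compared with the switch, adding a backend now only requires registering it in
the table with its type tag; ff_get_dnn_module() itself no longer changes.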