From patchwork Tue Jun 1 22:04:47 2021
X-Patchwork-Submitter: Wenlong Ding
X-Patchwork-Id: 28039
From: Wenlong Ding <wenlong.ding@intel.com>
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 2 Jun 2021 06:04:47 +0800
Message-Id: <20210601220448.108767-1-wenlong.ding@intel.com>
Subject: [FFmpeg-devel] [PATCH 1/2] lavfi/dnn/dnn_backend_native_layer_batchnormalization: add BN support
Signed-off-by: Wenlong Ding <wenlong.ding@intel.com>
---
 libavfilter/dnn/Makefile                      |   1 +
 libavfilter/dnn/dnn_backend_native.h          |   1 +
 ..._backend_native_layer_batchnormalization.c | 119 ++++++++++++++++++
 ..._backend_native_layer_batchnormalization.h |  44 +++++++
 libavfilter/dnn/dnn_backend_native_layers.c   |   2 +
 tools/python/convert_from_tensorflow.py       |  95 +++++++++++++-
 6 files changed, 258 insertions(+), 4 deletions(-)
 create mode 100644 libavfilter/dnn/dnn_backend_native_layer_batchnormalization.c
 create mode 100644 libavfilter/dnn/dnn_backend_native_layer_batchnormalization.h

diff --git a/libavfilter/dnn/Makefile b/libavfilter/dnn/Makefile
index 4cfbce0efc..5f9952f447 100644
--- a/libavfilter/dnn/Makefile
+++ b/libavfilter/dnn/Makefile
@@ -13,6 +13,7 @@ OBJS-$(CONFIG_DNN) += dnn/dnn_backend_native_layer_dep
 OBJS-$(CONFIG_DNN) += dnn/dnn_backend_native_layer_maximum.o
 OBJS-$(CONFIG_DNN) += dnn/dnn_backend_native_layer_mathbinary.o
 OBJS-$(CONFIG_DNN) += dnn/dnn_backend_native_layer_mathunary.o
+OBJS-$(CONFIG_DNN) += dnn/dnn_backend_native_layer_batchnormalization.o

 DNN-OBJS-$(CONFIG_LIBTENSORFLOW) += dnn/dnn_backend_tf.o
 DNN-OBJS-$(CONFIG_LIBOPENVINO) += dnn/dnn_backend_openvino.o
diff --git a/libavfilter/dnn/dnn_backend_native.h b/libavfilter/dnn/dnn_backend_native.h
index 89bcb8e358..1f5915d547 100644
--- a/libavfilter/dnn/dnn_backend_native.h
+++ b/libavfilter/dnn/dnn_backend_native.h
@@ -46,6 +46,7 @@ typedef enum {
     DLT_MATH_UNARY = 6,
     DLT_AVG_POOL = 7,
     DLT_DENSE = 8,
+    DLT_BATCHNORMALIZATION = 9,
     DLT_COUNT
 } DNNLayerType;
diff --git a/libavfilter/dnn/dnn_backend_native_layer_batchnormalization.c b/libavfilter/dnn/dnn_backend_native_layer_batchnormalization.c
new file mode 100644
index 0000000000..3daac75367
--- /dev/null
+++ b/libavfilter/dnn/dnn_backend_native_layer_batchnormalization.c
@@ -0,0 +1,119 @@
+/*
+ * Copyright (c) 2021
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * DNN native backend implementation.
+ */
+
+#include "dnn_backend_native.h"
+#include "libavutil/avassert.h"
+#include "dnn_backend_native_layer_batchnormalization.h"
+
+int ff_dnn_load_layer_batchnormalization(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num)
+{
+    DnnLayerBatchnormalizationParams *params;
+    int dnn_size = 0;
+    params = av_malloc(sizeof(*params));
+    if (!params)
+        return 0;
+
+    params->channel = avio_rl32(model_file_context);
+    dnn_size += 4;
+
+    params->mean = av_malloc_array(params->channel, sizeof(*params->mean));
+    if (!params->mean) {
+        av_freep(&params);
+        return 0;
+    }
+
+    params->variance = av_malloc_array(params->channel, sizeof(*params->variance));
+    if (!params->variance) {
+        av_freep(&params->mean);
+        av_freep(&params);
+        return 0;
+    }
+
+    for (int32_t i = 0; i < params->channel; ++i) {
+        params->mean[i] = av_int2float(avio_rl32(model_file_context));
+    }
+    for (int32_t i = 0; i < params->channel; ++i) {
+        params->variance[i] = av_int2float(avio_rl32(model_file_context));
+    }
+    dnn_size += params->channel * 4 * 2;
+
+    params->variance_eps = av_int2float(avio_rl32(model_file_context));
+    params->scale = av_int2float(avio_rl32(model_file_context));
+    params->offset = av_int2float(avio_rl32(model_file_context));
+    dnn_size += 12;
+
+    layer->params = params;
+    layer->input_operand_indexes[0] = (int32_t)avio_rl32(model_file_context);
+    layer->output_operand_index = (int32_t)avio_rl32(model_file_context);
+    dnn_size += 8;
+    if (layer->input_operand_indexes[0] >= operands_num || layer->output_operand_index >= operands_num) {
+        return 0;
+    }
+
+    return dnn_size;
+}
+
+int ff_dnn_execute_layer_batchnormalization(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                            int32_t output_operand_index, const void *parameters, NativeContext *ctx)
+{
+    const DnnOperand *input = &operands[input_operand_indexes[0]];
+    DnnOperand *output = &operands[output_operand_index];
+    const DnnLayerBatchnormalizationParams *params = parameters;
+    int dims_count;
+    const float *src;
+    float *dst;
+    int32_t channel_num = params->channel;
+    const float *mean = params->mean;
+    const float *variance = params->variance;
+    float variance_eps = params->variance_eps;
+    float scale = params->scale;
+    float offset = params->offset;
+
+    if (params->channel != input->dims[3])
+        return DNN_ERROR;
+
+    for (int i = 0; i < 4; ++i)
+        output->dims[i] = input->dims[i];
+
+    output->data_type = input->data_type;
+    output->length = ff_calculate_operand_data_length(output);
+    if (output->length <= 0) {
+        av_log(ctx, AV_LOG_ERROR, "The output data length overflow\n");
+        return DNN_ERROR;
+    }
+    output->data = av_realloc(output->data, output->length);
+    if (!output->data) {
+        av_log(ctx, AV_LOG_ERROR, "Failed to reallocate memory for output\n");
+        return DNN_ERROR;
+    }
+
+    dims_count = ff_calculate_operand_dims_count(output);
+    src = input->data;
+    dst = output->data;
+    for (int i = 0; i < dims_count; ++i) {
+        dst[i] = scale * (src[i] - mean[i % channel_num]) / sqrt(variance[i % channel_num] + variance_eps) + offset;
+    }
+    return 0;
+}
diff --git a/libavfilter/dnn/dnn_backend_native_layer_batchnormalization.h b/libavfilter/dnn/dnn_backend_native_layer_batchnormalization.h
new file mode 100644
index 0000000000..8d87430fcc
--- /dev/null
+++ b/libavfilter/dnn/dnn_backend_native_layer_batchnormalization.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) 2021
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * DNN inference functions interface for native backend.
+ */
+
+#ifndef AVFILTER_DNN_DNN_BACKEND_NATIVE_LAYER_BATCHNORMALIZATION_H
+#define AVFILTER_DNN_DNN_BACKEND_NATIVE_LAYER_BATCHNORMALIZATION_H
+
+#include "dnn_backend_native.h"
+
+typedef struct DnnLayerBatchnormalizationParams {
+    int32_t channel;
+    float *mean;
+    float *variance;
+    float offset;
+    float scale;
+    float variance_eps;
+} DnnLayerBatchnormalizationParams;
+
+int ff_dnn_load_layer_batchnormalization(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num);
+int ff_dnn_execute_layer_batchnormalization(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                            int32_t output_operand_index, const void *parameters, NativeContext *ctx);
+
+#endif
diff --git a/libavfilter/dnn/dnn_backend_native_layers.c b/libavfilter/dnn/dnn_backend_native_layers.c
index 492939fd36..da1cd6e541 100644
--- a/libavfilter/dnn/dnn_backend_native_layers.c
+++ b/libavfilter/dnn/dnn_backend_native_layers.c
@@ -28,6 +28,7 @@
 #include "dnn_backend_native_layer_mathunary.h"
 #include "dnn_backend_native_layer_avgpool.h"
 #include "dnn_backend_native_layer_dense.h"
+#include "dnn_backend_native_layer_batchnormalization.h"

 const LayerFunc ff_layer_funcs[DLT_COUNT] = {
     {NULL, NULL},
@@ -39,4 +40,5 @@ const LayerFunc ff_layer_funcs[DLT_COUNT] = {
     {ff_dnn_execute_layer_math_unary, ff_dnn_load_layer_math_unary},
     {ff_dnn_execute_layer_avg_pool, ff_dnn_load_layer_avg_pool},
     {ff_dnn_execute_layer_dense, ff_dnn_load_layer_dense},
+    {ff_dnn_execute_layer_batchnormalization, ff_dnn_load_layer_batchnormalization},
 };
diff --git a/tools/python/convert_from_tensorflow.py b/tools/python/convert_from_tensorflow.py
index 38e64c1c94..3c7e9909fe 100644
--- a/tools/python/convert_from_tensorflow.py
+++ b/tools/python/convert_from_tensorflow.py
@@ -74,7 +74,8 @@ class TFConverter:
         self.dense_scope_names = set()
         self.dense_scopename_inputname_dict = {}
         self.op2code = {'Conv2D':1, 'DepthToSpace':2, 'MirrorPad':3, 'Maximum':4,
-                        'MathBinary':5, 'MathUnary':6, 'AvgPool':7, 'MatMul':8}
+                        'MathBinary':5, 'MathUnary':6, 'AvgPool':7, 'MatMul':8,
+                        'BatchNormalization':9}
         self.mathbin2code = {'Sub':0, 'Add':1, 'Mul':2, 'RealDiv':3, 'Minimum':4, 'FloorMod':5}
         self.mathun2code = {'Abs':0, 'Sin':1, 'Cos':2, 'Tan':3, 'Asin':4, 'Acos':5, 'Atan':6, 'Sinh':7, 'Cosh':8, 'Tanh':9, 'Asinh':10,
@@ -82,6 +83,8 @@ class TFConverter:
                             'Exp':16}
         self.mirrorpad_mode = {'CONSTANT':0, 'REFLECT':1, 'SYMMETRIC':2}
         self.name_operand_dict = {}
+        self.batchnorm_scope_names = set()
+        self.batchnorm_scopename_inputname_dict = {}

     def add_operand(self, name, type):
@@ -145,6 +148,29 @@ class TFConverter:
         return knode, bnode, anode

+    def get_batchnorm_params(self, batchnorm_scope_name):
+        variance_node_name = self.name_node_dict[batchnorm_scope_name + '/add'].input[0]
+        variance_eps_node_name = self.name_node_dict[batchnorm_scope_name + '/add'].input[1]
+        scale_node_name = self.name_node_dict[batchnorm_scope_name + '/mul'].input[1]
+        mean_node_name = self.name_node_dict[batchnorm_scope_name + '/mul_2'].input[0]
+        offset_node_name = self.name_node_dict[batchnorm_scope_name + '/sub'].input[0]
+
+        variance_node = self.name_node_dict[variance_node_name]
+        variance_eps_node = self.name_node_dict[variance_eps_node_name]
+        scale_node = self.name_node_dict[scale_node_name]
+        mean_node = self.name_node_dict[mean_node_name]
+        offset_node = self.name_node_dict[offset_node_name]
+        # the add_1 name may have been replaced by the output name: this happens
+        # when add_1 is not in self.name_node_dict and sub's successor is the
+        # last op, which is an Identity
+        if batchnorm_scope_name + '/add_1' in self.name_node_dict:
+            output_node = self.name_node_dict[batchnorm_scope_name + '/add_1']
+        elif batchnorm_scope_name + '/sub' in self.edges:
+            output_node = self.edges[batchnorm_scope_name + '/sub'][0]
+        else:
+            output_node = None
+        return variance_node, variance_eps_node, scale_node, mean_node, offset_node, output_node
+
+
     def dump_complex_conv2d_to_file(self, node, f):
         assert(node.op == 'Conv2D')
         self.layer_number = self.layer_number + 1
@@ -372,6 +398,38 @@ class TFConverter:
         np.array([output_operand_index], dtype=np.uint32).tofile(f)

+    def dump_batchnormalization_to_file(self, node, f):
+        self.layer_number = self.layer_number + 1
+        self.converted_nodes.add(node.name)
+        scope_name = TFConverter.get_scope_name(node.name)
+        variance_node, variance_eps_node, scale_node, mean_node, offset_node, output_node = self.get_batchnorm_params(scope_name.split('/')[0])
+
+        bn_tensor_mean = mean_node.attr['value']
+        channel_mean = bn_tensor_mean.tensor.tensor_shape.dim[0].size
+        mean = np.frombuffer(bn_tensor_mean.tensor.tensor_content, np.float32)
+        bn_tensor_variance = variance_node.attr['value']
+        channel_variance = bn_tensor_variance.tensor.tensor_shape.dim[0].size
+        variance = np.frombuffer(bn_tensor_variance.tensor.tensor_content, np.float32)
+
+        offset = offset_node.attr['value'].tensor.float_val
+        scale = scale_node.attr['value'].tensor.float_val
+        variance_eps = variance_eps_node.attr['value'].tensor.float_val
+
+        channel = channel_mean
+        np.array([self.op2code['BatchNormalization'], channel], dtype=np.uint32).tofile(f)
+        np.array([mean], dtype=np.float32).tofile(f)
+        np.array([variance], dtype=np.float32).tofile(f)
+        np.array([variance_eps, scale, offset], dtype=np.float32).tofile(f)
+
+        input_name = self.batchnorm_scopename_inputname_dict[scope_name.split('/')[0]]
+        input_operand_index = self.add_operand(input_name, Operand.IOTYPE_INPUT)
+        if output_node is not None:
+            output_operand_index = self.add_operand(output_node.name, Operand.IOTYPE_OUTPUT)
+            np.array([input_operand_index, output_operand_index], dtype=np.uint32).tofile(f)
+        else:
+            print('invalid batchnormalization node')
+
+
     def dump_avg_pool_to_file(self, node, f):
         assert(node.op == 'AvgPool')
         self.layer_number = self.layer_number + 1
@@ -417,7 +475,10 @@ class TFConverter:
             if node.op == 'MatMul':
                 self.dump_dense_to_file(node, f)
                 continue
-
+            if self.in_batchnorm_scope(node.name):
+                if node.op == 'Sub':
+                    self.dump_batchnormalization_to_file(node, f)
+                continue
             if node.op == 'Conv2D':
                 self.dump_simple_conv2d_to_file(node, f)
@@ -541,8 +602,20 @@ class TFConverter:
                 return True
         return False

+
+    def in_batchnorm_scope(self, name):
+        inner_scope = TFConverter.get_scope_name(name)
+        if inner_scope == "":
+            return False
+        for scope in self.batchnorm_scope_names:
+            index = inner_scope.find(scope)
+            if index == 0:
+                return True
+        return False
+
+
     def generate_sub_block_op_scope_info(self):
-        # mostly, conv2d/dense is a sub block in graph, get the scope name
+        # mostly, conv2d/dense/batchnorm is a sub block in graph, get the scope name
         for node in self.nodes:
             if node.op == 'Conv2D':
                 scope = TFConverter.get_scope_name(node.name)
@@ -562,8 +635,17 @@ class TFConverter:
                 if scope + '/kernel' not in self.name_node_dict and scope.split('/Tensordot')[0] + '/kernel' not in self.name_node_dict:
                     continue
                 self.dense_scope_names.add(scope.split('/Tensordot')[0])
+            elif node.op == 'Sub':
+                scope = TFConverter.get_scope_name(node.name)
+                # for the case tf.nn.batch_normalization is called directly
+                if scope == '':
+                    continue
+                # for the case tf.nn.batch_normalization is called within a scope
+                if scope + '/Rsqrt' not in self.name_node_dict and scope + '/add' not in self.name_node_dict and scope + '/mul' not in self.name_node_dict and scope + '/mul_1' not in self.name_node_dict and scope + '/sub' not in self.name_node_dict and scope + '/mul_2' not in self.name_node_dict:
+                    continue
+                self.batchnorm_scope_names.add(scope)

-        # get the input name to the conv2d/dense sub block
+        # get the input name to the conv2d/dense/batchnorm sub block
         for node in self.nodes:
             scope = TFConverter.get_scope_name(node.name)
             if scope in self.conv2d_scope_names:
@@ -581,6 +663,11 @@ class TFConverter:
                 for inp in node.input:
                     if TFConverter.get_scope_name(inp).find(scope)<0 and TFConverter.get_scope_name(inp).find(scope.split('/')[0])<0:
                         self.dense_scopename_inputname_dict[scope.split('/Tensordot')[0]] = inp
+            elif scope in self.batchnorm_scope_names:
+                if node.op == 'Mul' and scope + '/mul_1' == node.name:
+                    for inp in node.input:
+                        if TFConverter.get_scope_name(inp) != scope:
+                            self.batchnorm_scopename_inputname_dict[scope] = inp

     def run(self):
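[Editor's note] The per-element loop in ff_dnn_execute_layer_batchnormalization applies standard inference-time batch normalization over the channel (innermost, NHWC) dimension. A minimal NumPy sketch of the same transformation, for illustration only (not part of the patch; `batchnorm_nhwc` is a hypothetical helper):

```python
import numpy as np

def batchnorm_nhwc(src, mean, variance, variance_eps, scale, offset):
    # src has NHWC layout; mean/variance hold one entry per channel.
    # Broadcasting over the last axis matches the `i % channel_num`
    # indexing in the C loop above.
    return scale * (src - mean) / np.sqrt(variance + variance_eps) + offset

x = np.random.rand(1, 5, 4, 3).astype(np.float32)
y = batchnorm_nhwc(x,
                   np.array([0.1, 0.15, 0.14], dtype=np.float32),
                   np.array([0.12, 0.21, 0.24], dtype=np.float32),
                   0.0001, 1.0, 0.1)
```

The parameter values mirror those used in the unit test of patch 2/2.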
From: Wenlong Ding <wenlong.ding@intel.com>
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 2 Jun 2021 06:04:48 +0800
Message-Id: <20210601220448.108767-2-wenlong.ding@intel.com>
In-Reply-To: <20210601220448.108767-1-wenlong.ding@intel.com>
References: <20210601220448.108767-1-wenlong.ding@intel.com>
Subject: [FFmpeg-devel] [PATCH 2/2] tests/dnn/dnn-layer-batchnormalization-test: add unit test for BN

Signed-off-by: Wenlong Ding <wenlong.ding@intel.com>
---
 tests/dnn/.gitignore                          |   1 +
 tests/dnn/Makefile                            |   1 +
 tests/dnn/dnn-layer-batchnormalization-test.c | 117 ++++++++++++++++++
 tests/fate/dnn.mak                            |   5 +
 4 files changed, 124 insertions(+)
 create mode 100644 tests/dnn/dnn-layer-batchnormalization-test.c

diff --git a/tests/dnn/.gitignore b/tests/dnn/.gitignore
index 03b04d6653..f432352ca8 100644
--- a/tests/dnn/.gitignore
+++ b/tests/dnn/.gitignore
@@ -6,3 +6,4 @@
 /dnn-layer-mathunary-test
 /dnn-layer-avgpool-test
 /dnn-layer-dense-test
+/dnn-layer-batchnormalization-test
diff --git a/tests/dnn/Makefile b/tests/dnn/Makefile
index ef827520de..058342b677 100644
--- a/tests/dnn/Makefile
+++ b/tests/dnn/Makefile
@@ -6,6 +6,7 @@ DNNTESTPROGS += dnn-layer-mathbinary
 DNNTESTPROGS += dnn-layer-maximum
 DNNTESTPROGS += dnn-layer-mathunary
 DNNTESTPROGS += dnn-layer-avgpool
+DNNTESTPROGS += dnn-layer-batchnormalization

 DNNTESTOBJS := $(DNNTESTOBJS:%=$(DNNTESTSDIR)%) $(DNNTESTPROGS:%=$(DNNTESTSDIR)/%-test.o)
 DNNTESTPROGS := $(DNNTESTPROGS:%=$(DNNTESTSDIR)/%-test$(EXESUF))
diff --git a/tests/dnn/dnn-layer-batchnormalization-test.c b/tests/dnn/dnn-layer-batchnormalization-test.c
new file mode 100644
index 0000000000..ab6b8faf99
--- /dev/null
+++ b/tests/dnn/dnn-layer-batchnormalization-test.c
@@ -0,0 +1,117 @@
+/*
+ * Copyright (c) 2021
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <math.h>
+#include "libavfilter/dnn/dnn_backend_native_layer_batchnormalization.h"
+#include "libavutil/avassert.h"
+
+#define EPSON 0.00001
+
+static int test(void)
+{
+    // the input data is generated with the python code below.
+    /*
+    import numpy as np
+
+    in_data = np.random.rand(1, 5, 4, 3)
+
+    print('input_data:')
+    print(in_data.shape)
+    print(list(in_data.flatten()))
+    */
+    DnnLayerBatchnormalizationParams params;
+    DnnOperand operands[2];
+    int32_t input_indexes[1];
+    float input_data[1*5*4*3] = {
+        0.03592029475149139, 0.8177031019998039, 0.7439828967882774, 0.12351433689912827, 0.10151796406770697, 0.3746884445121409, 0.17609107818009584,
+        0.9180125678699762, 0.413786266728411, 0.15388845485290492, 0.6527147025219426, 0.8921708075708997, 0.9315511108099018, 0.6089771636865586,
+        0.9096167333314585, 0.35798486932034357, 0.242417367935445, 0.3354539420380003, 0.5885835336919439, 0.2330763092626813, 0.19528398816193004,
+        0.3537750057949528, 0.8908506440029637, 0.6745876262153178, 0.7975390896917102, 0.42115552951085633, 0.026252384244236926, 0.5987492499419249,
+        0.2969445642379439, 0.6612326239920433, 0.5186612970859783, 0.9154125186721356, 0.11647732132450173, 0.32631441601002287, 0.5222573083721183,
+        0.47294093689301375, 0.009744493333811666, 0.034835869186598645, 0.14643674017261454, 0.06114675456320906, 0.07678534832197992, 0.1968478741676435,
+        0.3126360178524745, 0.6361975996395828, 0.18012602596211258, 0.8609077898378013, 0.8446431696067955, 0.7497824163136522, 0.8651808467465752,
+        0.922506696896085, 0.6057714540829449, 0.6226843061111148, 0.3894522432026163, 0.15585932050469298, 0.5738772930525835, 0.4978427887293949,
+        0.6015976578669766, 0.9169836327963268, 0.6957180046975255, 0.49003527036757355
+    };
+
+    float expected_output[1*5*4*3];
+    float *output;
+    int32_t channel = 3;
+    float mean[3] = {0.1, 0.15, 0.14};
+    float variance[3] = {0.12, 0.21, 0.24};
+    float variance_eps = 0.0001;
+    float scale = 1.0;
+    float offset = 0.1;
+
+    for (int i = 0; i < 1*5*4*3; ++i) {
+        expected_output[i] = scale * (input_data[i] - mean[i % channel]) / sqrt(variance[i % channel] + variance_eps) + offset;
+    }
+
+    params.channel = channel;
+    params.mean = av_malloc_array(params.channel, sizeof(*params.mean));
+    if (!params.mean) {
+        return 1;
+    }
+    params.mean[0] = mean[0];
+    params.mean[1] = mean[1];
+    params.mean[2] = mean[2];
+    params.variance = av_malloc_array(params.channel, sizeof(*params.variance));
+    if (!params.variance) {
+        av_freep(&params.mean);
+        return 1;
+    }
+    params.variance[0] = variance[0];
+    params.variance[1] = variance[1];
+    params.variance[2] = variance[2];
+    params.scale = scale;
+    params.offset = offset;
+    params.variance_eps = variance_eps;
+
+    operands[0].data = input_data;
+    operands[0].dims[0] = 1;
+    operands[0].dims[1] = 5;
+    operands[0].dims[2] = 4;
+    operands[0].dims[3] = 3;
+    operands[1].data = NULL;
+
+    input_indexes[0] = 0;
+    ff_dnn_execute_layer_batchnormalization(operands, input_indexes, 1, &params, NULL);
+    output = operands[1].data;
+    for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {
+        if (fabs(output[i] - expected_output[i]) > EPSON) {
+            printf("at index %d, output: %f, expected_output: %f\n", i, output[i], expected_output[i]);
+            av_freep(&params.mean);
+            av_freep(&params.variance);
+            av_freep(&output);
+            return 1;
+        }
+    }
+    av_freep(&params.mean);
+    av_freep(&params.variance);
+    av_freep(&output);
+    return 0;
+}
+
+int main(int argc, char **argv)
+{
+    return test();
+}
diff --git a/tests/fate/dnn.mak b/tests/fate/dnn.mak
index ef07ee45f8..cf741c7dd9 100644
--- a/tests/fate/dnn.mak
+++ b/tests/fate/dnn.mak
@@ -38,6 +38,11 @@ fate-dnn-layer-avgpool: $(DNNTESTSDIR)/dnn-layer-avgpool-test$(EXESUF)
 fate-dnn-layer-avgpool: CMD = run $(DNNTESTSDIR)/dnn-layer-avgpool-test$(EXESUF)
 fate-dnn-layer-avgpool: CMP = null

+FATE_DNN += fate-dnn-layer-batchnormalization
+fate-dnn-layer-batchnormalization: $(DNNTESTSDIR)/dnn-layer-batchnormalization-test$(EXESUF)
+fate-dnn-layer-batchnormalization: CMD = run $(DNNTESTSDIR)/dnn-layer-batchnormalization-test$(EXESUF)
+fate-dnn-layer-batchnormalization: CMP = null
+
 FATE-$(CONFIG_DNN) += $(FATE_DNN)

 fate-dnn: $(FATE_DNN)
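[Editor's note] The converter's dump_batchnormalization_to_file() and the loader ff_dnn_load_layer_batchnormalization() must agree on one little-endian record layout: uint32 op code and channel count, then per-channel mean and variance floats, then variance_eps/scale/offset, then the two operand indexes. A Python sketch of that layout (`pack_batchnorm_layer` is a hypothetical helper, and it assumes a little-endian host, which is what the converter's numpy tofile() calls effectively produce on x86):

```python
import struct

def pack_batchnorm_layer(channel, mean, variance, variance_eps, scale, offset,
                         input_idx, output_idx, op_code=9):
    # Field order mirrors what dump_batchnormalization_to_file() writes and
    # what ff_dnn_load_layer_batchnormalization() reads back via avio_rl32():
    # uint32 op_code, uint32 channel, float mean[channel], float variance[channel],
    # float variance_eps, float scale, float offset, uint32 input, uint32 output.
    buf = struct.pack('<2I', op_code, channel)
    buf += struct.pack('<%df' % channel, *mean)
    buf += struct.pack('<%df' % channel, *variance)
    buf += struct.pack('<3f', variance_eps, scale, offset)
    buf += struct.pack('<2I', input_idx, output_idx)
    return buf

record = pack_batchnorm_layer(3, [0.1, 0.15, 0.14], [0.12, 0.21, 0.24],
                              0.0001, 1.0, 0.1, 0, 1)
# Size: 8 + channel*8 + 12 + 8 bytes; with channel == 3 that is 52 bytes,
# i.e. the loader's dnn_size (4 + channel*4*2 + 12 + 8 = 48) plus the 4-byte
# op code consumed by the framework before the loader runs.
```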