diff mbox

[FFmpeg-devel,V2,2/2] avfilter/dnn: unify the layer execution function in native mode

Message ID 1569303248-13431-1-git-send-email-yejun.guo@intel.com
State Superseded

Commit Message

Guo, Yejun Sept. 24, 2019, 5:34 a.m. UTC
With this change, the code will be simpler when more layers are supported.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
---
 libavfilter/dnn/dnn_backend_native.c               | 38 ++++++++--------------
 libavfilter/dnn/dnn_backend_native_layer_conv2d.c  |  4 ++-
 libavfilter/dnn/dnn_backend_native_layer_conv2d.h  |  3 +-
 .../dnn/dnn_backend_native_layer_depth2space.c     |  5 ++-
 .../dnn/dnn_backend_native_layer_depth2space.h     |  3 +-
 libavfilter/dnn/dnn_backend_native_layer_maximum.c |  4 ++-
 libavfilter/dnn/dnn_backend_native_layer_maximum.h |  3 +-
 libavfilter/dnn/dnn_backend_native_layer_pad.c     |  5 +--
 libavfilter/dnn/dnn_backend_native_layer_pad.h     |  4 +--
 libavfilter/dnn/dnn_backend_native_layers.hxx      |  4 +++
 tests/dnn/dnn-layer-conv2d-test.c                  |  4 +--
 tests/dnn/dnn-layer-depth2space-test.c             |  4 ++-
 tests/dnn/dnn-layer-maximum-test.c                 |  2 +-
 tests/dnn/dnn-layer-pad-test.c                     |  6 ++--
 14 files changed, 48 insertions(+), 41 deletions(-)
 create mode 100644 libavfilter/dnn/dnn_backend_native_layers.hxx

Comments

Guo, Yejun Oct. 4, 2019, 12:58 p.m. UTC | #1
> -----Original Message-----
> From: Guo, Yejun
> Sent: Tuesday, September 24, 2019 1:34 PM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Guo, Yejun <yejun.guo@intel.com>
> Subject: [PATCH V2 2/2] avfilter/dnn: unify the layer execution function in native
> mode
> 
> with this change, the code will be simpler when more layers supported.
> 
> Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> ---
>  libavfilter/dnn/dnn_backend_native.c               | 38
> ++++++++--------------
>  libavfilter/dnn/dnn_backend_native_layer_conv2d.c  |  4 ++-
>  libavfilter/dnn/dnn_backend_native_layer_conv2d.h  |  3 +-
>  .../dnn/dnn_backend_native_layer_depth2space.c     |  5 ++-
>  .../dnn/dnn_backend_native_layer_depth2space.h     |  3 +-
>  libavfilter/dnn/dnn_backend_native_layer_maximum.c |  4 ++-
>  libavfilter/dnn/dnn_backend_native_layer_maximum.h |  3 +-
>  libavfilter/dnn/dnn_backend_native_layer_pad.c     |  5 +--
>  libavfilter/dnn/dnn_backend_native_layer_pad.h     |  4 +--
>  libavfilter/dnn/dnn_backend_native_layers.hxx      |  4 +++
>  tests/dnn/dnn-layer-conv2d-test.c                  |  4 +--
>  tests/dnn/dnn-layer-depth2space-test.c             |  4 ++-
>  tests/dnn/dnn-layer-maximum-test.c                 |  2 +-
>  tests/dnn/dnn-layer-pad-test.c                     |  6 ++--
>  14 files changed, 48 insertions(+), 41 deletions(-)
>  create mode 100644 libavfilter/dnn/dnn_backend_native_layers.hxx
> 

This patch set is awaiting review, thanks.

I'll add another patch to unify the layer load function after the holiday.

I might have several students help me support more native layers in C code,
so each of them can focus on their own separate dnn_backend_layer_xxx.h/c files.


btw, my other patches for vf_dnn_rgb_processing are still at
https://github.com/guoyejun/ffmpeg/tree/dnn0927
Pedro Arthur Oct. 4, 2019, 4:09 p.m. UTC | #2
Hi,

Em sex, 4 de out de 2019 às 09:59, Guo, Yejun <yejun.guo@intel.com>
escreveu:

>
>
> > -----Original Message-----
> > From: Guo, Yejun
> > Sent: Tuesday, September 24, 2019 1:34 PM
> > To: ffmpeg-devel@ffmpeg.org
> > Cc: Guo, Yejun <yejun.guo@intel.com>
> > Subject: [PATCH V2 2/2] avfilter/dnn: unify the layer execution function
> in native
> > mode
> >
> > with this change, the code will be simpler when more layers supported.
> >
> > Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> > ---
> >  libavfilter/dnn/dnn_backend_native.c               | 38
> > ++++++++--------------
> >  libavfilter/dnn/dnn_backend_native_layer_conv2d.c  |  4 ++-
> >  libavfilter/dnn/dnn_backend_native_layer_conv2d.h  |  3 +-
> >  .../dnn/dnn_backend_native_layer_depth2space.c     |  5 ++-
> >  .../dnn/dnn_backend_native_layer_depth2space.h     |  3 +-
> >  libavfilter/dnn/dnn_backend_native_layer_maximum.c |  4 ++-
> >  libavfilter/dnn/dnn_backend_native_layer_maximum.h |  3 +-
> >  libavfilter/dnn/dnn_backend_native_layer_pad.c     |  5 +--
> >  libavfilter/dnn/dnn_backend_native_layer_pad.h     |  4 +--
> >  libavfilter/dnn/dnn_backend_native_layers.hxx      |  4 +++
>
I don't think this naming pattern is used in the code base (at least I
could not find anything similar).
Also I think the code would be much cleaner without using macros; for
example, just define an array containing the function pointers and index it
using the enum type.

rest looks good.

> >  tests/dnn/dnn-layer-conv2d-test.c                  |  4 +--
> >  tests/dnn/dnn-layer-depth2space-test.c             |  4 ++-
> >  tests/dnn/dnn-layer-maximum-test.c                 |  2 +-
> >  tests/dnn/dnn-layer-pad-test.c                     |  6 ++--
> >  14 files changed, 48 insertions(+), 41 deletions(-)
> >  create mode 100644 libavfilter/dnn/dnn_backend_native_layers.hxx
> >
>
> this patch set ask for review, thanks.
>
> I'll add another patch to unify the layer load function after the holiday.
>
> I might have several students help me to support more native layers in c
> code,
> so they can just focus on his own separate dnn_backend_layer_xxx.h/c files.
>
>
> btw, my other patches for vf_dnn_rgb_processing is still in
> https://github.com/guoyejun/ffmpeg/tree/dnn0927
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
Guo, Yejun Oct. 8, 2019, 7:32 a.m. UTC | #3
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces@ffmpeg.org] On Behalf Of
> Pedro Arthur
> Sent: Saturday, October 05, 2019 12:09 AM
> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH V2 2/2] avfilter/dnn: unify the layer
> execution function in native mode
> 
> Hi,
> 
> Em sex, 4 de out de 2019 às 09:59, Guo, Yejun <yejun.guo@intel.com>
> escreveu:
> 
> >
> >
> > > -----Original Message-----
> > > From: Guo, Yejun
> > > Sent: Tuesday, September 24, 2019 1:34 PM
> > > To: ffmpeg-devel@ffmpeg.org
> > > Cc: Guo, Yejun <yejun.guo@intel.com>
> > > Subject: [PATCH V2 2/2] avfilter/dnn: unify the layer execution function
> > in native
> > > mode
> > >
> > > with this change, the code will be simpler when more layers supported.
> > >
> > > Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
> > > ---
> > >  libavfilter/dnn/dnn_backend_native.c               | 38
> > > ++++++++--------------
> > >  libavfilter/dnn/dnn_backend_native_layer_conv2d.c  |  4 ++-
> > >  libavfilter/dnn/dnn_backend_native_layer_conv2d.h  |  3 +-
> > >  .../dnn/dnn_backend_native_layer_depth2space.c     |  5 ++-
> > >  .../dnn/dnn_backend_native_layer_depth2space.h     |  3 +-
> > >  libavfilter/dnn/dnn_backend_native_layer_maximum.c |  4 ++-
> > >  libavfilter/dnn/dnn_backend_native_layer_maximum.h |  3 +-
> > >  libavfilter/dnn/dnn_backend_native_layer_pad.c     |  5 +--
> > >  libavfilter/dnn/dnn_backend_native_layer_pad.h     |  4 +--
> > >  libavfilter/dnn/dnn_backend_native_layers.hxx      |  4 +++
> >
> I don't think this naming pattern is used in the code base (at least I
> could not find anything similar).
> Also I think the code would be much more clean without using macros, for
> example just define an array containing the function pointers and index it
> using the enum type.

thanks, I'll change to function pointers and also change the execute function names.

> 
> rest looks good.
> 
> > >  tests/dnn/dnn-layer-conv2d-test.c                  |  4 +--
> > >  tests/dnn/dnn-layer-depth2space-test.c             |  4 ++-
> > >  tests/dnn/dnn-layer-maximum-test.c                 |  2 +-
> > >  tests/dnn/dnn-layer-pad-test.c                     |  6 ++--
> > >  14 files changed, 48 insertions(+), 41 deletions(-)
> > >  create mode 100644 libavfilter/dnn/dnn_backend_native_layers.hxx
> > >
> >
> > this patch set ask for review, thanks.
> >
> > I'll add another patch to unify the layer load function after the holiday.
> >
> > I might have several students help me to support more native layers in c
> > code,
> > so they can just focus on his own separate dnn_backend_layer_xxx.h/c files.
> >
> >
> > btw, my other patches for vf_dnn_rgb_processing is still in
> > https://github.com/guoyejun/ffmpeg/tree/dnn0927

Patch

diff --git a/libavfilter/dnn/dnn_backend_native.c b/libavfilter/dnn/dnn_backend_native.c
index 97549d3..18d6fbe 100644
--- a/libavfilter/dnn/dnn_backend_native.c
+++ b/libavfilter/dnn/dnn_backend_native.c
@@ -331,10 +331,6 @@  DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNData *output
 {
     ConvolutionalNetwork *network = (ConvolutionalNetwork *)model->model;
     int32_t layer;
-    ConvolutionalParams *conv_params;
-    DepthToSpaceParams *depth_to_space_params;
-    LayerPadParams *pad_params;
-    DnnLayerMaximumParams *maximum_params;
     uint32_t nb = FFMIN(nb_output, network->nb_output);
 
     if (network->layers_num <= 0 || network->operands_num <= 0)
@@ -344,28 +340,22 @@  DNNReturnType ff_dnn_execute_model_native(const DNNModel *model, DNNData *output
 
     for (layer = 0; layer < network->layers_num; ++layer){
         switch (network->layers[layer].type){
-        case DLT_CONV2D:
-            conv_params = (ConvolutionalParams *)network->layers[layer].params;
-            convolve(network->operands, network->layers[layer].input_operand_indexes,
-                     network->layers[layer].output_operand_index, conv_params);
-            break;
-        case DLT_DEPTH_TO_SPACE:
-            depth_to_space_params = (DepthToSpaceParams *)network->layers[layer].params;
-            depth_to_space(network->operands, network->layers[layer].input_operand_indexes,
-                           network->layers[layer].output_operand_index, depth_to_space_params->block_size);
-            break;
-        case DLT_MIRROR_PAD:
-            pad_params = (LayerPadParams *)network->layers[layer].params;
-            dnn_execute_layer_pad(network->operands, network->layers[layer].input_operand_indexes,
-                                  network->layers[layer].output_operand_index, pad_params);
-            break;
-        case DLT_MAXIMUM:
-            maximum_params = (DnnLayerMaximumParams *)network->layers[layer].params;
-            dnn_execute_layer_maximum(network->operands, network->layers[layer].input_operand_indexes,
-                                  network->layers[layer].output_operand_index, maximum_params);
-            break;
+
+#define SETUP_LAYER(layer_type)                                                          \
+        case layer_type:                                                                 \
+            dnn_execute_layer_##layer_type(network->operands,                            \
+                                           network->layers[layer].input_operand_indexes, \
+                                           network->layers[layer].output_operand_index,  \
+                                           network->layers[layer].params);               \
+        break;
+#include "dnn_backend_native_layers.hxx"
+#undef SETUP_LAYER
+
         case DLT_INPUT:
             return DNN_ERROR;
+        default:
+            av_assert0(!"should not reach here");
+            break;
         }
     }
 
diff --git a/libavfilter/dnn/dnn_backend_native_layer_conv2d.c b/libavfilter/dnn/dnn_backend_native_layer_conv2d.c
index b13b431..d0b34b8 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_conv2d.c
+++ b/libavfilter/dnn/dnn_backend_native_layer_conv2d.c
@@ -23,7 +23,8 @@ 
 
 #define CLAMP_TO_EDGE(x, w) ((x) < 0 ? 0 : ((x) >= (w) ? (w - 1) : (x)))
 
-int convolve(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index, const ConvolutionalParams *conv_params)
+int dnn_execute_layer_DLT_CONV2D(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                 int32_t output_operand_index, const void *parameters)
 {
     float *output;
     int32_t input_operand_index = input_operand_indexes[0];
@@ -32,6 +33,7 @@  int convolve(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t
     int width = operands[input_operand_index].dims[2];
     int channel = operands[input_operand_index].dims[3];
     const float *input = operands[input_operand_index].data;
+    const ConvolutionalParams *conv_params = (const ConvolutionalParams *)parameters;
 
     int radius = conv_params->kernel_size >> 1;
     int src_linesize = width * conv_params->input_num;
diff --git a/libavfilter/dnn/dnn_backend_native_layer_conv2d.h b/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
index 7ddfff3..84cb9b7 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
+++ b/libavfilter/dnn/dnn_backend_native_layer_conv2d.h
@@ -35,5 +35,6 @@  typedef struct ConvolutionalParams{
     float *biases;
 } ConvolutionalParams;
 
-int convolve(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index, const ConvolutionalParams *conv_params);
+int dnn_execute_layer_DLT_CONV2D(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                 int32_t output_operand_index, const void *parameters);
 #endif
diff --git a/libavfilter/dnn/dnn_backend_native_layer_depth2space.c b/libavfilter/dnn/dnn_backend_native_layer_depth2space.c
index a248764..9ac261b 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_depth2space.c
+++ b/libavfilter/dnn/dnn_backend_native_layer_depth2space.c
@@ -27,9 +27,12 @@ 
 #include "libavutil/avassert.h"
 #include "dnn_backend_native_layer_depth2space.h"
 
-int depth_to_space(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index, int block_size)
+int dnn_execute_layer_DLT_DEPTH_TO_SPACE(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                         int32_t output_operand_index, const void *parameters)
 {
     float *output;
+    const DepthToSpaceParams *params = (const DepthToSpaceParams *)parameters;
+    int block_size = params->block_size;
     int32_t input_operand_index = input_operand_indexes[0];
     int number = operands[input_operand_index].dims[0];
     int height = operands[input_operand_index].dims[1];
diff --git a/libavfilter/dnn/dnn_backend_native_layer_depth2space.h b/libavfilter/dnn/dnn_backend_native_layer_depth2space.h
index 8708be8..56eaf74 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_depth2space.h
+++ b/libavfilter/dnn/dnn_backend_native_layer_depth2space.h
@@ -34,6 +34,7 @@  typedef struct DepthToSpaceParams{
     int block_size;
 } DepthToSpaceParams;
 
-int depth_to_space(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index, int block_size);
+int dnn_execute_layer_DLT_DEPTH_TO_SPACE(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                         int32_t output_operand_index, const void *parameters);
 
 #endif
diff --git a/libavfilter/dnn/dnn_backend_native_layer_maximum.c b/libavfilter/dnn/dnn_backend_native_layer_maximum.c
index a2669af..42d0439 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_maximum.c
+++ b/libavfilter/dnn/dnn_backend_native_layer_maximum.c
@@ -27,10 +27,12 @@ 
 #include "libavutil/avassert.h"
 #include "dnn_backend_native_layer_maximum.h"
 
-int dnn_execute_layer_maximum(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index, const DnnLayerMaximumParams *params)
+int dnn_execute_layer_DLT_MAXIMUM(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                  int32_t output_operand_index, const void *parameters)
 {
     const DnnOperand *input = &operands[input_operand_indexes[0]];
     DnnOperand *output = &operands[output_operand_index];
+    const DnnLayerMaximumParams *params = (const DnnLayerMaximumParams *)parameters;
     int dims_count;
     const float *src;
     float *dst;
diff --git a/libavfilter/dnn/dnn_backend_native_layer_maximum.h b/libavfilter/dnn/dnn_backend_native_layer_maximum.h
index 6396e58..2f087b2 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_maximum.h
+++ b/libavfilter/dnn/dnn_backend_native_layer_maximum.h
@@ -37,6 +37,7 @@  typedef struct DnnLayerMaximumParams{
     }val;
 } DnnLayerMaximumParams;
 
-int dnn_execute_layer_maximum(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index, const DnnLayerMaximumParams *params);
+int dnn_execute_layer_DLT_MAXIMUM(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                  int32_t output_operand_index, const void *parameters);
 
 #endif
diff --git a/libavfilter/dnn/dnn_backend_native_layer_pad.c b/libavfilter/dnn/dnn_backend_native_layer_pad.c
index c2905a7..99cf5f4 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_pad.c
+++ b/libavfilter/dnn/dnn_backend_native_layer_pad.c
@@ -48,12 +48,13 @@  static int after_get_buddy(int given, int border, LayerPadModeParam mode)
     }
 }
 
-int dnn_execute_layer_pad(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index,
-                           const LayerPadParams *params)
+int dnn_execute_layer_DLT_MIRROR_PAD(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                     int32_t output_operand_index, const void *parameters)
 {
     int32_t before_paddings;
     int32_t after_paddings;
     float* output;
+    const LayerPadParams *params = (const LayerPadParams *)parameters;
 
     // suppose format is <N, H, W, C>
     int32_t input_operand_index = input_operand_indexes[0];
diff --git a/libavfilter/dnn/dnn_backend_native_layer_pad.h b/libavfilter/dnn/dnn_backend_native_layer_pad.h
index 7cc8213..bf9cf8d 100644
--- a/libavfilter/dnn/dnn_backend_native_layer_pad.h
+++ b/libavfilter/dnn/dnn_backend_native_layer_pad.h
@@ -36,7 +36,7 @@  typedef struct LayerPadParams{
     float constant_values;
 } LayerPadParams;
 
-int dnn_execute_layer_pad(DnnOperand *operands, const int32_t *input_operand_indexes, int32_t output_operand_index,
-                           const LayerPadParams *params);
+int dnn_execute_layer_DLT_MIRROR_PAD(DnnOperand *operands, const int32_t *input_operand_indexes,
+                                     int32_t output_operand_index, const void *parameters);
 
 #endif
diff --git a/libavfilter/dnn/dnn_backend_native_layers.hxx b/libavfilter/dnn/dnn_backend_native_layers.hxx
new file mode 100644
index 0000000..87f7ba9
--- /dev/null
+++ b/libavfilter/dnn/dnn_backend_native_layers.hxx
@@ -0,0 +1,4 @@ 
+SETUP_LAYER(DLT_CONV2D)
+SETUP_LAYER(DLT_DEPTH_TO_SPACE)
+SETUP_LAYER(DLT_MAXIMUM)
+SETUP_LAYER(DLT_MIRROR_PAD)
diff --git a/tests/dnn/dnn-layer-conv2d-test.c b/tests/dnn/dnn-layer-conv2d-test.c
index afc5391..8849fa6 100644
--- a/tests/dnn/dnn-layer-conv2d-test.c
+++ b/tests/dnn/dnn-layer-conv2d-test.c
@@ -113,7 +113,7 @@  static int test_with_same_dilate(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    convolve(operands, input_indexes, 1, &params);
+    dnn_execute_layer_DLT_CONV2D(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {
@@ -212,7 +212,7 @@  static int test_with_valid(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    convolve(operands, input_indexes, 1, &params);
+    dnn_execute_layer_DLT_CONV2D(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {
diff --git a/tests/dnn/dnn-layer-depth2space-test.c b/tests/dnn/dnn-layer-depth2space-test.c
index 87118de..3bb1612 100644
--- a/tests/dnn/dnn-layer-depth2space-test.c
+++ b/tests/dnn/dnn-layer-depth2space-test.c
@@ -48,6 +48,7 @@  static int test(void)
     print(list(output.flatten()))
     */
 
+    DepthToSpaceParams params;
     DnnOperand operands[2];
     int32_t input_indexes[1];
     float input[1*5*3*4] = {
@@ -79,7 +80,8 @@  static int test(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    depth_to_space(operands, input_indexes, 1, 2);
+    params.block_size = 2;
+    dnn_execute_layer_DLT_DEPTH_TO_SPACE(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {
diff --git a/tests/dnn/dnn-layer-maximum-test.c b/tests/dnn/dnn-layer-maximum-test.c
index 06daf64..27b7be6 100644
--- a/tests/dnn/dnn-layer-maximum-test.c
+++ b/tests/dnn/dnn-layer-maximum-test.c
@@ -45,7 +45,7 @@  static int test(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    dnn_execute_layer_maximum(operands, input_indexes, 1, &params);
+    dnn_execute_layer_DLT_MAXIMUM(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(input) / sizeof(float); i++) {
diff --git a/tests/dnn/dnn-layer-pad-test.c b/tests/dnn/dnn-layer-pad-test.c
index 1fb2be1..7271ceb 100644
--- a/tests/dnn/dnn-layer-pad-test.c
+++ b/tests/dnn/dnn-layer-pad-test.c
@@ -79,7 +79,7 @@  static int test_with_mode_symmetric(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    dnn_execute_layer_pad(operands, input_indexes, 1, &params);
+    dnn_execute_layer_DLT_MIRROR_PAD(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {
@@ -144,7 +144,7 @@  static int test_with_mode_reflect(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    dnn_execute_layer_pad(operands, input_indexes, 1, &params);
+    dnn_execute_layer_DLT_MIRROR_PAD(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {
@@ -210,7 +210,7 @@  static int test_with_mode_constant(void)
     operands[1].data = NULL;
 
     input_indexes[0] = 0;
-    dnn_execute_layer_pad(operands, input_indexes, 1, &params);
+    dnn_execute_layer_DLT_MIRROR_PAD(operands, input_indexes, 1, &params);
 
     output = operands[1].data;
     for (int i = 0; i < sizeof(expected_output) / sizeof(float); i++) {