From patchwork Fri Jul 27 17:06:15 2018
X-Patchwork-Submitter: Sergey Lavrushkin
X-Patchwork-Id: 9822
From: Sergey Lavrushkin
Date: Fri, 27 Jul 2018 20:06:15 +0300
To: FFmpeg development discussions and patches, Pedro Arthur
Subject: [FFmpeg-devel] [GSOC] [PATCH] On the fly generation of default DNN models and code style fixes

Hello,

The first patch provides on-the-fly generation of the default DNN models, which eliminates the data duplication for the model weights. The files with internal weights were also replaced with automatically generated ones for the models I trained.
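To make the data-duplication point concrete, here is a minimal, hypothetical sketch of building the default SRCNN network at run time in the native backend. It borrows names that are visible in the diff further down (DNNModel, ConvolutionalNetwork, Layer, set_up_conv_layer(), the srcnn_* weight tables from dnn_srcnn.h); the helper name load_default_srcnn_native(), the exact layer configuration, and the abbreviated error checks are assumptions for illustration only, not the code of patch 1/2, which is not included in this message.

/* Hypothetical sketch, not the code from patch 1/2: assemble the default
 * SRCNN network at run time from the weight tables that dnn_srcnn.h already
 * provides for the TensorFlow backend, so every weight is stored only once. */
static DNNModel *load_default_srcnn_native(void)
{
    DNNModel *model;
    ConvolutionalNetwork *network;

    model = av_malloc(sizeof(DNNModel));
    if (!model)
        return NULL;

    network = av_malloc(sizeof(ConvolutionalNetwork));
    if (!network){
        av_freep(&model);
        return NULL;
    }
    model->model = (void *)network;

    /* one input layer plus three convolutional layers */
    network->layers_num = 4;
    network->layers = av_malloc(network->layers_num * sizeof(Layer));
    if (!network->layers){
        av_freep(&network);
        av_freep(&model);
        return NULL;
    }

    network->layers[0].type = INPUT;
    network->layers[0].output = NULL;
    network->layers[0].params = av_malloc(sizeof(InputParams));

    /* The 9-1-5 kernel sizes and 64/32 filter counts are the usual SRCNN
     * layout and are assumed here; allocation and return-value checks are
     * abbreviated for brevity. */
    set_up_conv_layer(&network->layers[1], srcnn_conv1_kernel, srcnn_conv1_bias, RELU,  1, 64, 9);
    set_up_conv_layer(&network->layers[2], srcnn_conv2_kernel, srcnn_conv2_bias, RELU, 64, 32, 1);
    set_up_conv_layer(&network->layers[3], srcnn_conv3_kernel, srcnn_conv3_bias, RELU, 32,  1, 5);

    model->set_input_output = &set_input_output_native;
    return model;
}

The point is that both backends can then be fed from the same constant tables, so an on-the-fly builder removes the need for a second, serialized copy of the weights.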
Scripts for training and generating these files can be found here: https://github.com/HighVoltageRocknRoll/sr Later, I will add a description to this repo on how to use it and benchmark results for trained models. The second patch fixes some code style issues for pointers in DNN module and sr filter. Are there any other code style fixes I should make for this code? From 9e01a27c1f23af58d7e93c3c446e1a2f900e2fc4 Mon Sep 17 00:00:00 2001 From: Sergey Lavrushkin Date: Fri, 27 Jul 2018 19:34:02 +0300 Subject: [PATCH 2/2] libavfilter: Code style fixes for pointers in DNN module and sr filter. --- libavfilter/dnn_backend_native.c | 84 +++++++++++++++--------------- libavfilter/dnn_backend_native.h | 8 +-- libavfilter/dnn_backend_tf.c | 108 +++++++++++++++++++-------------------- libavfilter/dnn_backend_tf.h | 8 +-- libavfilter/dnn_espcn.h | 6 +-- libavfilter/dnn_interface.c | 4 +- libavfilter/dnn_interface.h | 16 +++--- libavfilter/dnn_srcnn.h | 6 +-- libavfilter/vf_sr.c | 60 +++++++++++----------- 9 files changed, 150 insertions(+), 150 deletions(-) diff --git a/libavfilter/dnn_backend_native.c b/libavfilter/dnn_backend_native.c index 3e6b86280d..baefea7fcb 100644 --- a/libavfilter/dnn_backend_native.c +++ b/libavfilter/dnn_backend_native.c @@ -34,15 +34,15 @@ typedef enum {RELU, TANH, SIGMOID} ActivationFunc; typedef struct Layer{ LayerType type; - float* output; - void* params; + float *output; + void *params; } Layer; typedef struct ConvolutionalParams{ int32_t input_num, output_num, kernel_size; ActivationFunc activation; - float* kernel; - float* biases; + float *kernel; + float *biases; } ConvolutionalParams; typedef struct InputParams{ @@ -55,16 +55,16 @@ typedef struct DepthToSpaceParams{ // Represents simple feed-forward convolutional network. typedef struct ConvolutionalNetwork{ - Layer* layers; + Layer *layers; int32_t layers_num; } ConvolutionalNetwork; -static DNNReturnType set_input_output_native(void* model, DNNData* input, DNNData* output) +static DNNReturnType set_input_output_native(void *model, DNNData *input, DNNData *output) { - ConvolutionalNetwork* network = (ConvolutionalNetwork*)model; - InputParams* input_params; - ConvolutionalParams* conv_params; - DepthToSpaceParams* depth_to_space_params; + ConvolutionalNetwork *network = (ConvolutionalNetwork *)model; + InputParams *input_params; + ConvolutionalParams *conv_params; + DepthToSpaceParams *depth_to_space_params; int cur_width, cur_height, cur_channels; int32_t layer; @@ -72,7 +72,7 @@ static DNNReturnType set_input_output_native(void* model, DNNData* input, DNNDat return DNN_ERROR; } else{ - input_params = (InputParams*)network->layers[0].params; + input_params = (InputParams *)network->layers[0].params; input_params->width = cur_width = input->width; input_params->height = cur_height = input->height; input_params->channels = cur_channels = input->channels; @@ -88,14 +88,14 @@ static DNNReturnType set_input_output_native(void* model, DNNData* input, DNNDat for (layer = 1; layer < network->layers_num; ++layer){ switch (network->layers[layer].type){ case CONV: - conv_params = (ConvolutionalParams*)network->layers[layer].params; + conv_params = (ConvolutionalParams *)network->layers[layer].params; if (conv_params->input_num != cur_channels){ return DNN_ERROR; } cur_channels = conv_params->output_num; break; case DEPTH_TO_SPACE: - depth_to_space_params = (DepthToSpaceParams*)network->layers[layer].params; + depth_to_space_params = (DepthToSpaceParams *)network->layers[layer].params; if (cur_channels % 
(depth_to_space_params->block_size * depth_to_space_params->block_size) != 0){ return DNN_ERROR; } @@ -127,16 +127,16 @@ static DNNReturnType set_input_output_native(void* model, DNNData* input, DNNDat // layers_num,layer_type,layer_parameterss,layer_type,layer_parameters... // For CONV layer: activation_function, input_num, output_num, kernel_size, kernel, biases // For DEPTH_TO_SPACE layer: block_size -DNNModel* ff_dnn_load_model_native(const char* model_filename) +DNNModel *ff_dnn_load_model_native(const char *model_filename) { - DNNModel* model = NULL; - ConvolutionalNetwork* network = NULL; - AVIOContext* model_file_context; + DNNModel *model = NULL; + ConvolutionalNetwork *network = NULL; + AVIOContext *model_file_context; int file_size, dnn_size, kernel_size, i; int32_t layer; LayerType layer_type; - ConvolutionalParams* conv_params; - DepthToSpaceParams* depth_to_space_params; + ConvolutionalParams *conv_params; + DepthToSpaceParams *depth_to_space_params; model = av_malloc(sizeof(DNNModel)); if (!model){ @@ -155,7 +155,7 @@ DNNModel* ff_dnn_load_model_native(const char* model_filename) av_freep(&model); return NULL; } - model->model = (void*)network; + model->model = (void *)network; network->layers_num = 1 + (int32_t)avio_rl32(model_file_context); dnn_size = 4; @@ -251,10 +251,10 @@ DNNModel* ff_dnn_load_model_native(const char* model_filename) return model; } -static int set_up_conv_layer(Layer* layer, const float* kernel, const float* biases, ActivationFunc activation, +static int set_up_conv_layer(Layer *layer, const float *kernel, const float *biases, ActivationFunc activation, int32_t input_num, int32_t output_num, int32_t size) { - ConvolutionalParams* conv_params; + ConvolutionalParams *conv_params; int kernel_size; conv_params = av_malloc(sizeof(ConvolutionalParams)); @@ -282,11 +282,11 @@ static int set_up_conv_layer(Layer* layer, const float* kernel, const float* bia return DNN_SUCCESS; } -DNNModel* ff_dnn_load_default_model_native(DNNDefaultModel model_type) +DNNModel *ff_dnn_load_default_model_native(DNNDefaultModel model_type) { - DNNModel* model = NULL; - ConvolutionalNetwork* network = NULL; - DepthToSpaceParams* depth_to_space_params; + DNNModel *model = NULL; + ConvolutionalNetwork *network = NULL; + DepthToSpaceParams *depth_to_space_params; int32_t layer; model = av_malloc(sizeof(DNNModel)); @@ -299,7 +299,7 @@ DNNModel* ff_dnn_load_default_model_native(DNNDefaultModel model_type) av_freep(&model); return NULL; } - model->model = (void*)network; + model->model = (void *)network; switch (model_type){ case DNN_SRCNN: @@ -365,7 +365,7 @@ DNNModel* ff_dnn_load_default_model_native(DNNDefaultModel model_type) #define CLAMP_TO_EDGE(x, w) ((x) < 0 ? 0 : ((x) >= (w) ? 
(w - 1) : (x))) -static void convolve(const float* input, float* output, const ConvolutionalParams* conv_params, int width, int height) +static void convolve(const float *input, float *output, const ConvolutionalParams *conv_params, int width, int height) { int y, x, n_filter, ch, kernel_y, kernel_x; int radius = conv_params->kernel_size >> 1; @@ -403,7 +403,7 @@ static void convolve(const float* input, float* output, const ConvolutionalParam } } -static void depth_to_space(const float* input, float* output, int block_size, int width, int height, int channels) +static void depth_to_space(const float *input, float *output, int block_size, int width, int height, int channels) { int y, x, by, bx, ch; int new_channels = channels / (block_size * block_size); @@ -426,20 +426,20 @@ static void depth_to_space(const float* input, float* output, int block_size, in } } -DNNReturnType ff_dnn_execute_model_native(const DNNModel* model) +DNNReturnType ff_dnn_execute_model_native(const DNNModel *model) { - ConvolutionalNetwork* network = (ConvolutionalNetwork*)model->model; + ConvolutionalNetwork *network = (ConvolutionalNetwork *)model->model; int cur_width, cur_height, cur_channels; int32_t layer; - InputParams* input_params; - ConvolutionalParams* conv_params; - DepthToSpaceParams* depth_to_space_params; + InputParams *input_params; + ConvolutionalParams *conv_params; + DepthToSpaceParams *depth_to_space_params; if (network->layers_num <= 0 || network->layers[0].type != INPUT || !network->layers[0].output){ return DNN_ERROR; } else{ - input_params = (InputParams*)network->layers[0].params; + input_params = (InputParams *)network->layers[0].params; cur_width = input_params->width; cur_height = input_params->height; cur_channels = input_params->channels; @@ -451,12 +451,12 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel* model) } switch (network->layers[layer].type){ case CONV: - conv_params = (ConvolutionalParams*)network->layers[layer].params; + conv_params = (ConvolutionalParams *)network->layers[layer].params; convolve(network->layers[layer - 1].output, network->layers[layer].output, conv_params, cur_width, cur_height); cur_channels = conv_params->output_num; break; case DEPTH_TO_SPACE: - depth_to_space_params = (DepthToSpaceParams*)network->layers[layer].params; + depth_to_space_params = (DepthToSpaceParams *)network->layers[layer].params; depth_to_space(network->layers[layer - 1].output, network->layers[layer].output, depth_to_space_params->block_size, cur_width, cur_height, cur_channels); cur_height *= depth_to_space_params->block_size; @@ -471,19 +471,19 @@ DNNReturnType ff_dnn_execute_model_native(const DNNModel* model) return DNN_SUCCESS; } -void ff_dnn_free_model_native(DNNModel** model) +void ff_dnn_free_model_native(DNNModel **model) { - ConvolutionalNetwork* network; - ConvolutionalParams* conv_params; + ConvolutionalNetwork *network; + ConvolutionalParams *conv_params; int32_t layer; if (*model) { - network = (ConvolutionalNetwork*)(*model)->model; + network = (ConvolutionalNetwork *)(*model)->model; for (layer = 0; layer < network->layers_num; ++layer){ av_freep(&network->layers[layer].output); if (network->layers[layer].type == CONV){ - conv_params = (ConvolutionalParams*)network->layers[layer].params; + conv_params = (ConvolutionalParams *)network->layers[layer].params; av_freep(&conv_params->kernel); av_freep(&conv_params->biases); } diff --git a/libavfilter/dnn_backend_native.h b/libavfilter/dnn_backend_native.h index 599c1302e2..adbb7088b4 100644 --- 
a/libavfilter/dnn_backend_native.h +++ b/libavfilter/dnn_backend_native.h @@ -29,12 +29,12 @@ #include "dnn_interface.h" -DNNModel* ff_dnn_load_model_native(const char* model_filename); +DNNModel *ff_dnn_load_model_native(const char *model_filename); -DNNModel* ff_dnn_load_default_model_native(DNNDefaultModel model_type); +DNNModel *ff_dnn_load_default_model_native(DNNDefaultModel model_type); -DNNReturnType ff_dnn_execute_model_native(const DNNModel* model); +DNNReturnType ff_dnn_execute_model_native(const DNNModel *model); -void ff_dnn_free_model_native(DNNModel** model); +void ff_dnn_free_model_native(DNNModel **model); #endif diff --git a/libavfilter/dnn_backend_tf.c b/libavfilter/dnn_backend_tf.c index 21516471c3..6307c794a5 100644 --- a/libavfilter/dnn_backend_tf.c +++ b/libavfilter/dnn_backend_tf.c @@ -31,24 +31,24 @@ #include typedef struct TFModel{ - TF_Graph* graph; - TF_Session* session; - TF_Status* status; + TF_Graph *graph; + TF_Session *session; + TF_Status *status; TF_Output input, output; - TF_Tensor* input_tensor; - DNNData* output_data; + TF_Tensor *input_tensor; + DNNData *output_data; } TFModel; -static void free_buffer(void* data, size_t length) +static void free_buffer(void *data, size_t length) { av_freep(&data); } -static TF_Buffer* read_graph(const char* model_filename) +static TF_Buffer *read_graph(const char *model_filename) { - TF_Buffer* graph_buf; - unsigned char* graph_data = NULL; - AVIOContext* model_file_context; + TF_Buffer *graph_buf; + unsigned char *graph_data = NULL; + AVIOContext *model_file_context; long size, bytes_read; if (avio_open(&model_file_context, model_filename, AVIO_FLAG_READ) < 0){ @@ -70,20 +70,20 @@ static TF_Buffer* read_graph(const char* model_filename) } graph_buf = TF_NewBuffer(); - graph_buf->data = (void*)graph_data; + graph_buf->data = (void *)graph_data; graph_buf->length = size; graph_buf->data_deallocator = free_buffer; return graph_buf; } -static DNNReturnType set_input_output_tf(void* model, DNNData* input, DNNData* output) +static DNNReturnType set_input_output_tf(void *model, DNNData *input, DNNData *output) { - TFModel* tf_model = (TFModel*)model; + TFModel *tf_model = (TFModel *)model; int64_t input_dims[] = {1, input->height, input->width, input->channels}; - TF_SessionOptions* sess_opts; - const TF_Operation* init_op = TF_GraphOperationByName(tf_model->graph, "init"); - TF_Tensor* output_tensor; + TF_SessionOptions *sess_opts; + const TF_Operation *init_op = TF_GraphOperationByName(tf_model->graph, "init"); + TF_Tensor *output_tensor; // Input operation should be named 'x' tf_model->input.oper = TF_GraphOperationByName(tf_model->graph, "x"); @@ -99,7 +99,7 @@ static DNNReturnType set_input_output_tf(void* model, DNNData* input, DNNData* o if (!tf_model->input_tensor){ return DNN_ERROR; } - input->data = (float*)TF_TensorData(tf_model->input_tensor); + input->data = (float *)TF_TensorData(tf_model->input_tensor); // Output operation should be named 'y' tf_model->output.oper = TF_GraphOperationByName(tf_model->graph, "y"); @@ -156,12 +156,12 @@ static DNNReturnType set_input_output_tf(void* model, DNNData* input, DNNData* o return DNN_SUCCESS; } -DNNModel* ff_dnn_load_model_tf(const char* model_filename) +DNNModel *ff_dnn_load_model_tf(const char *model_filename) { - DNNModel* model = NULL; - TFModel* tf_model = NULL; - TF_Buffer* graph_def; - TF_ImportGraphDefOptions* graph_opts; + DNNModel *model = NULL; + TFModel *tf_model = NULL; + TF_Buffer *graph_def; + TF_ImportGraphDefOptions *graph_opts; model = 
av_malloc(sizeof(DNNModel)); if (!model){ @@ -197,25 +197,25 @@ DNNModel* ff_dnn_load_model_tf(const char* model_filename) return NULL; } - model->model = (void*)tf_model; + model->model = (void *)tf_model; model->set_input_output = &set_input_output_tf; return model; } -static TF_Operation* add_pad_op(TFModel* tf_model, TF_Operation* input_op, int32_t pad) +static TF_Operation *add_pad_op(TFModel *tf_model, TF_Operation *input_op, int32_t pad) { - TF_OperationDescription* op_desc; - TF_Operation* op; - TF_Tensor* tensor; + TF_OperationDescription *op_desc; + TF_Operation *op; + TF_Tensor *tensor; TF_Output input; - int32_t* pads; + int32_t *pads; int64_t pads_shape[] = {4, 2}; op_desc = TF_NewOperation(tf_model->graph, "Const", "pads"); TF_SetAttrType(op_desc, "dtype", TF_INT32); tensor = TF_AllocateTensor(TF_INT32, pads_shape, 2, 4 * 2 * sizeof(int32_t)); - pads = (int32_t*)TF_TensorData(tensor); + pads = (int32_t *)TF_TensorData(tensor); pads[0] = 0; pads[1] = 0; pads[2] = pad; pads[3] = pad; pads[4] = pad; pads[5] = pad; @@ -246,11 +246,11 @@ static TF_Operation* add_pad_op(TFModel* tf_model, TF_Operation* input_op, int32 return op; } -static TF_Operation* add_const_op(TFModel* tf_model, const float* values, const int64_t* dims, int dims_len, const char* name) +static TF_Operation *add_const_op(TFModel *tf_model, const float *values, const int64_t *dims, int dims_len, const char *name) { int dim; - TF_OperationDescription* op_desc; - TF_Tensor* tensor; + TF_OperationDescription *op_desc; + TF_Tensor *tensor; size_t len; op_desc = TF_NewOperation(tf_model->graph, "Const", name); @@ -269,25 +269,25 @@ static TF_Operation* add_const_op(TFModel* tf_model, const float* values, const return TF_FinishOperation(op_desc, tf_model->status); } -static TF_Operation* add_conv_layers(TFModel* tf_model, const float** consts, const int64_t** consts_dims, - const int* consts_dims_len, const char** activations, - TF_Operation* input_op, int layers_num) +static TF_Operation* add_conv_layers(TFModel *tf_model, const float **consts, const int64_t **consts_dims, + const int *consts_dims_len, const char **activations, + TF_Operation *input_op, int layers_num) { int i; - TF_OperationDescription* op_desc; - TF_Operation* op; - TF_Operation* transpose_op; + TF_OperationDescription *op_desc; + TF_Operation *op; + TF_Operation *transpose_op; TF_Output input; int64_t strides[] = {1, 1, 1, 1}; - int32_t* transpose_perm; - TF_Tensor* tensor; + int32_t *transpose_perm; + TF_Tensor *tensor; int64_t transpose_perm_shape[] = {4}; char name_buffer[256]; op_desc = TF_NewOperation(tf_model->graph, "Const", "transpose_perm"); TF_SetAttrType(op_desc, "dtype", TF_INT32); tensor = TF_AllocateTensor(TF_INT32, transpose_perm_shape, 1, 4 * sizeof(int32_t)); - transpose_perm = (int32_t*)TF_TensorData(tensor); + transpose_perm = (int32_t *)TF_TensorData(tensor); transpose_perm[0] = 1; transpose_perm[1] = 2; transpose_perm[2] = 3; @@ -368,13 +368,13 @@ static TF_Operation* add_conv_layers(TFModel* tf_model, const float** consts, co return input_op; } -DNNModel* ff_dnn_load_default_model_tf(DNNDefaultModel model_type) +DNNModel *ff_dnn_load_default_model_tf(DNNDefaultModel model_type) { - DNNModel* model = NULL; - TFModel* tf_model = NULL; - TF_OperationDescription* op_desc; - TF_Operation* op; - TF_Operation* const_ops_buffer[6]; + DNNModel *model = NULL; + TFModel *tf_model = NULL; + TF_OperationDescription *op_desc; + TF_Operation *op; + TF_Operation *const_ops_buffer[6]; TF_Output input; int64_t input_shape[] = {1, -1, -1, 1}; @@ 
-460,16 +460,16 @@ DNNModel* ff_dnn_load_default_model_tf(DNNDefaultModel model_type) CLEANUP_ON_ERROR(tf_model, model); } - model->model = (void*)tf_model; + model->model = (void *)tf_model; model->set_input_output = &set_input_output_tf; return model; } -DNNReturnType ff_dnn_execute_model_tf(const DNNModel* model) +DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model) { - TFModel* tf_model = (TFModel*)model->model; - TF_Tensor* output_tensor; + TFModel *tf_model = (TFModel *)model->model; + TF_Tensor *output_tensor; TF_SessionRun(tf_model->session, NULL, &tf_model->input, &tf_model->input_tensor, 1, @@ -489,12 +489,12 @@ DNNReturnType ff_dnn_execute_model_tf(const DNNModel* model) } } -void ff_dnn_free_model_tf(DNNModel** model) +void ff_dnn_free_model_tf(DNNModel **model) { - TFModel* tf_model; + TFModel *tf_model; if (*model){ - tf_model = (TFModel*)(*model)->model; + tf_model = (TFModel *)(*model)->model; if (tf_model->graph){ TF_DeleteGraph(tf_model->graph); } diff --git a/libavfilter/dnn_backend_tf.h b/libavfilter/dnn_backend_tf.h index 08e4a568b3..357a82d948 100644 --- a/libavfilter/dnn_backend_tf.h +++ b/libavfilter/dnn_backend_tf.h @@ -29,12 +29,12 @@ #include "dnn_interface.h" -DNNModel* ff_dnn_load_model_tf(const char* model_filename); +DNNModel *ff_dnn_load_model_tf(const char *model_filename); -DNNModel* ff_dnn_load_default_model_tf(DNNDefaultModel model_type); +DNNModel *ff_dnn_load_default_model_tf(DNNDefaultModel model_type); -DNNReturnType ff_dnn_execute_model_tf(const DNNModel* model); +DNNReturnType ff_dnn_execute_model_tf(const DNNModel *model); -void ff_dnn_free_model_tf(DNNModel** model); +void ff_dnn_free_model_tf(DNNModel **model); #endif diff --git a/libavfilter/dnn_espcn.h b/libavfilter/dnn_espcn.h index 315ecf031d..a0dd61cd0d 100644 --- a/libavfilter/dnn_espcn.h +++ b/libavfilter/dnn_espcn.h @@ -5398,7 +5398,7 @@ static const long int espcn_conv3_bias_dims[] = { 4 }; -static const float* espcn_consts[] = { +static const float *espcn_consts[] = { espcn_conv1_kernel, espcn_conv1_bias, espcn_conv2_kernel, @@ -5407,7 +5407,7 @@ static const float* espcn_consts[] = { espcn_conv3_bias }; -static const long int* espcn_consts_dims[] = { +static const long int *espcn_consts_dims[] = { espcn_conv1_kernel_dims, espcn_conv1_bias_dims, espcn_conv2_kernel_dims, @@ -5429,7 +5429,7 @@ static const char espcn_tanh[] = "Tanh"; static const char espcn_sigmoid[] = "Sigmoid"; -static const char* espcn_activations[] = { +static const char *espcn_activations[] = { espcn_tanh, espcn_tanh, espcn_sigmoid diff --git a/libavfilter/dnn_interface.c b/libavfilter/dnn_interface.c index 87c90526be..ca7d6d1ea5 100644 --- a/libavfilter/dnn_interface.c +++ b/libavfilter/dnn_interface.c @@ -28,9 +28,9 @@ #include "dnn_backend_tf.h" #include "libavutil/mem.h" -DNNModule* ff_get_dnn_module(DNNBackendType backend_type) +DNNModule *ff_get_dnn_module(DNNBackendType backend_type) { - DNNModule* dnn_module; + DNNModule *dnn_module; dnn_module = av_malloc(sizeof(DNNModule)); if(!dnn_module){ diff --git a/libavfilter/dnn_interface.h b/libavfilter/dnn_interface.h index 6b820d1d5b..a69717ae62 100644 --- a/libavfilter/dnn_interface.h +++ b/libavfilter/dnn_interface.h @@ -33,31 +33,31 @@ typedef enum {DNN_NATIVE, DNN_TF} DNNBackendType; typedef enum {DNN_SRCNN, DNN_ESPCN} DNNDefaultModel; typedef struct DNNData{ - float* data; + float *data; int width, height, channels; } DNNData; typedef struct DNNModel{ // Stores model that can be different for different backends. 
- void* model; + void *model; // Sets model input and output, while allocating additional memory for intermediate calculations. // Should be called at least once before model execution. - DNNReturnType (*set_input_output)(void* model, DNNData* input, DNNData* output); + DNNReturnType (*set_input_output)(void *model, DNNData *input, DNNData *output); } DNNModel; // Stores pointers to functions for loading, executing, freeing DNN models for one of the backends. typedef struct DNNModule{ // Loads model and parameters from given file. Returns NULL if it is not possible. - DNNModel* (*load_model)(const char* model_filename); + DNNModel *(*load_model)(const char *model_filename); // Loads one of the default models - DNNModel* (*load_default_model)(DNNDefaultModel model_type); + DNNModel *(*load_default_model)(DNNDefaultModel model_type); // Executes model with specified input and output. Returns DNN_ERROR otherwise. - DNNReturnType (*execute_model)(const DNNModel* model); + DNNReturnType (*execute_model)(const DNNModel *model); // Frees memory allocated for model. - void (*free_model)(DNNModel** model); + void (*free_model)(DNNModel **model); } DNNModule; // Initializes DNNModule depending on chosen backend. -DNNModule* ff_get_dnn_module(DNNBackendType backend_type); +DNNModule *ff_get_dnn_module(DNNBackendType backend_type); #endif diff --git a/libavfilter/dnn_srcnn.h b/libavfilter/dnn_srcnn.h index 7ec11654b3..26143654b8 100644 --- a/libavfilter/dnn_srcnn.h +++ b/libavfilter/dnn_srcnn.h @@ -2110,7 +2110,7 @@ static const long int srcnn_conv3_bias_dims[] = { 1 }; -static const float* srcnn_consts[] = { +static const float *srcnn_consts[] = { srcnn_conv1_kernel, srcnn_conv1_bias, srcnn_conv2_kernel, @@ -2119,7 +2119,7 @@ static const float* srcnn_consts[] = { srcnn_conv3_bias }; -static const long int* srcnn_consts_dims[] = { +static const long int *srcnn_consts_dims[] = { srcnn_conv1_kernel_dims, srcnn_conv1_bias_dims, srcnn_conv2_kernel_dims, @@ -2139,7 +2139,7 @@ static const int srcnn_consts_dims_len[] = { static const char srcnn_relu[] = "Relu"; -static const char* srcnn_activations[] = { +static const char *srcnn_activations[] = { srcnn_relu, srcnn_relu, srcnn_relu diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c index f3ca9a09a8..944a0e28e7 100644 --- a/libavfilter/vf_sr.c +++ b/libavfilter/vf_sr.c @@ -39,13 +39,13 @@ typedef struct SRContext { const AVClass *class; SRModel model_type; - char* model_filename; + char *model_filename; DNNBackendType backend_type; - DNNModule* dnn_module; - DNNModel* model; + DNNModule *dnn_module; + DNNModel *model; DNNData input, output; int scale_factor; - struct SwsContext* sws_context; + struct SwsContext *sws_context; int sws_slice_h; } SRContext; @@ -67,9 +67,9 @@ static const AVOption sr_options[] = { AVFILTER_DEFINE_CLASS(sr); -static av_cold int init(AVFilterContext* context) +static av_cold int init(AVFilterContext *context) { - SRContext* sr_context = context->priv; + SRContext *sr_context = context->priv; sr_context->dnn_module = ff_get_dnn_module(sr_context->backend_type); if (!sr_context->dnn_module){ @@ -98,12 +98,12 @@ static av_cold int init(AVFilterContext* context) return 0; } -static int query_formats(AVFilterContext* context) +static int query_formats(AVFilterContext *context) { const enum AVPixelFormat pixel_formats[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P, AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE}; - AVFilterFormats* formats_list; + AVFilterFormats *formats_list; 
formats_list = ff_make_format_list(pixel_formats); if (!formats_list){ @@ -113,11 +113,11 @@ static int query_formats(AVFilterContext* context) return ff_set_common_formats(context, formats_list); } -static int config_props(AVFilterLink* inlink) +static int config_props(AVFilterLink *inlink) { - AVFilterContext* context = inlink->dst; - SRContext* sr_context = context->priv; - AVFilterLink* outlink = context->outputs[0]; + AVFilterContext *context = inlink->dst; + SRContext *sr_context = context->priv; + AVFilterLink *outlink = context->outputs[0]; DNNReturnType result; int sws_src_h, sws_src_w, sws_dst_h, sws_dst_w; @@ -202,18 +202,18 @@ static int config_props(AVFilterLink* inlink) } typedef struct ThreadData{ - uint8_t* data; + uint8_t *data; int data_linesize, height, width; } ThreadData; -static int uint8_to_float(AVFilterContext* context, void* arg, int jobnr, int nb_jobs) +static int uint8_to_float(AVFilterContext *context, void *arg, int jobnr, int nb_jobs) { - SRContext* sr_context = context->priv; - const ThreadData* td = arg; + SRContext *sr_context = context->priv; + const ThreadData *td = arg; const int slice_start = (td->height * jobnr ) / nb_jobs; const int slice_end = (td->height * (jobnr + 1)) / nb_jobs; - const uint8_t* src = td->data + slice_start * td->data_linesize; - float* dst = sr_context->input.data + slice_start * td->width; + const uint8_t *src = td->data + slice_start * td->data_linesize; + float *dst = sr_context->input.data + slice_start * td->width; int y, x; for (y = slice_start; y < slice_end; ++y){ @@ -227,14 +227,14 @@ static int uint8_to_float(AVFilterContext* context, void* arg, int jobnr, int nb return 0; } -static int float_to_uint8(AVFilterContext* context, void* arg, int jobnr, int nb_jobs) +static int float_to_uint8(AVFilterContext *context, void *arg, int jobnr, int nb_jobs) { - SRContext* sr_context = context->priv; - const ThreadData* td = arg; + SRContext *sr_context = context->priv; + const ThreadData *td = arg; const int slice_start = (td->height * jobnr ) / nb_jobs; const int slice_end = (td->height * (jobnr + 1)) / nb_jobs; - const float* src = sr_context->output.data + slice_start * td->width; - uint8_t* dst = td->data + slice_start * td->data_linesize; + const float *src = sr_context->output.data + slice_start * td->width; + uint8_t *dst = td->data + slice_start * td->data_linesize; int y, x; for (y = slice_start; y < slice_end; ++y){ @@ -248,12 +248,12 @@ static int float_to_uint8(AVFilterContext* context, void* arg, int jobnr, int nb return 0; } -static int filter_frame(AVFilterLink* inlink, AVFrame* in) +static int filter_frame(AVFilterLink *inlink, AVFrame *in) { - AVFilterContext* context = inlink->dst; - SRContext* sr_context = context->priv; - AVFilterLink* outlink = context->outputs[0]; - AVFrame* out = ff_get_video_buffer(outlink, outlink->w, outlink->h); + AVFilterContext *context = inlink->dst; + SRContext *sr_context = context->priv; + AVFilterLink *outlink = context->outputs[0]; + AVFrame *out = ff_get_video_buffer(outlink, outlink->w, outlink->h); ThreadData td; int nb_threads; DNNReturnType dnn_result; @@ -307,9 +307,9 @@ static int filter_frame(AVFilterLink* inlink, AVFrame* in) return ff_filter_frame(outlink, out); } -static av_cold void uninit(AVFilterContext* context) +static av_cold void uninit(AVFilterContext *context) { - SRContext* sr_context = context->priv; + SRContext *sr_context = context->priv; if (sr_context->dnn_module){ (sr_context->dnn_module->free_model)(&sr_context->model); -- 2.14.1
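For reference, here is a hedged sketch of the calling sequence that dnn_interface.h and vf_sr.c above imply for a user of the DNN module: obtain a DNNModule for a backend, load a default model, bind input/output, execute, and free. The helper name run_default_srcnn() is invented, and the error handling plus the uint8/float frame conversion done by vf_sr.c are simplified; this is not code from either patch.

/* Illustrative sketch of the calling sequence implied by dnn_interface.h and
 * vf_sr.c above; simplified and not taken verbatim from the patches. */
#include "libavutil/error.h"
#include "libavutil/mem.h"
#include "dnn_interface.h"

static int run_default_srcnn(int width, int height)
{
    DNNModule *dnn_module;
    DNNModel *model;
    DNNData input, output;

    dnn_module = ff_get_dnn_module(DNN_NATIVE);
    if (!dnn_module)
        return AVERROR(ENOMEM);

    /* With patch 1/2 the default weights are generated on the fly,
     * so no external model file is needed. */
    model = (dnn_module->load_default_model)(DNN_SRCNN);
    if (!model){
        av_freep(&dnn_module);
        return AVERROR(EINVAL);
    }

    input.width    = width;
    input.height   = height;
    input.channels = 1;
    if ((model->set_input_output)(model->model, &input, &output) != DNN_SUCCESS){
        (dnn_module->free_model)(&model);
        av_freep(&dnn_module);
        return AVERROR(EINVAL);
    }

    /* ... fill input.data with the luma plane converted to float ... */

    if ((dnn_module->execute_model)(model) != DNN_SUCCESS){
        (dnn_module->free_model)(&model);
        av_freep(&dnn_module);
        return AVERROR(EINVAL);
    }

    /* ... read the upscaled result back from output.data ... */

    (dnn_module->free_model)(&model);
    av_freep(&dnn_module);
    return 0;
}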