From patchwork Sun May 31 06:24:10 2020
From: "Zhang, Guiyong"
To: "ffmpeg-devel@ffmpeg.org"
Date: Sun, 31 May 2020 06:24:10 +0000
Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A5370D@SHASXM03.verisilicon.com>
Subject: [FFmpeg-devel] [PATCH v2 1/8] avutil/hwcontext: Add VPE implementation

VPE (VeriSilicon Platform Engine) is VeriSilicon's hardware engine for multi-format video encoding and decoding. It is used with the VPE hwaccel codec API and library to initialize and use a VPE device within the libavutil hwcontext framework.
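As context for the hwcontext changes below, a minimal sketch (not part of this patch) of how an application could open a VPE device through the generic libavutil hwcontext API once this series is applied. The device node and the "priority"/"fbloglevel" keys simply mirror the defaults and options parsed by vpe_device_create() in this patch; error handling is reduced to the essentials.

#include <libavutil/hwcontext.h>
#include <libavutil/dict.h>

static AVBufferRef *open_vpe_device(void)
{
    AVBufferRef *hw_device = NULL;
    AVDictionary *opts     = NULL;
    int err;

    /* "priority" may be "live" or "vod"; "fbloglevel" sets the VPE log level */
    av_dict_set(&opts, "priority", "vod", 0);
    av_dict_set(&opts, "fbloglevel", "0", 0);

    /* "/dev/transcoder0" is the default node vpe_device_create() falls back to
     * when no explicit device path is supplied */
    err = av_hwdevice_ctx_create(&hw_device, AV_HWDEVICE_TYPE_VPE,
                                 "/dev/transcoder0", opts, 0);
    av_dict_free(&opts);

    return err < 0 ? NULL : hw_device;
}

From the ffmpeg command line the equivalent would use the generic device-initialisation syntax, e.g. -init_hw_device vpe=vpe0:/dev/transcoder0,priority=vod (illustrative invocation only).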
Signed-off-by: Qin.Wang --- configure | 3 + libavutil/Makefile | 3 + libavutil/hwcontext.c | 4 + libavutil/hwcontext.h | 1 + libavutil/hwcontext_internal.h | 1 + libavutil/hwcontext_vpe.c | 402 +++++++++++++++++++++++++++++++++++++++++ libavutil/hwcontext_vpe.h | 52 ++++++ libavutil/pixdesc.c | 4 + libavutil/pixfmt.h | 7 + libavutil/version.h | 2 +- 10 files changed, 478 insertions(+), 1 deletion(-) create mode 100644 libavutil/hwcontext_vpe.c create mode 100644 libavutil/hwcontext_vpe.h diff --git a/configure b/configure index 8569a60..7f0f843 100755 --- a/configure +++ b/configure @@ -322,6 +322,7 @@ External library support: --enable-vulkan enable Vulkan code [no] --disable-xlib disable xlib [autodetect] --disable-zlib disable zlib [autodetect] + --enable-vpe enable vpe codec [no] The following libraries provide various hardware acceleration features: --disable-amf disable AMF video encoding code [autodetect] @@ -1822,6 +1823,7 @@ EXTERNAL_LIBRARY_LIST=" opengl pocketsphinx vapoursynth + vpe " HWACCEL_AUTODETECT_LIBRARY_LIST=" @@ -6476,6 +6478,7 @@ enabled rkmpp && { require_pkg_config rkmpp rockchip_mpp rockchip/r die "ERROR: rkmpp requires --enable-libdrm"; } } enabled vapoursynth && require_pkg_config vapoursynth "vapoursynth-script >= 42" VSScript.h vsscript_init +enabled vpe && require_pkg_config libvpi libvpi "vpe/vpi_types.h vpe/vpi_api.h" vpi_create if enabled gcrypt; then diff --git a/libavutil/Makefile b/libavutil/Makefile index 9b08372..5809b38 100644 --- a/libavutil/Makefile +++ b/libavutil/Makefile @@ -46,6 +46,7 @@ HEADERS = adler32.h \ hwcontext_videotoolbox.h \ hwcontext_vdpau.h \ hwcontext_vulkan.h \ + hwcontext_vpe.h \ imgutils.h \ intfloat.h \ intreadwrite.h \ @@ -184,6 +185,7 @@ OBJS-$(CONFIG_VAAPI) += hwcontext_vaapi.o OBJS-$(CONFIG_VIDEOTOOLBOX) += hwcontext_videotoolbox.o OBJS-$(CONFIG_VDPAU) += hwcontext_vdpau.o OBJS-$(CONFIG_VULKAN) += hwcontext_vulkan.o +OBJS-$(CONFIG_VPE) += hwcontext_vpe.o OBJS += $(COMPAT_OBJS:%=../compat/%) @@ -201,6 +203,7 @@ SKIPHEADERS-$(CONFIG_VAAPI) += hwcontext_vaapi.h SKIPHEADERS-$(CONFIG_VIDEOTOOLBOX) += hwcontext_videotoolbox.h SKIPHEADERS-$(CONFIG_VDPAU) += hwcontext_vdpau.h SKIPHEADERS-$(CONFIG_VULKAN) += hwcontext_vulkan.h +SKIPHEADERS-$(CONFIG_VPE) += hwcontext_vpe.h TESTPROGS = adler32 \ aes \ diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c index d13d0f7..5b12013 100644 --- a/libavutil/hwcontext.c +++ b/libavutil/hwcontext.c @@ -62,6 +62,9 @@ static const HWContextType * const hw_table[] = { #if CONFIG_VULKAN &ff_hwcontext_type_vulkan, #endif +#if CONFIG_VPE + &ff_hwcontext_type_vpe, +#endif NULL, }; @@ -77,6 +80,7 @@ static const char *const hw_type_names[] = { [AV_HWDEVICE_TYPE_VIDEOTOOLBOX] = "videotoolbox", [AV_HWDEVICE_TYPE_MEDIACODEC] = "mediacodec", [AV_HWDEVICE_TYPE_VULKAN] = "vulkan", + [AV_HWDEVICE_TYPE_VPE] = "vpe", }; enum AVHWDeviceType av_hwdevice_find_type_by_name(const char *name) diff --git a/libavutil/hwcontext.h b/libavutil/hwcontext.h index 04d19d8..62d948e 100644 --- a/libavutil/hwcontext.h +++ b/libavutil/hwcontext.h @@ -37,6 +37,7 @@ enum AVHWDeviceType { AV_HWDEVICE_TYPE_OPENCL, AV_HWDEVICE_TYPE_MEDIACODEC, AV_HWDEVICE_TYPE_VULKAN, + AV_HWDEVICE_TYPE_VPE, }; typedef struct AVHWDeviceInternal AVHWDeviceInternal; diff --git a/libavutil/hwcontext_internal.h b/libavutil/hwcontext_internal.h index e626649..3190a3a 100644 --- a/libavutil/hwcontext_internal.h +++ b/libavutil/hwcontext_internal.h @@ -174,5 +174,6 @@ extern const HWContextType ff_hwcontext_type_vdpau; extern const HWContextType 
ff_hwcontext_type_videotoolbox; extern const HWContextType ff_hwcontext_type_mediacodec; extern const HWContextType ff_hwcontext_type_vulkan; +extern const HWContextType ff_hwcontext_type_vpe; #endif /* AVUTIL_HWCONTEXT_INTERNAL_H */ diff --git a/libavutil/hwcontext_vpe.c b/libavutil/hwcontext_vpe.c new file mode 100644 index 0000000..7b218e3 --- /dev/null +++ b/libavutil/hwcontext_vpe.c @@ -0,0 +1,402 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "buffer.h" +#include "pixfmt.h" +#include "pixdesc.h" +#include "hwcontext.h" +#include "hwcontext_internal.h" +#include "hwcontext_vpe.h" +#include "libavutil/opt.h" + +typedef struct VpeDevicePriv { + VpiSysInfo *sys_info; +} VpeDevicePriv; + +typedef struct VpeDeviceContext { + VpiMediaProc *media_proc; +} VpeDeviceContext; + +static const enum AVPixelFormat supported_sw_formats[] = { + AV_PIX_FMT_NV12, + AV_PIX_FMT_P010LE, + AV_PIX_FMT_YUV420P, + AV_PIX_FMT_YUV422P, + AV_PIX_FMT_NV21, + AV_PIX_FMT_YUV420P10LE, + AV_PIX_FMT_YUV420P10BE, + AV_PIX_FMT_YUV422P10LE, + AV_PIX_FMT_YUV422P10BE, + AV_PIX_FMT_P010BE, + AV_PIX_FMT_YUV444P, + AV_PIX_FMT_RGB24, + AV_PIX_FMT_BGR24, + AV_PIX_FMT_ARGB, + AV_PIX_FMT_RGBA, + AV_PIX_FMT_ABGR, + AV_PIX_FMT_BGRA, +}; + +static const enum AVPixelFormat supported_hw_formats[] = { + AV_PIX_FMT_VPE, +}; + +static int vpe_frames_get_constraints(AVHWDeviceContext *ctx, + const void *hwconfig, + AVHWFramesConstraints *constraints) +{ + int i; + + constraints->valid_sw_formats = + av_malloc_array(FF_ARRAY_ELEMS(supported_sw_formats) + 1, + sizeof(*constraints->valid_sw_formats)); + if (!constraints->valid_sw_formats) + return AVERROR(ENOMEM); + + for (i = 0; i < FF_ARRAY_ELEMS(supported_sw_formats); i++) + constraints->valid_sw_formats[i] = supported_sw_formats[i]; + constraints->valid_sw_formats[FF_ARRAY_ELEMS(supported_sw_formats)] = + AV_PIX_FMT_NONE; + + constraints->valid_hw_formats = + av_malloc_array(2, sizeof(*constraints->valid_hw_formats)); + if (!constraints->valid_hw_formats) { + return AVERROR(ENOMEM); + } + + constraints->valid_hw_formats[0] = AV_PIX_FMT_VPE; + constraints->valid_hw_formats[1] = AV_PIX_FMT_NONE; + + constraints->min_width = 144; + constraints->min_height = 144; + constraints->max_width = 4096; + constraints->max_height = 4096; + + return 0; +} + +static int vpe_transfer_get_formats(AVHWFramesContext *ctx, + enum AVHWFrameTransferDirection dir, + enum AVPixelFormat **formats) +{ + enum AVPixelFormat *fmts; + + fmts = av_malloc_array(2, sizeof(*fmts)); + if (!fmts) + return AVERROR(ENOMEM); + + fmts[0] = ctx->sw_format; + fmts[1] = AV_PIX_FMT_NONE; + + *formats = fmts; + + return 0; +} + +static int vpe_transfer_data_from(AVHWFramesContext *ctx, AVFrame *dst, + const AVFrame *src) +{ + 
VpeDeviceContext *priv = ctx->device_ctx->internal->priv; + VpiFrame *in_frame; + VpiFrame out_frame; + VpiCtrlCmdParam cmd_param; + + VpiPicInfo *pic_info; + int pp_index; + int i, ret; + + in_frame = (VpiFrame *)src->data[0]; + for (i = 1; i < PIC_INDEX_MAX_NUMBER; i++) { + pic_info = (VpiPicInfo *)src->buf[i]->data; + if (pic_info->enabled == 1) { + pp_index = i; + break; + } + } + if (i == PIC_INDEX_MAX_NUMBER) { + return AVERROR_EXTERNAL; + } + + cmd_param.cmd = VPI_CMD_HWDW_SET_INDEX; + cmd_param.data = (void *)&pp_index; + priv->media_proc->vpi->control(priv->media_proc->ctx, + (void *)&cmd_param, NULL); + + out_frame.data[0] = dst->data[0]; + out_frame.data[1] = dst->data[1]; + out_frame.data[2] = dst->data[2]; + out_frame.linesize[0] = dst->linesize[0]; + out_frame.linesize[1] = dst->linesize[1]; + out_frame.linesize[2] = dst->linesize[2]; + ret = priv->media_proc->vpi->process(priv->media_proc->ctx, + in_frame, &out_frame); + + return ret; +} + +static void vpe_buffer_free(void *opaque, uint8_t *data) +{ + AVHWFramesContext *ctx = opaque; + AVVpeDeviceContext *device_hwctx = ctx->device_ctx->hwctx; + VpiCtrlCmdParam cmd_param; + + cmd_param.cmd = VPI_CMD_FREE_FRAME_BUFFER; + cmd_param.data = (void *)data; + device_hwctx->func->control(&device_hwctx->device, &cmd_param, NULL); +} + +static AVBufferRef *vpe_pool_alloc(void *opaque, int size) +{ + AVHWFramesContext *ctx = opaque; + AVVpeDeviceContext *device_hwctx = ctx->device_ctx->hwctx; + VpiFrame *v_frame = NULL; + VpiCtrlCmdParam cmd_param; + AVBufferRef *ret; + + cmd_param.cmd = VPI_CMD_GET_FRAME_BUFFER; + device_hwctx->func->control(&device_hwctx->device, &cmd_param, (void *)&v_frame); + if (!v_frame) { + return NULL; + } + ret = av_buffer_create((uint8_t*)v_frame, size, + vpe_buffer_free, ctx, 0); + + return ret; +} + +static int vpe_frames_init(AVHWFramesContext *hwfc) +{ + AVVpeDeviceContext *device_hwctx = hwfc->device_ctx->hwctx; + AVVpeFramesContext *frame_hwctx = hwfc->hwctx; + VpiCtrlCmdParam cmd_param; + int size = 0; + int picinfo_size = 0; + int i; + + for (i = 0; i < FF_ARRAY_ELEMS(supported_sw_formats); i++) { + if (hwfc->sw_format == supported_sw_formats[i]) { + break; + } + } + if (i == FF_ARRAY_ELEMS(supported_sw_formats)) { + av_log(hwfc, AV_LOG_ERROR, "Pixel format '%s' is not supported\n", + av_get_pix_fmt_name(hwfc->sw_format)); + return AVERROR(ENOSYS); + } + + if (!hwfc->pool) { + cmd_param.cmd = VPI_CMD_GET_VPEFRAME_SIZE; + device_hwctx->func->control(&device_hwctx->device, &cmd_param, (void *)&size); + if (size == 0) { + return AVERROR_EXTERNAL; + } + frame_hwctx->frame = (VpiFrame *)av_mallocz(size); + if (!frame_hwctx->frame) { + return AVERROR(ENOMEM); + } + frame_hwctx->frame_size = size; + + cmd_param.cmd = VPI_CMD_GET_PICINFO_SIZE; + device_hwctx->func->control(&device_hwctx->device, &cmd_param, (void *)&picinfo_size); + if (picinfo_size == 0) { + return AVERROR_EXTERNAL; + } + frame_hwctx->pic_info_size = picinfo_size; + + hwfc->internal->pool_internal = + av_buffer_pool_init2(size, hwfc, vpe_pool_alloc, NULL); + if (!hwfc->internal->pool_internal) + return AVERROR(ENOMEM); + } + return 0; +} + +static int vpe_get_buffer(AVHWFramesContext *hwfc, AVFrame *frame) +{ + AVVpeFramesContext *vpeframe_ctx = hwfc->hwctx; + VpiPicInfo *pic_info; + int i; + + frame->buf[0] = av_buffer_pool_get(hwfc->pool); + if (!frame->buf[0]) + return AVERROR(ENOMEM); + + frame->data[0] = frame->buf[0]->data; + memset(frame->data[0], 0, vpeframe_ctx->frame_size); + + /* suppor max 4 channel low res buffer */ + for (i = 
1; i < PIC_INDEX_MAX_NUMBER; i++) { + frame->buf[i] = av_buffer_alloc(vpeframe_ctx->pic_info_size); + if (frame->buf[i] == 0) { + return AVERROR(ENOMEM); + } + } + /* always enable the first channel */ + pic_info = (VpiPicInfo *)frame->buf[1]->data; + pic_info->enabled = 1; + + frame->format = AV_PIX_FMT_VPE; + frame->width = hwfc->width; + frame->height = hwfc->height; + return 0; +} + +static void vpe_device_uninit(AVHWDeviceContext *device_ctx) +{ + VpeDeviceContext *priv = device_ctx->internal->priv; + + if (priv->media_proc) { + if (priv->media_proc->ctx) { + priv->media_proc->vpi->close(priv->media_proc->ctx); + vpi_destroy(priv->media_proc->ctx); + priv->media_proc->ctx = NULL; + } + vpi_freep(&priv->media_proc); + } +} + +static int vpe_device_init(AVHWDeviceContext *device_ctx) +{ + VpeDeviceContext *priv = device_ctx->internal->priv; + int ret = 0; + + ret = vpi_get_media_proc_struct(&priv->media_proc); + if (ret || !priv->media_proc) { + return AVERROR(ENOMEM); + } + + ret = vpi_create(&priv->media_proc->ctx, &priv->media_proc->vpi, + HWDOWNLOAD_VPE); + if (ret != 0) + return AVERROR_EXTERNAL; + + ret = priv->media_proc->vpi->init(priv->media_proc->ctx, NULL); + if (ret != 0) + return AVERROR_EXTERNAL; + + return 0; +} + +static void vpe_device_free(AVHWDeviceContext *device_ctx) +{ + AVVpeDeviceContext *hwctx = device_ctx->hwctx; + VpeDevicePriv *priv = (VpeDevicePriv *)device_ctx->user_opaque; + + vpi_destroy(priv->sys_info); + + vpi_freep(&priv->sys_info); + av_freep(&priv); + + if (hwctx->device >= 0) { + close(hwctx->device); + } +} + +static int vpe_device_create(AVHWDeviceContext *device_ctx, const char *device, + AVDictionary *opts, int flags) +{ + AVVpeDeviceContext *hwctx = device_ctx->hwctx; + AVDictionaryEntry *opt; + VpeDevicePriv *priv; + const char *path; + int ret; + + priv = av_mallocz(sizeof(*priv)); + if (!priv) + return AVERROR(ENOMEM); + ret = vpi_get_sys_info_struct(&priv->sys_info); + if (ret || !priv->sys_info) { + av_freep(&priv); + return AVERROR(ENOMEM); + } + + device_ctx->user_opaque = priv; + device_ctx->free = vpe_device_free; + + // Try to open the device as a transcode path. + // Default to using the first node if the user did not + // supply a path. + path = device ? 
device : "/dev/transcoder0"; + hwctx->device = open(path, O_RDWR); + if (hwctx->device == -1) { + av_log(device_ctx, AV_LOG_ERROR, "failed to open hw device\n"); + return AVERROR(errno); + } + + priv->sys_info->device = hwctx->device; + priv->sys_info->device_name = av_strdup(path); + if (!priv->sys_info->device_name) { + return AVERROR(ENOMEM); + } + priv->sys_info->priority = VPE_TASK_VOD; + priv->sys_info->sys_log_level = 0; + + if (opts) { + opt = av_dict_get(opts, "priority", NULL, 0); + if (opt) { + if (!strcmp(opt->value, "live")) { + priv->sys_info->priority = VPE_TASK_LIVE; + } else if (!strcmp(opt->value, "vod")) { + priv->sys_info->priority = VPE_TASK_VOD; + } else { + av_log(device_ctx, AV_LOG_ERROR, "Unknow priority : %s\n", + opt->value); + return AVERROR_INVALIDDATA; + } + } + + opt = av_dict_get(opts, "fbloglevel", NULL, 0); + if (opt) { + priv->sys_info->sys_log_level = atoi(opt->value); + } + } + + if (vpi_create(&priv->sys_info, &hwctx->func, HWCONTEXT_VPE) != 0) { + return AVERROR_EXTERNAL; + } + + return 0; +} + +const HWContextType ff_hwcontext_type_vpe = { + .type = AV_HWDEVICE_TYPE_VPE, + .name = "VPE", + + .device_hwctx_size = sizeof(AVVpeDeviceContext), + .device_priv_size = sizeof(VpeDeviceContext), + .frames_hwctx_size = sizeof(AVVpeFramesContext), + + .device_create = vpe_device_create, + .device_init = vpe_device_init, + .device_uninit = vpe_device_uninit, + .frames_get_constraints = vpe_frames_get_constraints, + .frames_init = vpe_frames_init, + .frames_get_buffer = vpe_get_buffer, + .transfer_get_formats = vpe_transfer_get_formats, + .transfer_data_from = vpe_transfer_data_from, + + .pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_NONE }, +}; diff --git a/libavutil/hwcontext_vpe.h b/libavutil/hwcontext_vpe.h new file mode 100644 index 0000000..6fd5291 --- /dev/null +++ b/libavutil/hwcontext_vpe.h @@ -0,0 +1,52 @@ +/* + * Verisilicon VPI Video codec + * Copyright (c) 2019 VeriSilicon, Inc. + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef AVUTIL_HWCONTEXT_VPE_H +#define AVUTIL_HWCONTEXT_VPE_H + +#include +#include + +/** + * @file + * An API-specific header for AV_HWDEVICE_TYPE_VPE. 
+ */ + +/** + * This struct is allocated as AVHWDeviceContext.hwctx + * It will save some device level info + */ + +typedef struct AVVpeDeviceContext { + int device; + VpiApi *func; +} AVVpeDeviceContext; + +/** + * This struct is allocated as AVHWFramesContext.hwctx + * It will save some frame level info + */ +typedef struct AVVpeFramesContext { + VpiFrame *frame; + int frame_size; + int pic_info_size; +} AVVpeFramesContext; +#endif /* AVUTIL_HWCONTEXT_VPE_H */ diff --git a/libavutil/pixdesc.c b/libavutil/pixdesc.c index 9d61c52..f2dc9fa 100644 --- a/libavutil/pixdesc.c +++ b/libavutil/pixdesc.c @@ -2371,6 +2371,10 @@ static const AVPixFmtDescriptor av_pix_fmt_descriptors[AV_PIX_FMT_NB] = { .name = "vulkan", .flags = AV_PIX_FMT_FLAG_HWACCEL, }, + [AV_PIX_FMT_VPE] = { + .name = "vpe", + .flags = AV_PIX_FMT_FLAG_HWACCEL, + }, }; #if FF_API_PLUS1_MINUS1 FF_ENABLE_DEPRECATION_WARNINGS diff --git a/libavutil/pixfmt.h b/libavutil/pixfmt.h index 1c625cf..252261e 100644 --- a/libavutil/pixfmt.h +++ b/libavutil/pixfmt.h @@ -358,6 +358,13 @@ enum AVPixelFormat { AV_PIX_FMT_Y210BE, ///< packed YUV 4:2:2 like YUYV422, 20bpp, data in the high bits, big-endian AV_PIX_FMT_Y210LE, ///< packed YUV 4:2:2 like YUYV422, 20bpp, data in the high bits, little-endian + /** + * VPE hardware images. + * + * data[0] points to a VpiFrame + */ + AV_PIX_FMT_VPE, + AV_PIX_FMT_NB ///< number of pixel formats, DO NOT USE THIS if you want to link with shared libav* because the number of formats might differ between versions }; diff --git a/libavutil/version.h b/libavutil/version.h index 7acecf5..5d5cec6 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 56 -#define LIBAVUTIL_VERSION_MINOR 49 +#define LIBAVUTIL_VERSION_MINOR 50 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ From patchwork Sun May 31 06:26:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20033 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 35B6B449D3F for ; Sun, 31 May 2020 09:26:37 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 0345468826C; Sun, 31 May 2020 09:26:37 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (unknown [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id B14B468803D for ; Sun, 31 May 2020 09:26:29 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906385; h=from:subject:to:date:message-id; bh=gqaV617v+FYbifjE0gkxgDtQN44bVnOnLIcIn0gvWI0=; b=fZyLWaJ0eLVYY3mMVan0qgY742un30chGRGcb5AczfyzM8ufHwMvQCLhhHeri6+LPBX9lk6gjn4 1CPx4zImVVjiPBaMhjYLIfEGKfvsLiCGqeY7UeDM9GG3TEqHEBKuikaq2RxPiJRPIDqxms2abqlAl moqV/A4Ll1kZ4qduxNQ= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi id 14.03.0123.003; Sun, 31 May 2020 14:26:24 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 2/8] avcodec/h264dec: Add h264 VPE HW decoder Thread-Index: AdY3FGfuWILhYoHhQDejtMPDGNVeuw== Date: Sun, 31 May 2020 06:26:23 +0000 
Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A53722@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--4.544700-0.000000-31 x-tm-as-user-approved-sender: Yes x-tm-as-user-blocked-sender: No MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 2/8] avcodec/h264dec: Add h264 VPE HW decoder X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" This decoder uses VPI(VPE Interface) API and library for h264 decoding. Signed-off-by: Qin.Wang --- configure | 1 + libavcodec/Makefile | 1 + libavcodec/allcodecs.c | 1 + libavcodec/vpe_dec_common.c | 476 ++++++++++++++++++++++++++++++++++++++++++++ libavcodec/vpe_dec_common.h | 81 ++++++++ libavcodec/vpe_h264dec.c | 69 +++++++ 6 files changed, 629 insertions(+) create mode 100644 libavcodec/vpe_dec_common.c create mode 100644 libavcodec/vpe_dec_common.h create mode 100644 libavcodec/vpe_h264dec.c -- 1.8.3.1 diff --git a/configure b/configure index 7f0f843..196842a 100755 --- a/configure +++ b/configure @@ -3064,6 +3064,7 @@ h264_vaapi_encoder_select="cbs_h264 vaapi_encode" h264_v4l2m2m_decoder_deps="v4l2_m2m h264_v4l2_m2m" h264_v4l2m2m_decoder_select="h264_mp4toannexb_bsf" h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m" +h264_vpe_decoder_deps="vpe" hevc_amf_encoder_deps="amf" hevc_cuvid_decoder_deps="cuvid" hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf" diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 0a3bbc7..0c80880 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -761,6 +761,7 @@ OBJS-$(CONFIG_ZLIB_DECODER) += lcldec.o OBJS-$(CONFIG_ZLIB_ENCODER) += lclenc.o OBJS-$(CONFIG_ZMBV_DECODER) += zmbv.o OBJS-$(CONFIG_ZMBV_ENCODER) += zmbvenc.o +OBJS-$(CONFIG_H264_VPE_DECODER) += vpe_h264dec.o vpe_dec_common.o # (AD)PCM decoders/encoders OBJS-$(CONFIG_PCM_ALAW_DECODER) += pcm.o diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index 5240d0a..32f6fd6 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -808,6 +808,7 @@ extern AVCodec ff_vp9_mediacodec_decoder; extern AVCodec ff_vp9_qsv_decoder; extern AVCodec ff_vp9_vaapi_encoder; extern AVCodec ff_vp9_qsv_encoder; +extern AVCodec ff_h264_vpe_decoder; // The iterate API is not usable with ossfuzz due to the excessive size of binaries created #if CONFIG_OSSFUZZ diff --git a/libavcodec/vpe_dec_common.c b/libavcodec/vpe_dec_common.c new file mode 100644 index 0000000..c4d9947 --- /dev/null +++ b/libavcodec/vpe_dec_common.c @@ -0,0 +1,476 @@ +/* + * Verisilicon VPE Video Decoder Common interface + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/pixfmt.h" +#include "decode.h" +#include "internal.h" + +#include "vpe_dec_common.h" + +/** + * Initialize hw frame and device + * + * The avctx->hw_frames_ctx is the reference to the AVHWFramesContext. + * If it exists, get its data, otherwise create the hw_frame_ctx + * then initialize hw_frame_ctx. + */ +static int vpe_dec_init_hwctx(AVCodecContext *avctx) +{ + int ret = 0; + AVHWFramesContext *hwframe_ctx; + + if (!avctx->hw_frames_ctx) { + if (avctx->hw_device_ctx) { + avctx->hw_frames_ctx = av_hwframe_ctx_alloc(avctx->hw_device_ctx); + if (!avctx->hw_frames_ctx) { + av_log(avctx, AV_LOG_ERROR, "av_hwframe_ctx_alloc failed\n"); + ret = AVERROR(ENOMEM); + goto error; + } + } else { + av_log(avctx, AV_LOG_ERROR, "No hw frame/device available\n"); + ret = AVERROR(EINVAL); + goto error; + } + } + + hwframe_ctx = (AVHWFramesContext *)avctx->hw_frames_ctx->data; + hwframe_ctx->format = AV_PIX_FMT_VPE; + if (avctx->bits_per_raw_sample == 10) { + hwframe_ctx->sw_format = AV_PIX_FMT_P010LE; + } else { + hwframe_ctx->sw_format = AV_PIX_FMT_NV12; + } + hwframe_ctx->width = avctx->width; + hwframe_ctx->height = avctx->height; + if ((ret = av_hwframe_ctx_init(avctx->hw_frames_ctx)) < 0) { + av_log(avctx, AV_LOG_ERROR, "av_hwframe_ctx_init failed\n"); + return ret; + } + return 0; + +error: + av_log(avctx, AV_LOG_ERROR, "vpe_dec_init_hwctx failed\n"); + return ret; +} + +/** + * Notify the external decoder to release frame buffer + */ +static void vpe_decode_picture_consume(VpeDecCtx *dec_ctx, VpiFrame *vpi_frame) +{ + VpiCtrlCmdParam cmd_param; + + // make decoder release DPB + cmd_param.cmd = VPI_CMD_DEC_PIC_CONSUME; + cmd_param.data = (void *)vpi_frame; + dec_ctx->vpi->control(dec_ctx->ctx, (void *)&cmd_param, NULL); +} + +int ff_vpe_decode_init(AVCodecContext *avctx, VpiPlugin type) +{ + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + VpeDecCtx *dec_ctx = avctx->priv_data; + VpiCtrlCmdParam cmd_param; + int ret; + + // Init vpe hwcontext + ret = vpe_dec_init_hwctx(avctx); + if (ret) { + av_log(avctx, AV_LOG_ERROR, "vpe init hwctx failure\n"); + return ret; + } + + hwframe_ctx = (AVHWFramesContext *)avctx->hw_frames_ctx->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + + // Create the VPE context + ret = vpi_create(&dec_ctx->ctx, &dec_ctx->vpi, type); + if (ret) { + av_log(avctx, AV_LOG_ERROR, "vpi create failure ret %d\n", ret); + return AVERROR_EXTERNAL; + } + + // get the decoder init option struct + cmd_param.cmd = VPI_CMD_DEC_INIT_OPTION; + ret = dec_ctx->vpi->control(dec_ctx->ctx, (void *)&cmd_param, (void *)&dec_ctx->dec_setting); + if (ret != 0) { + return AVERROR(ENOMEM); + } + + dec_ctx->avctx = avctx; + dec_ctx->dec_setting->pp_setting = dec_ctx->pp_setting; + dec_ctx->dec_setting->transcode = dec_ctx->transcode; + dec_ctx->dec_setting->frame = vpeframe_ctx->frame; + avctx->pix_fmt = AV_PIX_FMT_VPE; + + // Initialize the VPE decoder + ret = dec_ctx->vpi->init(dec_ctx->ctx, dec_ctx->dec_setting); + if (ret) { + av_log(avctx, AV_LOG_ERROR, "vpi decode init failure\n"); + return AVERROR_EXTERNAL; + } + + // get the packet buffer struct info from external decoder + // to store the avpk buf info + cmd_param.cmd = VPI_CMD_DEC_GET_STRM_BUF_PKT; + ret = dec_ctx->vpi->control(dec_ctx->ctx, (void *)&cmd_param, 
(void *)&dec_ctx->buffered_pkt); + if (ret != 0) { + return AVERROR(ENOMEM); + } + + return 0; +} + +/** + * clear the unsed frames + */ +static void vpe_clear_unused_frames(VpeDecCtx *dec_ctx) +{ + VpeDecFrame *cur_frame = dec_ctx->frame_list; + + while (cur_frame) { + if (cur_frame->used && !cur_frame->vpi_frame->locked) { + vpe_decode_picture_consume(dec_ctx, cur_frame->vpi_frame); + cur_frame->used = 0; + av_frame_unref(cur_frame->av_frame); + } + cur_frame = cur_frame->next; + } +} + +/** + * alloc frame from hwcontext for external codec + */ +static int vpe_alloc_frame(AVCodecContext *avctx, VpeDecCtx *dec_ctx, VpeDecFrame *dec_frame) +{ + int ret; + + ret = ff_get_buffer(avctx, dec_frame->av_frame, AV_GET_BUFFER_FLAG_REF); + if (ret < 0) { + av_log(NULL, AV_LOG_DEBUG, "return ret %d\n", ret); + return ret; + } + + dec_frame->vpi_frame = (VpiFrame*)dec_frame->av_frame->data[0]; + dec_frame->used = 1; + + return 0; +} + +/** + * get frame from the linked list + */ +static int vpe_get_frame(AVCodecContext *avctx, VpeDecCtx *dec_ctx, VpiFrame **vpi_frame) +{ + VpeDecFrame *dec_frame, **last; + int ret; + + vpe_clear_unused_frames(dec_ctx); + + dec_frame = dec_ctx->frame_list; + last = &dec_ctx->frame_list; + while (dec_frame) { + if (!dec_frame->used) { + ret = vpe_alloc_frame(avctx, dec_ctx, dec_frame); + if (ret < 0) + return ret; + *vpi_frame = dec_frame->vpi_frame; + return 0; + } + + last = &dec_frame->next; + dec_frame = dec_frame->next; + } + + dec_frame = av_mallocz(sizeof(*dec_frame)); + if (!dec_frame) + return AVERROR(ENOMEM); + dec_frame->av_frame = av_frame_alloc(); + if (!dec_frame->av_frame) { + av_freep(&dec_frame); + return AVERROR(ENOMEM); + } + *last = dec_frame; + + ret = vpe_alloc_frame(avctx, dec_ctx, dec_frame); + if (ret < 0) + return ret; + + *vpi_frame = dec_frame->vpi_frame; + + return 0; +} + +/** + * find frame from the linked list + */ +static VpeDecFrame *vpe_find_frame(VpeDecCtx *dec_ctx, VpiFrame *vpi_frame) +{ + VpeDecFrame *cur_frame = dec_ctx->frame_list; + + while (cur_frame) { + if (vpi_frame == cur_frame->vpi_frame) + return cur_frame; + cur_frame = cur_frame->next; + } + return NULL; +} + +/** + * Output the frame raw data + */ +static int vpe_output_frame(AVCodecContext *avctx, VpiFrame *vpi_frame, + AVFrame *out_frame) +{ + VpeDecCtx *dec_ctx = avctx->priv_data; + VpeDecFrame *dec_frame = NULL; + AVFrame *src_frame = NULL; + int ret; + + dec_frame = vpe_find_frame(dec_ctx, vpi_frame); + if (dec_frame == NULL) { + av_log(avctx, AV_LOG_ERROR, "Can't find matched frame from pool\n"); + return AVERROR_BUG; + } + src_frame = dec_frame->av_frame; + ret = av_frame_ref(out_frame, src_frame); + if (ret < 0) + return ret; + + out_frame->linesize[0] = vpi_frame->linesize[0]; + out_frame->linesize[1] = vpi_frame->linesize[1]; + out_frame->linesize[2] = vpi_frame->linesize[2]; + out_frame->key_frame = vpi_frame->key_frame; + out_frame->pts = vpi_frame->pts; + out_frame->pkt_dts = vpi_frame->pkt_dts; + + out_frame->hw_frames_ctx = av_buffer_ref(avctx->hw_frames_ctx); + if (!out_frame->hw_frames_ctx) { + ret = AVERROR(ENOMEM); + } + + return 0; +} + +/** + * receive VpiFrame from external decoder + */ +static int vpe_dec_receive(AVCodecContext *avctx, AVFrame *frame) +{ + VpeDecCtx *dec_ctx = avctx->priv_data; + VpiFrame *vpi_frame = NULL; + int ret; + + vpe_clear_unused_frames(dec_ctx); + ret = dec_ctx->vpi->decode_get_frame(dec_ctx->ctx, (void *)&vpi_frame); + if (ret == 1) { + // get one output frame + if (0 == vpe_output_frame(avctx, vpi_frame, frame)) { + 
return 0; + } else { + return AVERROR_EXTERNAL; + } + } else if (ret == 2) { + return AVERROR_EOF; + } else { + return AVERROR(EAGAIN); + } +} + +/** + * release avpkt buf which is released by external decoder + */ +static int vpe_release_stream_mem(VpeDecCtx *dec_ctx) +{ + VpiCtrlCmdParam cmd_param; + AVBufferRef *ref = NULL; + int ret; + + cmd_param.cmd = VPI_CMD_DEC_GET_USED_STRM_MEM; + /* get the used packet buffer info from external decoder */ + ret = dec_ctx->vpi->control(dec_ctx->ctx, (void *)&cmd_param, (void *)&ref); + if (ret != 0) { + return AVERROR_EXTERNAL; + } + if (ref) { + /* unref the input avpkt buffer */ + av_buffer_unref(&ref); + return 1; + } + return 0; +} + +int ff_vpe_decode_receive_frame(AVCodecContext *avctx, AVFrame *frame) +{ + VpeDecCtx *dec_ctx = avctx->priv_data; + AVPacket avpkt = { 0 }; + AVBufferRef *ref = NULL; + VpiFrame *in_vpi_frame = NULL; + VpiCtrlCmdParam cmd_param; + int strm_buf_count; + int ret; + + ret = vpe_release_stream_mem(dec_ctx); + if (ret < 0) { + return ret; + } + /* poll for new frame */ + ret = vpe_dec_receive(avctx, frame); + if (ret != AVERROR(EAGAIN)) { + return ret; + } + + /* feed decoder */ + while (1) { + cmd_param.cmd = VPI_CMD_DEC_STRM_BUF_COUNT; + cmd_param.data = NULL; + ret = dec_ctx->vpi->control(dec_ctx->ctx, + (void*)&cmd_param, (void *)&strm_buf_count); + if (ret != 0) { + return AVERROR_EXTERNAL; + } + if (strm_buf_count == -1) { + /* no space, block for an output frame to get space */ + ret = vpe_dec_receive(avctx, frame); + if (ret != AVERROR(EAGAIN)) { + return ret; + } else { + continue; + } + } + + ret = vpe_release_stream_mem(dec_ctx); + if (ret < 0) { + return ret; + } + + /* fetch new packet or eof */ + ret = ff_decode_get_packet(avctx, &avpkt); + if (ret == 0) { + dec_ctx->buffered_pkt->data = avpkt.data; + dec_ctx->buffered_pkt->size = avpkt.size; + dec_ctx->buffered_pkt->pts = avpkt.pts; + dec_ctx->buffered_pkt->pkt_dts = avpkt.dts; + ref = av_buffer_ref(avpkt.buf); + if (!ref) { + av_packet_unref(&avpkt); + return AVERROR(ENOMEM); + } + /* ref the avpkt buffer + feed the packet buffer related info to external decoder */ + dec_ctx->buffered_pkt->opaque = (void *)ref; + av_packet_unref(&avpkt); + + /* get frame buffer from pool */ + ret = vpe_get_frame(avctx, dec_ctx, &in_vpi_frame); + if (ret < 0) { + return ret; + } + + cmd_param.cmd = VPI_CMD_DEC_SET_FRAME_BUFFER; + cmd_param.data = (void *)in_vpi_frame; + ret = dec_ctx->vpi->control(dec_ctx->ctx, (void *)&cmd_param, NULL); + if (ret != 0) { + return AVERROR_EXTERNAL; + } + } else if (ret == AVERROR_EOF) { + ret = dec_ctx->vpi->decode_put_packet(dec_ctx->ctx, + (void *)dec_ctx->buffered_pkt); + if (ret < 0) { + return AVERROR_EXTERNAL; + } else { + return AVERROR(EAGAIN); + } + } else if (ret == AVERROR(EAGAIN)) { + return vpe_dec_receive(avctx, frame); + } else if (ret < 0) { + return ret; + } + + /* try to flush packet data */ + if (dec_ctx->buffered_pkt->size > 0) { + ret = dec_ctx->vpi->decode_put_packet(dec_ctx->ctx, + (void *)dec_ctx->buffered_pkt); + if (ret > 0) { + dec_ctx->buffered_pkt->size -= ret; + if (dec_ctx->buffered_pkt->size != 0) { + return AVERROR_EXTERNAL; + } + return vpe_dec_receive(avctx, frame); + } else if (ret <= 0) { + return AVERROR_EXTERNAL; + } + } + } + + return AVERROR(EAGAIN); +} + + +av_cold int ff_vpe_decode_close(AVCodecContext *avctx) +{ + VpeDecCtx *dec_ctx = avctx->priv_data; + VpeDecFrame *cur_frame; + int ret; + + vpe_clear_unused_frames(dec_ctx); + if (dec_ctx->ctx == NULL) { + return 0; + } + 
dec_ctx->vpi->close(dec_ctx->ctx); + do { + ret = vpe_release_stream_mem(dec_ctx); + } while (ret == 1); + + cur_frame = dec_ctx->frame_list; + while (cur_frame) { + dec_ctx->frame_list = cur_frame->next; + av_frame_free(&cur_frame->av_frame); + av_freep(&cur_frame); + cur_frame = dec_ctx->frame_list; + } + vpi_destroy(dec_ctx->ctx); + + return 0; +} + +#define OFFSET(x) offsetof(VpeDecCtx, x) +#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM +const AVOption vpe_decode_options[] = { + { "low_res", + "set output number and at most four output downscale configuration", + OFFSET(pp_setting), + AV_OPT_TYPE_STRING, + { .str = "" }, + 0, + 0, + VD }, + { "transcode", + "enable/disable transcoding", + OFFSET(transcode), + AV_OPT_TYPE_BOOL, + { .i64 = 0 }, + 0, + 1, + VD }, + { NULL }, +}; diff --git a/libavcodec/vpe_dec_common.h b/libavcodec/vpe_dec_common.h new file mode 100644 index 0000000..d3bc84a --- /dev/null +++ b/libavcodec/vpe_dec_common.h @@ -0,0 +1,81 @@ +/* + * Verisilicon VPE Video Decoder Common interface + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef AVCODEC_VPE_DEC_COMMON_H +#define AVCODEC_VPE_DEC_COMMON_H + +#include + +#include +#include + +#include "avcodec.h" +#include "libavutil/log.h" +#include "libavutil/hwcontext.h" +#include "libavutil/hwcontext_vpe.h" +#include "libavutil/buffer.h" +#include "libavutil/error.h" +#include "libavutil/frame.h" +#include "libavutil/opt.h" + +extern const AVOption vpe_decode_options[]; + +typedef struct VpeDecFrame { + AVFrame *av_frame; + // frame structure used for external codec + VpiFrame *vpi_frame; + // vpi_frame has been used + int used; + // the next linked list item + struct VpeDecFrame *next; +} VpeDecFrame; + +/** + * Communicating VPE parameters between libavcodec and the caller. + */ +typedef struct VpeDecCtx { + AVClass *av_class; + AVCodecContext *avctx; + + // VPI codec/filter context + VpiCtx ctx; + // VPI codec/filter API + VpiApi *vpi; + + // VPE decodec setting parameters + DecOption *dec_setting; + + // VPE decoder resieze config + uint8_t *pp_setting; + // VPE transcode enable + int transcode; + + // buffered packet feed to external decoder + VpiPacket *buffered_pkt; + + // VPE frame linked list + VpeDecFrame *frame_list; +} VpeDecCtx; + +int ff_vpe_decode_init(AVCodecContext *avctx, VpiPlugin type); +int ff_vpe_decode_receive_frame(AVCodecContext *avctx, AVFrame *frame); +int ff_vpe_decode_close(AVCodecContext *avctx); + +#endif /*AVCODEC_VPE_DEC_COMMON_H*/ diff --git a/libavcodec/vpe_h264dec.c b/libavcodec/vpe_h264dec.c new file mode 100644 index 0000000..1e530da --- /dev/null +++ b/libavcodec/vpe_h264dec.c @@ -0,0 +1,69 @@ +/* + * Verisilicon VPE H264 Decoder + * + * This file is part of FFmpeg. 
+ * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "hwconfig.h" +#include "internal.h" +#include "libavutil/pixfmt.h" + +#include "vpe_dec_common.h" + +static av_cold int vpe_h264_decode_init(AVCodecContext *avctx) +{ + return ff_vpe_decode_init(avctx, H264DEC_VPE); +} + +static const AVClass vpe_h264_decode_class = { + .class_name = "h264d_vpe", + .item_name = av_default_item_name, + .option = vpe_decode_options, + .version = LIBAVUTIL_VERSION_INT, +}; + +static const AVCodecHWConfigInternal *vpe_hw_configs[] = + { &(const AVCodecHWConfigInternal){ + .public = + { + .pix_fmt = AV_PIX_FMT_VPE, + .methods = AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX | + AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX, + .device_type = AV_HWDEVICE_TYPE_VPE, + }, + .hwaccel = NULL, + }, + NULL }; + +AVCodec ff_h264_vpe_decoder = { + .name = "h264_vpe", + .long_name = NULL_IF_CONFIG_SMALL("H264 (VPE VC8000D)"), + .type = AVMEDIA_TYPE_VIDEO, + .id = AV_CODEC_ID_H264, + .priv_data_size = sizeof(VpeDecCtx), + .init = &vpe_h264_decode_init, + .receive_frame = &ff_vpe_decode_receive_frame, + .close = &ff_vpe_decode_close, + .priv_class = &vpe_h264_decode_class, + .capabilities = + AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_AVOID_PROBING, + .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, + .pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_NONE }, + .hw_configs = vpe_hw_configs, + .wrapper_name = "vpe", + .bsfs = "h264_mp4toannexb", +}; From patchwork Sun May 31 06:27:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20034 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 771C5449F63 for ; Sun, 31 May 2020 09:27:17 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 52926688337; Sun, 31 May 2020 09:27:17 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (unknown [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id E344F68030F for ; Sun, 31 May 2020 09:27:09 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906427; h=from:subject:to:date:message-id; bh=XnZ23l+UQyRnDILGRi03Q2Oc7+bj85X7kPOoNLzX8Ck=; b=RBzHDDUp0bo/5vOZMhhi5mThTttxJcQW8m8ZJOwIgCHeOPfcgPAj2IFGP+KHH488UVNyTpbJf1I LLy2HGJkT6X5O/5l68mvKy01G1J2vbB4eIoBGEXv+LVqnKJyns9zb5U/+iNUdXbXFT+5tr5dLkf2H ER6+FMVIHGDlo+SGpWk= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi 
id 14.03.0123.003; Sun, 31 May 2020 14:27:06 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 3/8] avcodec/hevcdec: Add hevc VPE HW decoder Thread-Index: AdY3FIFPZL9UOcApQhKy0I0tM9sfIg== Date: Sun, 31 May 2020 06:27:06 +0000 Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A5372F@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--7.324600-0.000000-31 x-tm-as-user-approved-sender: Yes x-tm-as-user-blocked-sender: No MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 3/8] avcodec/hevcdec: Add hevc VPE HW decoder X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" This decoder uses VPI(VPE Interface) API and library for hevc decoding. Signed-off-by: Qin.Wang --- configure | 1 + libavcodec/Makefile | 1 + libavcodec/allcodecs.c | 1 + libavcodec/vpe_hevcdec.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 72 insertions(+) create mode 100755 libavcodec/vpe_hevcdec.c -- 1.8.3.1 diff --git a/configure b/configure index 196842a..129a4ee 100755 --- a/configure +++ b/configure @@ -3081,6 +3081,7 @@ hevc_vaapi_encoder_select="cbs_h265 vaapi_encode" hevc_v4l2m2m_decoder_deps="v4l2_m2m hevc_v4l2_m2m" hevc_v4l2m2m_decoder_select="hevc_mp4toannexb_bsf" hevc_v4l2m2m_encoder_deps="v4l2_m2m hevc_v4l2_m2m" +hevc_vpe_decoder_deps="vpe" mjpeg_cuvid_decoder_deps="cuvid" mjpeg_qsv_decoder_select="qsvdec" mjpeg_qsv_encoder_deps="libmfx" diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 0c80880..3193b32 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -762,6 +762,7 @@ OBJS-$(CONFIG_ZLIB_ENCODER) += lclenc.o OBJS-$(CONFIG_ZMBV_DECODER) += zmbv.o OBJS-$(CONFIG_ZMBV_ENCODER) += zmbvenc.o OBJS-$(CONFIG_H264_VPE_DECODER) += vpe_h264dec.o vpe_dec_common.o +OBJS-$(CONFIG_HEVC_VPE_DECODER) += vpe_hevcdec.o vpe_dec_common.o # (AD)PCM decoders/encoders OBJS-$(CONFIG_PCM_ALAW_DECODER) += pcm.o diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index 32f6fd6..ae5a791 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -809,6 +809,7 @@ extern AVCodec ff_vp9_qsv_decoder; extern AVCodec ff_vp9_vaapi_encoder; extern AVCodec ff_vp9_qsv_encoder; extern AVCodec ff_h264_vpe_decoder; +extern AVCodec ff_hevc_vpe_decoder; // The iterate API is not usable with ossfuzz due to the excessive size of binaries created #if CONFIG_OSSFUZZ diff --git a/libavcodec/vpe_hevcdec.c b/libavcodec/vpe_hevcdec.c new file mode 100755 index 0000000..0c5a9df --- /dev/null +++ b/libavcodec/vpe_hevcdec.c @@ -0,0 +1,69 @@ +/* + * Verisilicon VPE HEVC Decoder + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "hwconfig.h" +#include "internal.h" +#include "libavutil/pixfmt.h" + +#include "vpe_dec_common.h" + +static av_cold int vpe_hevc_decode_init(AVCodecContext *avctx) +{ + return ff_vpe_decode_init(avctx, HEVCDEC_VPE); +} + +static const AVClass vpe_hevc_decode_class = { + .class_name = "hevcd_vpe", + .item_name = av_default_item_name, + .option = vpe_decode_options, + .version = LIBAVUTIL_VERSION_INT, +}; + +static const AVCodecHWConfigInternal *vpe_hw_configs[] = + { &(const AVCodecHWConfigInternal){ + .public = + { + .pix_fmt = AV_PIX_FMT_VPE, + .methods = AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX | + AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX, + .device_type = AV_HWDEVICE_TYPE_VPE, + }, + .hwaccel = NULL, + }, + NULL }; + +AVCodec ff_hevc_vpe_decoder = { + .name = "hevc_vpe", + .long_name = NULL_IF_CONFIG_SMALL("HEVC (VPE VC8000D)"), + .type = AVMEDIA_TYPE_VIDEO, + .id = AV_CODEC_ID_HEVC, + .priv_data_size = sizeof(VpeDecCtx), + .init = &vpe_hevc_decode_init, + .receive_frame = &ff_vpe_decode_receive_frame, + .close = &ff_vpe_decode_close, + .priv_class = &vpe_hevc_decode_class, + .capabilities = + AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_AVOID_PROBING, + .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, + .pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_NONE }, + .hw_configs = vpe_hw_configs, + .wrapper_name = "vpe", + .bsfs = "hevc_mp4toannexb", +}; From patchwork Sun May 31 06:27:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20035 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 52177449F63 for ; Sun, 31 May 2020 09:27:58 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 2E6D268834D; Sun, 31 May 2020 09:27:58 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (shasxm06.verisilicon.com [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id F0571681868 for ; Sun, 31 May 2020 09:27:50 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906469; h=from:subject:to:date:message-id; bh=8du6k6DXreB5t96NhpRcVOCWkGlLZ9iuBb32dZOxs3M=; b=e0UuI+j2jITEJ/DJgI9xuWpfGdnzpUcOm3VYhd5GXvQ6ASPq/HyRgRN5SdRV7M7BAPTkHc1HXmE KQvR8JCGss+vy5mAQ9/pnCaFfuHP5dkv+aAp9hzHu8gJYE2uJUuPU8YCpOyrhAsZnX9koqa4Hzd53 FKdjzgb4SpTFvBUlPhw= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi id 14.03.0123.003; Sun, 31 May 2020 14:27:48 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 4/8] avcodec/vp9dec: Add vp9 VPE HW decoder Thread-Index: AdY3FJoovsy+cE2fRgSr0OPObCQCLQ== Date: Sun, 31 May 2020 06:27:47 +0000 Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A5373C@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: 
[10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--8.277300-0.000000-31 x-tm-as-user-approved-sender: Yes x-tm-as-user-blocked-sender: No MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 4/8] avcodec/vp9dec: Add vp9 VPE HW decoder X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" This decoder uses VPI(VPE Interface) API and library for vp9 decoding. Signed-off-by: Qin.Wang --- configure | 1 + libavcodec/Makefile | 1 + libavcodec/allcodecs.c | 1 + libavcodec/vpe_vp9dec.c | 67 +++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 70 insertions(+) create mode 100755 libavcodec/vpe_vp9dec.c -- 1.8.3.1 diff --git a/configure b/configure index 129a4ee..f116c37 100755 --- a/configure +++ b/configure @@ -3131,6 +3131,7 @@ vp9_vaapi_encoder_select="vaapi_encode" vp9_qsv_encoder_deps="libmfx MFX_CODEC_VP9" vp9_qsv_encoder_select="qsvenc" vp9_v4l2m2m_decoder_deps="v4l2_m2m vp9_v4l2_m2m" +vp9_vpe_decoder_deps="vpe" wmv3_crystalhd_decoder_select="crystalhd" # parsers diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 3193b32..6b59ae7 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -763,6 +763,7 @@ OBJS-$(CONFIG_ZMBV_DECODER) += zmbv.o OBJS-$(CONFIG_ZMBV_ENCODER) += zmbvenc.o OBJS-$(CONFIG_H264_VPE_DECODER) += vpe_h264dec.o vpe_dec_common.o OBJS-$(CONFIG_HEVC_VPE_DECODER) += vpe_hevcdec.o vpe_dec_common.o +OBJS-$(CONFIG_VP9_VPE_DECODER) += vpe_vp9dec.o vpe_dec_common.o # (AD)PCM decoders/encoders OBJS-$(CONFIG_PCM_ALAW_DECODER) += pcm.o diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index ae5a791..5c5c32e 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -810,6 +810,7 @@ extern AVCodec ff_vp9_vaapi_encoder; extern AVCodec ff_vp9_qsv_encoder; extern AVCodec ff_h264_vpe_decoder; extern AVCodec ff_hevc_vpe_decoder; +extern AVCodec ff_vp9_vpe_decoder; // The iterate API is not usable with ossfuzz due to the excessive size of binaries created #if CONFIG_OSSFUZZ diff --git a/libavcodec/vpe_vp9dec.c b/libavcodec/vpe_vp9dec.c new file mode 100755 index 0000000..9fd773d --- /dev/null +++ b/libavcodec/vpe_vp9dec.c @@ -0,0 +1,67 @@ +/* + * Verisilicon VPE VP9 Decoder + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "hwconfig.h" +#include "internal.h" +#include "libavutil/pixfmt.h" + +#include "vpe_dec_common.h" + +static av_cold int vpe_vp9_decode_init(AVCodecContext *avctx) +{ + return ff_vpe_decode_init(avctx, VP9DEC_VPE); +} + +static const AVClass vpe_vp9_decode_class = { + .class_name = "vp9d_vpe", + .item_name = av_default_item_name, + .option = vpe_decode_options, + .version = LIBAVUTIL_VERSION_INT, +}; + +static const AVCodecHWConfigInternal *vpe_hw_configs[] = + { &(const AVCodecHWConfigInternal){ + .public = + { + .pix_fmt = AV_PIX_FMT_VPE, + .methods = AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX | + AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX, + .device_type = AV_HWDEVICE_TYPE_VPE, + }, + .hwaccel = NULL, + }, + NULL }; + +AVCodec ff_vp9_vpe_decoder = { + .name = "vp9_vpe", + .long_name = NULL_IF_CONFIG_SMALL("VP9 (VPE VC8000D)"), + .type = AVMEDIA_TYPE_VIDEO, + .id = AV_CODEC_ID_VP9, + .priv_data_size = sizeof(VpeDecCtx), + .init = &vpe_vp9_decode_init, + .receive_frame = &ff_vpe_decode_receive_frame, + .close = &ff_vpe_decode_close, + .priv_class = &vpe_vp9_decode_class, + .capabilities = + AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_AVOID_PROBING, + .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, + .pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_NONE }, + .hw_configs = vpe_hw_configs, +}; From patchwork Sun May 31 06:28:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20036 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 3603244A67B for ; Sun, 31 May 2020 09:28:34 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 1D01D68826C; Sun, 31 May 2020 09:28:34 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (unknown [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id 679B46880AB for ; Sun, 31 May 2020 09:28:25 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906503; h=from:subject:to:date:message-id; bh=oS/CYMtePYUGbc8dvgP8jzpON92L6Y+X0ubAb/Ytybs=; b=Ehm1JyNaxubyjBMRmqLb2opAF75PrI6HVD22jzh0YThjhF82UgYeCmAa4qf6bn3caBoeTdMs++d NfghIF5NhVjaMw1zJPGGYCZxGvq5Dd617DWGTxL8hwHB7iMBuIKCD8x41t1GHyBf961hA6IuUWVNz S2O/OBur4Zh97y46/Ak= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi id 14.03.0123.003; Sun, 31 May 2020 14:28:22 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 5/8] avcodec/h26xenc: Add h264/hevc VPE HW encoder Thread-Index: AdY3FK48oIlOa6VTRyq168TY/psbsw== Date: Sun, 31 May 2020 06:28:21 +0000 Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A53749@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--10.781100-0.000000-31 x-tm-as-user-approved-sender: Yes 
x-tm-as-user-blocked-sender: No MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 5/8] avcodec/h26xenc: Add h264/hevc VPE HW encoder X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" This encoder uses VPI(VPE Interface) API and library for h264 and hevc encoding. Signed-off-by: rxchen --- configure | 2 + libavcodec/Makefile | 2 + libavcodec/allcodecs.c | 2 + libavcodec/vpe_h26xenc.c | 633 +++++++++++++++++++++++++++++++++++++++++++++++ libavcodec/vpe_h26xenc.h | 83 +++++++ 5 files changed, 722 insertions(+) create mode 100644 libavcodec/vpe_h26xenc.c create mode 100755 libavcodec/vpe_h26xenc.h -- 1.8.3.1 diff --git a/configure b/configure index f116c37..10d1322 100755 --- a/configure +++ b/configure @@ -3065,6 +3065,7 @@ h264_v4l2m2m_decoder_deps="v4l2_m2m h264_v4l2_m2m" h264_v4l2m2m_decoder_select="h264_mp4toannexb_bsf" h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m" h264_vpe_decoder_deps="vpe" +h264_vpe_encoder_deps="vpe" hevc_amf_encoder_deps="amf" hevc_cuvid_decoder_deps="cuvid" hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf" @@ -3082,6 +3083,7 @@ hevc_v4l2m2m_decoder_deps="v4l2_m2m hevc_v4l2_m2m" hevc_v4l2m2m_decoder_select="hevc_mp4toannexb_bsf" hevc_v4l2m2m_encoder_deps="v4l2_m2m hevc_v4l2_m2m" hevc_vpe_decoder_deps="vpe" +hevc_vpe_encoder_deps="vpe" mjpeg_cuvid_decoder_deps="cuvid" mjpeg_qsv_decoder_select="qsvdec" mjpeg_qsv_encoder_deps="libmfx" diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 6b59ae7..ffdfae4 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -764,6 +764,8 @@ OBJS-$(CONFIG_ZMBV_ENCODER) += zmbvenc.o OBJS-$(CONFIG_H264_VPE_DECODER) += vpe_h264dec.o vpe_dec_common.o OBJS-$(CONFIG_HEVC_VPE_DECODER) += vpe_hevcdec.o vpe_dec_common.o OBJS-$(CONFIG_VP9_VPE_DECODER) += vpe_vp9dec.o vpe_dec_common.o +OBJS-$(CONFIG_H264_VPE_ENCODER) += vpe_h26xenc.o +OBJS-$(CONFIG_HEVC_VPE_ENCODER) += vpe_h26xenc.o # (AD)PCM decoders/encoders OBJS-$(CONFIG_PCM_ALAW_DECODER) += pcm.o diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index 5c5c32e..91f4e61 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -811,6 +811,8 @@ extern AVCodec ff_vp9_qsv_encoder; extern AVCodec ff_h264_vpe_decoder; extern AVCodec ff_hevc_vpe_decoder; extern AVCodec ff_vp9_vpe_decoder; +extern AVCodec ff_h264_vpe_encoder; +extern AVCodec ff_hevc_vpe_encoder; // The iterate API is not usable with ossfuzz due to the excessive size of binaries created #if CONFIG_OSSFUZZ diff --git a/libavcodec/vpe_h26xenc.c b/libavcodec/vpe_h26xenc.c new file mode 100644 index 0000000..aff772e --- /dev/null +++ b/libavcodec/vpe_h26xenc.c @@ -0,0 +1,633 @@ +/* + * Verisilicon VPE H264/HEVC Encoder + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include +#include +#include +#include "libavutil/opt.h" +#include "libavcodec/internal.h" +#include "vpe_h26xenc.h" +#include "libavutil/hwcontext_vpe.h" + +static av_cold int vpe_h26x_encode_close(AVCodecContext *avctx); + +/** + * Output the ES data in VpiPacket to AVPacket + */ +static int vpe_h26xe_output_es_to_avpacket(AVPacket *av_packet, + VpiPacket *vpi_packet) +{ + int ret = 0; + if (av_new_packet(av_packet, vpi_packet->size)) + return AVERROR(ENOMEM); + + memcpy(av_packet->data, (uint8_t *)vpi_packet->data, vpi_packet->size); + av_packet->pts = vpi_packet->pts; + av_packet->dts = vpi_packet->pkt_dts; + + return ret; +} + +/** + * Output the ES data in VpiPacket to AVPacket and control the encoder to + * update the statistic + */ +static int vpe_h26xe_output_es_update_statis(AVCodecContext *avctx, + AVPacket *av_packet, + VpiPacket *vpi_packet) +{ + int ret = 0; + VpeH26xEncCtx *enc_ctx = NULL; + int stream_size = vpi_packet->size; + VpiCtrlCmdParam cmd; + + if ((ret = vpe_h26xe_output_es_to_avpacket(av_packet, vpi_packet)) < 0) { + return ret; + } + + enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + cmd.cmd = VPI_CMD_H26xENC_UPDATE_STATISTIC; + cmd.data = &stream_size; + ret = enc_ctx->api->control(enc_ctx->ctx, (void *)&cmd, NULL); + if (ret != 0) { + av_log(avctx, AV_LOG_ERROR, "H26x_enc control STATISTIC failed\n"); + return AVERROR_EXTERNAL; + } + + return ret; +} + +/** + * Find the needed frame in frame_queue_for_enc[MAX_WAIT_DEPTH] and pass the + * frame info includes address to VpiFrame + * + * @param enc_ctx VpeH26xEnc context + * @param vpi_frame frame info for VPE encoding + * @return 0 on success, negative error code AVERROR_xxx on failure + */ +static int vpe_h26xe_find_frame_for_enc(VpeH26xEncCtx *enc_ctx, + VpiFrame *vpi_frame) +{ + VpeH26xEncFrm *frame_queued = NULL; + VpiFlushState flush_state; + int frame_index_need = 0; + int i = 0; + int ret = 0; + VpiCtrlCmdParam cmd; + VpiFrame *in_vpi_frame; + + cmd.cmd = VPI_CMD_H26xENC_GET_NEXT_PIC; + ret = enc_ctx->api->control(enc_ctx->ctx, (void *)&cmd, &frame_index_need); + if (ret != 0) { + av_log(enc_ctx, AV_LOG_ERROR, "H26x_enc control NEXT_PIC failed\n"); + return AVERROR_EXTERNAL; + } + + cmd.cmd = VPI_CMD_H26xENC_GET_FLUSHSTATE; + ret = enc_ctx->api->control(enc_ctx->ctx, (void *)&cmd, &flush_state); + if (ret != 0) { + av_log(enc_ctx, AV_LOG_ERROR, "H26x_enc control get STATE failed\n"); + return AVERROR_EXTERNAL; + } + /*Find the frame according to frame index */ + if (flush_state == VPIH26X_FLUSH_IDLE) { /*Normal encoding stage*/ + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if (enc_ctx->frame_queue_for_enc[i].state == 1) { + frame_queued = &enc_ctx->frame_queue_for_enc[i]; + if (frame_queued->frame_index == frame_index_need) { + frame_queued->in_pass_one_queue = 1; + goto find_pic; + } + } + } + if (i == MAX_WAIT_DEPTH) { + if (!enc_ctx->no_input_frm) { + /*No frame available, request new frames by returning EAGAIN.*/ + return AVERROR(EAGAIN); + } + /*No frame availbale and no frame input, return 0 for encoder to + start internal flushing operation*/ + return 0; + } + } else if (flush_state == VPIH26X_FLUSH_TRANSPIC) {/*flush_transpic stage*/ + /*Find the frame in the queue at flush_transpic stage + There is no frame inputs at this stage*/ + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if 
((enc_ctx->frame_queue_for_enc[i].state == 1) && + (enc_ctx->frame_queue_for_enc[i].in_pass_one_queue != 1)) { + frame_queued = &enc_ctx->frame_queue_for_enc[i]; + if (frame_queued->frame_index == frame_index_need) { + frame_queued->in_pass_one_queue = 1; + goto find_pic; + } + } + } + + if (i == MAX_WAIT_DEPTH) { + /*No frame available for this stage, return 0 for encoder to continue + internal flushing operation*/ + return 0; + } + } else { /*Internal flushing stage*/ + /*Return 0 for encoder's internal flushing operation*/ + return 0; + } + +find_pic: + cmd.cmd = VPI_CMD_H26xENC_SET_FINDPIC; + ret = enc_ctx->api->control(enc_ctx->ctx, (void *)&cmd, NULL); + if (ret != 0) { + av_log(enc_ctx, AV_LOG_ERROR, "H26x_enc control PPIDX failed\n"); + return AVERROR_EXTERNAL; + } + /*Fill vpi_frame with the frame info*/ + in_vpi_frame = (VpiFrame *)frame_queued->frame->data[0]; + vpi_frame->width = frame_queued->frame->width; + vpi_frame->height = frame_queued->frame->height; + vpi_frame->linesize[0] = frame_queued->frame->linesize[0]; + vpi_frame->linesize[1] = frame_queued->frame->linesize[1]; + vpi_frame->linesize[2] = frame_queued->frame->linesize[2]; + vpi_frame->key_frame = frame_queued->frame->key_frame; + vpi_frame->pts = frame_queued->frame->pts; + vpi_frame->pkt_dts = frame_queued->frame->pkt_dts; + vpi_frame->data[0] = in_vpi_frame->data[0]; + vpi_frame->data[1] = in_vpi_frame->data[1]; + vpi_frame->data[2] = in_vpi_frame->data[2]; + return 0; +} + +/** + * av_frame_free the AVFrame in the AVFrame input queue + */ +static void vpe_h26xe_av_frame_free(AVCodecContext *avctx) +{ + VpeH26xEncCtx *enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + VpeH26xEncFrm *frame_queued = NULL; + int i = 0; + VpiFrame *cur_frame; + + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if (enc_ctx->frame_queue_for_enc[i].state == 1) { + frame_queued = &enc_ctx->frame_queue_for_enc[i]; + if (frame_queued->frame) { + cur_frame = (VpiFrame *)frame_queued->frame->data[0]; + av_frame_free(&frame_queued->frame); + cur_frame->used_cnt--; + if (cur_frame->used_cnt == 0) { + cur_frame->locked = 0; + } + } + frame_queued->state = 0; + } + } +} + +/** + * av_frame_unref the consumed AVFrame in the AVFrame input queue + */ +static void vpe_h26xe_av_frame_unref(AVCodecContext *avctx, + int consumed_frame_index) +{ + VpeH26xEncCtx *enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + VpeH26xEncFrm *frame_queued = NULL; + int i; + VpiFrame *cur_frame; + + /*find the need_frame_index */ + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if ((enc_ctx->frame_queue_for_enc[i].state == 1) && + (enc_ctx->frame_queue_for_enc[i].in_pass_one_queue == 1)) { + frame_queued = &enc_ctx->frame_queue_for_enc[i]; + if (frame_queued->frame_index == consumed_frame_index) { + goto find_pic; + } + } + } + + if (i == MAX_WAIT_DEPTH) + return; + +find_pic: + frame_queued->frame_index = -1; + frame_queued->state = 0; + frame_queued->in_pass_one_queue = 0; + cur_frame = (VpiFrame *)frame_queued->frame->data[0]; + cur_frame->used_cnt--; + if (cur_frame->used_cnt == 0) { + cur_frame->locked = 0; + } + av_frame_unref(frame_queued->frame); +} + +/** + * Output the encoded data to AVPacket + * The encoded data in VpePaket is outputed to AVPacket, unref/free the + * consumed AVFrame buffer in the queue, output non-empty AVPacket info + * according to the return value of the VPE encoding function. 
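+ * As implemented in the switch below, VPI_ENC_ENC_OK and VPI_ENC_ENC_FRM_ENQUEUE
+ * are mapped to AVERROR(EAGAIN) so that more input frames are requested, the
+ * READY codes emit the pending encoded packet, the ERROR codes free the queued
+ * frames and return AVERROR_INVALIDDATA, and VPI_ENC_FLUSH_FINISH_END is mapped
+ * to AVERROR_EOF.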
+ * + * @param avctx AVCodec context + * @param av_packet output AVPacket + * @param vpi_packet contain the encoded data from VPE encoder + * @param enc_ret the return value of VPE encoding function + * @return 0 on success, negative error code AVERROR_xxx on failure + */ +static int vpe_h26xe_output_avpacket(AVCodecContext *avctx, AVPacket *av_packet, + VpiPacket *vpi_packet, int enc_ret) +{ + int ret = 0; + + switch (enc_ret) { + /*output AVPacket at encoder start stage according to enc_ret*/ + case VPI_ENC_START_OK: /*OK returned at encoder's start stage*/ + if (vpe_h26xe_output_es_to_avpacket(av_packet, vpi_packet) < 0) { + vpe_h26xe_av_frame_free(avctx); + return AVERROR_INVALIDDATA; + } + break; + case VPI_ENC_START_ERROR: /*ERROR returned at encoder's start stage*/ + vpe_h26xe_av_frame_free(avctx); + av_log(avctx, AV_LOG_ERROR, "H26x enocoding start fails. ret = %d\n", + enc_ret); + ret = AVERROR_INVALIDDATA; + break; + + /*output AVPacket at encoder encoding stage according to enc_ret*/ + case VPI_ENC_ENC_OK: /*OK returned at encoder's encoding stage*/ + vpe_h26xe_av_frame_unref(avctx, vpi_packet->index_encoded); + ret = AVERROR(EAGAIN); + break; + case VPI_ENC_ENC_FRM_ENQUEUE: /*FRM_ENQUEUE returned at encoding stage*/ + ret = AVERROR(EAGAIN); + break; + case VPI_ENC_ENC_READY: /*READY returned at encoder's encoding stage*/ + if (vpe_h26xe_output_es_update_statis(avctx, av_packet, vpi_packet) < 0) { + ret = AVERROR_INVALIDDATA; + av_log(avctx, AV_LOG_ERROR, "h26xe fails at VPI_ENC_ENC_READY\n"); + } + vpe_h26xe_av_frame_unref(avctx, vpi_packet->index_encoded); + break; + case VPI_ENC_ENC_ERROR: /*ERROR returned at encoder's encoding stage*/ + vpe_h26xe_av_frame_free(avctx); + ret = AVERROR_INVALIDDATA; + break; + + /*output AVPacket at encoder flushing stage according to enc_ret*/ + case VPI_ENC_FLUSH_IDLE_READY: /*READY returned at FLUSH_IDLE stage*/ + case VPI_ENC_FLUSH_PREPARE: /*READY at FLUSH_PREPARE stage*/ + case VPI_ENC_FLUSH_TRANSPIC_READY: /*READY at FLUSH_TRANSPIC stage*/ + case VPI_ENC_FLUSH_ENCDATA_READY: /*READY at FLUSH_ENCDATA stage*/ + if (vpe_h26xe_output_es_update_statis(avctx, av_packet, vpi_packet) < 0) { + ret = AVERROR_INVALIDDATA; + av_log(avctx, AV_LOG_ERROR, "h26xe fails at FLUSH READY\n"); + } + vpe_h26xe_av_frame_unref(avctx, vpi_packet->index_encoded); + break; + case VPI_ENC_FLUSH_FINISH_OK: /*OK returned at FLUSH_FINISH stage*/ + if (vpe_h26xe_output_es_to_avpacket(av_packet, vpi_packet) < 0) { + ret = AVERROR_INVALIDDATA; + av_log(avctx, AV_LOG_ERROR, + "h26xe fails at VPI_ENC_FLUSH_FINISH_OK\n"); + } + vpe_h26xe_av_frame_free(avctx); + break; + + case VPI_ENC_FLUSH_IDLE_ERROR: /*ERROR returned at FLUSH_IDLE stage*/ + case VPI_ENC_FLUSH_TRANSPIC_ERROR: /*ERROR at FLUSH_TRANSPIC stage*/ + case VPI_ENC_FLUSH_ENCDATA_ERROR: /*ERROR at FLUSH_ENCDATA stage*/ + case VPI_ENC_FLUSH_FINISH_ERROR: /*ERROR at FLUSH_FINISH stage*/ + vpe_h26xe_av_frame_free(avctx); + ret = AVERROR_INVALIDDATA; + break; + + case VPI_ENC_FLUSH_FINISH_END: /*END at FLUSH_FINISH stage*/ + vpe_h26xe_av_frame_free(avctx); + ret = AVERROR_EOF; + break; + } + return ret; +} + +static av_cold int vpe_h26x_encode_init(AVCodecContext *avctx) +{ + int ret = 0; + VpiFrame *frame_hwctx; + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + VpeH26xEncCtx *enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + + /*Create context and get the APIs for h26x encoder from VPI layer */ + ret = vpi_create(&enc_ctx->ctx, &enc_ctx->api, H26XENC_VPE); + if (ret < 0) { + av_log(avctx, 
AV_LOG_ERROR, "h26x_enc vpe create failure %d\n", ret); + goto error; + } + + memset(&enc_ctx->h26x_enc_cfg, 0, sizeof(H26xEncCfg)); + memset(enc_ctx->frame_queue_for_enc, 0, + sizeof(enc_ctx->frame_queue_for_enc)); + + /*Get HW frame. The avctx->hw_frames_ctx is the reference to the + AVHWFramesContext describing the input frame for h26x encoder*/ + if (avctx->hw_frames_ctx) { + enc_ctx->hwframe = av_buffer_ref(avctx->hw_frames_ctx); + if (!enc_ctx->hwframe) { + ret = AVERROR(ENOMEM); + goto error; + } + } else { + ret = AVERROR_INVALIDDATA; + goto error; + } + + hwframe_ctx = (AVHWFramesContext *)enc_ctx->hwframe->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + frame_hwctx = vpeframe_ctx->frame; + + if (avctx->codec->id == AV_CODEC_ID_HEVC) { + sprintf(&enc_ctx->module_name[0], "%s", "HEVCENC"); + } else if (avctx->codec->id == AV_CODEC_ID_H264) { + sprintf(&enc_ctx->module_name[0], "%s", "H264ENC"); + } + + /*Initialize the VPE h26x encoder configuration*/ + strcpy(enc_ctx->h26x_enc_cfg.module_name, enc_ctx->module_name); + enc_ctx->h26x_enc_cfg.crf = enc_ctx->crf; + enc_ctx->h26x_enc_cfg.preset = enc_ctx->preset; + if (avctx->codec->id == AV_CODEC_ID_HEVC) { + enc_ctx->h26x_enc_cfg.codec_id = CODEC_ID_HEVC; + } else if (avctx->codec->id == AV_CODEC_ID_H264) { + enc_ctx->h26x_enc_cfg.codec_id = CODEC_ID_H264; + } else { + av_log(avctx, AV_LOG_ERROR, + "%s, avctx->codec->id isn't HEVC or H264 \n", __FUNCTION__); + return AVERROR(EINVAL); + } + enc_ctx->h26x_enc_cfg.codec_name = avctx->codec->name; + enc_ctx->h26x_enc_cfg.profile = enc_ctx->profile; + enc_ctx->h26x_enc_cfg.level = enc_ctx->level; + enc_ctx->h26x_enc_cfg.bit_per_second = avctx->bit_rate; + /* Input frame rate numerator*/ + enc_ctx->h26x_enc_cfg.input_rate_numer = avctx->framerate.num; + /* Input frame rate denominator*/ + enc_ctx->h26x_enc_cfg.input_rate_denom = avctx->framerate.den; + enc_ctx->h26x_enc_cfg.lum_width_src = avctx->width; + enc_ctx->h26x_enc_cfg.lum_height_src = avctx->height; + switch (avctx->pix_fmt) { + case AV_PIX_FMT_YUV420P: + enc_ctx->h26x_enc_cfg.input_format = VPI_YUV420_PLANAR; + break; + case AV_PIX_FMT_NV12: + enc_ctx->h26x_enc_cfg.input_format = VPI_YUV420_SEMIPLANAR; + break; + case AV_PIX_FMT_NV21: + enc_ctx->h26x_enc_cfg.input_format = VPI_YUV420_SEMIPLANAR_VU; + break; + case AV_PIX_FMT_YUV420P10LE: + enc_ctx->h26x_enc_cfg.input_format = VPI_YUV420_PLANAR_10BIT_P010; + break; + default: + enc_ctx->h26x_enc_cfg.input_format = VPI_YUV420_PLANAR; + break; + } + enc_ctx->h26x_enc_cfg.enc_params = enc_ctx->enc_params; + enc_ctx->h26x_enc_cfg.frame_ctx = frame_hwctx; + + /*Call the VPE h26x encoder initialization function*/ + ret = enc_ctx->api->init(enc_ctx->ctx, &enc_ctx->h26x_enc_cfg); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "vpe_h264x_encode_init failure\n"); + goto error; + } + + return 0; + +error: + vpe_h26x_encode_close(avctx); + return ret; +} + +static int vpe_h26x_encode_send_frame(AVCodecContext *avctx, + const AVFrame *input_frame) +{ + int i = 0; + int ret = 0; + VpeH26xEncFrm *frame_queued = NULL; + VpeH26xEncCtx *enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + VpiFrame *vpi_frame; + VpiCtrlCmdParam cmd; + + if (!input_frame) { + enc_ctx->no_input_frm = 1; + cmd.cmd = VPI_CMD_H26xENC_SET_NO_INFRM; + cmd.data = &enc_ctx->no_input_frm; + ret = enc_ctx->api->control(enc_ctx->ctx, (void *)&cmd, NULL); + if (ret != 0) { + av_log(enc_ctx, AV_LOG_ERROR, "H26x_enc control NO_INFRM failed\n"); + return AVERROR_EXTERNAL; + } + return ret; + } + + vpi_frame = (VpiFrame 
*)input_frame->data[0]; + vpi_frame->used_cnt++; + + enc_ctx->no_input_frm = 0; + /*Look for empty wait member in the queue for input_frame storing*/ + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if (enc_ctx->frame_queue_for_enc[i].state == 0) { + frame_queued = (VpeH26xEncFrm *)&enc_ctx->frame_queue_for_enc[i]; + + /*Fill the queue member with input_frame*/ + frame_queued->state = 1; + frame_queued->frame_index = enc_ctx->frame_index; + frame_queued->in_pass_one_queue = 0; + if (!frame_queued->frame) { + frame_queued->frame = av_frame_alloc(); + if (!frame_queued->frame) { + av_log(enc_ctx, AV_LOG_ERROR, + "No av frame mem alloc for enc data\n"); + return AVERROR(ENOMEM); + } + } + av_frame_unref(frame_queued->frame); + ret = av_frame_ref(frame_queued->frame, input_frame); + enc_ctx->frame_index++; + + break; + } + } + if (i == MAX_WAIT_DEPTH) + ret = AVERROR(EAGAIN); + + return ret; +} + +static int vpe_h26x_encode_receive_packet(AVCodecContext *avctx, + AVPacket *av_pkt) +{ + int ret = 0; + int enc_ret = 0; + VpiFrame vpi_frame; + VpiPacket vpi_packet; + VpiCtrlCmdParam cmd; + VpeH26xEncCtx *enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + + memset(&vpi_frame, 0, sizeof(VpiFrame)); + memset(&vpi_packet, 0, sizeof(VpiPacket)); + ret = vpe_h26xe_find_frame_for_enc(enc_ctx, &vpi_frame); + if (ret < 0) { + return ret; + } + + /*Call the VPE h26x encoder encoding function*/ + enc_ret = enc_ctx->api->encode(enc_ctx->ctx, &vpi_frame, &vpi_packet); + if (enc_ret >= VPI_ENC_FLUSH_IDLE_READY) { + /*No input at flush stage*/ + enc_ctx->no_input_frm = 1; + cmd.cmd = VPI_CMD_H26xENC_SET_NO_INFRM; + cmd.data = &enc_ctx->no_input_frm; + if (enc_ctx->api->control(enc_ctx->ctx, (void *)&cmd, NULL) != 0) { + av_log(avctx, AV_LOG_ERROR, "H26x_enc control NO_INFRM failed"); + return AVERROR_EXTERNAL; + } + } else { + enc_ctx->no_input_frm = 0; + } + + ret = vpe_h26xe_output_avpacket(avctx, av_pkt, &vpi_packet, enc_ret); + + return ret; +} + +static av_cold int vpe_h26x_encode_close(AVCodecContext *avctx) +{ + int ret = 0; + VpeH26xEncCtx *enc_ctx = (VpeH26xEncCtx *)avctx->priv_data; + + enc_ctx->api->close(enc_ctx->ctx); + av_buffer_unref(&enc_ctx->hwframe); + ret = vpi_destroy(enc_ctx->ctx); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "h26x encoder vpi_destroy failure\n"); + } + + return ret; +} + +#define OFFSETOPT(x) offsetof(VpeH26xEncCtx, x) +#define FLAGS \ + (AV_OPT_FLAG_ENCODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_EXPORT) + +static const AVOption vpe_h26x_encode_options[] = { + { "crf", + "VCE Constant rate factor mode. Works with lookahead turned on.", + OFFSETOPT(crf), + AV_OPT_TYPE_INT, + { .i64 = -1 }, + -1, + 51, + FLAGS }, + { "preset", + "Set the encoding preset", + OFFSETOPT(preset), + AV_OPT_TYPE_STRING, + { .str = NULL }, + .flags = FLAGS }, + { "profile", + "Set encode profile. HEVC:0-2; H264:9-12", + OFFSETOPT(profile), + AV_OPT_TYPE_STRING, + { .str = NULL }, + .flags = FLAGS }, + { "level", + "Set encode level", + OFFSETOPT(level), + AV_OPT_TYPE_STRING, + { .str = NULL }, + .flags = FLAGS }, + { "enc_params", + "Override the VPE h264/hevc configuration using a :-separated list of " + "key=value parameters. 
For more details, please refer to the document at" + "https://github.com/VeriSilicon/VPE/blob/master/doc/enc_params_h26x.md", + OFFSETOPT(enc_params), + AV_OPT_TYPE_STRING, + { .str = NULL }, + .flags = FLAGS }, + { NULL }, +}; + +static const AVCodecDefault vpe_h264_encode_defaults[] = { + { NULL }, +}; + +static const AVCodecDefault vpe_hevc_encode_defaults[] = { + { NULL }, +}; + +static const AVClass vpe_encode_h264_class = { + .class_name = "h264e_vpe", + .item_name = av_default_item_name, + .option = vpe_h26x_encode_options, + .version = LIBAVUTIL_VERSION_INT, +}; + +static const AVClass vpe_encode_hevc_class = { + .class_name = "hevce_vpe", + .item_name = av_default_item_name, + .option = vpe_h26x_encode_options, + .version = LIBAVUTIL_VERSION_INT, +}; + +AVCodec ff_h264_vpe_encoder = { + .name = "h264enc_vpe", + .long_name = NULL_IF_CONFIG_SMALL("H264 (VPE VC8000E)"), + .type = AVMEDIA_TYPE_VIDEO, + .id = AV_CODEC_ID_H264, + .priv_data_size = sizeof(VpeH26xEncCtx), + .init = &vpe_h26x_encode_init, + .close = &vpe_h26x_encode_close, + .send_frame = &vpe_h26x_encode_send_frame, + .receive_packet = &vpe_h26x_encode_receive_packet, + .priv_class = &vpe_encode_h264_class, + .capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE, + .defaults = vpe_h264_encode_defaults, + .pix_fmts = + (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_YUV420P, + AV_PIX_FMT_NONE }, + .wrapper_name = "vpe", +}; + +AVCodec ff_hevc_vpe_encoder = { + .name = "hevcenc_vpe", + .long_name = NULL_IF_CONFIG_SMALL("HEVC (VPE VC8000E)"), + .type = AVMEDIA_TYPE_VIDEO, + .id = AV_CODEC_ID_HEVC, + .priv_data_size = sizeof(VpeH26xEncCtx), + .init = &vpe_h26x_encode_init, + .close = &vpe_h26x_encode_close, + .send_frame = &vpe_h26x_encode_send_frame, + .receive_packet = &vpe_h26x_encode_receive_packet, + .priv_class = &vpe_encode_hevc_class, + .capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE, + .defaults = vpe_hevc_encode_defaults, + .pix_fmts = + (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_YUV420P, + AV_PIX_FMT_NONE }, + .wrapper_name = "vpe", +}; diff --git a/libavcodec/vpe_h26xenc.h b/libavcodec/vpe_h26xenc.h new file mode 100755 index 0000000..e7bdeb0 --- /dev/null +++ b/libavcodec/vpe_h26xenc.h @@ -0,0 +1,83 @@ +/* + * Verisilicon VPE Video H264/Hevc Encoder interface + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef AVCODEC_VPE_H26XENC_H +#define AVCODEC_VPE_H26XENC_H + +#include +#include + +#define MAX_WAIT_DEPTH 78 + +typedef struct VpeH26xEncFrm { + /*The state of used or not*/ + int state; + + /*In pass one queue or not*/ + int in_pass_one_queue; + + /*The index of the frame*/ + int frame_index; + + /*The pointer for input AVFrame*/ + AVFrame *frame; +} VpeH26xEncFrm; + +typedef struct VpeH26xEncCtx { + AVClass *class; + /*The hardware frame context containing the input frames*/ + AVBufferRef *hwframe; + + /*The hardware device context*/ + AVBufferRef *hwdevice; + + /*The name of the device*/ + char *dev_name; + + /*The name of h264 or h265 module*/ + char module_name[20]; + + /*VPI context*/ + VpiCtx ctx; + + /*The pointer of the VPE API*/ + VpiApi *api; + + /*VPE h26x encoder configure*/ + H26xEncCfg h26x_enc_cfg; + + /*The queue for the input AVFrames*/ + VpeH26xEncFrm frame_queue_for_enc[MAX_WAIT_DEPTH]; + + /*No input frame*/ + int no_input_frm; + + /*The index of the frame*/ + int frame_index; + + /*Params passed from ffmpeg */ + int crf; /*VCE Constant rate factor mode*/ + char *preset; /*Set the encoding preset*/ + char *profile; /*Set the profile of the encoding*/ + char *level; /*Set the level of the encoding*/ + char *enc_params; /*The encoding parameters*/ +} VpeH26xEncCtx; + +#endif From patchwork Sun May 31 06:29:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20037 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 5B77944A712 for ; Sun, 31 May 2020 09:29:31 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 303D968048C; Sun, 31 May 2020 09:29:31 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (unknown [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id EEE10681868 for ; Sun, 31 May 2020 09:29:22 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906561; h=from:subject:to:date:message-id; bh=zd602rKKol0PZ2lmX5Tj1GrNtxbq6ZOe1CnWtX6WVz4=; b=f36uvoNr/0CXUry1vOvyPMxIj5pf6khqDqby7tzjMpUbxFS5VKEkp+92/uPbIrxvA456emTd1io WAo3Yk30Ae9Tx6os1TZivIcod8NMPEPW+gtzi7/PNiijVNUKOwYKrPmXAyzzwbQS4fgqMy4BOCcPr flwREcMmYJPQS+6GnDU= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi id 14.03.0123.003; Sun, 31 May 2020 14:29:20 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 6/8] vcodec/vp9enc: Add vp9 VPE HW encoder Thread-Index: AdY3FNEGvpr6mSsySOWL8bFegAEaiA== Date: Sun, 31 May 2020 06:29:19 +0000 Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A53756@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--10.783500-0.000000-31 x-tm-as-user-approved-sender: Yes x-tm-as-user-blocked-sender: No 
MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 6/8] vcodec/vp9enc: Add vp9 VPE HW encoder X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" This encoder uses VPI(VPE Interface) API and library for vp9 encoding. Signed-off-by: Guiyong.zhang --- configure | 1 + libavcodec/Makefile | 1 + libavcodec/allcodecs.c | 1 + libavcodec/vpe_vp9enc.c | 536 ++++++++++++++++++++++++++++++++++++++++++++++++ libavcodec/vpe_vp9enc.h | 83 ++++++++ 5 files changed, 622 insertions(+) create mode 100755 libavcodec/vpe_vp9enc.c create mode 100755 libavcodec/vpe_vp9enc.h -- 1.8.3.1 diff --git a/configure b/configure index 10d1322..0522f21 100755 --- a/configure +++ b/configure @@ -3134,6 +3134,7 @@ vp9_qsv_encoder_deps="libmfx MFX_CODEC_VP9" vp9_qsv_encoder_select="qsvenc" vp9_v4l2m2m_decoder_deps="v4l2_m2m vp9_v4l2_m2m" vp9_vpe_decoder_deps="vpe" +vp9_vpe_encoder_deps="vpe" wmv3_crystalhd_decoder_select="crystalhd" # parsers diff --git a/libavcodec/Makefile b/libavcodec/Makefile index ffdfae4..7cf1d4e 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -766,6 +766,7 @@ OBJS-$(CONFIG_HEVC_VPE_DECODER) += vpe_hevcdec.o vpe_dec_common.o OBJS-$(CONFIG_VP9_VPE_DECODER) += vpe_vp9dec.o vpe_dec_common.o OBJS-$(CONFIG_H264_VPE_ENCODER) += vpe_h26xenc.o OBJS-$(CONFIG_HEVC_VPE_ENCODER) += vpe_h26xenc.o +OBJS-$(CONFIG_VP9_VPE_ENCODER) += vpe_vp9enc.o # (AD)PCM decoders/encoders OBJS-$(CONFIG_PCM_ALAW_DECODER) += pcm.o diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index 91f4e61..cebfc56 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -813,6 +813,7 @@ extern AVCodec ff_hevc_vpe_decoder; extern AVCodec ff_vp9_vpe_decoder; extern AVCodec ff_h264_vpe_encoder; extern AVCodec ff_hevc_vpe_encoder; +extern AVCodec ff_vp9_vpe_encoder; // The iterate API is not usable with ossfuzz due to the excessive size of binaries created #if CONFIG_OSSFUZZ diff --git a/libavcodec/vpe_vp9enc.c b/libavcodec/vpe_vp9enc.c new file mode 100755 index 0000000..087ddc5 --- /dev/null +++ b/libavcodec/vpe_vp9enc.c @@ -0,0 +1,536 @@ +/* + * Verisilicon VPE VP9 Encoder + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/pixfmt.h" +#include "vpe_vp9enc.h" + +#define OFFSET(x) (offsetof(VpeEncVp9Ctx, x)) +#define FLAGS \ + (AV_OPT_FLAG_ENCODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_EXPORT) + +const static AVOption vpe_enc_vp9_options[] = { + { "preset", + "Set the encoding preset, superfast/fast/medium/slow/superslow", + OFFSET(preset), + AV_OPT_TYPE_STRING, + { .str = "fast" }, + 0, + 0, + FLAGS }, + { "effort", + "Encoder effort level, 0=fastest, 5=best quality", + OFFSET(effort), + AV_OPT_TYPE_INT, + { .i64 = 0 }, + 0, + 5, + FLAGS }, + { "lag_in_frames", + "Number of frames to lag. Up to 25. [0]", + OFFSET(lag_in_frames), + AV_OPT_TYPE_INT, + { .i64 = 0 }, + 0, + 25, + FLAGS }, + { "passes", + "Number of passes (1/2). [1]", + OFFSET(passes), + AV_OPT_TYPE_INT, + { .i64 = 1 }, + 1, + 2, + FLAGS }, + + /*Detail parameters described in + https://github.com/VeriSilicon/VPE/blob/master/doc/enc_params_vp9.md */ + { "enc_params", + "Override the enc configuration", + OFFSET(enc_params), + AV_OPT_TYPE_STRING, + { 0 }, + 0, + 0, + FLAGS }, + { NULL }, +}; + +static int vpe_vp9enc_init_hwctx(AVCodecContext *avctx) +{ + int ret = 0; + AVHWFramesContext *hwframe_ctx; + VpeEncVp9Ctx *ctx = avctx->priv_data; + + if (avctx->hw_frames_ctx) { + ctx->hw_frame = av_buffer_ref(avctx->hw_frames_ctx); + if (!ctx->hw_frame) { + ret = AVERROR(ENOMEM); + goto error; + } + + hwframe_ctx = (AVHWFramesContext *)ctx->hw_frame->data; + ctx->hw_device = av_buffer_ref(hwframe_ctx->device_ref); + if (!ctx->hw_device) { + ret = AVERROR(ENOMEM); + goto error; + } + } else { + if (avctx->hw_device_ctx) { + ctx->hw_device = av_buffer_ref(avctx->hw_device_ctx); + if (!ctx->hw_device) { + ret = AVERROR(ENOMEM); + goto error; + } + } else { + ret = av_hwdevice_ctx_create(&ctx->hw_device, AV_HWDEVICE_TYPE_VPE, + NULL, NULL, 0); + if (ret < 0) + goto error; + } + + ctx->hw_frame = av_hwframe_ctx_alloc(ctx->hw_device); + if (!ctx->hw_frame) { + ret = AVERROR(ENOMEM); + goto error; + } + } + + return ret; +error: + return ret; +} + +static void vpe_vp9enc_output_packet(AVCodecContext *avctx, VpeEncVp9Ctx *ctx, + VpiPacket *vpi_packet, + AVPacket *out_packet) +{ + memcpy(out_packet->data, vpi_packet->data, vpi_packet->size); + out_packet->size = vpi_packet->size; + out_packet->pts = vpi_packet->pts; + out_packet->dts = vpi_packet->pkt_dts; +} + +static int vpe_vp9enc_input_frame(AVCodecContext *avctx, AVFrame *input_image, + VpiFrame *out_frame) +{ + VpiFrame *in_vpi_frame; + if (input_image) { + in_vpi_frame = (VpiFrame *)input_image->data[0]; + out_frame->width = input_image->width; + out_frame->height = input_image->height; + out_frame->linesize[0] = input_image->linesize[0]; + out_frame->linesize[1] = input_image->linesize[1]; + out_frame->linesize[2] = input_image->linesize[2]; + out_frame->key_frame = input_image->key_frame; + out_frame->pts = input_image->pts; + out_frame->pkt_dts = input_image->pkt_dts; + out_frame->data[0] = in_vpi_frame->data[0]; + out_frame->data[1] = in_vpi_frame->data[1]; + out_frame->data[2] = in_vpi_frame->data[2]; + } else { + memset(out_frame, 0, sizeof(VpiFrame)); + } + + return 0; +} + +static void vpe_dump_pic(const char *str, const char *sep, VpeEncVp9Ctx *ctx, + int cur) +{ + int i = 0; + + av_log(NULL, AV_LOG_DEBUG, "%s %d\n", str, cur); + for (i = 0; i < 
MAX_WAIT_DEPTH; i++) { + if ((ctx->pic_wait_list[i].state == 0) && + (ctx->pic_wait_list[i].poc == 0) && + (ctx->pic_wait_list[i].trans_pic == NULL)) + continue; + + av_log(NULL, AV_LOG_DEBUG, "pic[%d] state=%d, poc=%d, data=%p", i, + ctx->pic_wait_list[i].state, ctx->pic_wait_list[i].poc, + ctx->pic_wait_list[i].trans_pic); + + if (ctx->pic_wait_list[i].poc == cur) + av_log(NULL, AV_LOG_DEBUG, "%s", sep); + av_log(NULL, AV_LOG_DEBUG, "\n"); + } + av_log(NULL, AV_LOG_DEBUG, "\n"); +} + +static int vpe_vp9enc_receive_pic(AVCodecContext *avctx, VpeEncVp9Ctx *ctx, + const AVFrame *input_image) +{ + VpeEncVp9Pic *transpic = NULL; + VpiFrame *vpi_frame; + int i; + + ctx->poc = ctx->poc + 1; + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if (ctx->pic_wait_list[i].state == 0) { + transpic = &ctx->pic_wait_list[i]; + break; + } + } + + if (i == MAX_WAIT_DEPTH) { + return AVERROR(EAGAIN); + } + + if (input_image) { + if (!transpic->trans_pic) { + transpic->trans_pic = av_frame_alloc(); + if (!transpic->trans_pic) { + return AVERROR(ENOMEM); + } + } + av_frame_unref(transpic->trans_pic); + av_frame_ref(transpic->trans_pic, input_image); + } else { + av_log(avctx, AV_LOG_DEBUG, "input image is empty\n"); + } + + transpic->state = 1; + transpic->poc = ctx->poc; + if (input_image) { + vpi_frame = (VpiFrame *)input_image->data[0]; + vpi_frame->used_cnt++; + } + + vpe_dump_pic("vpe_vp9enc_receive_pic", " <---", ctx, ctx->poc); + return 0; +} + +static int vpe_vp9enc_get_pic(AVCodecContext *avctx, VpeEncVp9Ctx *ctx, + int need_poc, AVFrame **out_image) +{ + VpeEncVp9Pic *transpic = NULL; + int i; + + vpe_dump_pic("vpe_vp9enc_get_pic", " [****]", ctx, need_poc); + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if (ctx->pic_wait_list[i].state == 1) { + transpic = &ctx->pic_wait_list[i]; + if (transpic && transpic->poc == need_poc) { + *out_image = transpic->trans_pic; + return 0; + } + } + } + + *out_image = NULL; + return -1; +} + +static int vpe_vp9enc_consume_pic(AVCodecContext *avctx, VpeEncVp9Ctx *ctx, + int consume_poc) +{ + VpeEncVp9Pic *transpic = NULL; + VpiFrame *vpi_frame; + int i; + + vpe_dump_pic("vpe_vp9enc_consume_pic", " --->", ctx, consume_poc); + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if (ctx->pic_wait_list[i].state == 1) { + transpic = &ctx->pic_wait_list[i]; + if (transpic && transpic->poc == consume_poc) { + goto find_pic; + } + } + } + if (i == MAX_WAIT_DEPTH) { + av_log(avctx, AV_LOG_ERROR, + "vpe_vp9enc_consume_pic: poc %d is not matched\n", consume_poc); + return -1; + } + +find_pic: + transpic->poc = 0; + transpic->state = 0; + vpi_frame = (VpiFrame *)transpic->trans_pic->data[0]; + vpi_frame->used_cnt--; + if (vpi_frame->used_cnt == 0) + vpi_frame->locked = 0; + + av_frame_unref(transpic->trans_pic); + transpic->trans_pic = NULL; + return 0; +} + +static av_cold int vpe_enc_vp9_free_frames(AVCodecContext *avctx) +{ + VpeEncVp9Ctx *ctx = (VpeEncVp9Ctx *)avctx->priv_data; + int frame_tobe_free = 0; + VpiCtrlCmdParam cmd; + int ret = 0; + + /* Get picture number to be free */ + cmd.cmd = VPI_CMD_VP9ENC_GET_PIC_TOBE_FREE; + ret = ctx->vpi->control(ctx->ctx, (void *)&cmd, &frame_tobe_free); + if (ret < 0) + return AVERROR_EXTERNAL; + + if (frame_tobe_free > 0) { + ret = vpe_vp9enc_consume_pic(avctx, ctx, frame_tobe_free); + if (ret != 0) + return ret; + } + + return 0; +} + +static av_cold int vpe_enc_vp9_get_frames(AVCodecContext *avctx) +{ + VpeEncVp9Ctx *ctx = (VpeEncVp9Ctx *)avctx->priv_data; + int numbers_wait_frames = 0; + int i = 0; + + for (i = 0; i < MAX_WAIT_DEPTH; i++) { + if 
(ctx->pic_wait_list[i].trans_pic != NULL) { + numbers_wait_frames++; + } + } + return numbers_wait_frames; +} + +static av_cold int vpe_enc_vp9_close(AVCodecContext *avctx) +{ + VpeEncVp9Ctx *ctx = avctx->priv_data; + + if (!ctx->initialized) + return 0; + + ctx->vpi->close(ctx->ctx); + + av_buffer_unref(&ctx->hw_frame); + av_buffer_unref(&ctx->hw_device); + ctx->initialized = 0; + + if (vpi_destroy(ctx->ctx)) { + return AVERROR_EXTERNAL; + } + return 0; +} + +static av_cold int vpe_enc_vp9_init(AVCodecContext *avctx) +{ + VpeEncVp9Ctx *ctx = avctx->priv_data; + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + VpiEncVp9Opition *psetting = &ctx->vp9cfg; + int ret = 0; + + if (ctx->initialized == 1) { + return 0; + } + + ret = vpe_vp9enc_init_hwctx(avctx); + if (ret) { + av_log(avctx, AV_LOG_ERROR, "vpe_encvp9_init_hwctx failure\n"); + goto error; + } + hwframe_ctx = (AVHWFramesContext *)ctx->hw_frame->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + + avctx->pix_fmt = AV_PIX_FMT_YUV420P; + psetting->preset = ctx->preset; + psetting->effort = ctx->effort; + psetting->lag_in_frames = ctx->lag_in_frames; + psetting->passes = ctx->passes; + psetting->framectx = vpeframe_ctx->frame; + psetting->enc_params = ctx->enc_params; + psetting->width = avctx->width; + psetting->height = avctx->height; + + if ((avctx->bit_rate >= 10000) && (avctx->bit_rate <= 60000000)) { + psetting->bit_rate = avctx->bit_rate; + } else { + av_log(avctx, AV_LOG_WARNING, "invalid bit_rate=%d\n", + psetting->bit_rate); + } + + if ((avctx->bit_rate_tolerance >= 10000) && + (avctx->bit_rate_tolerance <= 60000000)) { + psetting->bit_rate_tolerance = avctx->bit_rate_tolerance; + } else { + av_log(avctx, AV_LOG_WARNING, "invalid bit_rate_tolerance=%d\n", + avctx->bit_rate_tolerance); + } + + if ((avctx->framerate.num > 0) && (avctx->framerate.num < 1048576)) { + psetting->frame_rate_numer = avctx->framerate.num; + } else { + av_log(avctx, AV_LOG_WARNING, "invalid framerate.num=%d\n", + avctx->framerate.num); + } + if ((avctx->framerate.den > 0) && (avctx->framerate.den < 1048576)) { + psetting->frame_rate_denom = avctx->framerate.den; + } else { + av_log(avctx, AV_LOG_WARNING, "invalid framerate.den=%d\n", + avctx->framerate.den); + } + + if (avctx->profile == FF_PROFILE_VP9_0 || + avctx->profile == FF_PROFILE_VP9_1) { + psetting->force_8bit = 1; + } else { + psetting->force_8bit = 0; + } + + ret = vpi_create(&ctx->ctx, &ctx->vpi, VP9ENC_VPE); + if (ret) { + ret = AVERROR_EXTERNAL; + av_log(avctx, AV_LOG_ERROR, "VP9 enc vpi_create failed\n"); + goto error; + } + + ret = ctx->vpi->init(ctx->ctx, psetting); + if (ret) { + av_log(avctx, AV_LOG_ERROR, "VP9 enc init failed\n"); + ret = AVERROR_EXTERNAL; + goto error; + } + + ctx->initialized = 1; + ctx->poc = 0; + ctx->flush_state = 0; + memset(ctx->pic_wait_list, 0, sizeof(ctx->pic_wait_list)); + return ret; + +error: + vpe_enc_vp9_close(avctx); + return ret; +} + +static int vpe_enc_vp9_send_frame(AVCodecContext *avctx, + const AVFrame *input_frame) +{ + VpeEncVp9Ctx *ctx = (VpeEncVp9Ctx *)avctx->priv_data; + VpiCtrlCmdParam cmd; + int numbers_wait_frames = 0; + int ret = 0; + + /* Store input pictures */ + ret = vpe_vp9enc_receive_pic(avctx, ctx, input_frame); + if (ret != 0) { + return ret; + } + + /* Get back encoder wait list frames and send back to codec */ + numbers_wait_frames = vpe_enc_vp9_get_frames(avctx); + + /* Tell encoder how many frames are waitting for encoding*/ + cmd.cmd = VPI_CMD_VP9ENC_SET_PENDDING_FRAMES_COUNT; + cmd.data 
= &numbers_wait_frames; + ret = ctx->vpi->control(ctx->ctx, (void *)&cmd, NULL); + if (ret != 0) { + return AVERROR_EXTERNAL; + } + + return 0; +} + +static int vpe_enc_vp9_receive_packet(AVCodecContext *avctx, AVPacket *avpkt) +{ + AVFrame *pic = NULL; + VpeEncVp9Ctx *ctx = (VpeEncVp9Ctx *)avctx->priv_data; + VpiFrame vpi_frame; + VpiPacket vpi_packet; + VpiCtrlCmdParam cmd; + int next_frame = 0, inputs = 0; + int ret = 0, ret1 = 0; + int buffer_size = 0; + + /* Get next picture number to be encoded */ + cmd.cmd = VPI_CMD_VP9ENC_GET_NEXT_PIC; + ret = ctx->vpi->control(ctx->ctx, &cmd, &next_frame); + if (ret != 0) + return AVERROR_EXTERNAL; + + /* Get next picture to be encoded */ + ret = vpe_vp9enc_get_pic(avctx, ctx, next_frame + 1, &pic); + if (ret != 0) + return AVERROR(EAGAIN); + + inputs = vpe_enc_vp9_get_frames(avctx); + if (inputs == 0) { + av_log(avctx, AV_LOG_DEBUG, "Input Q was empty, send EOF\n"); + return AVERROR_EOF; + } + + /* Convert input picture from AVFrame to VpiFrame*/ + ret = vpe_vp9enc_input_frame(avctx, pic, &vpi_frame); + if (ret != 0) + return AVERROR_EXTERNAL; + + /*Allocate AVPacket bufffer*/ + buffer_size = (avctx->width * avctx->height * 3) / 4; + buffer_size += buffer_size / 2; + buffer_size &= (~4095); + ret = av_new_packet(avpkt, buffer_size); + if (ret != 0) + return ret; + + vpi_packet.data = avpkt->data; + ret = ctx->vpi->encode(ctx->ctx, (void *)&vpi_frame, (void *)&vpi_packet); + if (ret == 0) { + if (vpi_packet.size == 0) { + av_packet_unref(avpkt); + ret = AVERROR(EAGAIN); + goto exit; + } + /*Convert output packet from VpiPacket to AVPacket*/ + vpe_vp9enc_output_packet(avctx, ctx->ctx, &vpi_packet, avpkt); + } else { + av_log(avctx, AV_LOG_ERROR, "VP9 enc encode failed\n"); + ret = AVERROR_EXTERNAL; + } + +exit: + ret1 = vpe_enc_vp9_free_frames(avctx); + if (ret != 0) + return ret; + else + return ret1; +} + +static const AVClass vpe_enc_vp9_class = { + .class_name = "vp9enc_vpe", + .item_name = av_default_item_name, + .option = vpe_enc_vp9_options, + .version = LIBAVUTIL_VERSION_INT, +}; + +AVCodec ff_vp9_vpe_encoder = { + .name = "vp9enc_vpe", + .long_name = NULL_IF_CONFIG_SMALL("VP9 (VPE BIGSEA)"), + .type = AVMEDIA_TYPE_VIDEO, + .id = AV_CODEC_ID_VP9, + .priv_data_size = sizeof(VpeEncVp9Ctx), + .init = &vpe_enc_vp9_init, + .send_frame = &vpe_enc_vp9_send_frame, + .receive_packet = &vpe_enc_vp9_receive_packet, + .close = &vpe_enc_vp9_close, + .priv_class = &vpe_enc_vp9_class, + .capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE, + .pix_fmts = + (const enum AVPixelFormat[]){ AV_PIX_FMT_VPE, AV_PIX_FMT_YUV420P, + AV_PIX_FMT_NONE }, + .wrapper_name = "vpe", +}; diff --git a/libavcodec/vpe_vp9enc.h b/libavcodec/vpe_vp9enc.h new file mode 100755 index 0000000..a93b11a --- /dev/null +++ b/libavcodec/vpe_vp9enc.h @@ -0,0 +1,83 @@ +/* + * Verisilicon VPE VP9 Encoder head file + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef VPE_ENCVP9_H +#define VPE_ENCVP9_H + +#include + +#include +#include + +#include "avcodec.h" +#include "libavutil/log.h" +#include "libavutil/hwcontext.h" +#include "libavutil/hwcontext_vpe.h" +#include "libavutil/buffer.h" +#include "libavutil/error.h" +#include "libavutil/frame.h" +#include "libavutil/opt.h" + +#define MAX_WAIT_DEPTH 44 + +typedef struct { + int state; + int poc; + AVFrame *trans_pic; +} VpeEncVp9Pic; + +/* + * Private VPE VP9 encoder hardware structure + */ +typedef struct { + const AVClass *av_class; + /*The hardware device context*/ + AVBufferRef *hw_device; + /*The hardware frame context containing the input frames*/ + AVBufferRef *hw_frame; + /*VPI main context*/ + VpiCtx ctx; + /*VPE codec function pointer*/ + VpiApi *vpi; + /*VPE init state*/ + uint32_t initialized; + /*Input avframe queue*/ + VpeEncVp9Pic pic_wait_list[MAX_WAIT_DEPTH]; + /*Input avframe index*/ + int poc; + + /*Encoder config*/ + VpiEncVp9Opition vp9cfg; + /*Set the encoding preset, superfast/fast/medium/slow/superslow*/ + char *preset; + /*Encoder effort level*/ + int effort; + /*Number of frames to lag. */ + int lag_in_frames; + /*"Number of encoding passes*/ + int passes; + /*More encoding parameters*/ + char *enc_params; + /*Encoder flush state*/ + int flush_state; +} VpeEncVp9Ctx; + + +#endif /*VPE_ENCVP9_H*/ From patchwork Sun May 31 06:29:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20038 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 9F46C44A961 for ; Sun, 31 May 2020 09:30:08 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 83B726880AB; Sun, 31 May 2020 09:30:08 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (unknown [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id 499416880CC for ; Sun, 31 May 2020 09:29:59 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906591; h=from:subject:to:date:message-id; bh=EXiOWMHiGvIAqFvT+Z8p35lC2jK+Iayw5vcCGBp33xw=; b=b8Z/SJGSZJgahdnJWweETbwMQiIqvwxMXFHUOnzcS1IXAihqbIp/GbZDVHH5O5DZcSSR1HAEY4X /UkaPvRRqDTbNej/FKpcjhkwQWJZgyF45SPC+8JrSPDtJJIU9opZqmamlxCKPYXCpU1dhiQi/gZ6m /lbVA1dC6vWhXZfaA6Q= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi id 14.03.0123.003; Sun, 31 May 2020 14:29:50 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 7/8] avfilter/spliter: Add VPE spliter filter Thread-Index: AdY3FOMlC3A2Q1OaTROc0feSBDrBUA== Date: Sun, 31 May 2020 06:29:50 +0000 Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A53763@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--9.912900-0.000000-31 x-tm-as-user-approved-sender: Yes 
x-tm-as-user-blocked-sender: No MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 7/8] avfilter/spliter: Add VPE spliter filter X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" This filter splits one input into multiple outputs with different picture data. Signed-off-by: Qin.Wang --- configure | 1 + libavfilter/Makefile | 1 + libavfilter/allfilters.c | 1 + libavfilter/vf_spliter_vpe.c | 319 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 322 insertions(+) create mode 100755 libavfilter/vf_spliter_vpe.c -- 1.8.3.1 diff --git a/configure b/configure index 0522f21..6146954 100755 --- a/configure +++ b/configure @@ -3642,6 +3642,7 @@ vpp_qsv_filter_select="qsvvpp" xfade_opencl_filter_deps="opencl" yadif_cuda_filter_deps="ffnvcodec" yadif_cuda_filter_deps_any="cuda_nvcc cuda_llvm" +spliter_vpe_filter_deps="vpe" # examples avio_list_dir_deps="avformat avutil" diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 5123540..104ec0c 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -466,6 +466,7 @@ OBJS-$(CONFIG_YAEPBLUR_FILTER) += vf_yaepblur.o OBJS-$(CONFIG_ZMQ_FILTER) += f_zmq.o OBJS-$(CONFIG_ZOOMPAN_FILTER) += vf_zoompan.o OBJS-$(CONFIG_ZSCALE_FILTER) += vf_zscale.o +OBJS-$(CONFIG_SPLITER_VPE_FILTER) += vf_spliter_vpe.o OBJS-$(CONFIG_ALLRGB_FILTER) += vsrc_testsrc.o OBJS-$(CONFIG_ALLYUV_FILTER) += vsrc_testsrc.o diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 1183e40..353a3ca 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -444,6 +444,7 @@ extern AVFilter ff_vf_yaepblur; extern AVFilter ff_vf_zmq; extern AVFilter ff_vf_zoompan; extern AVFilter ff_vf_zscale; +extern AVFilter ff_vf_spliter_vpe; extern AVFilter ff_vsrc_allrgb; extern AVFilter ff_vsrc_allyuv; diff --git a/libavfilter/vf_spliter_vpe.c b/libavfilter/vf_spliter_vpe.c new file mode 100755 index 0000000..0be2b09 --- /dev/null +++ b/libavfilter/vf_spliter_vpe.c @@ -0,0 +1,319 @@ +/* + * Verisilicon VPE Spliter Filter + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details.
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include + +#include "avfilter.h" +#include "filters.h" +#include "internal.h" +#include "libavutil/mem.h" +#include "libavutil/opt.h" +#include "libavutil/frame.h" +#include "libavutil/buffer.h" +#include "libavutil/internal.h" +#include "libavutil/hwcontext.h" +#include "libavutil/hwcontext_vpe.h" + +typedef struct SpliterVpeContext { + const AVClass *class; + int nb_outputs; + struct { + int enabled; + int out_index; + int flag; + int width; + int height; + struct { + int enabled; + int x; + int y; + int w; + int h; + } crop; + struct { + int enabled; + int w; + int h; + } scale; + } pic_info[PIC_INDEX_MAX_NUMBER]; +} SpliterVpeContext; + +static int spliter_vpe_out_config_props(AVFilterLink *outlink); + +static av_cold int spliter_vpe_init(AVFilterContext *ctx) +{ + SpliterVpeContext *s = ctx->priv; + int i, ret; + + for (i = 0; i < s->nb_outputs; i++) { + char name[32]; + AVFilterPad pad = { 0 }; + + snprintf(name, sizeof(name), "output%d", i); + pad.type = AVMEDIA_TYPE_VIDEO; + pad.name = av_strdup(name); + if (!pad.name) { + return AVERROR(ENOMEM); + } + pad.config_props = spliter_vpe_out_config_props; + + if ((ret = ff_insert_outpad(ctx, i, &pad)) < 0) { + av_freep(&pad.name); + return ret; + } + } + + for (i = 0; i < PIC_INDEX_MAX_NUMBER; i++) { + s->pic_info[i].out_index = -1; + } + + return 0; +} + +static av_cold void spliter_vpe_uninit(AVFilterContext *ctx) +{ + int i; + + for (i = 0; i < ctx->nb_outputs; i++) { + av_freep(&ctx->output_pads[i].name); + } +} + +static int spliter_vpe_config_props(AVFilterLink *inlink) +{ + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + VpiFrame *frame_hwctx; + AVFilterContext *dst = inlink->dst; + SpliterVpeContext *s = dst->priv; + int i; + + hwframe_ctx = (AVHWFramesContext *)inlink->hw_frames_ctx->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + frame_hwctx = vpeframe_ctx->frame; + + for (i = 0; i < PIC_INDEX_MAX_NUMBER; i++) { + s->pic_info[i].enabled = frame_hwctx->pic_info[i].enabled; + s->pic_info[i].flag = frame_hwctx->pic_info[i].flag; + s->pic_info[i].width = frame_hwctx->pic_info[i].width; + s->pic_info[i].height = frame_hwctx->pic_info[i].height; + } + s->pic_info[0].crop.enabled = frame_hwctx->pic_info[0].crop.enabled; + s->pic_info[0].crop.x = frame_hwctx->pic_info[0].crop.x; + s->pic_info[0].crop.y = frame_hwctx->pic_info[0].crop.y; + s->pic_info[0].crop.w = frame_hwctx->pic_info[0].crop.w; + s->pic_info[0].crop.h = frame_hwctx->pic_info[0].crop.h; + + return 0; +} + +static int spliter_vpe_out_config_props(AVFilterLink *outlink) +{ + AVFilterContext *src = outlink->src; + SpliterVpeContext *s = src->priv; + int out_index, pp_index, j; + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + VpiFrame *frame_hwctx; + + if (!src->inputs[0]->hw_frames_ctx) { + // for ffplay + return 0; + } + + outlink->hw_frames_ctx = av_buffer_ref(src->inputs[0]->hw_frames_ctx); + hwframe_ctx = (AVHWFramesContext *)outlink->hw_frames_ctx->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + frame_hwctx = vpeframe_ctx->frame; + frame_hwctx->nb_outputs = s->nb_outputs; + + for (out_index = 0; out_index < src->nb_outputs; out_index++) { + if (outlink == src->outputs[out_index]) { + break; + } + } + if (out_index == src->nb_outputs) { + av_log(src, 
AV_LOG_ERROR, "can't find output\n"); + return AVERROR_INVALIDDATA; + } + + for (pp_index = PIC_INDEX_MAX_NUMBER - 1; pp_index >= 0; pp_index--) { + if (s->pic_info[pp_index].enabled && !s->pic_info[pp_index].flag && + s->pic_info[pp_index].out_index == -1) { + break; + } + } + + for (j = 0; j < PIC_INDEX_MAX_NUMBER; j++) { + if (j == pp_index) { + continue; + } + if (frame_hwctx->pic_info[j].flag) { + continue; + } + frame_hwctx->pic_info[j].enabled = 0; + } + + outlink->w = s->pic_info[pp_index].width; + outlink->h = s->pic_info[pp_index].height; + s->pic_info[pp_index].out_index = out_index; + + return 0; +} + +static int spliter_vpe_query_formats(AVFilterContext *ctx) +{ + AVFilterFormats *fmts_list; + static const enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_VPE, + AV_PIX_FMT_NONE }; + + fmts_list = ff_make_format_list(pix_fmts); + if (!fmts_list) { + return AVERROR(ENOMEM); + } + + return ff_set_common_formats(ctx, fmts_list); +} + +static int spliter_vpe_filter_frame(AVFilterLink *inlink, AVFrame *frame) +{ + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + AVFilterContext *ctx = inlink->dst; + SpliterVpeContext *s = ctx->priv; + int i, j, pp_index, ret = AVERROR_UNKNOWN; + VpiPicInfo *pic_info; + VpiFrame *vpi_frame; + + hwframe_ctx = (AVHWFramesContext *)inlink->hw_frames_ctx->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + + pp_index = 0; + for (i = 0; i < ctx->nb_outputs; i++) { + AVFrame *buf_out; + + if (ff_outlink_get_status(ctx->outputs[i])) { + continue; + } + + if (ctx->inputs[0]->hw_frames_ctx) { + for (pp_index = 0; pp_index < PIC_INDEX_MAX_NUMBER; pp_index++) { + if (i == s->pic_info[pp_index].out_index) { + break; + } + } + if (pp_index == PIC_INDEX_MAX_NUMBER) { + av_log(ctx, AV_LOG_ERROR, "can't find pp_index\n"); + ret = AVERROR_UNKNOWN; + goto err_exit; + } + } + + if (i > 0) { + buf_out = av_frame_alloc(); + if (!buf_out) { + ret = AVERROR(ENOMEM); + goto err_exit; + } + ret = av_frame_ref(buf_out, frame); + if (ret < 0) { + goto err_exit; + } + + for (j = 1; j < PIC_INDEX_MAX_NUMBER; j++) { + if (buf_out->buf[j]) { + av_buffer_unref(&buf_out->buf[j]); + } + } + for (j = 1; j < PIC_INDEX_MAX_NUMBER; j++) { + buf_out->buf[j] = + av_buffer_alloc(sizeof(vpeframe_ctx->pic_info_size)); + if (buf_out->buf[j] == NULL) { + goto err_exit; + } + } + + } else { + buf_out = frame; + } + + for (j = 1; j < PIC_INDEX_MAX_NUMBER; j++) { + if (buf_out->buf[j] == NULL || buf_out->buf[j]->data == NULL) + continue; + pic_info = (VpiPicInfo *)buf_out->buf[j]->data; + if (j == pp_index) { + pic_info->enabled = 1; + } else { + pic_info->enabled = 0; + } + } + + vpi_frame = (VpiFrame *)buf_out->data[0]; + if (!vpi_frame) + goto err_exit; + + vpi_frame->nb_outputs = s->nb_outputs; + ret = ff_filter_frame(ctx->outputs[i], buf_out); + if (ret < 0) { + goto err_exit; + } + } + +err_exit: + return ret; +} + +#define OFFSET(x) offsetof(SpliterVpeContext, x) +#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM) +static const AVOption spliter_vpe_options[] = { { "outputs", + "set number of outputs", + OFFSET(nb_outputs), + AV_OPT_TYPE_INT, + { .i64 = 1 }, + 1, + 4, + FLAGS }, + { NULL } }; + +AVFILTER_DEFINE_CLASS(spliter_vpe); + +static const AVFilterPad spliter_vpe_inputs[] = + { { + .name = "default", + .type = AVMEDIA_TYPE_VIDEO, + .config_props = spliter_vpe_config_props, + .filter_frame = spliter_vpe_filter_frame, + }, + { NULL } }; + +AVFilter ff_vf_spliter_vpe = { + .name = "spliter_vpe", + .description = 
NULL_IF_CONFIG_SMALL("Filter to split pictures generated by " + "vpe"), + .priv_size = sizeof(SpliterVpeContext), + .priv_class = &spliter_vpe_class, + .init = spliter_vpe_init, + .uninit = spliter_vpe_uninit, + .query_formats = spliter_vpe_query_formats, + .inputs = spliter_vpe_inputs, + .outputs = NULL, + .flags = AVFILTER_FLAG_DYNAMIC_OUTPUTS, + .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE, +}; From patchwork Sun May 31 06:30:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Guiyong" X-Patchwork-Id: 20039 Return-Path: X-Original-To: patchwork@ffaux-bg.ffmpeg.org Delivered-To: patchwork@ffaux-bg.ffmpeg.org Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org [79.124.17.100]) by ffaux.localdomain (Postfix) with ESMTP id 849EF44A961 for ; Sun, 31 May 2020 09:30:28 +0300 (EEST) Received: from [127.0.1.1] (localhost [127.0.0.1]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTP id 696EC689259; Sun, 31 May 2020 09:30:28 +0300 (EEST) X-Original-To: ffmpeg-devel@ffmpeg.org Delivered-To: ffmpeg-devel@ffmpeg.org Received: from shasxm03.verisilicon.com (unknown [101.89.135.45]) by ffbox0-bg.mplayerhq.hu (Postfix) with ESMTPS id A4C77680637 for ; Sun, 31 May 2020 09:30:20 +0300 (EEST) Content-Language: en-US DKIM-Signature: v=1; a=rsa-sha256; d=Verisilicon.com; s=default; c=simple/simple; t=1590906618; h=from:subject:to:date:message-id; bh=mkvjqd4qGa2l0kE6gyusk+ACftOVk5wuiNEanC67ewQ=; b=QdGj2aKa20z93JFkUODFTu7Eka1lF8cH8BzEb1sFJ77xCTPIZCRzfy2sJerXWkZXk6yClqHObPL vaNika+EYz6uIF4Xccwuq3muNBAtMvxrGn0zOHRdKFZkpOKZhSU0gfSZSwWkuSGiRMYiBFdU8o/pk 03RpKgLMP0GLZWMRQBE= Received: from SHASXM03.verisilicon.com ([fe80::938:4dda:a2f9:38aa]) by SHASXM06.verisilicon.com ([fe80::59a8:ce34:dc14:ddda%16]) with mapi id 14.03.0123.003; Sun, 31 May 2020 14:30:17 +0800 From: "Zhang, Guiyong" To: "ffmpeg-devel@ffmpeg.org" Thread-Topic: [PATCH v2 8/8] vfilter/pp: Add VPE post processing filter Thread-Index: AdY3FPMEqhWeJ/o+Tdezx5RhIaD1nA== Date: Sun, 31 May 2020 06:30:17 +0000 Message-ID: <8847CE4FAA78C048A83C2AE8AA751D2770A5376D@SHASXM03.verisilicon.com> Accept-Language: en-US, zh-CN X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [10.10.25.45] x-tm-as-product-ver: SMEX-11.0.0.4179-8.100.1062-25452.001 x-tm-as-result: No--12.891600-0.000000-31 x-tm-as-user-approved-sender: Yes x-tm-as-user-blocked-sender: No MIME-Version: 1.0 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 Subject: [FFmpeg-devel] [PATCH v2 8/8] vfilter/pp: Add VPE post processing filter X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" The input of this filter is raw video data, it supports most of the popular raw data formats like NV12, YUV420P, YUV420P10BE etc. 
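For reviewers who want to exercise the filter programmatically, here is a minimal sketch of how vpe_pp could be instantiated inside a libavfilter graph. Only the filter name ("vpe_pp") and the "outputs"/"force10bit" options come from this patch; the instance label "pp0", the option values, and the omitted buffer source/sink wiring and error handling are illustrative assumptions. Note that, per vpe_pp_init_hwctx(), a VPE device reference must be attached to the filter context's hw_device_ctx before the graph is configured.

#include <libavfilter/avfilter.h>
#include <libavutil/error.h>

/* Sketch only: create one vpe_pp instance in an existing graph and set the
 * options declared in vf_pp_vpe.c.  An AV_HWDEVICE_TYPE_VPE device reference
 * must still be assigned to (*pp_ctx)->hw_device_ctx before
 * avfilter_graph_config() is called. */
static int create_vpe_pp(AVFilterGraph *graph, AVFilterContext **pp_ctx)
{
    const AVFilter *pp = avfilter_get_by_name("vpe_pp");
    if (!pp)
        return AVERROR_FILTER_NOT_FOUND;

    /* "outputs" and "force10bit" are the AVOptions defined by this patch. */
    return avfilter_graph_create_filter(pp_ctx, pp, "pp0",
                                        "outputs=1:force10bit=0", NULL, graph);
}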
Signed-off-by: Guiyong.zhang --- configure | 1 + libavfilter/Makefile | 1 + libavfilter/allfilters.c | 1 + libavfilter/vf_pp_vpe.c | 391 +++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 394 insertions(+) create mode 100755 libavfilter/vf_pp_vpe.c -- 1.8.3.1 diff --git a/configure b/configure index 6146954..78a3203 100755 --- a/configure +++ b/configure @@ -3643,6 +3643,7 @@ xfade_opencl_filter_deps="opencl" yadif_cuda_filter_deps="ffnvcodec" yadif_cuda_filter_deps_any="cuda_nvcc cuda_llvm" spliter_vpe_filter_deps="vpe" +pp_vpe_filter_deps="vpe" # examples avio_list_dir_deps="avformat avutil" diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 104ec0c..ba4fad8 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -467,6 +467,7 @@ OBJS-$(CONFIG_ZMQ_FILTER) += f_zmq.o OBJS-$(CONFIG_ZOOMPAN_FILTER) += vf_zoompan.o OBJS-$(CONFIG_ZSCALE_FILTER) += vf_zscale.o OBJS-$(CONFIG_SPLITER_VPE_FILTER) += vf_spliter_vpe.o +OBJS-$(CONFIG_PP_VPE_FILTER) += vf_pp_vpe.o OBJS-$(CONFIG_ALLRGB_FILTER) += vsrc_testsrc.o OBJS-$(CONFIG_ALLYUV_FILTER) += vsrc_testsrc.o diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 353a3ca..40c78f0 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -445,6 +445,7 @@ extern AVFilter ff_vf_zmq; extern AVFilter ff_vf_zoompan; extern AVFilter ff_vf_zscale; extern AVFilter ff_vf_spliter_vpe; +extern AVFilter ff_vf_pp_vpe; extern AVFilter ff_vsrc_allrgb; extern AVFilter ff_vsrc_allyuv; diff --git a/libavfilter/vf_pp_vpe.c b/libavfilter/vf_pp_vpe.c new file mode 100755 index 0000000..ea718cc --- /dev/null +++ b/libavfilter/vf_pp_vpe.c @@ -0,0 +1,391 @@ +/* + * Verisilicon VPE Post Processing Filter + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details.
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include + +#include +#include + +#include "avfilter.h" +#include "formats.h" +#include "internal.h" +#include "libavutil/pixfmt.h" +#include "libavutil/buffer.h" +#include "libavutil/hwcontext.h" +#include "libavutil/opt.h" +#include "libavutil/frame.h" +#include "libavfilter/filters.h" +#include "libavutil/hwcontext_vpe.h" + +typedef struct VpePPFilter { + const AVClass *av_class; + AVBufferRef *hw_device; + AVBufferRef *hw_frame; + + VpiCtx ctx; + VpiApi *vpi; + + int nb_outputs; + int force_10bit; + char *low_res; + VpiPPOpition cfg; +} VpePPFilter; + +static const enum AVPixelFormat input_pix_fmts[] = { + AV_PIX_FMT_NV12, AV_PIX_FMT_P010LE, AV_PIX_FMT_YUV420P, + AV_PIX_FMT_YUV422P, AV_PIX_FMT_NV21, AV_PIX_FMT_YUV420P10LE, + AV_PIX_FMT_YUV420P10BE, AV_PIX_FMT_YUV422P10LE, AV_PIX_FMT_YUV422P10BE, + AV_PIX_FMT_P010BE, AV_PIX_FMT_YUV444P, AV_PIX_FMT_RGB24, + AV_PIX_FMT_BGR24, AV_PIX_FMT_ARGB, AV_PIX_FMT_RGBA, + AV_PIX_FMT_ABGR, AV_PIX_FMT_BGRA, AV_PIX_FMT_NONE, +}; + +typedef struct PixelMapTable { + enum AVPixelFormat src; + VpiPixsFmt des; +} PixelMapTable; + +static PixelMapTable ptable[] = { + { AV_PIX_FMT_YUV420P, VPI_FMT_YUV420P }, + { AV_PIX_FMT_YUV422P, VPI_FMT_YUV422P }, + { AV_PIX_FMT_NV12, VPI_FMT_NV12 }, + { AV_PIX_FMT_NV21, VPI_FMT_NV21 }, + { AV_PIX_FMT_YUV420P10LE, VPI_FMT_YUV420P10LE }, + { AV_PIX_FMT_YUV420P10BE, VPI_FMT_YUV420P10BE }, + { AV_PIX_FMT_YUV422P10LE, VPI_FMT_YUV422P10LE }, + { AV_PIX_FMT_YUV422P10BE, VPI_FMT_YUV422P10BE }, + { AV_PIX_FMT_P010LE, VPI_FMT_P010LE }, + { AV_PIX_FMT_P010BE, VPI_FMT_P010BE }, + { AV_PIX_FMT_YUV444P, VPI_FMT_YUV444P }, + { AV_PIX_FMT_RGB24, VPI_FMT_RGB24 }, + { AV_PIX_FMT_BGR24, VPI_FMT_BGR24 }, + { AV_PIX_FMT_ARGB, VPI_FMT_ARGB }, + { AV_PIX_FMT_RGBA, VPI_FMT_RGBA }, + { AV_PIX_FMT_ABGR, VPI_FMT_ABGR }, + { AV_PIX_FMT_BGRA, VPI_FMT_BGRA }, +}; + +static const enum AVPixelFormat output_pix_fmts[] = { + AV_PIX_FMT_VPE, + AV_PIX_FMT_NONE, +}; + +static av_cold int vpe_pp_init(AVFilterContext *avf_ctx) +{ + VpePPFilter *ctx = avf_ctx->priv; + int ret = 0; + AVFilterPad pad = { 0 }; + + ret = vpi_create(&ctx->ctx, &ctx->vpi, PP_VPE); + if (ret) + return AVERROR_EXTERNAL; + + ret = ctx->vpi->init(ctx->ctx, NULL); + if (ret) + return AVERROR_EXTERNAL; + + pad.type = AVMEDIA_TYPE_VIDEO; + pad.name = "output0"; + if ((ret = ff_insert_outpad(avf_ctx, 0, &pad)) < 0) { + return ret; + } + + return 0; +} + +static av_cold void vpe_pp_uninit(AVFilterContext *avf_ctx) +{ + VpePPFilter *ctx = avf_ctx->priv; + + if (ctx->hw_device) { + ctx->vpi->close(ctx->ctx); + av_buffer_unref(&ctx->hw_device); + vpi_destroy(ctx->ctx); + avf_ctx->priv = NULL; + } +} + +static void vpe_pp_picture_consumed(void *opaque, uint8_t *data) +{ + VpePPFilter *ctx = opaque; + VpiCtrlCmdParam cmd; + + cmd.cmd = VPI_CMD_PP_CONSUME; + cmd.data = data; + ctx->vpi->control(ctx->ctx, (void *)&cmd, NULL); + free(data); +} + +static int vpe_pp_output_avframe(VpePPFilter *ctx, VpiFrame *input, + AVFrame *output) +{ + AVHWFramesContext *hwframe_ctx = (AVHWFramesContext *)ctx->hw_frame->data; + AVVpeFramesContext *vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + VpiFrame *frame_hwctx = vpeframe_ctx->frame; + + if (input) { + output->width = input->width; + output->height = input->height; + output->linesize[0] = input->linesize[0]; + 
output->linesize[1] = input->linesize[1]; + output->linesize[2] = input->linesize[2]; + output->key_frame = input->key_frame; + output->format = AV_PIX_FMT_VPE; + output->data[0] = (void*)input; + output->buf[0] = + av_buffer_create((uint8_t *)input, sizeof(VpiFrame), + vpe_pp_picture_consumed, (void *)ctx, + AV_BUFFER_FLAG_READONLY); + if (output->buf[0] == NULL) + return AVERROR(ENOMEM); + + memcpy(frame_hwctx, input, sizeof(VpiFrame)); + output->hw_frames_ctx = av_buffer_ref(ctx->hw_frame); + if (output->hw_frames_ctx == NULL) + return AVERROR(ENOMEM); + + } else { + memset(output, 0, sizeof(AVFrame)); + } + + return 0; +} + +static int vpe_pp_output_vpeframe(AVFrame *input, VpiFrame *output, + int max_frame_delay) +{ + memset(output, 0, sizeof(VpiFrame)); + if (input) { + output->width = input->width; + output->height = input->height; + output->linesize[0] = input->linesize[0]; + output->linesize[1] = input->linesize[1]; + output->linesize[2] = input->linesize[2]; + output->key_frame = input->key_frame; + output->pts = input->pts; + output->pkt_dts = input->pkt_dts; + output->data[0] = input->data[0]; + output->data[1] = input->data[1]; + output->data[2] = input->data[2]; + output->max_frames_delay = max_frame_delay; + } + + return 0; +} + +static int vpe_pp_filter_frame(AVFilterLink *inlink, AVFrame *frame) +{ + AVFilterContext *avf_ctx = inlink->dst; + AVFilterLink *outlink = avf_ctx->outputs[0]; + AVHWFramesContext *hwframe_ctx = NULL; + AVFrame *buf_out = NULL; + VpePPFilter *ctx = avf_ctx->priv; + VpiFrame in_picture, *out_picture; + AVVpeFramesContext *vpeframe_ctx = NULL; + int ret = 0; + int max_frame_delay = 0; + + hwframe_ctx = (AVHWFramesContext *)ctx->hw_frame->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + max_frame_delay = vpeframe_ctx->frame->max_frames_delay; + ret = vpe_pp_output_vpeframe(frame, &in_picture, max_frame_delay); + if (ret) + return ret; + + out_picture = (VpiFrame *)malloc(sizeof(VpiFrame)); + ret = ctx->vpi->process(ctx->ctx, &in_picture, out_picture); + if (ret) + return AVERROR_EXTERNAL; + + buf_out = av_frame_alloc(); + if (!buf_out) + return AVERROR(ENOMEM); + + ret = av_frame_copy_props(buf_out, frame); + if (ret) + return ret; + + ret = vpe_pp_output_avframe(ctx, out_picture, buf_out); + if (ret < 0) + return AVERROR_EXTERNAL; + + av_frame_free(&frame); + + ret = ff_outlink_get_status(outlink); + if (ret < 0) + return ret; + + ret = ff_filter_frame(outlink, buf_out); + if (ret < 0) + return ret; + + return 0; +} + +static int vpe_pp_init_hwctx(AVFilterContext *ctx, AVFilterLink *inlink) +{ + AVHWFramesContext *hwframe_ctx; + int ret = 0; + VpePPFilter *filter = ctx->priv; + + if (ctx->hw_device_ctx) { + filter->hw_device = av_buffer_ref(ctx->hw_device_ctx); + if (!filter->hw_device) + return AVERROR(ENOMEM); + } else { + return AVERROR(ENOMEM); + } + + filter->hw_frame = av_hwframe_ctx_alloc(filter->hw_device); + if (!filter->hw_frame) + return AVERROR(ENOMEM); + + hwframe_ctx = (AVHWFramesContext *)filter->hw_frame->data; + if (!hwframe_ctx->pool) { + hwframe_ctx->format = AV_PIX_FMT_VPE; + hwframe_ctx->sw_format = inlink->format; + hwframe_ctx->width = inlink->w; + hwframe_ctx->height = inlink->h; + + if ((ret = av_hwframe_ctx_init(filter->hw_frame)) < 0) { + return ret; + } + } + inlink->hw_frames_ctx = filter->hw_frame; + return 0; +} + +static int vpe_get_format(enum AVPixelFormat format) +{ + int i = 0; + + for (i = 0; i < sizeof(ptable) / sizeof(PixelMapTable); i++) { + if (format == ptable[i].src) + return ptable[i].des; 
+ } + return AVERROR(EINVAL); +} + +static int vpe_pp_config_props(AVFilterLink *inlink) +{ + AVFilterContext *avf_ctx = inlink->dst; + AVHWFramesContext *hwframe_ctx; + AVVpeFramesContext *vpeframe_ctx; + VpePPFilter *ctx = avf_ctx->priv; + VpiPPOpition *cfg = &ctx->cfg; + VpiCtrlCmdParam cmd; + int ret = 0; + + ret = vpe_pp_init_hwctx(avf_ctx, inlink); + if (ret < 0){ + return AVERROR_EXTERNAL; + } + + hwframe_ctx = (AVHWFramesContext *)ctx->hw_frame->data; + vpeframe_ctx = (AVVpeFramesContext *)hwframe_ctx->hwctx; + /*Get config*/ + cfg->w = inlink->w; + cfg->h = inlink->h; + cfg->format = vpe_get_format(inlink->format); + cfg->nb_outputs = ctx->nb_outputs; + cfg->force_10bit = ctx->force_10bit; + cfg->low_res = ctx->low_res; + cfg->frame = vpeframe_ctx->frame; + + cmd.cmd = VPI_CMD_PP_CONFIG; + cmd.data = cfg; + ret = ctx->vpi->control(ctx->ctx, (void *)&cmd, NULL); + if (ret < 0){ + return AVERROR_EXTERNAL; + } + return 0; +} + +static int vpe_pp_query_formats(AVFilterContext *avf_ctx) +{ + int ret; + AVFilterFormats *in_fmts = ff_make_format_list(input_pix_fmts); + AVFilterFormats *out_fmts; + + ret = ff_formats_ref(in_fmts, &avf_ctx->inputs[0]->out_formats); + if (ret < 0){ + av_log(NULL, AV_LOG_ERROR, "ff_formats_ref error=%d\n", ret); + return ret; + } + + out_fmts = ff_make_format_list(output_pix_fmts); + ret = ff_formats_ref(out_fmts, &avf_ctx->outputs[0]->in_formats); + + return ret; +} + +#define OFFSET(x) offsetof(VpePPFilter, x) +#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM) + +static const AVOption vpe_pp_options[] = { + { "outputs", + "set number of outputs", + OFFSET(nb_outputs), + AV_OPT_TYPE_INT, + { .i64 = 1 }, + 1, + 4, + FLAGS }, + { "low_res", + "specific resize configuration.", + OFFSET(low_res), + AV_OPT_TYPE_STRING, + { .str = NULL }, + .flags = FLAGS }, + { "force10bit", + "upsampling 8bit to 10bit", + OFFSET(force_10bit), + AV_OPT_TYPE_INT, + { .i64 = 0 }, + 0, + 1, + FLAGS }, + { NULL }, +}; + +AVFILTER_DEFINE_CLASS(vpe_pp); + +static const AVFilterPad vpe_pp_inputs[] = { + { + .name = "default", + .type = AVMEDIA_TYPE_VIDEO, + .filter_frame = vpe_pp_filter_frame, + .config_props = vpe_pp_config_props, + }, + { NULL } +}; + +AVFilter ff_vf_pp_vpe = { + .name = "vpe_pp", + .description = NULL_IF_CONFIG_SMALL("Filter using vpe post processing."), + .priv_size = sizeof(VpePPFilter), + .priv_class = &vpe_pp_class, + .init = vpe_pp_init, + .uninit = vpe_pp_uninit, + .query_formats = vpe_pp_query_formats, + .inputs = vpe_pp_inputs, + .outputs = NULL, + .flags = 0, +};
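One detail of this patch worth calling out is how vpe_pp_output_avframe() returns ownership of the library-produced VpiFrame to FFmpeg: the picture is wrapped in an AVBufferRef created with av_buffer_create() and a custom release callback (vpe_pp_picture_consumed), so the VPI_CMD_PP_CONSUME control call and the free() only happen once the last AVFrame reference is dropped. Below is a self-contained sketch of that wrapping pattern with generic names; the VPI control call is replaced by a plain free(), and wrap_picture()/picture_release() are illustrative helpers, not code from the patch.

#include <stdlib.h>
#include <libavutil/buffer.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Runs when the last AVBufferRef to the wrapped picture is unreferenced.
 * vf_pp_vpe.c additionally issues VPI_CMD_PP_CONSUME back to the library. */
static void picture_release(void *opaque, uint8_t *data)
{
    free(data);
}

/* Hand an externally allocated picture descriptor to an AVFrame so that the
 * usual av_frame_ref()/av_frame_unref() counting controls its lifetime. */
static int wrap_picture(AVFrame *dst, void *picture, size_t size, void *opaque)
{
    dst->buf[0] = av_buffer_create((uint8_t *)picture, size,
                                   picture_release, opaque,
                                   AV_BUFFER_FLAG_READONLY);
    if (!dst->buf[0])
        return AVERROR(ENOMEM);
    dst->data[0] = (uint8_t *)picture;
    return 0;
}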
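For comparison with spliter_vpe_filter_frame() earlier in this series: the core of that function is the usual one-input/N-output fan-out, where each additional output receives a new reference to the same underlying buffers instead of a pixel copy. The following is a minimal in-tree style sketch of just that generic part (modelled on FFmpeg's stock split filter); it deliberately omits the VPE-specific pic_info side buffers and out_index mapping that the patch layers on top.

#include "avfilter.h"
#include "internal.h"
#include "libavutil/frame.h"

/* Generic one-input/N-output fan-out: every output gets its own reference,
 * and the original input frame is released once all outputs have been fed. */
static int fanout_filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
    AVFilterContext *ctx = inlink->dst;
    int i, ret = 0;

    for (i = 0; i < ctx->nb_outputs; i++) {
        AVFrame *out = av_frame_clone(frame); /* new ref, no data copy */
        if (!out) {
            ret = AVERROR(ENOMEM);
            break;
        }
        ret = ff_filter_frame(ctx->outputs[i], out);
        if (ret < 0)
            break;
    }
    av_frame_free(&frame);
    return ret;
}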