From: Zachary Zhou <zachary.zhou@intel.com>
To: ffmpeg-devel@ffmpeg.org
Cc: zachary.zhou@intel.com
Date: Wed, 11 Sep 2019 13:39:56 +0800
Message-Id: <20190911053956.23488-2-zachary.zhou@intel.com>
In-Reply-To: <20190911053956.23488-1-zachary.zhou@intel.com>
References: <20190911053956.23488-1-zachary.zhou@intel.com>
Subject: [FFmpeg-devel] [PATCH v4 2/2] avfilter: Add tonemap vaapi filter

It supports the ICL platform.
H2H (HDR to HDR): P010 -> A2R10G10B10
H2S (HDR to SDR): P010 -> ARGB
---
 configure                      |   2 +
 doc/filters.texi               |  72 +++++
 libavfilter/Makefile           |   1 +
 libavfilter/allfilters.c       |   1 +
 libavfilter/vaapi_vpp.c        |   5 +
 libavfilter/vf_tonemap_vaapi.c | 575 +++++++++++++++++++++++++++++++++
 6 files changed, 656 insertions(+)
 create mode 100644 libavfilter/vf_tonemap_vaapi.c

diff --git a/configure b/configure
index 8413826f9e..c9bd4bfcd8 100755
--- a/configure
+++ b/configure
@@ -3551,6 +3551,7 @@ tinterlace_merge_test_deps="tinterlace_filter"
 tinterlace_pad_test_deps="tinterlace_filter"
 tonemap_filter_deps="const_nan"
 tonemap_opencl_filter_deps="opencl const_nan"
+tonemap_vaapi_filter_deps="vaapi VAProcPipelineParameterBuffer_output_hdr_metadata"
 transpose_opencl_filter_deps="opencl"
 transpose_vaapi_filter_deps="vaapi VAProcPipelineCaps_rotation_flags"
 unsharp_opencl_filter_deps="opencl"
@@ -6544,6 +6545,7 @@ if enabled vaapi; then
     check_type "va/va.h va/va_dec_hevc.h" "VAPictureParameterBufferHEVC"
     check_struct "va/va.h" "VADecPictureParameterBufferVP9" bit_depth
+    check_struct "va/va.h va/va_vpp.h" "VAProcPipelineParameterBuffer" output_hdr_metadata
     check_struct "va/va.h va/va_vpp.h" "VAProcPipelineCaps" rotation_flags
     check_type "va/va.h va/va_enc_hevc.h" "VAEncPictureParameterBufferHEVC"
     check_type "va/va.h va/va_enc_jpeg.h" "VAEncPictureParameterBufferJPEG"
diff --git a/doc/filters.texi b/doc/filters.texi
index 9d500e44a9..3a3e259f8d 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -20140,6 +20140,78 @@ Convert HDR(PQ/HLG) video to bt2020-transfer-characteristic p010 format using li
 @end example
 @end itemize
 
+@section tonemap_vaapi
+
+Perform HDR (High Dynamic Range) to HDR and HDR to SDR conversion with tone-mapping.
+It maps the dynamic range of HDR10 content to the dynamic range of the
+display panel.
+
+It accepts the following parameters:
+
+@table @option
+@item type
+Specify the tone-mapping operator to be used.
+
+Possible values are:
+@table @var
+@item h2h
+Perform H2H (HDR to HDR): convert from p010 to r10g10b10a2.
+@item h2s
+Perform H2S (HDR to SDR): convert from p010 to argb.
+@end table
+
+@item display
+Set mastering display metadata for H2H.
+
+Can assume the following values:
+@table @var
+@item G
+Green primary G(x|y).
+The values for x and y shall be in the range 0 to 50000, inclusive.
+@item B
+Blue primary B(x|y).
+The values for x and y shall be in the range 0 to 50000, inclusive.
+@item R
+Red primary R(x|y).
+The values for x and y shall be in the range 0 to 50000, inclusive.
+@item WP
+White point WP(x|y).
+The values for x and y shall be in the range 0 to 50000, inclusive.
+@item L
+Display mastering luminance L(min|max).
+The value is in units of 0.0001 candelas per square metre.
+@end table
+
+@item light
+Set content light level for H2H.
+
+Can assume the following values:
+@table @var
+@item CLL
+Maximum content light level.
+The value is in units of 0.0001 candelas per square metre.
+@item FALL
+Maximum average light level per frame.
+The value is in units of 0.0001 candelas per square metre.
+@end table
+
+@end table
+
+@subsection Example
+
+@itemize
+@item
+Convert HDR video to HDR video, from p010 format to r10g10b10a2 format.
+@example
+-i INPUT -vf "tonemap_vaapi=h2h:display=G(13250|34500)B(7500|3000)R(34000|16000)WP(15635|16450)L(2000|12000):light=CLL(10000)FALL(1000)" OUTPUT
+@end example
+@item
+Convert HDR video to SDR video, from p010 format to argb format.
+@example
+-i INPUT -vf "tonemap_vaapi=h2s" OUTPUT
+@end example
+@end itemize
+
 @section unsharp_opencl
 
 Sharpen or blur the input video.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 3ef4191d9a..2d0151b182 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -401,6 +401,7 @@ OBJS-$(CONFIG_TMIX_FILTER)                   += vf_mix.o framesync.o
 OBJS-$(CONFIG_TONEMAP_FILTER)                += vf_tonemap.o colorspace.o
 OBJS-$(CONFIG_TONEMAP_OPENCL_FILTER)         += vf_tonemap_opencl.o colorspace.o opencl.o \
                                                 opencl/tonemap.o opencl/colorspace_common.o
+OBJS-$(CONFIG_TONEMAP_VAAPI_FILTER)          += vf_tonemap_vaapi.o vaapi_vpp.o
 OBJS-$(CONFIG_TPAD_FILTER)                   += vf_tpad.o
 OBJS-$(CONFIG_TRANSPOSE_FILTER)              += vf_transpose.o
 OBJS-$(CONFIG_TRANSPOSE_NPP_FILTER)          += vf_transpose_npp.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index b675c688ee..f0da9ac16e 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -381,6 +381,7 @@ extern AVFilter ff_vf_tlut2;
 extern AVFilter ff_vf_tmix;
 extern AVFilter ff_vf_tonemap;
 extern AVFilter ff_vf_tonemap_opencl;
+extern AVFilter ff_vf_tonemap_vaapi;
 extern AVFilter ff_vf_tpad;
 extern AVFilter ff_vf_transpose;
 extern AVFilter ff_vf_transpose_npp;
diff --git a/libavfilter/vaapi_vpp.c b/libavfilter/vaapi_vpp.c
index b5b245c8af..5776243fa0 100644
--- a/libavfilter/vaapi_vpp.c
+++ b/libavfilter/vaapi_vpp.c
@@ -257,6 +257,11 @@ static const VAAPIColourProperties vaapi_colour_standard_map[] = {
     { VAProcColorStandardSMPTE170M,   6,  6,  6 },
     { VAProcColorStandardSMPTE240M,   7,  7,  7 },
     { VAProcColorStandardGenericFilm, 8,  1,  1 },
+
+#if VA_CHECK_VERSION(2, 3, 0)
+    { VAProcColorStandardExplicit,    9, 16, AVCOL_SPC_BT2020_NCL },
+#endif
+
 #if VA_CHECK_VERSION(1, 1, 0)
     { VAProcColorStandardSRGB,        1, 13,  0 },
     { VAProcColorStandardXVYCC601,    1, 11,  5 },
diff --git a/libavfilter/vf_tonemap_vaapi.c b/libavfilter/vf_tonemap_vaapi.c
new file mode 100644
index 0000000000..9b4ab4a365
--- /dev/null
+++ b/libavfilter/vf_tonemap_vaapi.c
@@ -0,0 +1,575 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+#include <string.h>
+
+#include "libavutil/avassert.h"
+#include "libavutil/mem.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/mastering_display_metadata.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+#include "vaapi_vpp.h"
+
+// ITU-T H.265 Table E.3: Colour Primaries
+#define COLOUR_PRIMARY_BT2020            9
+#define COLOUR_PRIMARY_BT709             1
+// ITU-T H.265 Table E.4: Transfer characteristics
+#define TRANSFER_CHARACTERISTICS_BT709   1
+#define TRANSFER_CHARACTERISTICS_ST2084 16
+
+typedef enum {
+    HDR_VAAPI_H2H,
+    HDR_VAAPI_H2S,
+} HDRType;
+
+typedef struct HDRVAAPIContext {
+    VAAPIVPPContext vpp_ctx; // must be the first field
+
+    int hdr_type;
+
+    char *master_display;
+    char *content_light;
+
+    VAHdrMetaDataHDR10 in_metadata;
+    VAHdrMetaDataHDR10 out_metadata;
+
+    AVFrameSideData *src_display;
+    AVFrameSideData *src_light;
+} HDRVAAPIContext;
+
+static int tonemap_vaapi_save_metadata(AVFilterContext *avctx, AVFrame *input_frame)
+{
+    HDRVAAPIContext *ctx = avctx->priv;
+    AVMasteringDisplayMetadata *hdr_meta;
+    AVContentLightMetadata *light_meta;
+
+    ctx->src_display = av_frame_get_side_data(input_frame,
+                                              AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
+    if (ctx->src_display) {
+        hdr_meta = (AVMasteringDisplayMetadata *)ctx->src_display->data;
+        if (!hdr_meta) {
+            av_log(avctx, AV_LOG_ERROR, "No mastering display data\n");
+            return AVERROR(EINVAL);
+        }
+
+        if (hdr_meta->has_luminance) {
+            const int luma_den = 10000;
+            ctx->in_metadata.max_display_mastering_luminance =
+                lrint(luma_den * av_q2d(hdr_meta->max_luminance));
+            ctx->in_metadata.min_display_mastering_luminance =
+                FFMIN(lrint(luma_den * av_q2d(hdr_meta->min_luminance)),
+                      ctx->in_metadata.max_display_mastering_luminance);
+
+            av_log(avctx, AV_LOG_DEBUG,
+                   "Mastering Display Metadata(in luminance):\n");
+            av_log(avctx, AV_LOG_DEBUG,
+                   "min_luminance=%u, max_luminance=%u\n",
+                   ctx->in_metadata.min_display_mastering_luminance,
+                   ctx->in_metadata.max_display_mastering_luminance);
+        }
+
+        if (hdr_meta->has_primaries) {
+            int i;
+            const int mapping[3] = {1, 2, 0};  //green, blue, red
+            const int chroma_den = 50000;
+
+            for (i = 0; i < 3; i++) {
+                const int j = mapping[i];
+                ctx->in_metadata.display_primaries_x[i] =
+                    FFMIN(lrint(chroma_den *
+                                av_q2d(hdr_meta->display_primaries[j][0])),
+                          chroma_den);
+                ctx->in_metadata.display_primaries_y[i] =
+                    FFMIN(lrint(chroma_den *
+                                av_q2d(hdr_meta->display_primaries[j][1])),
+                          chroma_den);
+            }
+
+            ctx->in_metadata.white_point_x =
+                FFMIN(lrint(chroma_den * av_q2d(hdr_meta->white_point[0])),
+                      chroma_den);
+            ctx->in_metadata.white_point_y =
+                FFMIN(lrint(chroma_den * av_q2d(hdr_meta->white_point[1])),
+                      chroma_den);
+
+            av_log(avctx, AV_LOG_DEBUG,
+                   "Mastering Display Metadata(in primaries):\n");
+            av_log(avctx, AV_LOG_DEBUG,
+                   "G(%u,%u) B(%u,%u) R(%u,%u) WP(%u,%u)\n",
+                   ctx->in_metadata.display_primaries_x[0],
+                   ctx->in_metadata.display_primaries_y[0],
+                   ctx->in_metadata.display_primaries_x[1],
+                   ctx->in_metadata.display_primaries_y[1],
+                   ctx->in_metadata.display_primaries_x[2],
+                   ctx->in_metadata.display_primaries_y[2],
+                   ctx->in_metadata.white_point_x,
+                   ctx->in_metadata.white_point_y);
+        }
+    } else {
+        av_log(avctx, AV_LOG_DEBUG, "No mastering display data from input\n");
+    }
+
+    ctx->src_light = av_frame_get_side_data(input_frame,
+                                            AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
+    if (ctx->src_light) {
+        light_meta = (AVContentLightMetadata *)ctx->src_light->data;
+        if (!light_meta) {
+            av_log(avctx, AV_LOG_ERROR, "No light meta data\n");
+            return AVERROR(EINVAL);
+        }
+
+        ctx->in_metadata.max_content_light_level = light_meta->MaxCLL;
+        ctx->in_metadata.max_pic_average_light_level = light_meta->MaxFALL;
+
+        av_log(avctx, AV_LOG_DEBUG,
+               "Mastering Content Light Level (in):\n");
+        av_log(avctx, AV_LOG_DEBUG,
+               "MaxCLL(%u) MaxFALL(%u)\n",
+               ctx->in_metadata.max_content_light_level,
+               ctx->in_metadata.max_pic_average_light_level);
+    } else {
+        av_log(avctx, AV_LOG_DEBUG, "No content light level from input\n");
+    }
+
+    return 0;
+}
+
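+/*
+ * Per-frame update of the tone-mapping filter parameter buffer: map the
+ * buffer created in tonemap_vaapi_build_filter_params() and copy in the
+ * HDR10 metadata captured from the current input frame.
+ */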
+static int tonemap_vaapi_set_filter_params(AVFilterContext *avctx, AVFrame *input_frame)
+{
+    VAAPIVPPContext *vpp_ctx = avctx->priv;
+    HDRVAAPIContext *ctx     = avctx->priv;
+    VAStatus vas;
+    VAProcFilterParameterBufferHDRToneMapping *hdrtm_param;
+
+    vas = vaMapBuffer(vpp_ctx->hwctx->display, vpp_ctx->filter_buffers[0],
+                      (void**)&hdrtm_param);
+    if (vas != VA_STATUS_SUCCESS) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to map "
+               "buffer (%d): %d (%s).\n",
+               vpp_ctx->filter_buffers[0], vas, vaErrorStr(vas));
+        return AVERROR(EIO);
+    }
+
+    memcpy(hdrtm_param->data.metadata, &ctx->in_metadata, sizeof(VAHdrMetaDataHDR10));
+
+    vas = vaUnmapBuffer(vpp_ctx->hwctx->display, vpp_ctx->filter_buffers[0]);
+    if (vas != VA_STATUS_SUCCESS) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to unmap output buffers: "
+               "%d (%s).\n", vas, vaErrorStr(vas));
+        return AVERROR(EIO);
+    }
+
+    return 0;
+}
+
+static int tonemap_vaapi_build_filter_params(AVFilterContext *avctx)
+{
+    VAAPIVPPContext *vpp_ctx = avctx->priv;
+    HDRVAAPIContext *ctx     = avctx->priv;
+    VAStatus vas;
+    VAProcFilterCapHighDynamicRange hdr_cap;
+    int num_query_caps;
+    VAProcFilterParameterBufferHDRToneMapping hdrtm_param;
+
+    vas = vaQueryVideoProcFilterCaps(vpp_ctx->hwctx->display,
+                                     vpp_ctx->va_context,
+                                     VAProcFilterHighDynamicRangeToneMapping,
+                                     &hdr_cap, &num_query_caps);
+    if (vas != VA_STATUS_SUCCESS) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to query HDR caps "
+               "context: %d (%s).\n", vas, vaErrorStr(vas));
+        return AVERROR(EIO);
+    }
+
+    if (hdr_cap.metadata_type == VAProcHighDynamicRangeMetadataNone) {
+        av_log(avctx, AV_LOG_ERROR, "VAAPI driver doesn't support HDR\n");
+        return AVERROR(EINVAL);
+    }
+
+    switch (ctx->hdr_type) {
+    case HDR_VAAPI_H2H:
+        if (!(VA_TONE_MAPPING_HDR_TO_HDR & hdr_cap.caps_flag)) {
+            av_log(avctx, AV_LOG_ERROR,
+                   "VAAPI driver doesn't support H2H\n");
+            return AVERROR(EINVAL);
+        }
+        break;
+    case HDR_VAAPI_H2S:
+        if (!(VA_TONE_MAPPING_HDR_TO_SDR & hdr_cap.caps_flag)) {
+            av_log(avctx, AV_LOG_ERROR,
+                   "VAAPI driver doesn't support H2S\n");
+            return AVERROR(EINVAL);
+        }
+        break;
+    default:
+        av_assert0(0);
+    }
+
+    memset(&hdrtm_param, 0, sizeof(hdrtm_param));
+    memset(&ctx->in_metadata, 0, sizeof(ctx->in_metadata));
+    hdrtm_param.type = VAProcFilterHighDynamicRangeToneMapping;
+    hdrtm_param.data.metadata_type = VAProcHighDynamicRangeMetadataHDR10;
+    hdrtm_param.data.metadata      = &ctx->in_metadata;
+    hdrtm_param.data.metadata_size = sizeof(VAHdrMetaDataHDR10);
+
+    ff_vaapi_vpp_make_param_buffers(avctx,
+                                    VAProcFilterParameterBufferType,
+                                    &hdrtm_param, sizeof(hdrtm_param), 1);
+
+    return 0;
+}
+
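+/*
+ * For H2H output, rewrite the frame's mastering-display and content-light
+ * side data (copied from the input by av_frame_copy_props()) with the
+ * target metadata parsed from the "display" and "light" options.
+ */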
+static int tonemap_vaapi_update_sidedata(AVFilterContext *avctx, AVFrame *output_frame)
+{
+    HDRVAAPIContext *ctx = avctx->priv;
+    AVFrameSideData *metadata;
+    AVMasteringDisplayMetadata *hdr_meta;
+    AVFrameSideData *metadata_lt;
+    AVContentLightMetadata *hdr_meta_lt;
+
+    metadata = av_frame_get_side_data(output_frame,
+                                      AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
+    if (metadata) {
+        int i;
+        const int mapping[3] = {1, 2, 0};  //green, blue, red
+        const int chroma_den = 50000;
+        const int luma_den   = 10000;
+
+        hdr_meta = (AVMasteringDisplayMetadata *)metadata->data;
+        if (!hdr_meta) {
+            av_log(avctx, AV_LOG_ERROR, "No mastering display data\n");
+            return AVERROR(EINVAL);
+        }
+
+        for (i = 0; i < 3; i++) {
+            const int j = mapping[i];
+            hdr_meta->display_primaries[j][0].num = ctx->out_metadata.display_primaries_x[i];
+            hdr_meta->display_primaries[j][0].den = chroma_den;
+
+            hdr_meta->display_primaries[j][1].num = ctx->out_metadata.display_primaries_y[i];
+            hdr_meta->display_primaries[j][1].den = chroma_den;
+        }
+
+        hdr_meta->white_point[0].num = ctx->out_metadata.white_point_x;
+        hdr_meta->white_point[0].den = chroma_den;
+
+        hdr_meta->white_point[1].num = ctx->out_metadata.white_point_y;
+        hdr_meta->white_point[1].den = chroma_den;
+        hdr_meta->has_primaries = 1;
+
+        hdr_meta->max_luminance.num = ctx->out_metadata.max_display_mastering_luminance;
+        hdr_meta->max_luminance.den = luma_den;
+
+        hdr_meta->min_luminance.num = ctx->out_metadata.min_display_mastering_luminance;
+        hdr_meta->min_luminance.den = luma_den;
+        hdr_meta->has_luminance = 1;
+
+        av_log(avctx, AV_LOG_DEBUG,
+               "Mastering Display Metadata(out luminance):\n");
+        av_log(avctx, AV_LOG_DEBUG,
+               "min_luminance=%u, max_luminance=%u\n",
+               ctx->out_metadata.min_display_mastering_luminance,
+               ctx->out_metadata.max_display_mastering_luminance);
+
+        av_log(avctx, AV_LOG_DEBUG,
+               "Mastering Display Metadata(out primaries):\n");
+        av_log(avctx, AV_LOG_DEBUG,
+               "G(%u,%u) B(%u,%u) R(%u,%u) WP(%u,%u)\n",
+               ctx->out_metadata.display_primaries_x[0],
+               ctx->out_metadata.display_primaries_y[0],
+               ctx->out_metadata.display_primaries_x[1],
+               ctx->out_metadata.display_primaries_y[1],
+               ctx->out_metadata.display_primaries_x[2],
+               ctx->out_metadata.display_primaries_y[2],
+               ctx->out_metadata.white_point_x,
+               ctx->out_metadata.white_point_y);
+    } else {
+        av_log(avctx, AV_LOG_DEBUG, "No mastering display data for output\n");
+    }
+
+    metadata_lt = av_frame_get_side_data(output_frame,
+                                         AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
+    if (metadata_lt) {
+        hdr_meta_lt = (AVContentLightMetadata *)metadata_lt->data;
+        if (!hdr_meta_lt) {
+            av_log(avctx, AV_LOG_ERROR, "No light meta data\n");
+            return AVERROR(EINVAL);
+        }
+
+        hdr_meta_lt->MaxCLL  = FFMIN(ctx->out_metadata.max_content_light_level, 65535);
+        hdr_meta_lt->MaxFALL = FFMIN(ctx->out_metadata.max_pic_average_light_level, 65535);
+
+        av_log(avctx, AV_LOG_DEBUG,
+               "Mastering Content Light Level (out):\n");
+        av_log(avctx, AV_LOG_DEBUG,
+               "MaxCLL(%u) MaxFALL(%u)\n",
+               ctx->out_metadata.max_content_light_level,
+               ctx->out_metadata.max_pic_average_light_level);
+    } else {
+        av_log(avctx, AV_LOG_DEBUG, "No content light level for output\n");
+    }
+
+    return 0;
+}
+
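+/*
+ * Per-frame processing: capture HDR10 metadata from the input frame,
+ * refresh the tone-mapping filter buffer, set the output colour standard
+ * (BT.2020/ST 2084 for H2H, BT.709 for H2S) and, for H2H, attach the
+ * user-supplied target mastering-display and content-light metadata
+ * before rendering through the VAAPI video processing pipeline.
+ */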
+static int tonemap_vaapi_filter_frame(AVFilterLink *inlink, AVFrame *input_frame)
+{
+    AVFilterContext *avctx   = inlink->dst;
+    AVFilterLink *outlink    = avctx->outputs[0];
+    VAAPIVPPContext *vpp_ctx = avctx->priv;
+    HDRVAAPIContext *ctx     = avctx->priv;
+    AVFrame *output_frame    = NULL;
+    VASurfaceID input_surface, output_surface;
+    VARectangle input_region;
+
+    VAProcPipelineParameterBuffer params;
+    int err;
+
+    VAHdrMetaData out_hdr_metadata;
+
+    av_log(avctx, AV_LOG_DEBUG, "Filter input: %s, %ux%u (%"PRId64").\n",
+           av_get_pix_fmt_name(input_frame->format),
+           input_frame->width, input_frame->height, input_frame->pts);
+
+    if (vpp_ctx->va_context == VA_INVALID_ID)
+        return AVERROR(EINVAL);
+
+    err = tonemap_vaapi_save_metadata(avctx, input_frame);
+    if (err < 0)
+        goto fail;
+
+    err = tonemap_vaapi_set_filter_params(avctx, input_frame);
+    if (err < 0)
+        goto fail;
+
+    input_surface = (VASurfaceID)(uintptr_t)input_frame->data[3];
+    av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for tonemap vpp input.\n",
+           input_surface);
+
+    output_frame = ff_get_video_buffer(outlink, vpp_ctx->output_width,
+                                       vpp_ctx->output_height);
+    if (!output_frame) {
+        err = AVERROR(ENOMEM);
+        goto fail;
+    }
+
+    output_surface = (VASurfaceID)(uintptr_t)output_frame->data[3];
+    av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for tonemap vpp output.\n",
+           output_surface);
+
+    memset(&params, 0, sizeof(params));
+
+    input_region = (VARectangle) {
+        .x      = 0,
+        .y      = 0,
+        .width  = input_frame->width,
+        .height = input_frame->height,
+    };
+
+    params.filters     = &vpp_ctx->filter_buffers[0];
+    params.num_filters = vpp_ctx->nb_filter_buffers;
+
+    err = ff_vaapi_vpp_init_params(avctx, &params,
+                                   input_frame, output_frame);
+    if (err < 0)
+        goto fail;
+
+    switch (ctx->hdr_type) {
+    case HDR_VAAPI_H2H:
+        params.output_color_standard = VAProcColorStandardExplicit;
+        params.output_color_properties.colour_primaries = COLOUR_PRIMARY_BT2020;
+        params.output_color_properties.transfer_characteristics = TRANSFER_CHARACTERISTICS_ST2084;
+        break;
+    case HDR_VAAPI_H2S:
+        params.output_color_standard = VAProcColorStandardBT709;
+        params.output_color_properties.colour_primaries = COLOUR_PRIMARY_BT709;
+        params.output_color_properties.transfer_characteristics = TRANSFER_CHARACTERISTICS_BT709;
+        break;
+    default:
+        av_assert0(0);
+    }
+
+    if (ctx->hdr_type == HDR_VAAPI_H2H) {
+        memset(&out_hdr_metadata, 0, sizeof(out_hdr_metadata));
+
+        if (!ctx->master_display) {
+            av_log(avctx, AV_LOG_ERROR,
+                   "Option mastering-display input invalid\n");
+            err = AVERROR(EINVAL);
+            goto fail;
+        }
+
+        if (10 != sscanf(ctx->master_display,
+                         "G(%hu|%hu)B(%hu|%hu)R(%hu|%hu)WP(%hu|%hu)L(%u|%u)",
+                         &ctx->out_metadata.display_primaries_x[0],
+                         &ctx->out_metadata.display_primaries_y[0],
+                         &ctx->out_metadata.display_primaries_x[1],
+                         &ctx->out_metadata.display_primaries_y[1],
+                         &ctx->out_metadata.display_primaries_x[2],
+                         &ctx->out_metadata.display_primaries_y[2],
+                         &ctx->out_metadata.white_point_x,
+                         &ctx->out_metadata.white_point_y,
+                         &ctx->out_metadata.min_display_mastering_luminance,
+                         &ctx->out_metadata.max_display_mastering_luminance)) {
+            av_log(avctx, AV_LOG_ERROR,
+                   "Option mastering-display input invalid\n");
+            err = AVERROR(EINVAL);
+            goto fail;
+        }
+
+        if (!ctx->content_light) {
+            av_log(avctx, AV_LOG_ERROR,
+                   "Option content-light input invalid\n");
+            err = AVERROR(EINVAL);
+            goto fail;
+        }
+
+        if (2 != sscanf(ctx->content_light,
+                        "CLL(%hu)FALL(%hu)",
+                        &ctx->out_metadata.max_content_light_level,
+                        &ctx->out_metadata.max_pic_average_light_level)) {
+            av_log(avctx, AV_LOG_ERROR,
+                   "Option content-light input invalid\n");
+            err = AVERROR(EINVAL);
+            goto fail;
+        }
+
+        out_hdr_metadata.metadata_type = VAProcHighDynamicRangeMetadataHDR10;
+        out_hdr_metadata.metadata      = &ctx->out_metadata;
+        out_hdr_metadata.metadata_size = sizeof(VAHdrMetaDataHDR10);
+
+        params.output_hdr_metadata = &out_hdr_metadata;
+    }
+
+    err = ff_vaapi_vpp_render_picture(avctx, &params, output_frame);
+    if (err < 0)
+        goto fail;
+
+    err = av_frame_copy_props(output_frame, input_frame);
+    if (err < 0)
+        goto fail;
+
+    if (ctx->hdr_type == HDR_VAAPI_H2H) {
+        err = tonemap_vaapi_update_sidedata(avctx, output_frame);
+        if (err < 0)
+            goto fail;
+    }
+
+    av_frame_free(&input_frame);
+
+    av_log(avctx, AV_LOG_DEBUG, "Filter output: %s, %ux%u (%"PRId64").\n",
+           av_get_pix_fmt_name(output_frame->format),
+           output_frame->width, output_frame->height, output_frame->pts);
+
+    return ff_filter_frame(outlink, output_frame);
+
+fail:
+    av_frame_free(&input_frame);
+    av_frame_free(&output_frame);
+    return err;
+}
+
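+/*
+ * Select the VPP output format at init time: 10-bit A2R10G10B10 for
+ * HDR-to-HDR, 8-bit ARGB for HDR-to-SDR.
+ */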
+static av_cold int tonemap_vaapi_init(AVFilterContext *avctx)
+{
+    VAAPIVPPContext *vpp_ctx = avctx->priv;
+    HDRVAAPIContext *ctx     = avctx->priv;
+
+    ff_vaapi_vpp_ctx_init(avctx);
+    vpp_ctx->build_filter_params = tonemap_vaapi_build_filter_params;
+    vpp_ctx->pipeline_uninit     = ff_vaapi_vpp_pipeline_uninit;
+
+    if (ctx->hdr_type == HDR_VAAPI_H2H) {
+        vpp_ctx->output_format = AV_PIX_FMT_A2R10G10B10;
+    } else if (ctx->hdr_type == HDR_VAAPI_H2S) {
+        vpp_ctx->output_format = AV_PIX_FMT_ARGB;
+    } else {
+        av_assert0(0);
+    }
+
+    return 0;
+}
+
+static int tonemap_vaapi_vpp_query_formats(AVFilterContext *avctx)
+{
+    int err;
+
+    enum AVPixelFormat pix_in_fmts[] = {
+        AV_PIX_FMT_P010,         //Input
+    };
+
+    enum AVPixelFormat pix_out_fmts[] = {
+        AV_PIX_FMT_A2R10G10B10,  //H2H RGB10
+        AV_PIX_FMT_ARGB,         //H2S RGB8
+    };
+
+    err = ff_formats_ref(ff_make_format_list(pix_in_fmts),
+                         &avctx->inputs[0]->out_formats);
+    if (err < 0)
+        return err;
+
+    err = ff_formats_ref(ff_make_format_list(pix_out_fmts),
+                         &avctx->outputs[0]->in_formats);
+    if (err < 0)
+        return err;
+
+    return ff_vaapi_vpp_query_formats(avctx);
+}
+
+#define OFFSET(x) offsetof(HDRVAAPIContext, x)
+#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM)
+static const AVOption tonemap_vaapi_options[] = {
+    { "type", "hdr type", OFFSET(hdr_type), AV_OPT_TYPE_INT, { .i64 = HDR_VAAPI_H2H }, 0, 1, FLAGS, "type" },
+    { "h2h", "vaapi P010 to A2R10G10B10", 0, AV_OPT_TYPE_CONST, { .i64 = HDR_VAAPI_H2H }, INT_MIN, INT_MAX, FLAGS, "type" },
+    { "h2s", "vaapi P010 to ARGB",        0, AV_OPT_TYPE_CONST, { .i64 = HDR_VAAPI_H2S }, INT_MIN, INT_MAX, FLAGS, "type" },
+    { "display", "set master display", OFFSET(master_display), AV_OPT_TYPE_STRING, { .str = NULL }, CHAR_MIN, CHAR_MAX, FLAGS },
+    { "light",   "set content light",  OFFSET(content_light),  AV_OPT_TYPE_STRING, { .str = NULL }, CHAR_MIN, CHAR_MAX, FLAGS },
+    { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(tonemap_vaapi);
+
+static const AVFilterPad tonemap_vaapi_inputs[] = {
+    {
+        .name         = "default",
+        .type         = AVMEDIA_TYPE_VIDEO,
+        .filter_frame = &tonemap_vaapi_filter_frame,
+        .config_props = &ff_vaapi_vpp_config_input,
+    },
+    { NULL }
+};
+
+static const AVFilterPad tonemap_vaapi_outputs[] = {
+    {
+        .name         = "default",
+        .type         = AVMEDIA_TYPE_VIDEO,
+        .config_props = &ff_vaapi_vpp_config_output,
+    },
+    { NULL }
+};
+
+AVFilter ff_vf_tonemap_vaapi = {
+    .name           = "tonemap_vaapi",
+    .description    = NULL_IF_CONFIG_SMALL("VAAPI VPP for tone mapping"),
+    .priv_size      = sizeof(HDRVAAPIContext),
+    .init           = &tonemap_vaapi_init,
+    .uninit         = &ff_vaapi_vpp_ctx_uninit,
+    .query_formats  = &tonemap_vaapi_vpp_query_formats,
+    .inputs         = tonemap_vaapi_inputs,
+    .outputs        = tonemap_vaapi_outputs,
+    .priv_class     = &tonemap_vaapi_class,
+    .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
+};
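
For reference, an end-to-end HDR-to-SDR transcode using this filter might look like the command below; the device path, file names, and the scale_vaapi/h264_vaapi stages are illustrative assumptions rather than part of this patch:

    ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 \
           -hwaccel vaapi -hwaccel_device va -hwaccel_output_format vaapi \
           -i hdr10_input.mkv \
           -vf "tonemap_vaapi=h2s,scale_vaapi=format=nv12" \
           -c:v h264_vaapi sdr_output.mp4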