From patchwork Tue May 28 15:47:52 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49315
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:52 +0800
Message-ID: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 01/15] avcodec/vaapi_encode: introduce a base layer for vaapi encode
Cc: Tong Wu

From: Tong Wu

Since VAAPI and the future D3D12VA implementation may share some common
parameters, a base layer encode context is introduced and used as the base
of the VAAPI encode context.
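The layering relies on plain struct embedding: the VAAPI encode context keeps
the new FFHWBaseEncodeContext as its first member, so shared code can read
avctx->priv_data as the base type while the VAAPI code keeps using its own
type. A minimal standalone sketch of that pattern follows; BaseEncodeContext
and VAAPILikeEncodeContext are illustrative placeholder names, not the actual
FFmpeg definitions.

    /* Standalone illustration of first-member embedding (not FFmpeg code). */
    #include <stdio.h>

    typedef struct BaseEncodeContext {
        const void *av_class;    /* stands in for "const AVClass *class" */
        int         async_depth; /* e.g. the option a later patch in the series moves here */
    } BaseEncodeContext;

    typedef struct VAAPILikeEncodeContext {
        BaseEncodeContext base;  /* must remain the first member */
        int               vaapi_specific_field;
    } VAAPILikeEncodeContext;

    static void print_async_depth(void *priv_data)
    {
        /* Generic code only knows about the base layer. */
        BaseEncodeContext *base = priv_data;
        printf("async_depth = %d\n", base->async_depth);
    }

    int main(void)
    {
        VAAPILikeEncodeContext ctx = {
            .base                 = { .async_depth = 2 },
            .vaapi_specific_field = 42,
        };

        /* The same object serves both the shared and the VAAPI-specific code. */
        print_async_depth(&ctx);
        printf("vaapi field = %d\n", ctx.vaapi_specific_field);
        return 0;
    }

Later patches in the series depend on exactly this layout when they read
avctx->priv_data as FFHWBaseEncodeContext in the shared code paths.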
Signed-off-by: Tong Wu
---
 libavcodec/hw_base_encode.h | 56 +++++++++++++++++++++++++++++++++++++
 libavcodec/vaapi_encode.h   | 39 +++++---------------------
 2 files changed, 63 insertions(+), 32 deletions(-)
 create mode 100644 libavcodec/hw_base_encode.h

diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
new file mode 100644
index 0000000000..ffcb6bcbb2
--- /dev/null
+++ b/libavcodec/hw_base_encode.h
@@ -0,0 +1,56 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_HW_BASE_ENCODE_H
+#define AVCODEC_HW_BASE_ENCODE_H
+
+#define MAX_DPB_SIZE 16
+#define MAX_PICTURE_REFERENCES 2
+#define MAX_REORDER_DELAY 16
+#define MAX_ASYNC_DEPTH 64
+#define MAX_REFERENCE_LIST_NUM 2
+
+enum {
+    PICTURE_TYPE_IDR = 0,
+    PICTURE_TYPE_I   = 1,
+    PICTURE_TYPE_P   = 2,
+    PICTURE_TYPE_B   = 3,
+};
+
+enum {
+    // Codec supports controlling the subdivision of pictures into slices.
+    FLAG_SLICE_CONTROL         = 1 << 0,
+    // Codec only supports constant quality (no rate control).
+    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
+    // Codec is intra-only.
+    FLAG_INTRA_ONLY            = 1 << 2,
+    // Codec supports B-pictures.
+    FLAG_B_PICTURES            = 1 << 3,
+    // Codec supports referencing B-pictures.
+    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
+    // Codec supports non-IDR key pictures (that is, key pictures do
+    // not necessarily empty the DPB).
+    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
+};
+
+typedef struct FFHWBaseEncodeContext {
+    const AVClass *class;
+} FFHWBaseEncodeContext;
+
+#endif /* AVCODEC_HW_BASE_ENCODE_H */
+
diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h
index 0eed9691ca..cae7af8725 100644
--- a/libavcodec/vaapi_encode.h
+++ b/libavcodec/vaapi_encode.h
@@ -33,34 +33,27 @@
 
 #include "avcodec.h"
 #include "hwconfig.h"
+#include "hw_base_encode.h"
 
 struct VAAPIEncodeType;
 struct VAAPIEncodePicture;
 
+// Codec output packet without timestamp delay, which means the
+// output packet has same PTS and DTS.
+#define FLAG_TIMESTAMP_NO_DELAY 1 << 6
+
 enum {
     MAX_CONFIG_ATTRIBUTES  = 4,
     MAX_GLOBAL_PARAMS      = 4,
-    MAX_DPB_SIZE           = 16,
-    MAX_PICTURE_REFERENCES = 2,
-    MAX_REORDER_DELAY      = 16,
     MAX_PARAM_BUFFER_SIZE  = 1024,
     // A.4.1: table A.6 allows at most 22 tile rows for any level.
     MAX_TILE_ROWS          = 22,
     // A.4.1: table A.6 allows at most 20 tile columns for any level.
     MAX_TILE_COLS          = 20,
-    MAX_ASYNC_DEPTH        = 64,
-    MAX_REFERENCE_LIST_NUM = 2,
 };
 
 extern const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[];
 
-enum {
-    PICTURE_TYPE_IDR = 0,
-    PICTURE_TYPE_I   = 1,
-    PICTURE_TYPE_P   = 2,
-    PICTURE_TYPE_B   = 3,
-};
-
 typedef struct VAAPIEncodeSlice {
     int             index;
     int             row_start;
@@ -193,7 +186,8 @@ typedef struct VAAPIEncodeRCMode {
 } VAAPIEncodeRCMode;
 
 typedef struct VAAPIEncodeContext {
-    const AVClass *class;
+    // Base context.
+    FFHWBaseEncodeContext base;
 
     // Codec-specific hooks.
     const struct VAAPIEncodeType *codec;
@@ -397,25 +391,6 @@ typedef struct VAAPIEncodeContext {
     AVPacket        *tail_pkt;
 } VAAPIEncodeContext;
 
-enum {
-    // Codec supports controlling the subdivision of pictures into slices.
-    FLAG_SLICE_CONTROL         = 1 << 0,
-    // Codec only supports constant quality (no rate control).
-    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
-    // Codec is intra-only.
-    FLAG_INTRA_ONLY            = 1 << 2,
-    // Codec supports B-pictures.
-    FLAG_B_PICTURES            = 1 << 3,
-    // Codec supports referencing B-pictures.
-    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
-    // Codec supports non-IDR key pictures (that is, key pictures do
-    // not necessarily empty the DPB).
-    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
-    // Codec output packet without timestamp delay, which means the
-    // output packet has same PTS and DTS.
-    FLAG_TIMESTAMP_NO_DELAY    = 1 << 6,
-};
-
 typedef struct VAAPIEncodeType {
     // List of supported profiles and corresponding VAAPI profiles.
     // (Must end with AV_PROFILE_UNKNOWN.)

From patchwork Tue May 28 15:47:53 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49317
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:53 +0800
Message-ID: <20240528154807.1151-2-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 02/15] avcodec/hw_base_encode: add FF_HW_ prefix for two enums
Cc: Tong Wu

From: Tong Wu

The PICTURE_TYPE_* and FLAG_* enum values are given an FF_HW_ prefix now
that they have been moved to the base layer.
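A small illustrative fragment (not part of the patch) of how backend code reads
after the rename; it assumes the FF_HW_-prefixed enum values from
hw_base_encode.h are in scope, and the helper names are hypothetical:

    /* Picture-type and capability checks spelled with the new names. */
    static int is_intra(int pic_type)
    {
        return pic_type == FF_HW_PICTURE_TYPE_IDR ||
               pic_type == FF_HW_PICTURE_TYPE_I;
    }

    static int wants_b_frames(unsigned codec_flags)
    {
        return (codec_flags & FF_HW_FLAG_B_PICTURES) &&
              !(codec_flags & FF_HW_FLAG_INTRA_ONLY);
    }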
Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.h | 20 ++++++------- libavcodec/vaapi_encode.c | 50 ++++++++++++++++----------------- libavcodec/vaapi_encode_av1.c | 12 ++++---- libavcodec/vaapi_encode_h264.c | 50 ++++++++++++++++----------------- libavcodec/vaapi_encode_h265.c | 44 ++++++++++++++--------------- libavcodec/vaapi_encode_mjpeg.c | 6 ++-- libavcodec/vaapi_encode_mpeg2.c | 30 ++++++++++---------- libavcodec/vaapi_encode_vp8.c | 10 +++---- libavcodec/vaapi_encode_vp9.c | 16 +++++------ 9 files changed, 119 insertions(+), 119 deletions(-) diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index ffcb6bcbb2..f3c9f32977 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -26,26 +26,26 @@ #define MAX_REFERENCE_LIST_NUM 2 enum { - PICTURE_TYPE_IDR = 0, - PICTURE_TYPE_I = 1, - PICTURE_TYPE_P = 2, - PICTURE_TYPE_B = 3, + FF_HW_PICTURE_TYPE_IDR = 0, + FF_HW_PICTURE_TYPE_I = 1, + FF_HW_PICTURE_TYPE_P = 2, + FF_HW_PICTURE_TYPE_B = 3, }; enum { // Codec supports controlling the subdivision of pictures into slices. - FLAG_SLICE_CONTROL = 1 << 0, + FF_HW_FLAG_SLICE_CONTROL = 1 << 0, // Codec only supports constant quality (no rate control). - FLAG_CONSTANT_QUALITY_ONLY = 1 << 1, + FF_HW_FLAG_CONSTANT_QUALITY_ONLY = 1 << 1, // Codec is intra-only. - FLAG_INTRA_ONLY = 1 << 2, + FF_HW_FLAG_INTRA_ONLY = 1 << 2, // Codec supports B-pictures. - FLAG_B_PICTURES = 1 << 3, + FF_HW_FLAG_B_PICTURES = 1 << 3, // Codec supports referencing B-pictures. - FLAG_B_PICTURE_REFERENCES = 1 << 4, + FF_HW_FLAG_B_PICTURE_REFERENCES = 1 << 4, // Codec supports non-IDR key pictures (that is, key pictures do // not necessarily empty the DPB). - FLAG_NON_IDR_KEY_PICTURES = 1 << 5, + FF_HW_FLAG_NON_IDR_KEY_PICTURES = 1 << 5, }; typedef struct FFHWBaseEncodeContext { diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index f54b2579ec..3c3d6a37c7 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -345,7 +345,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, pic->nb_param_buffers = 0; - if (pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) { err = vaapi_encode_make_param_buffer(avctx, pic, VAEncSequenceParameterBufferType, ctx->codec_sequence_params, @@ -354,7 +354,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, goto fail; } - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { for (i = 0; i < ctx->nb_global_params; i++) { err = vaapi_encode_make_misc_param_buffer(avctx, pic, ctx->global_params_type[i], @@ -391,7 +391,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } #endif - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { if (ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SEQUENCE && ctx->codec->write_sequence_header) { bit_len = 8 * sizeof(data); @@ -671,7 +671,7 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx, { VAAPIEncodeContext *ctx = avctx->priv_data; - if (pic->type == PICTURE_TYPE_IDR) + if (pic->type == FF_HW_PICTURE_TYPE_IDR) pkt->flags |= AV_PKT_FLAG_KEY; pkt->pts = pic->pts; @@ -996,7 +996,7 @@ static void vaapi_encode_remove_refs(AVCodecContext *avctx, av_assert0(pic->dpb[i]->ref_count[level] >= 0); } - av_assert0(pic->prev || pic->type == PICTURE_TYPE_IDR); + av_assert0(pic->prev || pic->type == FF_HW_PICTURE_TYPE_IDR); if (pic->prev) { --pic->prev->ref_count[level]; av_assert0(pic->prev->ref_count[level] >= 0); 
@@ -1025,7 +1025,7 @@ static void vaapi_encode_set_b_pictures(AVCodecContext *avctx, for (pic = start->next; pic; pic = pic->next) { if (pic == end) break; - pic->type = PICTURE_TYPE_B; + pic->type = FF_HW_PICTURE_TYPE_B; pic->b_depth = current_depth; vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0); @@ -1045,7 +1045,7 @@ static void vaapi_encode_set_b_pictures(AVCodecContext *avctx, ++len; for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++); - pic->type = PICTURE_TYPE_B; + pic->type = FF_HW_PICTURE_TYPE_B; pic->b_depth = current_depth; pic->is_reference = 1; @@ -1078,7 +1078,7 @@ static void vaapi_encode_add_next_prev(AVCodecContext *avctx, if (!pic) return; - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { for (i = 0; i < ctx->nb_next_prev; i++) { --ctx->next_prev[i]->ref_count[0]; ctx->next_prev[i] = NULL; @@ -1115,7 +1115,7 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx, for (pic = ctx->pic_start; pic; pic = pic->next) { if (pic->encode_issued) continue; - if (pic->type != PICTURE_TYPE_B) + if (pic->type != FF_HW_PICTURE_TYPE_B) continue; for (i = 0; i < pic->nb_refs[0]; i++) { if (!pic->refs[0][i]->encode_issued) @@ -1192,7 +1192,7 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx, if (pic->force_idr) { av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to " "encode next.\n"); - pic->type = PICTURE_TYPE_IDR; + pic->type = FF_HW_PICTURE_TYPE_IDR; ctx->idr_counter = 1; ctx->gop_counter = 1; @@ -1200,12 +1200,12 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx, if (ctx->idr_counter == ctx->gop_per_idr) { av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to " "encode next.\n"); - pic->type = PICTURE_TYPE_IDR; + pic->type = FF_HW_PICTURE_TYPE_IDR; ctx->idr_counter = 1; } else { av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to " "encode next.\n"); - pic->type = PICTURE_TYPE_I; + pic->type = FF_HW_PICTURE_TYPE_I; ++ctx->idr_counter; } ctx->gop_counter = 1; @@ -1218,7 +1218,7 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to " "encode next.\n"); } - pic->type = PICTURE_TYPE_P; + pic->type = FF_HW_PICTURE_TYPE_P; av_assert0(start); ctx->gop_counter += 1 + b_counter; } @@ -1226,18 +1226,18 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx, *pic_out = pic; vaapi_encode_add_ref(avctx, pic, pic, 0, 1, 0); - if (pic->type != PICTURE_TYPE_IDR) { + if (pic->type != FF_HW_PICTURE_TYPE_IDR) { // TODO: apply both previous and forward multi reference for all vaapi encoders. // And L0/L1 reference frame number can be set dynamically through query // VAConfigAttribEncMaxRefFrames attribute. if (avctx->codec_id == AV_CODEC_ID_AV1) { for (i = 0; i < ctx->nb_next_prev; i++) vaapi_encode_add_ref(avctx, pic, ctx->next_prev[i], - pic->type == PICTURE_TYPE_P, + pic->type == FF_HW_PICTURE_TYPE_P, b_counter > 0, 0); } else vaapi_encode_add_ref(avctx, pic, start, - pic->type == PICTURE_TYPE_P, + pic->type == FF_HW_PICTURE_TYPE_P, b_counter > 0, 0); vaapi_encode_add_ref(avctx, pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1); @@ -1405,7 +1405,7 @@ start: /** if no B frame before repeat P frame, sent repeat P frame out. 
*/ if (ctx->tail_pkt->size) { for (VAAPIEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) { - if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts) + if (tmp->type == FF_HW_PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts) break; else if (!tmp->next) { av_packet_move_ref(pkt, ctx->tail_pkt); @@ -1875,7 +1875,7 @@ static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx) if (ctx->explicit_qp) TRY_RC_MODE(RC_MODE_CQP, 1); - if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY) + if (ctx->codec->flags & FF_HW_FLAG_CONSTANT_QUALITY_ONLY) TRY_RC_MODE(RC_MODE_CQP, 1); if (avctx->flags & AV_CODEC_FLAG_QSCALE) @@ -2215,7 +2215,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) prediction_pre_only = 0; #if VA_CHECK_VERSION(1, 9, 0) - if (!(ctx->codec->flags & FLAG_INTRA_ONLY || + if (!(ctx->codec->flags & FF_HW_FLAG_INTRA_ONLY || avctx->gop_size <= 1)) { attr = (VAConfigAttrib) { VAConfigAttribPredictionDirection }; vas = vaGetConfigAttributes(ctx->hwctx->display, @@ -2256,7 +2256,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) } #endif - if (ctx->codec->flags & FLAG_INTRA_ONLY || + if (ctx->codec->flags & FF_HW_FLAG_INTRA_ONLY || avctx->gop_size <= 1) { av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); ctx->gop_size = 1; @@ -2264,7 +2264,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) av_log(avctx, AV_LOG_ERROR, "Driver does not support any " "reference frames.\n"); return AVERROR(EINVAL); - } else if (!(ctx->codec->flags & FLAG_B_PICTURES) || + } else if (!(ctx->codec->flags & FF_HW_FLAG_B_PICTURES) || ref_l1 < 1 || avctx->max_b_frames < 1 || prediction_pre_only) { if (ctx->p_to_gpb) @@ -2288,7 +2288,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) ctx->gop_size = avctx->gop_size; ctx->p_per_i = INT_MAX; ctx->b_per_p = avctx->max_b_frames; - if (ctx->codec->flags & FLAG_B_PICTURE_REFERENCES) { + if (ctx->codec->flags & FF_HW_FLAG_B_PICTURE_REFERENCES) { ctx->max_b_depth = FFMIN(ctx->desired_b_depth, av_log2(ctx->b_per_p) + 1); } else { @@ -2296,7 +2296,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) } } - if (ctx->codec->flags & FLAG_NON_IDR_KEY_PICTURES) { + if (ctx->codec->flags & FF_HW_FLAG_NON_IDR_KEY_PICTURES) { ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); ctx->gop_per_idr = ctx->idr_interval + 1; } else { @@ -2426,7 +2426,7 @@ static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx) uint32_t max_slices, slice_structure; int ret; - if (!(ctx->codec->flags & FLAG_SLICE_CONTROL)) { + if (!(ctx->codec->flags & FF_HW_FLAG_SLICE_CONTROL)) { if (avctx->slices > 0) { av_log(avctx, AV_LOG_WARNING, "Multiple slices were requested " "but this codec does not support controlling slices.\n"); @@ -2827,7 +2827,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) // Assume 16x16 blocks. 
ctx->surface_width = FFALIGN(avctx->width, 16); ctx->surface_height = FFALIGN(avctx->height, 16); - if (ctx->codec->flags & FLAG_SLICE_CONTROL) { + if (ctx->codec->flags & FF_HW_FLAG_SLICE_CONTROL) { ctx->slice_block_width = 16; ctx->slice_block_height = 16; } diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index b868f5b66a..7740942f2c 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -488,7 +488,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, fh_obu->header.obu_has_size_field = 1; switch (pic->type) { - case PICTURE_TYPE_IDR: + case FF_HW_PICTURE_TYPE_IDR: av_assert0(pic->nb_refs[0] == 0 || pic->nb_refs[1]); fh->frame_type = AV1_FRAME_KEY; fh->refresh_frame_flags = 0xFF; @@ -496,7 +496,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, hpic->slot = 0; hpic->last_idr_frame = pic->display_order; break; - case PICTURE_TYPE_P: + case FF_HW_PICTURE_TYPE_P: av_assert0(pic->nb_refs[0]); fh->frame_type = AV1_FRAME_INTER; fh->base_q_idx = priv->q_idx_p; @@ -523,7 +523,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->ref_frame_ctrl_l0.fields.search_idx1 = AV1_REF_FRAME_GOLDEN; } break; - case PICTURE_TYPE_B: + case FF_HW_PICTURE_TYPE_B: av_assert0(pic->nb_refs[0] && pic->nb_refs[1]); fh->frame_type = AV1_FRAME_INTER; fh->base_q_idx = priv->q_idx_b; @@ -656,7 +656,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->bit_offset_cdef_params = priv->cdef_start_offset; vpic->size_in_bits_cdef_params = priv->cdef_param_size; vpic->size_in_bits_frame_hdr_obu = priv->fh_data_len; - vpic->byte_offset_frame_hdr_obu_size = (((pic->type == PICTURE_TYPE_IDR) ? + vpic->byte_offset_frame_hdr_obu_size = (((pic->type == FF_HW_PICTURE_TYPE_IDR) ? priv->sh_data_len / 8 : 0) + (fh_obu->header.obu_extension_flag ? 
2 : 1)); @@ -664,7 +664,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, priv->nb_mh = 0; - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { AVFrameSideData *sd = av_frame_get_side_data(pic->input_image, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA); @@ -841,7 +841,7 @@ static const VAAPIEncodeProfile vaapi_encode_av1_profiles[] = { static const VAAPIEncodeType vaapi_encode_type_av1 = { .profiles = vaapi_encode_av1_profiles, - .flags = FLAG_B_PICTURES | FLAG_TIMESTAMP_NO_DELAY, + .flags = FF_HW_FLAG_B_PICTURES | FLAG_TIMESTAMP_NO_DELAY, .default_quality = 25, .get_encoder_caps = &vaapi_encode_av1_get_encoder_caps, diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index d656b1020f..df116d6610 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -233,7 +233,7 @@ static int vaapi_encode_h264_write_extra_header(AVCodecContext *avctx, goto fail; } if (priv->sei_needed & SEI_TIMING) { - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { err = ff_cbs_sei_add_message(priv->cbc, au, 1, SEI_TYPE_BUFFERING_PERIOD, &priv->sei_buffering_period, NULL); @@ -629,7 +629,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, VAEncPictureParameterBufferH264 *vpic = pic->codec_picture_params; int i, j = 0; - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { av_assert0(pic->display_order == pic->encode_order); hpic->frame_num = 0; @@ -646,10 +646,10 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, hpic->last_idr_frame = hprev->last_idr_frame; hpic->idr_pic_id = hprev->idr_pic_id; - if (pic->type == PICTURE_TYPE_I) { + if (pic->type == FF_HW_PICTURE_TYPE_I) { hpic->slice_type = 7; hpic->primary_pic_type = 0; - } else if (pic->type == PICTURE_TYPE_P) { + } else if (pic->type == FF_HW_PICTURE_TYPE_P) { hpic->slice_type = 5; hpic->primary_pic_type = 1; } else { @@ -695,7 +695,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, priv->sei_needed |= SEI_TIMING; } - if (priv->sei & SEI_RECOVERY_POINT && pic->type == PICTURE_TYPE_I) { + if (priv->sei & SEI_RECOVERY_POINT && pic->type == FF_HW_PICTURE_TYPE_I) { priv->sei_recovery_point = (H264RawSEIRecoveryPoint) { .recovery_frame_cnt = 0, .exact_match_flag = 1, @@ -757,7 +757,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, vpic->frame_num = hpic->frame_num; - vpic->pic_fields.bits.idr_pic_flag = (pic->type == PICTURE_TYPE_IDR); + vpic->pic_fields.bits.idr_pic_flag = (pic->type == FF_HW_PICTURE_TYPE_IDR); vpic->pic_fields.bits.reference_pic_flag = pic->is_reference; return 0; @@ -781,7 +781,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, hn = prev->dpb[i]->priv_data; av_assert0(hn->frame_num < hp->frame_num); - if (pic->type == PICTURE_TYPE_P) { + if (pic->type == FF_HW_PICTURE_TYPE_P) { for (j = n; j > 0; j--) { hc = rpl0[j - 1]->priv_data; av_assert0(hc->frame_num != hn->frame_num); @@ -791,7 +791,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } rpl0[j] = prev->dpb[i]; - } else if (pic->type == PICTURE_TYPE_B) { + } else if (pic->type == FF_HW_PICTURE_TYPE_B) { for (j = n; j > 0; j--) { hc = rpl0[j - 1]->priv_data; av_assert0(hc->pic_order_cnt != hp->pic_order_cnt); @@ -826,7 +826,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, ++n; } - if (pic->type == PICTURE_TYPE_B) { + if (pic->type == 
FF_HW_PICTURE_TYPE_B) { for (i = 0; i < n; i++) { if (rpl0[i] != rpl1[i]) break; @@ -835,8 +835,8 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, FFSWAP(VAAPIEncodePicture*, rpl1[0], rpl1[1]); } - if (pic->type == PICTURE_TYPE_P || - pic->type == PICTURE_TYPE_B) { + if (pic->type == FF_HW_PICTURE_TYPE_P || + pic->type == FF_HW_PICTURE_TYPE_B) { av_log(avctx, AV_LOG_DEBUG, "Default RefPicList0 for fn=%d/poc=%d:", hp->frame_num, hp->pic_order_cnt); for (i = 0; i < n; i++) { @@ -846,7 +846,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } av_log(avctx, AV_LOG_DEBUG, "\n"); } - if (pic->type == PICTURE_TYPE_B) { + if (pic->type == FF_HW_PICTURE_TYPE_B) { av_log(avctx, AV_LOG_DEBUG, "Default RefPicList1 for fn=%d/poc=%d:", hp->frame_num, hp->pic_order_cnt); for (i = 0; i < n; i++) { @@ -874,7 +874,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, VAEncSliceParameterBufferH264 *vslice = slice->codec_slice_params; int i, j; - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { sh->nal_unit_header.nal_unit_type = H264_NAL_IDR_SLICE; sh->nal_unit_header.nal_ref_idc = 3; } else { @@ -895,14 +895,14 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, sh->direct_spatial_mv_pred_flag = 1; - if (pic->type == PICTURE_TYPE_B) + if (pic->type == FF_HW_PICTURE_TYPE_B) sh->slice_qp_delta = priv->fixed_qp_b - (pps->pic_init_qp_minus26 + 26); - else if (pic->type == PICTURE_TYPE_P) + else if (pic->type == FF_HW_PICTURE_TYPE_P) sh->slice_qp_delta = priv->fixed_qp_p - (pps->pic_init_qp_minus26 + 26); else sh->slice_qp_delta = priv->fixed_qp_idr - (pps->pic_init_qp_minus26 + 26); - if (pic->is_reference && pic->type != PICTURE_TYPE_IDR) { + if (pic->is_reference && pic->type != FF_HW_PICTURE_TYPE_IDR) { VAAPIEncodePicture *discard_list[MAX_DPB_SIZE]; int discard = 0, keep = 0; @@ -939,7 +939,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, // If the intended references are not the first entries of RefPicListN // by default, use ref-pic-list-modification to move them there. - if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) { + if (pic->type == FF_HW_PICTURE_TYPE_P || pic->type == FF_HW_PICTURE_TYPE_B) { VAAPIEncodePicture *def_l0[MAX_DPB_SIZE], *def_l1[MAX_DPB_SIZE]; VAAPIEncodeH264Picture *href; int n; @@ -947,7 +947,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, vaapi_encode_h264_default_ref_pic_list(avctx, pic, def_l0, def_l1, &n); - if (pic->type == PICTURE_TYPE_P) { + if (pic->type == FF_HW_PICTURE_TYPE_P) { int need_rplm = 0; for (i = 0; i < pic->nb_refs[0]; i++) { av_assert0(pic->refs[0][i]); @@ -1064,13 +1064,13 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, if (pic->nb_refs[0]) { // Backward reference for P- or B-frame. - av_assert0(pic->type == PICTURE_TYPE_P || - pic->type == PICTURE_TYPE_B); + av_assert0(pic->type == FF_HW_PICTURE_TYPE_P || + pic->type == FF_HW_PICTURE_TYPE_B); vslice->RefPicList0[0] = vpic->ReferenceFrames[0]; } if (pic->nb_refs[1]) { // Forward reference for B-frame. 
- av_assert0(pic->type == PICTURE_TYPE_B); + av_assert0(pic->type == FF_HW_PICTURE_TYPE_B); vslice->RefPicList1[0] = vpic->ReferenceFrames[1]; } @@ -1170,10 +1170,10 @@ static const VAAPIEncodeProfile vaapi_encode_h264_profiles[] = { static const VAAPIEncodeType vaapi_encode_type_h264 = { .profiles = vaapi_encode_h264_profiles, - .flags = FLAG_SLICE_CONTROL | - FLAG_B_PICTURES | - FLAG_B_PICTURE_REFERENCES | - FLAG_NON_IDR_KEY_PICTURES, + .flags = FF_HW_FLAG_SLICE_CONTROL | + FF_HW_FLAG_B_PICTURES | + FF_HW_FLAG_B_PICTURE_REFERENCES | + FF_HW_FLAG_NON_IDR_KEY_PICTURES, .default_quality = 20, diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index 2f59161346..6b272d9108 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -765,7 +765,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, VAEncPictureParameterBufferHEVC *vpic = pic->codec_picture_params; int i, j = 0; - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { av_assert0(pic->display_order == pic->encode_order); hpic->last_idr_frame = pic->display_order; @@ -777,11 +777,11 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, av_assert0(prev); hpic->last_idr_frame = hprev->last_idr_frame; - if (pic->type == PICTURE_TYPE_I) { + if (pic->type == FF_HW_PICTURE_TYPE_I) { hpic->slice_nal_unit = HEVC_NAL_CRA_NUT; hpic->slice_type = HEVC_SLICE_I; hpic->pic_type = 0; - } else if (pic->type == PICTURE_TYPE_P) { + } else if (pic->type == FF_HW_PICTURE_TYPE_P) { av_assert0(pic->refs[0]); hpic->slice_nal_unit = HEVC_NAL_TRAIL_R; hpic->slice_type = HEVC_SLICE_P; @@ -790,7 +790,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *irap_ref; av_assert0(pic->refs[0][0] && pic->refs[1][0]); for (irap_ref = pic; irap_ref; irap_ref = irap_ref->refs[1][0]) { - if (irap_ref->type == PICTURE_TYPE_I) + if (irap_ref->type == FF_HW_PICTURE_TYPE_I) break; } if (pic->b_depth == ctx->max_b_depth) { @@ -826,7 +826,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, // may force an IDR frame on the output where the medadata gets // changed on the input frame. 
if ((priv->sei & SEI_MASTERING_DISPLAY) && - (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) { + (pic->type == FF_HW_PICTURE_TYPE_I || pic->type == FF_HW_PICTURE_TYPE_IDR)) { AVFrameSideData *sd = av_frame_get_side_data(pic->input_image, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA); @@ -874,7 +874,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, } if ((priv->sei & SEI_CONTENT_LIGHT_LEVEL) && - (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) { + (pic->type == FF_HW_PICTURE_TYPE_I || pic->type == FF_HW_PICTURE_TYPE_IDR)) { AVFrameSideData *sd = av_frame_get_side_data(pic->input_image, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL); @@ -946,19 +946,19 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, vpic->pic_fields.bits.reference_pic_flag = pic->is_reference; switch (pic->type) { - case PICTURE_TYPE_IDR: + case FF_HW_PICTURE_TYPE_IDR: vpic->pic_fields.bits.idr_pic_flag = 1; vpic->pic_fields.bits.coding_type = 1; break; - case PICTURE_TYPE_I: + case FF_HW_PICTURE_TYPE_I: vpic->pic_fields.bits.idr_pic_flag = 0; vpic->pic_fields.bits.coding_type = 1; break; - case PICTURE_TYPE_P: + case FF_HW_PICTURE_TYPE_P: vpic->pic_fields.bits.idr_pic_flag = 0; vpic->pic_fields.bits.coding_type = 2; break; - case PICTURE_TYPE_B: + case FF_HW_PICTURE_TYPE_B: vpic->pic_fields.bits.idr_pic_flag = 0; vpic->pic_fields.bits.coding_type = 3; break; @@ -1002,7 +1002,7 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, sh->slice_pic_order_cnt_lsb = hpic->pic_order_cnt & (1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4)) - 1; - if (pic->type != PICTURE_TYPE_IDR) { + if (pic->type != FF_HW_PICTURE_TYPE_IDR) { H265RawSTRefPicSet *rps; const VAAPIEncodeH265Picture *strp; int rps_poc[MAX_DPB_SIZE]; @@ -1109,9 +1109,9 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, sh->slice_sao_luma_flag = sh->slice_sao_chroma_flag = sps->sample_adaptive_offset_enabled_flag; - if (pic->type == PICTURE_TYPE_B) + if (pic->type == FF_HW_PICTURE_TYPE_B) sh->slice_qp_delta = priv->fixed_qp_b - (pps->init_qp_minus26 + 26); - else if (pic->type == PICTURE_TYPE_P) + else if (pic->type == FF_HW_PICTURE_TYPE_P) sh->slice_qp_delta = priv->fixed_qp_p - (pps->init_qp_minus26 + 26); else sh->slice_qp_delta = priv->fixed_qp_idr - (pps->init_qp_minus26 + 26); @@ -1168,20 +1168,20 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, if (pic->nb_refs[0]) { // Backward reference for P- or B-frame. - av_assert0(pic->type == PICTURE_TYPE_P || - pic->type == PICTURE_TYPE_B); + av_assert0(pic->type == FF_HW_PICTURE_TYPE_P || + pic->type == FF_HW_PICTURE_TYPE_B); vslice->ref_pic_list0[0] = vpic->reference_frames[0]; - if (ctx->p_to_gpb && pic->type == PICTURE_TYPE_P) + if (ctx->p_to_gpb && pic->type == FF_HW_PICTURE_TYPE_P) // Reference for GPB B-frame, L0 == L1 vslice->ref_pic_list1[0] = vpic->reference_frames[0]; } if (pic->nb_refs[1]) { // Forward reference for B-frame. 
- av_assert0(pic->type == PICTURE_TYPE_B); + av_assert0(pic->type == FF_HW_PICTURE_TYPE_B); vslice->ref_pic_list1[0] = vpic->reference_frames[1]; } - if (pic->type == PICTURE_TYPE_P && ctx->p_to_gpb) { + if (pic->type == FF_HW_PICTURE_TYPE_P && ctx->p_to_gpb) { vslice->slice_type = HEVC_SLICE_B; for (i = 0; i < FF_ARRAY_ELEMS(vslice->ref_pic_list0); i++) { vslice->ref_pic_list1[i].picture_id = vslice->ref_pic_list0[i].picture_id; @@ -1321,10 +1321,10 @@ static const VAAPIEncodeProfile vaapi_encode_h265_profiles[] = { static const VAAPIEncodeType vaapi_encode_type_h265 = { .profiles = vaapi_encode_h265_profiles, - .flags = FLAG_SLICE_CONTROL | - FLAG_B_PICTURES | - FLAG_B_PICTURE_REFERENCES | - FLAG_NON_IDR_KEY_PICTURES, + .flags = FF_HW_FLAG_SLICE_CONTROL | + FF_HW_FLAG_B_PICTURES | + FF_HW_FLAG_B_PICTURE_REFERENCES | + FF_HW_FLAG_NON_IDR_KEY_PICTURES, .default_quality = 25, diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index c17747e3a9..0e74a3eb6f 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -232,7 +232,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, const uint8_t *components; int t, i, quant_scale, len; - av_assert0(pic->type == PICTURE_TYPE_IDR); + av_assert0(pic->type == FF_HW_PICTURE_TYPE_IDR); desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); av_assert0(desc); @@ -494,8 +494,8 @@ static const VAAPIEncodeProfile vaapi_encode_mjpeg_profiles[] = { static const VAAPIEncodeType vaapi_encode_type_mjpeg = { .profiles = vaapi_encode_mjpeg_profiles, - .flags = FLAG_CONSTANT_QUALITY_ONLY | - FLAG_INTRA_ONLY, + .flags = FF_HW_FLAG_CONSTANT_QUALITY_ONLY | + FF_HW_FLAG_INTRA_ONLY, .get_encoder_caps = &vaapi_encode_mjpeg_get_encoder_caps, .configure = &vaapi_encode_mjpeg_configure, diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c index c9b16fbcfc..92bf6e443e 100644 --- a/libavcodec/vaapi_encode_mpeg2.c +++ b/libavcodec/vaapi_encode_mpeg2.c @@ -424,23 +424,23 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, MPEG2RawPictureCodingExtension *pce = &priv->picture_coding_extension.data.picture_coding; VAEncPictureParameterBufferMPEG2 *vpic = pic->codec_picture_params; - if (pic->type == PICTURE_TYPE_IDR || pic->type == PICTURE_TYPE_I) { + if (pic->type == FF_HW_PICTURE_TYPE_IDR || pic->type == FF_HW_PICTURE_TYPE_I) { ph->temporal_reference = 0; ph->picture_coding_type = 1; priv->last_i_frame = pic->display_order; } else { ph->temporal_reference = pic->display_order - priv->last_i_frame; - ph->picture_coding_type = pic->type == PICTURE_TYPE_B ? 3 : 2; + ph->picture_coding_type = pic->type == FF_HW_PICTURE_TYPE_B ? 
3 : 2; } - if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) { + if (pic->type == FF_HW_PICTURE_TYPE_P || pic->type == FF_HW_PICTURE_TYPE_B) { pce->f_code[0][0] = priv->f_code_horizontal; pce->f_code[0][1] = priv->f_code_vertical; } else { pce->f_code[0][0] = 15; pce->f_code[0][1] = 15; } - if (pic->type == PICTURE_TYPE_B) { + if (pic->type == FF_HW_PICTURE_TYPE_B) { pce->f_code[1][0] = priv->f_code_horizontal; pce->f_code[1][1] = priv->f_code_vertical; } else { @@ -452,15 +452,15 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, vpic->coded_buf = pic->output_buffer; switch (pic->type) { - case PICTURE_TYPE_IDR: - case PICTURE_TYPE_I: + case FF_HW_PICTURE_TYPE_IDR: + case FF_HW_PICTURE_TYPE_I: vpic->picture_type = VAEncPictureTypeIntra; break; - case PICTURE_TYPE_P: + case FF_HW_PICTURE_TYPE_P: vpic->picture_type = VAEncPictureTypePredictive; vpic->forward_reference_picture = pic->refs[0][0]->recon_surface; break; - case PICTURE_TYPE_B: + case FF_HW_PICTURE_TYPE_B: vpic->picture_type = VAEncPictureTypeBidirectional; vpic->forward_reference_picture = pic->refs[0][0]->recon_surface; vpic->backward_reference_picture = pic->refs[1][0]->recon_surface; @@ -490,14 +490,14 @@ static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx, vslice->num_macroblocks = slice->block_size; switch (pic->type) { - case PICTURE_TYPE_IDR: - case PICTURE_TYPE_I: + case FF_HW_PICTURE_TYPE_IDR: + case FF_HW_PICTURE_TYPE_I: qp = priv->quant_i; break; - case PICTURE_TYPE_P: + case FF_HW_PICTURE_TYPE_P: qp = priv->quant_p; break; - case PICTURE_TYPE_B: + case FF_HW_PICTURE_TYPE_B: qp = priv->quant_b; break; default: @@ -505,8 +505,8 @@ static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx, } vslice->quantiser_scale_code = qp; - vslice->is_intra_slice = (pic->type == PICTURE_TYPE_IDR || - pic->type == PICTURE_TYPE_I); + vslice->is_intra_slice = (pic->type == FF_HW_PICTURE_TYPE_IDR || + pic->type == FF_HW_PICTURE_TYPE_I); return 0; } @@ -566,7 +566,7 @@ static const VAAPIEncodeProfile vaapi_encode_mpeg2_profiles[] = { static const VAAPIEncodeType vaapi_encode_type_mpeg2 = { .profiles = vaapi_encode_mpeg2_profiles, - .flags = FLAG_B_PICTURES, + .flags = FF_HW_FLAG_B_PICTURES, .configure = &vaapi_encode_mpeg2_configure, diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c index 8a557b967e..7dcd3d84af 100644 --- a/libavcodec/vaapi_encode_vp8.c +++ b/libavcodec/vaapi_encode_vp8.c @@ -84,8 +84,8 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, vpic->coded_buf = pic->output_buffer; switch (pic->type) { - case PICTURE_TYPE_IDR: - case PICTURE_TYPE_I: + case FF_HW_PICTURE_TYPE_IDR: + case FF_HW_PICTURE_TYPE_I: av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0); vpic->ref_flags.bits.force_kf = 1; vpic->ref_last_frame = @@ -93,7 +93,7 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, vpic->ref_arf_frame = VA_INVALID_SURFACE; break; - case PICTURE_TYPE_P: + case FF_HW_PICTURE_TYPE_P: av_assert0(!pic->nb_refs[1]); vpic->ref_flags.bits.no_ref_last = 0; vpic->ref_flags.bits.no_ref_gf = 1; @@ -107,7 +107,7 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, av_assert0(0 && "invalid picture type"); } - vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR); + vpic->pic_flags.bits.frame_type = (pic->type != FF_HW_PICTURE_TYPE_IDR); vpic->pic_flags.bits.show_frame = 1; vpic->pic_flags.bits.refresh_last = 1; @@ -145,7 +145,7 @@ static int 
vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx,
 
     memset(&quant, 0, sizeof(quant));
 
-    if (pic->type == PICTURE_TYPE_P)
+    if (pic->type == FF_HW_PICTURE_TYPE_P)
         q = priv->q_index_p;
     else
         q = priv->q_index_i;
diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c
index c2a8dec71b..4eec0fd445 100644
--- a/libavcodec/vaapi_encode_vp9.c
+++ b/libavcodec/vaapi_encode_vp9.c
@@ -95,13 +95,13 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
         vpic->log2_tile_columns = num_tile_columns == 1 ? 0 : av_log2(num_tile_columns - 1) + 1;
 
     switch (pic->type) {
-    case PICTURE_TYPE_IDR:
+    case FF_HW_PICTURE_TYPE_IDR:
         av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0);
         vpic->ref_flags.bits.force_kf = 1;
         vpic->refresh_frame_flags = 0xff;
         hpic->slot = 0;
         break;
-    case PICTURE_TYPE_P:
+    case FF_HW_PICTURE_TYPE_P:
         av_assert0(!pic->nb_refs[1]);
         {
             VAAPIEncodeVP9Picture *href = pic->refs[0][0]->priv_data;
@@ -119,7 +119,7 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
             vpic->ref_flags.bits.ref_last_sign_bias = 1;
         }
         break;
-    case PICTURE_TYPE_B:
+    case FF_HW_PICTURE_TYPE_B:
         av_assert0(pic->nb_refs[0] && pic->nb_refs[1]);
         {
             VAAPIEncodeVP9Picture *href0 = pic->refs[0][0]->priv_data,
@@ -167,12 +167,12 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
         }
     }
 
-    vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR);
+    vpic->pic_flags.bits.frame_type = (pic->type != FF_HW_PICTURE_TYPE_IDR);
     vpic->pic_flags.bits.show_frame = pic->display_order <= pic->encode_order;
 
-    if (pic->type == PICTURE_TYPE_IDR)
+    if (pic->type == FF_HW_PICTURE_TYPE_IDR)
         vpic->luma_ac_qindex = priv->q_idx_idr;
-    else if (pic->type == PICTURE_TYPE_P)
+    else if (pic->type == FF_HW_PICTURE_TYPE_P)
         vpic->luma_ac_qindex = priv->q_idx_p;
     else
         vpic->luma_ac_qindex = priv->q_idx_b;
@@ -239,8 +239,8 @@ static const VAAPIEncodeProfile vaapi_encode_vp9_profiles[] = {
 
 static const VAAPIEncodeType vaapi_encode_type_vp9 = {
     .profiles              = vaapi_encode_vp9_profiles,
 
-    .flags                 = FLAG_B_PICTURES |
-                             FLAG_B_PICTURE_REFERENCES,
+    .flags                 = FF_HW_FLAG_B_PICTURES |
+                             FF_HW_FLAG_B_PICTURE_REFERENCES,
 
     .default_quality       = 100,

From patchwork Tue May 28 15:47:54 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49318
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:54 +0800
Message-ID: <20240528154807.1151-3-tong1.wu@intel.com>
In-Reply-To:
<20240528154807.1151-1-tong1.wu@intel.com> References: <20240528154807.1151-1-tong1.wu@intel.com> MIME-Version: 1.0 Subject: [FFmpeg-devel] [PATCH v12 03/15] avcodec/vaapi_encode: add async_depth to common options X-BeenThere: ffmpeg-devel@ffmpeg.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: FFmpeg development discussions and patches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: FFmpeg development discussions and patches Cc: Tong Wu Errors-To: ffmpeg-devel-bounces@ffmpeg.org Sender: "ffmpeg-devel" X-TUID: jrrrJGXoC0HN From: Tong Wu Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.h | 10 +++++++++- libavcodec/vaapi_encode.c | 13 ++++++++----- libavcodec/vaapi_encode.h | 7 ------- libavcodec/vaapi_encode_av1.c | 1 + libavcodec/vaapi_encode_h264.c | 1 + libavcodec/vaapi_encode_h265.c | 1 + libavcodec/vaapi_encode_mjpeg.c | 1 + libavcodec/vaapi_encode_mpeg2.c | 1 + libavcodec/vaapi_encode_vp8.c | 1 + libavcodec/vaapi_encode_vp9.c | 1 + 10 files changed, 24 insertions(+), 13 deletions(-) diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index f3c9f32977..c14c174102 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -50,7 +50,15 @@ enum { typedef struct FFHWBaseEncodeContext { const AVClass *class; + + // Max number of frame buffered in encoder. + int async_depth; } FFHWBaseEncodeContext; -#endif /* AVCODEC_HW_BASE_ENCODE_H */ +#define HW_BASE_ENCODE_COMMON_OPTIONS \ + { "async_depth", "Maximum processing parallelism. " \ + "Increase this to improve single channel performance.", \ + OFFSET(common.base.async_depth), AV_OPT_TYPE_INT, \ + { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS } +#endif /* AVCODEC_HW_BASE_ENCODE_H */ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 3c3d6a37c7..77f539dc6d 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -669,7 +669,8 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx, VAAPIEncodePicture *pic, AVPacket *pkt) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; if (pic->type == FF_HW_PICTURE_TYPE_IDR) pkt->flags |= AV_PKT_FLAG_KEY; @@ -699,7 +700,7 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx, pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff; } else { pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) % - (3 * ctx->output_delay + ctx->async_depth)]; + (3 * ctx->output_delay + base_ctx->async_depth)]; } return 0; @@ -1320,6 +1321,7 @@ static int vaapi_encode_check_frame(AVCodecContext *avctx, static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodePicture *pic; int err; @@ -1365,7 +1367,7 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) ctx->dts_pts_diff = pic->pts - ctx->first_pts; if (ctx->output_delay > 0) ctx->ts_ring[ctx->input_order % - (3 * ctx->output_delay + ctx->async_depth)] = pic->pts; + (3 * ctx->output_delay + base_ctx->async_depth)] = pic->pts; pic->display_order = ctx->input_order; ++ctx->input_order; @@ -2773,7 +2775,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = 
avctx->priv_data; AVVAAPIFramesContext *recon_hwctx = NULL; VAStatus vas; int err; @@ -2966,7 +2969,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) vas = vaSyncBuffer(ctx->hwctx->display, VA_INVALID_ID, 0); if (vas != VA_STATUS_ERROR_UNIMPLEMENTED) { ctx->has_sync_buffer_func = 1; - ctx->encode_fifo = av_fifo_alloc2(ctx->async_depth, + ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, sizeof(VAAPIEncodePicture *), 0); if (!ctx->encode_fifo) diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index cae7af8725..9fdb945b18 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -374,8 +374,6 @@ typedef struct VAAPIEncodeContext { int has_sync_buffer_func; // Store buffered pic AVFifo *encode_fifo; - // Max number of frame buffered in encoder. - int async_depth; /** Head data for current output pkt, used only for AV1. */ //void *header_data; @@ -491,11 +489,6 @@ int ff_vaapi_encode_close(AVCodecContext *avctx); "Maximum B-frame reference depth", \ OFFSET(common.desired_b_depth), AV_OPT_TYPE_INT, \ { .i64 = 1 }, 1, INT_MAX, FLAGS }, \ - { "async_depth", "Maximum processing parallelism. " \ - "Increase this to improve single channel performance. This option " \ - "doesn't work if driver doesn't implement vaSyncBuffer function.", \ - OFFSET(common.async_depth), AV_OPT_TYPE_INT, \ - { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }, \ { "max_frame_size", \ "Maximum frame size (in bytes)",\ OFFSET(common.max_frame_size), AV_OPT_TYPE_INT, \ diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index 7740942f2c..5bd12f9f92 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -965,6 +965,7 @@ static av_cold int vaapi_encode_av1_close(AVCodecContext *avctx) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_av1_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_RC_OPTIONS, { "profile", "Set profile (seq_profile)", diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index df116d6610..aa5b499d9f 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -1276,6 +1276,7 @@ static av_cold int vaapi_encode_h264_close(AVCodecContext *avctx) #define OFFSET(x) offsetof(VAAPIEncodeH264Context, x) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_h264_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_RC_OPTIONS, diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index 6b272d9108..e65c91304a 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -1394,6 +1394,7 @@ static av_cold int vaapi_encode_h265_close(AVCodecContext *avctx) #define OFFSET(x) offsetof(VAAPIEncodeH265Context, x) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_h265_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_RC_OPTIONS, diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index 0e74a3eb6f..9e533e9fbe 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -540,6 +540,7 @@ static av_cold int vaapi_encode_mjpeg_close(AVCodecContext *avctx) #define OFFSET(x) offsetof(VAAPIEncodeMJPEGContext, x) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_mjpeg_options[] = { 
+ HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, { "jfif", "Include JFIF header", diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c index 92bf6e443e..69784f9ba6 100644 --- a/libavcodec/vaapi_encode_mpeg2.c +++ b/libavcodec/vaapi_encode_mpeg2.c @@ -639,6 +639,7 @@ static av_cold int vaapi_encode_mpeg2_close(AVCodecContext *avctx) #define OFFSET(x) offsetof(VAAPIEncodeMPEG2Context, x) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_mpeg2_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_RC_OPTIONS, diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c index 7dcd3d84af..1f2de050d2 100644 --- a/libavcodec/vaapi_encode_vp8.c +++ b/libavcodec/vaapi_encode_vp8.c @@ -216,6 +216,7 @@ static av_cold int vaapi_encode_vp8_init(AVCodecContext *avctx) #define OFFSET(x) offsetof(VAAPIEncodeVP8Context, x) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_vp8_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_RC_OPTIONS, diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c index 4eec0fd445..690e4a6bd1 100644 --- a/libavcodec/vaapi_encode_vp9.c +++ b/libavcodec/vaapi_encode_vp9.c @@ -273,6 +273,7 @@ static av_cold int vaapi_encode_vp9_init(AVCodecContext *avctx) #define OFFSET(x) offsetof(VAAPIEncodeVP9Context, x) #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) static const AVOption vaapi_encode_vp9_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_COMMON_OPTIONS, VAAPI_ENCODE_RC_OPTIONS,
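The option tables above now pull the shared options in from the base layer. As orientation, a simplified sketch of how the two contexts relate (member names here are illustrative; the real structs in hw_base_encode.h and vaapi_encode.h carry many more fields):

/* Simplified sketch, not the literal structs from the patch. The base
 * context sitting at the start of the VAAPI context is what allows
 * avctx->priv_data to be read as either type. */
typedef struct FFHWBaseEncodeContext {
    const AVClass *class;
    /* Maximum number of frames buffered in the encoder
     * (moved here from VAAPIEncodeContext by this patch). */
    int async_depth;
} FFHWBaseEncodeContext;

typedef struct VAAPIEncodeContext {
    FFHWBaseEncodeContext base;  /* must remain the first member */
    /* ... VAAPI-specific state: display, config, pipeline buffers ... */
} VAAPIEncodeContext;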
From patchwork Tue May 28 15:47:55 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49320
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:55 +0800
Message-ID: <20240528154807.1151-4-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 04/15] avcodec/vaapi_encode: add picture type name to base
Cc: Tong Wu
From: Tong Wu
Signed-off-by: Tong Wu
---
libavcodec/hw_base_encode.h | 6 ++++++ libavcodec/vaapi_encode.c | 4 +--- 2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index c14c174102..858450afa8 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -25,6 +25,12 @@ #define MAX_ASYNC_DEPTH 64 #define MAX_REFERENCE_LIST_NUM 2 +static inline const char *ff_hw_base_encode_get_pictype_name(const int type) +{ + const char * const picture_type_name[] = { "IDR", "I", "P", "B" }; + return picture_type_name[type]; +} + enum { FF_HW_PICTURE_TYPE_IDR = 0, FF_HW_PICTURE_TYPE_I = 1,
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 77f539dc6d..54bdd73902 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -38,8 +38,6 @@ const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[] = { NULL, }; -static const char * const picture_type_name[] = { "IDR", "I", "P", "B" }; - static int vaapi_encode_make_packed_header(AVCodecContext *avctx, VAAPIEncodePicture *pic, int type, char *data, size_t bit_len) @@ -277,7 +275,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" " "as type %s.\n", pic->display_order, pic->encode_order, - picture_type_name[pic->type]); + ff_hw_base_encode_get_pictype_name(pic->type)); if (pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0) { av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n"); } else {
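For illustration only (not part of the patch): with the string table now living behind an inline helper in hw_base_encode.h, any file that includes that header can resolve a picture type to its name without keeping a local copy. A minimal usage sketch:

#include "avcodec.h"
#include "hw_base_encode.h"

/* Illustrative helper, not from the patch: log the name of a picture type. */
static void log_picture_type(AVCodecContext *avctx, int type)
{
    av_log(avctx, AV_LOG_DEBUG, "Picture type: %s.\n",
           ff_hw_base_encode_get_pictype_name(type));
}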
From patchwork Tue May 28 15:47:56 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49319
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:56 +0800
Message-ID: <20240528154807.1151-5-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 05/15] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc
Cc: Tong Wu
"ffmpeg-devel" X-TUID: CyqFTatIFRlD From: Tong Wu When allocating the VAAPIEncodePicture, pic->input_surface can be initialized right in the place. This movement simplifies the send_frame logic and is the preparation for moving vaapi_encode_send_frame to the base layer. Signed-off-by: Tong Wu --- libavcodec/vaapi_encode.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 54bdd73902..194422b36d 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -878,7 +878,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx, return 0; } -static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx) +static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, + const AVFrame *frame) { VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodePicture *pic; @@ -895,7 +896,7 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx) } } - pic->input_surface = VA_INVALID_ID; + pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3]; pic->recon_surface = VA_INVALID_ID; pic->output_buffer = VA_INVALID_ID; @@ -1332,7 +1333,7 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) if (err < 0) return err; - pic = vaapi_encode_alloc(avctx); + pic = vaapi_encode_alloc(avctx, frame); if (!pic) return AVERROR(ENOMEM); @@ -1345,7 +1346,6 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I) pic->force_idr = 1; - pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3]; pic->pts = frame->pts; pic->duration = frame->duration; From patchwork Tue May 28 15:47:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 49322 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a59:8f0d:0:b0:460:55fa:d5ed with SMTP id i13csp54377vqu; Tue, 28 May 2024 08:51:12 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCULoYqcIDTWGn4EmJyldqo3bb2uuLpv8UUnsnZhm98Qsb77PgkufORx6mV2RKnE/o5P8ai86iRd4VIH3EXWtZMNiPPJIOkQn2NKWg== X-Google-Smtp-Source: AGHT+IG3G/gXXbGS7REpbLsvDTJvoFdKUeenFe2YdUFUwoNzLJRRhjvG02XqAWDWuEOZhCeY/mBW X-Received: by 2002:a17:906:130f:b0:a59:ba2b:5913 with SMTP id a640c23a62f3a-a6265114c4fmr790368966b.62.1716911472660; Tue, 28 May 2024 08:51:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1716911472; cv=none; d=google.com; s=arc-20160816; b=TQLazSLvZdfqPewVbGtSHjMGzQYaGVuGmwy4Lx6pzmnebPHo6792h5FFjUS0WrLeyS 2cSrAsdtf2vauy/pK6ZFI2uqaLqb85rO090TdwZi67jhJkKfxUK3XXWTeFimAe50Zdw5 EmxA18bALmCDLgjm7dyATsaROMqsWBvthse2Y1smDbPjmy91Q6VwNsZUcXbFH+3XikgW 8AqkdX46YkyK/1EkPB9W/OoEBJt2J5/FBBFUuW/WLZZS+x9kQzOA0tTlmedex6IhAB+3 1JPzRlJCgqzSGyF0mmadBxQQ9y4lNRucSj4aFGmOpqC6joXH8/Jo7nYxmNtrRg4lgTTF BONg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to :message-id:date:to:from:dkim-signature:delivered-to; bh=TIC/Z2iMj8QgVm5hec20KIGKPCrr01umt3BsudQceE8=; fh=CX/m9qTuMDwrotqtm4RkAOJT6yXlKL2vcfWDitFPXZs=; b=ov05ZAlI/lAvulNLqQ79oPB+qjeP1fJ7PMuqlTaahqvRiURNPYZ4DHiX6eBAiSnz2k WHIrCjbHwrz6YulGkJDbkFpyeQF+GYUUV452IzsEkeqaYOuYfZRpb8x1yUZuDHCWt7aR oGMqa1oz/nt+sHSefPUrQXDhJ3Lo2NUqC6Lt2Xy9bvP6TyDhuKYdM7yYQzeam+hzgDMM cNGxe0DkqWQb57gRm44KXxN9hZAUJhVSwFwisQ3PSlo5fWXhe00M3c0Ml2K3CE4zglf7 
From patchwork Tue May 28 15:47:57 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49322
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:57 +0800
Message-ID: <20240528154807.1151-6-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 06/15] avcodec/vaapi_encode: move the dpb logic from VAAPI to base layer
Cc: Tong Wu
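The patch below moves the reordering/DPB machinery into hw_base_encode.c, which calls back into the hardware API through a small operations table. Roughly, a backend is expected to wire it up as sketched here; the FFHWEncodePictureOperation table, its alloc/issue/output/free hooks, the op member and ff_hw_base_encode_receive_packet() are from the patch, while the my_encode_* names are illustrative:

/* Sketch with illustrative backend function names. */
static FFHWBaseEncodePicture *my_encode_alloc(AVCodecContext *avctx, const AVFrame *frame);
static int my_encode_issue (AVCodecContext *avctx, const FFHWBaseEncodePicture *pic);
static int my_encode_output(AVCodecContext *avctx, const FFHWBaseEncodePicture *pic, AVPacket *pkt);
static int my_encode_free  (AVCodecContext *avctx, FFHWBaseEncodePicture *pic);

static const FFHWEncodePictureOperation my_encode_ops = {
    .alloc  = &my_encode_alloc,   /* create the picture, record the input surface */
    .issue  = &my_encode_issue,   /* submit the frame to the HW encode API        */
    .output = &my_encode_output,  /* fetch the coded bitstream into an AVPacket   */
    .free   = &my_encode_free,    /* release per-picture resources                */
};

/* At init time the backend points the base context at its table, and its
 * receive_packet callback simply forwards to the base layer:
 *     base_ctx->op = &my_encode_ops;
 *     ...
 *     return ff_hw_base_encode_receive_packet(avctx, pkt);
 */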
X-TUID: 0RmE3hUEWliL From: Tong Wu Move receive_packet function to base. This requires adding *alloc, *issue, *output, *free as hardware callbacks. HWBaseEncodePicture is introduced as the base layer structure. The related parameters in VAAPIEncodeContext are also extracted to HWBaseEncodeContext. Then DPB management logic can be fully extracted to base layer as-is. Signed-off-by: Tong Wu --- libavcodec/Makefile | 2 +- libavcodec/hw_base_encode.c | 594 ++++++++++++++++++++++++ libavcodec/hw_base_encode.h | 124 +++++ libavcodec/vaapi_encode.c | 793 +++++--------------------------- libavcodec/vaapi_encode.h | 102 +--- libavcodec/vaapi_encode_av1.c | 35 +- libavcodec/vaapi_encode_h264.c | 84 ++-- libavcodec/vaapi_encode_h265.c | 53 ++- libavcodec/vaapi_encode_mjpeg.c | 13 +- libavcodec/vaapi_encode_mpeg2.c | 33 +- libavcodec/vaapi_encode_vp8.c | 18 +- libavcodec/vaapi_encode_vp9.c | 24 +- 12 files changed, 985 insertions(+), 890 deletions(-) create mode 100644 libavcodec/hw_base_encode.c diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 2443d2c6fd..998f6b7e12 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -165,7 +165,7 @@ OBJS-$(CONFIG_STARTCODE) += startcode.o OBJS-$(CONFIG_TEXTUREDSP) += texturedsp.o OBJS-$(CONFIG_TEXTUREDSPENC) += texturedspenc.o OBJS-$(CONFIG_TPELDSP) += tpeldsp.o -OBJS-$(CONFIG_VAAPI_ENCODE) += vaapi_encode.o +OBJS-$(CONFIG_VAAPI_ENCODE) += vaapi_encode.o hw_base_encode.o OBJS-$(CONFIG_AV1_AMF_ENCODER) += amfenc_av1.o OBJS-$(CONFIG_VC1DSP) += vc1dsp.o OBJS-$(CONFIG_VIDEODSP) += videodsp.o diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c new file mode 100644 index 0000000000..16afaa37be --- /dev/null +++ b/libavcodec/hw_base_encode.c @@ -0,0 +1,594 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/avassert.h" +#include "libavutil/common.h" +#include "libavutil/internal.h" +#include "libavutil/log.h" +#include "libavutil/mem.h" +#include "libavutil/pixdesc.h" + +#include "encode.h" +#include "avcodec.h" +#include "hw_base_encode.h" + +static void hw_base_encode_add_ref(FFHWBaseEncodePicture *pic, + FFHWBaseEncodePicture *target, + int is_ref, int in_dpb, int prev) +{ + int refs = 0; + + if (is_ref) { + av_assert0(pic != target); + av_assert0(pic->nb_refs[0] < MAX_PICTURE_REFERENCES && + pic->nb_refs[1] < MAX_PICTURE_REFERENCES); + if (target->display_order < pic->display_order) + pic->refs[0][pic->nb_refs[0]++] = target; + else + pic->refs[1][pic->nb_refs[1]++] = target; + ++refs; + } + + if (in_dpb) { + av_assert0(pic->nb_dpb_pics < MAX_DPB_SIZE); + pic->dpb[pic->nb_dpb_pics++] = target; + ++refs; + } + + if (prev) { + av_assert0(!pic->prev); + pic->prev = target; + ++refs; + } + + target->ref_count[0] += refs; + target->ref_count[1] += refs; +} + +static void hw_base_encode_remove_refs(FFHWBaseEncodePicture *pic, int level) +{ + int i; + + if (pic->ref_removed[level]) + return; + + for (i = 0; i < pic->nb_refs[0]; i++) { + av_assert0(pic->refs[0][i]); + --pic->refs[0][i]->ref_count[level]; + av_assert0(pic->refs[0][i]->ref_count[level] >= 0); + } + + for (i = 0; i < pic->nb_refs[1]; i++) { + av_assert0(pic->refs[1][i]); + --pic->refs[1][i]->ref_count[level]; + av_assert0(pic->refs[1][i]->ref_count[level] >= 0); + } + + for (i = 0; i < pic->nb_dpb_pics; i++) { + av_assert0(pic->dpb[i]); + --pic->dpb[i]->ref_count[level]; + av_assert0(pic->dpb[i]->ref_count[level] >= 0); + } + + av_assert0(pic->prev || pic->type == FF_HW_PICTURE_TYPE_IDR); + if (pic->prev) { + --pic->prev->ref_count[level]; + av_assert0(pic->prev->ref_count[level] >= 0); + } + + pic->ref_removed[level] = 1; +} + +static void hw_base_encode_set_b_pictures(AVCodecContext *avctx, + FFHWBaseEncodePicture *start, + FFHWBaseEncodePicture *end, + FFHWBaseEncodePicture *prev, + int current_depth, + FFHWBaseEncodePicture **last) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic, *next, *ref; + int i, len; + + av_assert0(start && end && start != end && start->next != end); + + // If we are at the maximum depth then encode all pictures as + // non-referenced B-pictures. Also do this if there is exactly one + // picture left, since there will be nothing to reference it. + if (current_depth == ctx->max_b_depth || start->next->next == end) { + for (pic = start->next; pic; pic = pic->next) { + if (pic == end) + break; + pic->type = FF_HW_PICTURE_TYPE_B; + pic->b_depth = current_depth; + + hw_base_encode_add_ref(pic, start, 1, 1, 0); + hw_base_encode_add_ref(pic, end, 1, 1, 0); + hw_base_encode_add_ref(pic, prev, 0, 0, 1); + + for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) + hw_base_encode_add_ref(pic, ref, 0, 1, 0); + } + *last = prev; + + } else { + // Split the current list at the midpoint with a referenced + // B-picture, then descend into each side separately. 
+ len = 0; + for (pic = start->next; pic != end; pic = pic->next) + ++len; + for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++); + + pic->type = FF_HW_PICTURE_TYPE_B; + pic->b_depth = current_depth; + + pic->is_reference = 1; + + hw_base_encode_add_ref(pic, pic, 0, 1, 0); + hw_base_encode_add_ref(pic, start, 1, 1, 0); + hw_base_encode_add_ref(pic, end, 1, 1, 0); + hw_base_encode_add_ref(pic, prev, 0, 0, 1); + + for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) + hw_base_encode_add_ref(pic, ref, 0, 1, 0); + + if (i > 1) + hw_base_encode_set_b_pictures(avctx, start, pic, pic, + current_depth + 1, &next); + else + next = pic; + + hw_base_encode_set_b_pictures(avctx, pic, end, next, + current_depth + 1, last); + } +} + +static void hw_base_encode_add_next_prev(AVCodecContext *avctx, + FFHWBaseEncodePicture *pic) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + int i; + + if (!pic) + return; + + if (pic->type == FF_HW_PICTURE_TYPE_IDR) { + for (i = 0; i < ctx->nb_next_prev; i++) { + --ctx->next_prev[i]->ref_count[0]; + ctx->next_prev[i] = NULL; + } + ctx->next_prev[0] = pic; + ++pic->ref_count[0]; + ctx->nb_next_prev = 1; + + return; + } + + if (ctx->nb_next_prev < MAX_PICTURE_REFERENCES) { + ctx->next_prev[ctx->nb_next_prev++] = pic; + ++pic->ref_count[0]; + } else { + --ctx->next_prev[0]->ref_count[0]; + for (i = 0; i < MAX_PICTURE_REFERENCES - 1; i++) + ctx->next_prev[i] = ctx->next_prev[i + 1]; + ctx->next_prev[i] = pic; + ++pic->ref_count[0]; + } +} + +static int hw_base_encode_pick_next(AVCodecContext *avctx, + FFHWBaseEncodePicture **pic_out) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic = NULL, *prev = NULL, *next, *start; + int i, b_counter, closed_gop_end; + + // If there are any B-frames already queued, the next one to encode + // is the earliest not-yet-issued frame for which all references are + // available. + for (pic = ctx->pic_start; pic; pic = pic->next) { + if (pic->encode_issued) + continue; + if (pic->type != FF_HW_PICTURE_TYPE_B) + continue; + for (i = 0; i < pic->nb_refs[0]; i++) { + if (!pic->refs[0][i]->encode_issued) + break; + } + if (i != pic->nb_refs[0]) + continue; + + for (i = 0; i < pic->nb_refs[1]; i++) { + if (!pic->refs[1][i]->encode_issued) + break; + } + if (i == pic->nb_refs[1]) + break; + } + + if (pic) { + av_log(avctx, AV_LOG_DEBUG, "Pick B-picture at depth %d to " + "encode next.\n", pic->b_depth); + *pic_out = pic; + return 0; + } + + // Find the B-per-Pth available picture to become the next picture + // on the top layer. + start = NULL; + b_counter = 0; + closed_gop_end = ctx->closed_gop || + ctx->idr_counter == ctx->gop_per_idr; + for (pic = ctx->pic_start; pic; pic = next) { + next = pic->next; + if (pic->encode_issued) { + start = pic; + continue; + } + // If the next available picture is force-IDR, encode it to start + // a new GOP immediately. + if (pic->force_idr) + break; + if (b_counter == ctx->b_per_p) + break; + // If this picture ends a closed GOP or starts a new GOP then it + // needs to be in the top layer. + if (ctx->gop_counter + b_counter + closed_gop_end >= ctx->gop_size) + break; + // If the picture after this one is force-IDR, we need to encode + // this one in the top layer. + if (next && next->force_idr) + break; + ++b_counter; + } + + // At the end of the stream the last picture must be in the top layer. 
+ if (!pic && ctx->end_of_stream) { + --b_counter; + pic = ctx->pic_end; + if (pic->encode_complete) + return AVERROR_EOF; + else if (pic->encode_issued) + return AVERROR(EAGAIN); + } + + if (!pic) { + av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " + "need more input for reference pictures.\n"); + return AVERROR(EAGAIN); + } + if (ctx->input_order <= ctx->decode_delay && !ctx->end_of_stream) { + av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " + "need more input for timestamps.\n"); + return AVERROR(EAGAIN); + } + + if (pic->force_idr) { + av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to " + "encode next.\n"); + pic->type = FF_HW_PICTURE_TYPE_IDR; + ctx->idr_counter = 1; + ctx->gop_counter = 1; + + } else if (ctx->gop_counter + b_counter >= ctx->gop_size) { + if (ctx->idr_counter == ctx->gop_per_idr) { + av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to " + "encode next.\n"); + pic->type = FF_HW_PICTURE_TYPE_IDR; + ctx->idr_counter = 1; + } else { + av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to " + "encode next.\n"); + pic->type = FF_HW_PICTURE_TYPE_I; + ++ctx->idr_counter; + } + ctx->gop_counter = 1; + + } else { + if (ctx->gop_counter + b_counter + closed_gop_end == ctx->gop_size) { + av_log(avctx, AV_LOG_DEBUG, "Pick group-end P-picture to " + "encode next.\n"); + } else { + av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to " + "encode next.\n"); + } + pic->type = FF_HW_PICTURE_TYPE_P; + av_assert0(start); + ctx->gop_counter += 1 + b_counter; + } + pic->is_reference = 1; + *pic_out = pic; + + hw_base_encode_add_ref(pic, pic, 0, 1, 0); + if (pic->type != FF_HW_PICTURE_TYPE_IDR) { + // TODO: apply both previous and forward multi reference for all vaapi encoders. + // And L0/L1 reference frame number can be set dynamically through query + // VAConfigAttribEncMaxRefFrames attribute. + if (avctx->codec_id == AV_CODEC_ID_AV1) { + for (i = 0; i < ctx->nb_next_prev; i++) + hw_base_encode_add_ref(pic, ctx->next_prev[i], + pic->type == FF_HW_PICTURE_TYPE_P, + b_counter > 0, 0); + } else + hw_base_encode_add_ref(pic, start, + pic->type == FF_HW_PICTURE_TYPE_P, + b_counter > 0, 0); + + hw_base_encode_add_ref(pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1); + } + + if (b_counter > 0) { + hw_base_encode_set_b_pictures(avctx, start, pic, pic, 1, + &prev); + } else { + prev = pic; + } + hw_base_encode_add_next_prev(avctx, prev); + + return 0; +} + +static int hw_base_encode_clear_old(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic, *prev, *next; + + av_assert0(ctx->pic_start); + + // Remove direct references once each picture is complete. + for (pic = ctx->pic_start; pic; pic = pic->next) { + if (pic->encode_complete && pic->next) + hw_base_encode_remove_refs(pic, 0); + } + + // Remove indirect references once a picture has no direct references. + for (pic = ctx->pic_start; pic; pic = pic->next) { + if (pic->encode_complete && pic->ref_count[0] == 0) + hw_base_encode_remove_refs(pic, 1); + } + + // Clear out all complete pictures with no remaining references. 
+ prev = NULL; + for (pic = ctx->pic_start; pic; pic = next) { + next = pic->next; + if (pic->encode_complete && pic->ref_count[1] == 0) { + av_assert0(pic->ref_removed[0] && pic->ref_removed[1]); + if (prev) + prev->next = next; + else + ctx->pic_start = next; + ctx->op->free(avctx, pic); + } else { + prev = pic; + } + } + + return 0; +} + +static int hw_base_encode_check_frame(AVCodecContext *avctx, + const AVFrame *frame) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + + if ((frame->crop_top || frame->crop_bottom || + frame->crop_left || frame->crop_right) && !ctx->crop_warned) { + av_log(avctx, AV_LOG_WARNING, "Cropping information on input " + "frames ignored due to lack of API support.\n"); + ctx->crop_warned = 1; + } + + if (!ctx->roi_allowed) { + AVFrameSideData *sd = + av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST); + + if (sd && !ctx->roi_warned) { + av_log(avctx, AV_LOG_WARNING, "ROI side data on input " + "frames ignored due to lack of driver support.\n"); + ctx->roi_warned = 1; + } + } + + return 0; +} + +static int hw_base_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic; + int err; + + if (frame) { + av_log(avctx, AV_LOG_DEBUG, "Input frame: %ux%u (%"PRId64").\n", + frame->width, frame->height, frame->pts); + + err = hw_base_encode_check_frame(avctx, frame); + if (err < 0) + return err; + + pic = ctx->op->alloc(avctx, frame); + if (!pic) + return AVERROR(ENOMEM); + + pic->input_image = av_frame_alloc(); + if (!pic->input_image) { + err = AVERROR(ENOMEM); + goto fail; + } + + pic->recon_image = av_frame_alloc(); + if (!pic->recon_image) { + err = AVERROR(ENOMEM); + goto fail; + } + + if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I) + pic->force_idr = 1; + + pic->pts = frame->pts; + pic->duration = frame->duration; + + if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { + err = av_buffer_replace(&pic->opaque_ref, frame->opaque_ref); + if (err < 0) + goto fail; + + pic->opaque = frame->opaque; + } + + av_frame_move_ref(pic->input_image, frame); + + if (ctx->input_order == 0) + ctx->first_pts = pic->pts; + if (ctx->input_order == ctx->decode_delay) + ctx->dts_pts_diff = pic->pts - ctx->first_pts; + if (ctx->output_delay > 0) + ctx->ts_ring[ctx->input_order % + (3 * ctx->output_delay + ctx->async_depth)] = pic->pts; + + pic->display_order = ctx->input_order; + ++ctx->input_order; + + if (ctx->pic_start) { + ctx->pic_end->next = pic; + ctx->pic_end = pic; + } else { + ctx->pic_start = pic; + ctx->pic_end = pic; + } + + } else { + ctx->end_of_stream = 1; + + // Fix timestamps if we hit end-of-stream before the initial decode + // delay has elapsed. + if (ctx->input_order < ctx->decode_delay) + ctx->dts_pts_diff = ctx->pic_end->pts - ctx->first_pts; + } + + return 0; + +fail: + ctx->op->free(avctx, pic); + return err; +} + +int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic = NULL; + AVFrame *frame = ctx->frame; + int err; + + av_assert0(ctx->op && ctx->op->alloc && ctx->op->issue && + ctx->op->output && ctx->op->free); + +start: + /** if no B frame before repeat P frame, sent repeat P frame out. 
*/ + if (ctx->tail_pkt->size) { + for (FFHWBaseEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) { + if (tmp->type == FF_HW_PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts) + break; + else if (!tmp->next) { + av_packet_move_ref(pkt, ctx->tail_pkt); + goto end; + } + } + } + + err = ff_encode_get_frame(avctx, frame); + if (err < 0 && err != AVERROR_EOF) + return err; + + if (err == AVERROR_EOF) + frame = NULL; + + err = hw_base_encode_send_frame(avctx, frame); + if (err < 0) + return err; + + if (!ctx->pic_start) { + if (ctx->end_of_stream) + return AVERROR_EOF; + else + return AVERROR(EAGAIN); + } + + if (ctx->async_encode) { + if (av_fifo_can_write(ctx->encode_fifo)) { + err = hw_base_encode_pick_next(avctx, &pic); + if (!err) { + av_assert0(pic); + pic->encode_order = ctx->encode_order + + av_fifo_can_read(ctx->encode_fifo); + err = ctx->op->issue(avctx, pic); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); + return err; + } + pic->encode_issued = 1; + av_fifo_write(ctx->encode_fifo, &pic, 1); + } + } + + if (!av_fifo_can_read(ctx->encode_fifo)) + return err; + + // More frames can be buffered + if (av_fifo_can_write(ctx->encode_fifo) && !ctx->end_of_stream) + return AVERROR(EAGAIN); + + av_fifo_read(ctx->encode_fifo, &pic, 1); + ctx->encode_order = pic->encode_order + 1; + } else { + err = hw_base_encode_pick_next(avctx, &pic); + if (err < 0) + return err; + av_assert0(pic); + + pic->encode_order = ctx->encode_order++; + + err = ctx->op->issue(avctx, pic); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); + return err; + } + + pic->encode_issued = 1; + } + + err = ctx->op->output(avctx, pic, pkt); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Output failed: %d.\n", err); + return err; + } + + ctx->output_order = pic->encode_order; + hw_base_encode_clear_old(avctx); + + /** loop to get an available pkt in encoder flushing. */ + if (ctx->end_of_stream && !pkt->size) + goto start; + +end: + if (pkt->size) + av_log(avctx, AV_LOG_DEBUG, "Output packet: pts %"PRId64", dts %"PRId64", " + "size %d bytes.\n", pkt->pts, pkt->dts, pkt->size); + + return 0; +} diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index 858450afa8..81e6c87036 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -19,6 +19,8 @@ #ifndef AVCODEC_HW_BASE_ENCODE_H #define AVCODEC_HW_BASE_ENCODE_H +#include "libavutil/fifo.h" + #define MAX_DPB_SIZE 16 #define MAX_PICTURE_REFERENCES 2 #define MAX_REORDER_DELAY 16 @@ -54,13 +56,135 @@ enum { FF_HW_FLAG_NON_IDR_KEY_PICTURES = 1 << 5, }; +typedef struct FFHWBaseEncodePicture { + struct FFHWBaseEncodePicture *next; + + int64_t display_order; + int64_t encode_order; + int64_t pts; + int64_t duration; + int force_idr; + + void *opaque; + AVBufferRef *opaque_ref; + + int type; + int b_depth; + int encode_issued; + int encode_complete; + + AVFrame *input_image; + AVFrame *recon_image; + + void *priv_data; + + // Whether this picture is a reference picture. + int is_reference; + + // The contents of the DPB after this picture has been decoded. + // This will contain the picture itself if it is a reference picture, + // but not if it isn't. + int nb_dpb_pics; + struct FFHWBaseEncodePicture *dpb[MAX_DPB_SIZE]; + // The reference pictures used in decoding this picture. If they are + // used by later pictures they will also appear in the DPB. ref[0][] for + // previous reference frames. ref[1][] for future reference frames. 
+ int nb_refs[MAX_REFERENCE_LIST_NUM]; + struct FFHWBaseEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES]; + // The previous reference picture in encode order. Must be in at least + // one of the reference list and DPB list. + struct FFHWBaseEncodePicture *prev; + // Reference count for other pictures referring to this one through + // the above pointers, directly from incomplete pictures and indirectly + // through completed pictures. + int ref_count[2]; + int ref_removed[2]; +} FFHWBaseEncodePicture; + +typedef struct FFHWEncodePictureOperation { + // Alloc memory for the picture structure and initialize the API-specific internals + // based of the given frame. + FFHWBaseEncodePicture * (*alloc)(AVCodecContext *avctx, const AVFrame *frame); + // Issue the picture structure, which will send the frame surface to HW Encode API. + int (*issue)(AVCodecContext *avctx, const FFHWBaseEncodePicture *base_pic); + // Get the output AVPacket. + int (*output)(AVCodecContext *avctx, const FFHWBaseEncodePicture *base_pic, AVPacket *pkt); + // Free the picture structure. + int (*free)(AVCodecContext *avctx, FFHWBaseEncodePicture *base_pic); +} FFHWEncodePictureOperation; + typedef struct FFHWBaseEncodeContext { const AVClass *class; + // Hardware-specific hooks. + const struct FFHWEncodePictureOperation *op; + + // Current encoding window, in display (input) order. + FFHWBaseEncodePicture *pic_start, *pic_end; + // The next picture to use as the previous reference picture in + // encoding order. Order from small to large in encoding order. + FFHWBaseEncodePicture *next_prev[MAX_PICTURE_REFERENCES]; + int nb_next_prev; + + // Next input order index (display order). + int64_t input_order; + // Number of frames that output is behind input. + int64_t output_delay; + // Next encode order index. + int64_t encode_order; + // Number of frames decode output will need to be delayed. + int64_t decode_delay; + // Next output order index (in encode order). + int64_t output_order; + + // Timestamp handling. + int64_t first_pts; + int64_t dts_pts_diff; + int64_t ts_ring[MAX_REORDER_DELAY * 3 + + MAX_ASYNC_DEPTH]; + + // Frame type decision. + int gop_size; + int closed_gop; + int gop_per_idr; + int p_per_i; + int max_b_depth; + int b_per_p; + int force_idr; + int idr_counter; + int gop_counter; + int end_of_stream; + int p_to_gpb; + + // Whether the driver supports ROI at all. + int roi_allowed; + + // The encoder does not support cropping information, so warn about + // it the first time we encounter any nonzero crop fields. + int crop_warned; + // If the driver does not support ROI then warn the first time we + // encounter a frame with ROI side data. + int roi_warned; + + // The frame to be filled with data. + AVFrame *frame; + + // Whether the HW supports sync buffer function. + // If supported, encode_fifo/async_depth will be used together. + // Used for output buffer synchronization. + int async_encode; + + // Store buffered pic. + AVFifo *encode_fifo; // Max number of frame buffered in encoder. int async_depth; + + /** Tail data of a pic, now only used for av1 repeat frame header. */ + AVPacket *tail_pkt; } FFHWBaseEncodeContext; +int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); + #define HW_BASE_ENCODE_COMMON_OPTIONS \ { "async_depth", "Maximum processing parallelism. 
" \ "Increase this to improve single channel performance.", \ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 194422b36d..1055fca0b1 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -138,22 +138,26 @@ static int vaapi_encode_make_misc_param_buffer(AVCodecContext *avctx, static int vaapi_encode_wait(AVCodecContext *avctx, VAAPIEncodePicture *pic) { +#if VA_CHECK_VERSION(1, 9, 0) + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; +#endif VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *base_pic = &pic->base; VAStatus vas; - av_assert0(pic->encode_issued); + av_assert0(base_pic->encode_issued); - if (pic->encode_complete) { + if (base_pic->encode_complete) { // Already waited for this picture. return 0; } av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" " - "(input surface %#x).\n", pic->display_order, - pic->encode_order, pic->input_surface); + "(input surface %#x).\n", base_pic->display_order, + base_pic->encode_order, pic->input_surface); #if VA_CHECK_VERSION(1, 9, 0) - if (ctx->has_sync_buffer_func) { + if (base_ctx->async_encode) { vas = vaSyncBuffer(ctx->hwctx->display, pic->output_buffer, VA_TIMEOUT_INFINITE); @@ -174,9 +178,9 @@ static int vaapi_encode_wait(AVCodecContext *avctx, } // Input is definitely finished with now. - av_frame_free(&pic->input_image); + av_frame_free(&base_pic->input_image); - pic->encode_complete = 1; + base_pic->encode_complete = 1; return 0; } @@ -263,9 +267,11 @@ static int vaapi_encode_make_tile_slice(AVCodecContext *avctx, } static int vaapi_encode_issue(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + const FFHWBaseEncodePicture *base_pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic; VAAPIEncodeSlice *slice; VAStatus vas; int err, i; @@ -274,52 +280,46 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_unused AVFrameSideData *sd; av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" " - "as type %s.\n", pic->display_order, pic->encode_order, - ff_hw_base_encode_get_pictype_name(pic->type)); - if (pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0) { + "as type %s.\n", base_pic->display_order, base_pic->encode_order, + ff_hw_base_encode_get_pictype_name(base_pic->type)); + if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) { av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n"); } else { av_log(avctx, AV_LOG_DEBUG, "L0 refers to"); - for (i = 0; i < pic->nb_refs[0]; i++) { + for (i = 0; i < base_pic->nb_refs[0]; i++) { av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, - pic->refs[0][i]->display_order, pic->refs[0][i]->encode_order); + base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order); } av_log(avctx, AV_LOG_DEBUG, ".\n"); - if (pic->nb_refs[1]) { + if (base_pic->nb_refs[1]) { av_log(avctx, AV_LOG_DEBUG, "L1 refers to"); - for (i = 0; i < pic->nb_refs[1]; i++) { + for (i = 0; i < base_pic->nb_refs[1]; i++) { av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, - pic->refs[1][i]->display_order, pic->refs[1][i]->encode_order); + base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order); } av_log(avctx, AV_LOG_DEBUG, ".\n"); } } - av_assert0(!pic->encode_issued); - for (i = 0; i < pic->nb_refs[0]; i++) { - av_assert0(pic->refs[0][i]); - av_assert0(pic->refs[0][i]->encode_issued); + av_assert0(!base_pic->encode_issued); + for (i = 0; i < 
base_pic->nb_refs[0]; i++) { + av_assert0(base_pic->refs[0][i]); + av_assert0(base_pic->refs[0][i]->encode_issued); } - for (i = 0; i < pic->nb_refs[1]; i++) { - av_assert0(pic->refs[1][i]); - av_assert0(pic->refs[1][i]->encode_issued); + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_assert0(base_pic->refs[1][i]); + av_assert0(base_pic->refs[1][i]->encode_issued); } av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic->input_surface); - pic->recon_image = av_frame_alloc(); - if (!pic->recon_image) { - err = AVERROR(ENOMEM); - goto fail; - } - - err = av_hwframe_get_buffer(ctx->recon_frames_ref, pic->recon_image, 0); + err = av_hwframe_get_buffer(ctx->recon_frames_ref, base_pic->recon_image, 0); if (err < 0) { err = AVERROR(ENOMEM); goto fail; } - pic->recon_surface = (VASurfaceID)(uintptr_t)pic->recon_image->data[3]; + pic->recon_surface = (VASurfaceID)(uintptr_t)base_pic->recon_image->data[3]; av_log(avctx, AV_LOG_DEBUG, "Recon surface is %#x.\n", pic->recon_surface); pic->output_buffer_ref = ff_refstruct_pool_get(ctx->output_buffer_pool); @@ -343,7 +343,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, pic->nb_param_buffers = 0; - if (pic->type == FF_HW_PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) { + if (base_pic->type == FF_HW_PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) { err = vaapi_encode_make_param_buffer(avctx, pic, VAEncSequenceParameterBufferType, ctx->codec_sequence_params, @@ -352,7 +352,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, goto fail; } - if (pic->type == FF_HW_PICTURE_TYPE_IDR) { + if (base_pic->type == FF_HW_PICTURE_TYPE_IDR) { for (i = 0; i < ctx->nb_global_params; i++) { err = vaapi_encode_make_misc_param_buffer(avctx, pic, ctx->global_params_type[i], @@ -389,7 +389,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } #endif - if (pic->type == FF_HW_PICTURE_TYPE_IDR) { + if (base_pic->type == FF_HW_PICTURE_TYPE_IDR) { if (ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SEQUENCE && ctx->codec->write_sequence_header) { bit_len = 8 * sizeof(data); @@ -529,9 +529,9 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } #if VA_CHECK_VERSION(1, 0, 0) - sd = av_frame_get_side_data(pic->input_image, + sd = av_frame_get_side_data(base_pic->input_image, AV_FRAME_DATA_REGIONS_OF_INTEREST); - if (sd && ctx->roi_allowed) { + if (sd && base_ctx->roi_allowed) { const AVRegionOfInterest *roi; uint32_t roi_size; VAEncMiscParameterBufferROI param_roi; @@ -542,11 +542,11 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_assert0(roi_size && sd->size % roi_size == 0); nb_roi = sd->size / roi_size; if (nb_roi > ctx->roi_max_regions) { - if (!ctx->roi_warned) { + if (!base_ctx->roi_warned) { av_log(avctx, AV_LOG_WARNING, "More ROIs set than " "supported by driver (%d > %d).\n", nb_roi, ctx->roi_max_regions); - ctx->roi_warned = 1; + base_ctx->roi_warned = 1; } nb_roi = ctx->roi_max_regions; } @@ -639,8 +639,6 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } } - pic->encode_issued = 1; - return 0; fail_with_picture: @@ -657,14 +655,13 @@ fail_at_end: av_freep(&pic->param_buffers); av_freep(&pic->slices); av_freep(&pic->roi); - av_frame_free(&pic->recon_image); ff_refstruct_unref(&pic->output_buffer_ref); pic->output_buffer = VA_INVALID_ID; return err; } static int vaapi_encode_set_output_property(AVCodecContext *avctx, - VAAPIEncodePicture *pic, + FFHWBaseEncodePicture *pic, AVPacket *pkt) { FFHWBaseEncodeContext *base_ctx = avctx->priv_data; @@ -689,16 +686,16 @@ static int 
vaapi_encode_set_output_property(AVCodecContext *avctx, return 0; } - if (ctx->output_delay == 0) { + if (base_ctx->output_delay == 0) { pkt->dts = pkt->pts; - } else if (pic->encode_order < ctx->decode_delay) { - if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff) + } else if (pic->encode_order < base_ctx->decode_delay) { + if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff) pkt->dts = INT64_MIN; else - pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff; + pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff; } else { - pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) % - (3 * ctx->output_delay + base_ctx->async_depth)]; + pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) % + (3 * base_ctx->output_delay + base_ctx->async_depth)]; } return 0; @@ -817,9 +814,11 @@ end: } static int vaapi_encode_output(AVCodecContext *avctx, - VAAPIEncodePicture *pic, AVPacket *pkt) + const FFHWBaseEncodePicture *base_pic, AVPacket *pkt) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic; AVPacket *pkt_ptr = pkt; int err; @@ -832,17 +831,17 @@ static int vaapi_encode_output(AVCodecContext *avctx, ctx->coded_buffer_ref = ff_refstruct_ref(pic->output_buffer_ref); if (pic->tail_size) { - if (ctx->tail_pkt->size) { + if (base_ctx->tail_pkt->size) { err = AVERROR_BUG; goto end; } - err = ff_get_encode_buffer(avctx, ctx->tail_pkt, pic->tail_size, 0); + err = ff_get_encode_buffer(avctx, base_ctx->tail_pkt, pic->tail_size, 0); if (err < 0) goto end; - memcpy(ctx->tail_pkt->data, pic->tail_data, pic->tail_size); - pkt_ptr = ctx->tail_pkt; + memcpy(base_ctx->tail_pkt->data, pic->tail_data, pic->tail_size); + pkt_ptr = base_ctx->tail_pkt; } } else { err = vaapi_encode_get_coded_data(avctx, pic, pkt); @@ -851,9 +850,9 @@ static int vaapi_encode_output(AVCodecContext *avctx, } av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n", - pic->display_order, pic->encode_order); + base_pic->display_order, base_pic->encode_order); - vaapi_encode_set_output_property(avctx, pic, pkt_ptr); + vaapi_encode_set_output_property(avctx, (FFHWBaseEncodePicture*)pic, pkt_ptr); end: ff_refstruct_unref(&pic->output_buffer_ref); @@ -864,12 +863,14 @@ end: static int vaapi_encode_discard(AVCodecContext *avctx, VAAPIEncodePicture *pic) { + FFHWBaseEncodePicture *base_pic = &pic->base; + vaapi_encode_wait(avctx, pic); if (pic->output_buffer_ref) { av_log(avctx, AV_LOG_DEBUG, "Discard output for pic " "%"PRId64"/%"PRId64".\n", - pic->display_order, pic->encode_order); + base_pic->display_order, base_pic->encode_order); ff_refstruct_unref(&pic->output_buffer_ref); pic->output_buffer = VA_INVALID_ID; @@ -878,8 +879,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx, return 0; } -static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, - const AVFrame *frame) +static FFHWBaseEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, + const AVFrame *frame) { VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodePicture *pic; @@ -889,8 +890,8 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, return NULL; if (ctx->codec->picture_priv_data_size > 0) { - pic->priv_data = av_mallocz(ctx->codec->picture_priv_data_size); - if (!pic->priv_data) { + pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size); + if 
(!pic->base.priv_data) { av_freep(&pic); return NULL; } @@ -900,15 +901,16 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, pic->recon_surface = VA_INVALID_ID; pic->output_buffer = VA_INVALID_ID; - return pic; + return &pic->base; } static int vaapi_encode_free(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + FFHWBaseEncodePicture *base_pic) { + VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic; int i; - if (pic->encode_issued) + if (base_pic->encode_issued) vaapi_encode_discard(avctx, pic); if (pic->slices) { @@ -916,17 +918,17 @@ static int vaapi_encode_free(AVCodecContext *avctx, av_freep(&pic->slices[i].codec_slice_params); } - av_frame_free(&pic->input_image); - av_frame_free(&pic->recon_image); + av_frame_free(&base_pic->input_image); + av_frame_free(&base_pic->recon_image); - av_buffer_unref(&pic->opaque_ref); + av_buffer_unref(&base_pic->opaque_ref); av_freep(&pic->param_buffers); av_freep(&pic->slices); // Output buffer should already be destroyed. av_assert0(pic->output_buffer == VA_INVALID_ID); - av_freep(&pic->priv_data); + av_freep(&base_pic->priv_data); av_freep(&pic->codec_picture_params); av_freep(&pic->roi); @@ -935,564 +937,6 @@ static int vaapi_encode_free(AVCodecContext *avctx, return 0; } -static void vaapi_encode_add_ref(AVCodecContext *avctx, - VAAPIEncodePicture *pic, - VAAPIEncodePicture *target, - int is_ref, int in_dpb, int prev) -{ - int refs = 0; - - if (is_ref) { - av_assert0(pic != target); - av_assert0(pic->nb_refs[0] < MAX_PICTURE_REFERENCES && - pic->nb_refs[1] < MAX_PICTURE_REFERENCES); - if (target->display_order < pic->display_order) - pic->refs[0][pic->nb_refs[0]++] = target; - else - pic->refs[1][pic->nb_refs[1]++] = target; - ++refs; - } - - if (in_dpb) { - av_assert0(pic->nb_dpb_pics < MAX_DPB_SIZE); - pic->dpb[pic->nb_dpb_pics++] = target; - ++refs; - } - - if (prev) { - av_assert0(!pic->prev); - pic->prev = target; - ++refs; - } - - target->ref_count[0] += refs; - target->ref_count[1] += refs; -} - -static void vaapi_encode_remove_refs(AVCodecContext *avctx, - VAAPIEncodePicture *pic, - int level) -{ - int i; - - if (pic->ref_removed[level]) - return; - - for (i = 0; i < pic->nb_refs[0]; i++) { - av_assert0(pic->refs[0][i]); - --pic->refs[0][i]->ref_count[level]; - av_assert0(pic->refs[0][i]->ref_count[level] >= 0); - } - - for (i = 0; i < pic->nb_refs[1]; i++) { - av_assert0(pic->refs[1][i]); - --pic->refs[1][i]->ref_count[level]; - av_assert0(pic->refs[1][i]->ref_count[level] >= 0); - } - - for (i = 0; i < pic->nb_dpb_pics; i++) { - av_assert0(pic->dpb[i]); - --pic->dpb[i]->ref_count[level]; - av_assert0(pic->dpb[i]->ref_count[level] >= 0); - } - - av_assert0(pic->prev || pic->type == FF_HW_PICTURE_TYPE_IDR); - if (pic->prev) { - --pic->prev->ref_count[level]; - av_assert0(pic->prev->ref_count[level] >= 0); - } - - pic->ref_removed[level] = 1; -} - -static void vaapi_encode_set_b_pictures(AVCodecContext *avctx, - VAAPIEncodePicture *start, - VAAPIEncodePicture *end, - VAAPIEncodePicture *prev, - int current_depth, - VAAPIEncodePicture **last) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic, *next, *ref; - int i, len; - - av_assert0(start && end && start != end && start->next != end); - - // If we are at the maximum depth then encode all pictures as - // non-referenced B-pictures. Also do this if there is exactly one - // picture left, since there will be nothing to reference it. 
- if (current_depth == ctx->max_b_depth || start->next->next == end) { - for (pic = start->next; pic; pic = pic->next) { - if (pic == end) - break; - pic->type = FF_HW_PICTURE_TYPE_B; - pic->b_depth = current_depth; - - vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, end, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, prev, 0, 0, 1); - - for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) - vaapi_encode_add_ref(avctx, pic, ref, 0, 1, 0); - } - *last = prev; - - } else { - // Split the current list at the midpoint with a referenced - // B-picture, then descend into each side separately. - len = 0; - for (pic = start->next; pic != end; pic = pic->next) - ++len; - for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++); - - pic->type = FF_HW_PICTURE_TYPE_B; - pic->b_depth = current_depth; - - pic->is_reference = 1; - - vaapi_encode_add_ref(avctx, pic, pic, 0, 1, 0); - vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, end, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, prev, 0, 0, 1); - - for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) - vaapi_encode_add_ref(avctx, pic, ref, 0, 1, 0); - - if (i > 1) - vaapi_encode_set_b_pictures(avctx, start, pic, pic, - current_depth + 1, &next); - else - next = pic; - - vaapi_encode_set_b_pictures(avctx, pic, end, next, - current_depth + 1, last); - } -} - -static void vaapi_encode_add_next_prev(AVCodecContext *avctx, - VAAPIEncodePicture *pic) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - int i; - - if (!pic) - return; - - if (pic->type == FF_HW_PICTURE_TYPE_IDR) { - for (i = 0; i < ctx->nb_next_prev; i++) { - --ctx->next_prev[i]->ref_count[0]; - ctx->next_prev[i] = NULL; - } - ctx->next_prev[0] = pic; - ++pic->ref_count[0]; - ctx->nb_next_prev = 1; - - return; - } - - if (ctx->nb_next_prev < MAX_PICTURE_REFERENCES) { - ctx->next_prev[ctx->nb_next_prev++] = pic; - ++pic->ref_count[0]; - } else { - --ctx->next_prev[0]->ref_count[0]; - for (i = 0; i < MAX_PICTURE_REFERENCES - 1; i++) - ctx->next_prev[i] = ctx->next_prev[i + 1]; - ctx->next_prev[i] = pic; - ++pic->ref_count[0]; - } -} - -static int vaapi_encode_pick_next(AVCodecContext *avctx, - VAAPIEncodePicture **pic_out) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic = NULL, *prev = NULL, *next, *start; - int i, b_counter, closed_gop_end; - - // If there are any B-frames already queued, the next one to encode - // is the earliest not-yet-issued frame for which all references are - // available. - for (pic = ctx->pic_start; pic; pic = pic->next) { - if (pic->encode_issued) - continue; - if (pic->type != FF_HW_PICTURE_TYPE_B) - continue; - for (i = 0; i < pic->nb_refs[0]; i++) { - if (!pic->refs[0][i]->encode_issued) - break; - } - if (i != pic->nb_refs[0]) - continue; - - for (i = 0; i < pic->nb_refs[1]; i++) { - if (!pic->refs[1][i]->encode_issued) - break; - } - if (i == pic->nb_refs[1]) - break; - } - - if (pic) { - av_log(avctx, AV_LOG_DEBUG, "Pick B-picture at depth %d to " - "encode next.\n", pic->b_depth); - *pic_out = pic; - return 0; - } - - // Find the B-per-Pth available picture to become the next picture - // on the top layer. - start = NULL; - b_counter = 0; - closed_gop_end = ctx->closed_gop || - ctx->idr_counter == ctx->gop_per_idr; - for (pic = ctx->pic_start; pic; pic = next) { - next = pic->next; - if (pic->encode_issued) { - start = pic; - continue; - } - // If the next available picture is force-IDR, encode it to start - // a new GOP immediately. 
- if (pic->force_idr) - break; - if (b_counter == ctx->b_per_p) - break; - // If this picture ends a closed GOP or starts a new GOP then it - // needs to be in the top layer. - if (ctx->gop_counter + b_counter + closed_gop_end >= ctx->gop_size) - break; - // If the picture after this one is force-IDR, we need to encode - // this one in the top layer. - if (next && next->force_idr) - break; - ++b_counter; - } - - // At the end of the stream the last picture must be in the top layer. - if (!pic && ctx->end_of_stream) { - --b_counter; - pic = ctx->pic_end; - if (pic->encode_complete) - return AVERROR_EOF; - else if (pic->encode_issued) - return AVERROR(EAGAIN); - } - - if (!pic) { - av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " - "need more input for reference pictures.\n"); - return AVERROR(EAGAIN); - } - if (ctx->input_order <= ctx->decode_delay && !ctx->end_of_stream) { - av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " - "need more input for timestamps.\n"); - return AVERROR(EAGAIN); - } - - if (pic->force_idr) { - av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to " - "encode next.\n"); - pic->type = FF_HW_PICTURE_TYPE_IDR; - ctx->idr_counter = 1; - ctx->gop_counter = 1; - - } else if (ctx->gop_counter + b_counter >= ctx->gop_size) { - if (ctx->idr_counter == ctx->gop_per_idr) { - av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to " - "encode next.\n"); - pic->type = FF_HW_PICTURE_TYPE_IDR; - ctx->idr_counter = 1; - } else { - av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to " - "encode next.\n"); - pic->type = FF_HW_PICTURE_TYPE_I; - ++ctx->idr_counter; - } - ctx->gop_counter = 1; - - } else { - if (ctx->gop_counter + b_counter + closed_gop_end == ctx->gop_size) { - av_log(avctx, AV_LOG_DEBUG, "Pick group-end P-picture to " - "encode next.\n"); - } else { - av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to " - "encode next.\n"); - } - pic->type = FF_HW_PICTURE_TYPE_P; - av_assert0(start); - ctx->gop_counter += 1 + b_counter; - } - pic->is_reference = 1; - *pic_out = pic; - - vaapi_encode_add_ref(avctx, pic, pic, 0, 1, 0); - if (pic->type != FF_HW_PICTURE_TYPE_IDR) { - // TODO: apply both previous and forward multi reference for all vaapi encoders. - // And L0/L1 reference frame number can be set dynamically through query - // VAConfigAttribEncMaxRefFrames attribute. - if (avctx->codec_id == AV_CODEC_ID_AV1) { - for (i = 0; i < ctx->nb_next_prev; i++) - vaapi_encode_add_ref(avctx, pic, ctx->next_prev[i], - pic->type == FF_HW_PICTURE_TYPE_P, - b_counter > 0, 0); - } else - vaapi_encode_add_ref(avctx, pic, start, - pic->type == FF_HW_PICTURE_TYPE_P, - b_counter > 0, 0); - - vaapi_encode_add_ref(avctx, pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1); - } - - if (b_counter > 0) { - vaapi_encode_set_b_pictures(avctx, start, pic, pic, 1, - &prev); - } else { - prev = pic; - } - vaapi_encode_add_next_prev(avctx, prev); - - return 0; -} - -static int vaapi_encode_clear_old(AVCodecContext *avctx) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic, *prev, *next; - - av_assert0(ctx->pic_start); - - // Remove direct references once each picture is complete. - for (pic = ctx->pic_start; pic; pic = pic->next) { - if (pic->encode_complete && pic->next) - vaapi_encode_remove_refs(avctx, pic, 0); - } - - // Remove indirect references once a picture has no direct references. 
- for (pic = ctx->pic_start; pic; pic = pic->next) { - if (pic->encode_complete && pic->ref_count[0] == 0) - vaapi_encode_remove_refs(avctx, pic, 1); - } - - // Clear out all complete pictures with no remaining references. - prev = NULL; - for (pic = ctx->pic_start; pic; pic = next) { - next = pic->next; - if (pic->encode_complete && pic->ref_count[1] == 0) { - av_assert0(pic->ref_removed[0] && pic->ref_removed[1]); - if (prev) - prev->next = next; - else - ctx->pic_start = next; - vaapi_encode_free(avctx, pic); - } else { - prev = pic; - } - } - - return 0; -} - -static int vaapi_encode_check_frame(AVCodecContext *avctx, - const AVFrame *frame) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - - if ((frame->crop_top || frame->crop_bottom || - frame->crop_left || frame->crop_right) && !ctx->crop_warned) { - av_log(avctx, AV_LOG_WARNING, "Cropping information on input " - "frames ignored due to lack of API support.\n"); - ctx->crop_warned = 1; - } - - if (!ctx->roi_allowed) { - AVFrameSideData *sd = - av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST); - - if (sd && !ctx->roi_warned) { - av_log(avctx, AV_LOG_WARNING, "ROI side data on input " - "frames ignored due to lack of driver support.\n"); - ctx->roi_warned = 1; - } - } - - return 0; -} - -static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) -{ - FFHWBaseEncodeContext *base_ctx = avctx->priv_data; - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic; - int err; - - if (frame) { - av_log(avctx, AV_LOG_DEBUG, "Input frame: %ux%u (%"PRId64").\n", - frame->width, frame->height, frame->pts); - - err = vaapi_encode_check_frame(avctx, frame); - if (err < 0) - return err; - - pic = vaapi_encode_alloc(avctx, frame); - if (!pic) - return AVERROR(ENOMEM); - - pic->input_image = av_frame_alloc(); - if (!pic->input_image) { - err = AVERROR(ENOMEM); - goto fail; - } - - if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I) - pic->force_idr = 1; - - pic->pts = frame->pts; - pic->duration = frame->duration; - - if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { - err = av_buffer_replace(&pic->opaque_ref, frame->opaque_ref); - if (err < 0) - goto fail; - - pic->opaque = frame->opaque; - } - - av_frame_move_ref(pic->input_image, frame); - - if (ctx->input_order == 0) - ctx->first_pts = pic->pts; - if (ctx->input_order == ctx->decode_delay) - ctx->dts_pts_diff = pic->pts - ctx->first_pts; - if (ctx->output_delay > 0) - ctx->ts_ring[ctx->input_order % - (3 * ctx->output_delay + base_ctx->async_depth)] = pic->pts; - - pic->display_order = ctx->input_order; - ++ctx->input_order; - - if (ctx->pic_start) { - ctx->pic_end->next = pic; - ctx->pic_end = pic; - } else { - ctx->pic_start = pic; - ctx->pic_end = pic; - } - - } else { - ctx->end_of_stream = 1; - - // Fix timestamps if we hit end-of-stream before the initial decode - // delay has elapsed. - if (ctx->input_order < ctx->decode_delay) - ctx->dts_pts_diff = ctx->pic_end->pts - ctx->first_pts; - } - - return 0; - -fail: - vaapi_encode_free(avctx, pic); - return err; -} - -int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic = NULL; - AVFrame *frame = ctx->frame; - int err; - -start: - /** if no B frame before repeat P frame, sent repeat P frame out. 
*/ - if (ctx->tail_pkt->size) { - for (VAAPIEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) { - if (tmp->type == FF_HW_PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts) - break; - else if (!tmp->next) { - av_packet_move_ref(pkt, ctx->tail_pkt); - goto end; - } - } - } - - err = ff_encode_get_frame(avctx, frame); - if (err < 0 && err != AVERROR_EOF) - return err; - - if (err == AVERROR_EOF) - frame = NULL; - - err = vaapi_encode_send_frame(avctx, frame); - if (err < 0) - return err; - - if (!ctx->pic_start) { - if (ctx->end_of_stream) - return AVERROR_EOF; - else - return AVERROR(EAGAIN); - } - - if (ctx->has_sync_buffer_func) { - if (av_fifo_can_write(ctx->encode_fifo)) { - err = vaapi_encode_pick_next(avctx, &pic); - if (!err) { - av_assert0(pic); - pic->encode_order = ctx->encode_order + - av_fifo_can_read(ctx->encode_fifo); - err = vaapi_encode_issue(avctx, pic); - if (err < 0) { - av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); - return err; - } - av_fifo_write(ctx->encode_fifo, &pic, 1); - } - } - - if (!av_fifo_can_read(ctx->encode_fifo)) - return err; - - // More frames can be buffered - if (av_fifo_can_write(ctx->encode_fifo) && !ctx->end_of_stream) - return AVERROR(EAGAIN); - - av_fifo_read(ctx->encode_fifo, &pic, 1); - ctx->encode_order = pic->encode_order + 1; - } else { - err = vaapi_encode_pick_next(avctx, &pic); - if (err < 0) - return err; - av_assert0(pic); - - pic->encode_order = ctx->encode_order++; - - err = vaapi_encode_issue(avctx, pic); - if (err < 0) { - av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); - return err; - } - } - - err = vaapi_encode_output(avctx, pic, pkt); - if (err < 0) { - av_log(avctx, AV_LOG_ERROR, "Output failed: %d.\n", err); - return err; - } - - ctx->output_order = pic->encode_order; - vaapi_encode_clear_old(avctx); - - /** loop to get an available pkt in encoder flushing. 
*/ - if (ctx->end_of_stream && !pkt->size) - goto start; - -end: - if (pkt->size) - av_log(avctx, AV_LOG_DEBUG, "Output packet: pts %"PRId64", dts %"PRId64", " - "size %d bytes.\n", pkt->pts, pkt->dts, pkt->size); - - return 0; -} - static av_cold void vaapi_encode_add_global_param(AVCodecContext *avctx, int type, void *buffer, size_t size) { @@ -2188,7 +1632,8 @@ static av_cold int vaapi_encode_init_max_frame_size(AVCodecContext *avctx) static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; VAStatus vas; VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames }; uint32_t ref_l0, ref_l1; @@ -2211,7 +1656,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) ref_l1 = attr.value >> 16 & 0xffff; } - ctx->p_to_gpb = 0; + base_ctx->p_to_gpb = 0; prediction_pre_only = 0; #if VA_CHECK_VERSION(1, 9, 0) @@ -2247,7 +1692,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) if (attr.value & VA_PREDICTION_DIRECTION_BI_NOT_EMPTY) { if (ref_l0 > 0 && ref_l1 > 0) { - ctx->p_to_gpb = 1; + base_ctx->p_to_gpb = 1; av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, " "replacing them with B-frames.\n"); } @@ -2259,7 +1704,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) if (ctx->codec->flags & FF_HW_FLAG_INTRA_ONLY || avctx->gop_size <= 1) { av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); - ctx->gop_size = 1; + base_ctx->gop_size = 1; } else if (ref_l0 < 1) { av_log(avctx, AV_LOG_ERROR, "Driver does not support any " "reference frames.\n"); @@ -2267,41 +1712,41 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) } else if (!(ctx->codec->flags & FF_HW_FLAG_B_PICTURES) || ref_l1 < 1 || avctx->max_b_frames < 1 || prediction_pre_only) { - if (ctx->p_to_gpb) + if (base_ctx->p_to_gpb) av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); else av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); - ctx->gop_size = avctx->gop_size; - ctx->p_per_i = INT_MAX; - ctx->b_per_p = 0; + base_ctx->gop_size = avctx->gop_size; + base_ctx->p_per_i = INT_MAX; + base_ctx->b_per_p = 0; } else { - if (ctx->p_to_gpb) + if (base_ctx->p_to_gpb) av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); else av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); - ctx->gop_size = avctx->gop_size; - ctx->p_per_i = INT_MAX; - ctx->b_per_p = avctx->max_b_frames; + base_ctx->gop_size = avctx->gop_size; + base_ctx->p_per_i = INT_MAX; + base_ctx->b_per_p = avctx->max_b_frames; if (ctx->codec->flags & FF_HW_FLAG_B_PICTURE_REFERENCES) { - ctx->max_b_depth = FFMIN(ctx->desired_b_depth, - av_log2(ctx->b_per_p) + 1); + base_ctx->max_b_depth = FFMIN(ctx->desired_b_depth, + av_log2(base_ctx->b_per_p) + 1); } else { - ctx->max_b_depth = 1; + base_ctx->max_b_depth = 1; } } if (ctx->codec->flags & FF_HW_FLAG_NON_IDR_KEY_PICTURES) { - ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); - ctx->gop_per_idr = ctx->idr_interval + 1; + base_ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); + base_ctx->gop_per_idr = ctx->idr_interval + 1; } else { - ctx->closed_gop = 1; - ctx->gop_per_idr = 1; + 
base_ctx->closed_gop = 1; + base_ctx->gop_per_idr = 1; } return 0; @@ -2614,7 +2059,8 @@ static av_cold int vaapi_encode_init_quality(AVCodecContext *avctx) static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx) { #if VA_CHECK_VERSION(1, 0, 0) - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; VAStatus vas; VAConfigAttrib attr = { VAConfigAttribEncROI }; @@ -2629,14 +2075,14 @@ static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx) } if (attr.value == VA_ATTRIB_NOT_SUPPORTED) { - ctx->roi_allowed = 0; + base_ctx->roi_allowed = 0; } else { VAConfigAttribValEncROI roi = { .value = attr.value, }; ctx->roi_max_regions = roi.bits.num_roi_regions; - ctx->roi_allowed = ctx->roi_max_regions > 0 && + base_ctx->roi_allowed = ctx->roi_max_regions > 0 && (ctx->va_rc_mode == VA_RC_CQP || roi.bits.roi_rc_qp_delta_support); } @@ -2771,6 +2217,16 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) return err; } +static const FFHWEncodePictureOperation vaapi_op = { + .alloc = &vaapi_encode_alloc, + + .issue = &vaapi_encode_issue, + + .output = &vaapi_encode_output, + + .free = &vaapi_encode_free, +}; + av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) { FFHWBaseEncodeContext *base_ctx = avctx->priv_data; @@ -2782,10 +2238,12 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) ctx->va_config = VA_INVALID_ID; ctx->va_context = VA_INVALID_ID; + base_ctx->op = &vaapi_op; + /* If you add something that can fail above this av_frame_alloc(), * modify ff_vaapi_encode_close() accordingly. */ - ctx->frame = av_frame_alloc(); - if (!ctx->frame) { + base_ctx->frame = av_frame_alloc(); + if (!base_ctx->frame) { return AVERROR(ENOMEM); } @@ -2810,8 +2268,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) ctx->device = (AVHWDeviceContext*)ctx->device_ref->data; ctx->hwctx = ctx->device->hwctx; - ctx->tail_pkt = av_packet_alloc(); - if (!ctx->tail_pkt) { + base_ctx->tail_pkt = av_packet_alloc(); + if (!base_ctx->tail_pkt) { err = AVERROR(ENOMEM); goto fail; } @@ -2910,8 +2368,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) goto fail; } - ctx->output_delay = ctx->b_per_p; - ctx->decode_delay = ctx->max_b_depth; + base_ctx->output_delay = base_ctx->b_per_p; + base_ctx->decode_delay = base_ctx->max_b_depth; if (ctx->codec->sequence_params_size > 0) { ctx->codec_sequence_params = @@ -2966,11 +2424,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) // check vaSyncBuffer function vas = vaSyncBuffer(ctx->hwctx->display, VA_INVALID_ID, 0); if (vas != VA_STATUS_ERROR_UNIMPLEMENTED) { - ctx->has_sync_buffer_func = 1; - ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, - sizeof(VAAPIEncodePicture *), - 0); - if (!ctx->encode_fifo) + base_ctx->async_encode = 1; + base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, + sizeof(VAAPIEncodePicture*), + 0); + if (!base_ctx->encode_fifo) return AVERROR(ENOMEM); } #endif @@ -2983,15 +2441,16 @@ fail: av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic, *next; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic, *next; /* We check ctx->frame to know whether ff_vaapi_encode_init() * has been called and va_config/va_context initialized. 
*/ - if (!ctx->frame) + if (!base_ctx->frame) return 0; - for (pic = ctx->pic_start; pic; pic = next) { + for (pic = base_ctx->pic_start; pic; pic = next) { next = pic->next; vaapi_encode_free(avctx, pic); } @@ -3008,12 +2467,12 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) ctx->va_config = VA_INVALID_ID; } - av_frame_free(&ctx->frame); - av_packet_free(&ctx->tail_pkt); + av_frame_free(&base_ctx->frame); + av_packet_free(&base_ctx->tail_pkt); av_freep(&ctx->codec_sequence_params); av_freep(&ctx->codec_picture_params); - av_fifo_freep2(&ctx->encode_fifo); + av_fifo_freep2(&base_ctx->encode_fifo); av_buffer_unref(&ctx->recon_frames_ref); av_buffer_unref(&ctx->input_frames_ref); diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 9fdb945b18..5e442d9c23 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -29,7 +29,6 @@ #include "libavutil/hwcontext.h" #include "libavutil/hwcontext_vaapi.h" -#include "libavutil/fifo.h" #include "avcodec.h" #include "hwconfig.h" @@ -64,16 +63,7 @@ typedef struct VAAPIEncodeSlice { } VAAPIEncodeSlice; typedef struct VAAPIEncodePicture { - struct VAAPIEncodePicture *next; - - int64_t display_order; - int64_t encode_order; - int64_t pts; - int64_t duration; - int force_idr; - - void *opaque; - AVBufferRef *opaque_ref; + FFHWBaseEncodePicture base; #if VA_CHECK_VERSION(1, 0, 0) // ROI regions. @@ -82,15 +72,7 @@ typedef struct VAAPIEncodePicture { void *roi; #endif - int type; - int b_depth; - int encode_issued; - int encode_complete; - - AVFrame *input_image; VASurfaceID input_surface; - - AVFrame *recon_image; VASurfaceID recon_surface; int nb_param_buffers; @@ -100,31 +82,8 @@ typedef struct VAAPIEncodePicture { VABufferID *output_buffer_ref; VABufferID output_buffer; - void *priv_data; void *codec_picture_params; - // Whether this picture is a reference picture. - int is_reference; - - // The contents of the DPB after this picture has been decoded. - // This will contain the picture itself if it is a reference picture, - // but not if it isn't. - int nb_dpb_pics; - struct VAAPIEncodePicture *dpb[MAX_DPB_SIZE]; - // The reference pictures used in decoding this picture. If they are - // used by later pictures they will also appear in the DPB. ref[0][] for - // previous reference frames. ref[1][] for future reference frames. - int nb_refs[MAX_REFERENCE_LIST_NUM]; - struct VAAPIEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES]; - // The previous reference picture in encode order. Must be in at least - // one of the reference list and DPB list. - struct VAAPIEncodePicture *prev; - // Reference count for other pictures referring to this one through - // the above pointers, directly from incomplete pictures and indirectly - // through completed pictures. - int ref_count[2]; - int ref_removed[2]; - int nb_slices; VAAPIEncodeSlice *slices; @@ -298,30 +257,6 @@ typedef struct VAAPIEncodeContext { // structure (VAEncPictureParameterBuffer*). void *codec_picture_params; - // Current encoding window, in display (input) order. - VAAPIEncodePicture *pic_start, *pic_end; - // The next picture to use as the previous reference picture in - // encoding order. Order from small to large in encoding order. - VAAPIEncodePicture *next_prev[MAX_PICTURE_REFERENCES]; - int nb_next_prev; - - // Next input order index (display order). - int64_t input_order; - // Number of frames that output is behind input. - int64_t output_delay; - // Next encode order index. 
- int64_t encode_order; - // Number of frames decode output will need to be delayed. - int64_t decode_delay; - // Next output order index (in encode order). - int64_t output_order; - - // Timestamp handling. - int64_t first_pts; - int64_t dts_pts_diff; - int64_t ts_ring[MAX_REORDER_DELAY * 3 + - MAX_ASYNC_DEPTH]; - // Slice structure. int slice_block_rows; int slice_block_cols; @@ -340,41 +275,12 @@ typedef struct VAAPIEncodeContext { // Location of the i-th tile row boundary. int row_bd[MAX_TILE_ROWS + 1]; - // Frame type decision. - int gop_size; - int closed_gop; - int gop_per_idr; - int p_per_i; - int max_b_depth; - int b_per_p; - int force_idr; - int idr_counter; - int gop_counter; - int end_of_stream; - int p_to_gpb; - - // Whether the driver supports ROI at all. - int roi_allowed; // Maximum number of regions supported by the driver. int roi_max_regions; // Quantisation range for offset calculations. Set by codec-specific // code, as it may change based on parameters. int roi_quant_range; - // The encoder does not support cropping information, so warn about - // it the first time we encounter any nonzero crop fields. - int crop_warned; - // If the driver does not support ROI then warn the first time we - // encounter a frame with ROI side data. - int roi_warned; - - AVFrame *frame; - - // Whether the driver support vaSyncBuffer - int has_sync_buffer_func; - // Store buffered pic - AVFifo *encode_fifo; - /** Head data for current output pkt, used only for AV1. */ //void *header_data; //size_t header_data_size; @@ -384,9 +290,6 @@ typedef struct VAAPIEncodeContext { * This is a RefStruct reference. */ VABufferID *coded_buffer_ref; - - /** Tail data of a pic, now only used for av1 repeat frame header. */ - AVPacket *tail_pkt; } VAAPIEncodeContext; typedef struct VAAPIEncodeType { @@ -468,9 +371,6 @@ typedef struct VAAPIEncodeType { char *data, size_t *data_len); } VAAPIEncodeType; - -int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); - int ff_vaapi_encode_init(AVCodecContext *avctx); int ff_vaapi_encode_close(AVCodecContext *avctx); diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index 5bd12f9f92..1c9235acd9 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -360,6 +360,7 @@ static int vaapi_encode_av1_write_sequence_header(AVCodecContext *avctx, static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeAV1Context *priv = avctx->priv_data; AV1RawOBU *sh_obu = &priv->sh; @@ -441,8 +442,8 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) vseq->seq_level_idx = sh->seq_level_idx[0]; vseq->seq_tier = sh->seq_tier[0]; vseq->order_hint_bits_minus_1 = sh->order_hint_bits_minus_1; - vseq->intra_period = ctx->gop_size; - vseq->ip_period = ctx->b_per_p + 1; + vseq->intra_period = base_ctx->gop_size; + vseq->ip_period = base_ctx->b_per_p + 1; vseq->seq_fields.bits.enable_order_hint = sh->enable_order_hint; @@ -465,16 +466,17 @@ end: } static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeAV1Context *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeAV1Picture *hpic = pic->priv_data; AV1RawOBU *fh_obu = &priv->fh; AV1RawFrameHeader *fh = &fh_obu->obu.frame.header; - VAEncPictureParameterBufferAV1 
*vpic = pic->codec_picture_params; + VAEncPictureParameterBufferAV1 *vpic = vaapi_pic->codec_picture_params; CodedBitstreamFragment *obu = &priv->current_obu; - VAAPIEncodePicture *ref; + FFHWBaseEncodePicture *ref; VAAPIEncodeAV1Picture *href; int slot, i; int ret; @@ -482,8 +484,8 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, { 1, 0, 0, 0, -1, 0, -1, -1 }; memset(fh_obu, 0, sizeof(*fh_obu)); - pic->nb_slices = priv->tile_groups; - pic->non_independent_frame = pic->encode_order < pic->display_order; + vaapi_pic->nb_slices = priv->tile_groups; + vaapi_pic->non_independent_frame = pic->encode_order < pic->display_order; fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER; fh_obu->header.obu_has_size_field = 1; @@ -601,8 +603,8 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->frame_width_minus_1 = fh->frame_width_minus_1; vpic->frame_height_minus_1 = fh->frame_height_minus_1; vpic->primary_ref_frame = fh->primary_ref_frame; - vpic->reconstructed_frame = pic->recon_surface; - vpic->coded_buf = pic->output_buffer; + vpic->reconstructed_frame = vaapi_pic->recon_surface; + vpic->coded_buf = vaapi_pic->output_buffer; vpic->tile_cols = fh->tile_cols; vpic->tile_rows = fh->tile_rows; vpic->order_hint = fh->order_hint; @@ -630,12 +632,12 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) { for (int j = 0; j < pic->nb_refs[i]; j++) { - VAAPIEncodePicture *ref_pic = pic->refs[i][j]; + FFHWBaseEncodePicture *ref_pic = pic->refs[i][j]; slot = ((VAAPIEncodeAV1Picture*)ref_pic->priv_data)->slot; av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE); - vpic->reference_frames[slot] = ref_pic->recon_surface; + vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface; } } @@ -752,7 +754,7 @@ static int vaapi_encode_av1_init_slice_params(AVCodecContext *avctx, } static int vaapi_encode_av1_write_picture_header(AVCodecContext *avctx, - VAAPIEncodePicture *pic, + VAAPIEncodePicture *vaapi_pic, char *data, size_t *data_len) { VAAPIEncodeAV1Context *priv = avctx->priv_data; @@ -760,10 +762,11 @@ static int vaapi_encode_av1_write_picture_header(AVCodecContext *avctx, CodedBitstreamAV1Context *cbctx = priv->cbc->priv_data; AV1RawOBU *fh_obu = &priv->fh; AV1RawFrameHeader *rep_fh = &fh_obu->obu.frame_header; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeAV1Picture *href; int ret = 0; - pic->tail_size = 0; + vaapi_pic->tail_size = 0; /** Pack repeat frame header. 
*/ if (pic->display_order > pic->encode_order) { memset(fh_obu, 0, sizeof(*fh_obu)); @@ -785,11 +788,11 @@ static int vaapi_encode_av1_write_picture_header(AVCodecContext *avctx, if (ret < 0) goto end; - ret = vaapi_encode_av1_write_obu(avctx, pic->tail_data, &pic->tail_size, obu); + ret = vaapi_encode_av1_write_obu(avctx, vaapi_pic->tail_data, &vaapi_pic->tail_size, obu); if (ret < 0) goto end; - pic->tail_size /= 8; + vaapi_pic->tail_size /= 8; } memcpy(data, &priv->fh_data, MAX_PARAM_BUFFER_SIZE * sizeof(char)); @@ -1038,7 +1041,7 @@ const FFCodec ff_av1_vaapi_encoder = { .p.id = AV_CODEC_ID_AV1, .priv_data_size = sizeof(VAAPIEncodeAV1Context), .init = &vaapi_encode_av1_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_av1_close, .p.priv_class = &vaapi_encode_av1_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index aa5b499d9f..2f91655d18 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -233,7 +233,7 @@ static int vaapi_encode_h264_write_extra_header(AVCodecContext *avctx, goto fail; } if (priv->sei_needed & SEI_TIMING) { - if (pic->type == FF_HW_PICTURE_TYPE_IDR) { + if (pic->base.type == FF_HW_PICTURE_TYPE_IDR) { err = ff_cbs_sei_add_message(priv->cbc, au, 1, SEI_TYPE_BUFFERING_PERIOD, &priv->sei_buffering_period, NULL); @@ -295,6 +295,7 @@ fail: static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeH264Context *priv = avctx->priv_data; H264RawSPS *sps = &priv->raw_sps; @@ -326,18 +327,18 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) sps->constraint_set1_flag = 1; if (avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10) - sps->constraint_set3_flag = ctx->gop_size == 1; + sps->constraint_set3_flag = base_ctx->gop_size == 1; if (avctx->profile == AV_PROFILE_H264_MAIN || avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10) { sps->constraint_set4_flag = 1; - sps->constraint_set5_flag = ctx->b_per_p == 0; + sps->constraint_set5_flag = base_ctx->b_per_p == 0; } - if (ctx->gop_size == 1) + if (base_ctx->gop_size == 1) priv->dpb_frames = 0; else - priv->dpb_frames = 1 + ctx->max_b_depth; + priv->dpb_frames = 1 + base_ctx->max_b_depth; if (avctx->level != AV_LEVEL_UNKNOWN) { sps->level_idc = avctx->level; @@ -374,7 +375,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) sps->bit_depth_chroma_minus8 = bit_depth - 8; sps->log2_max_frame_num_minus4 = 4; - sps->pic_order_cnt_type = ctx->max_b_depth ? 0 : 2; + sps->pic_order_cnt_type = base_ctx->max_b_depth ? 
0 : 2; if (sps->pic_order_cnt_type == 0) { sps->log2_max_pic_order_cnt_lsb_minus4 = 4; } @@ -501,8 +502,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) sps->vui.motion_vectors_over_pic_boundaries_flag = 1; sps->vui.log2_max_mv_length_horizontal = 15; sps->vui.log2_max_mv_length_vertical = 15; - sps->vui.max_num_reorder_frames = ctx->max_b_depth; - sps->vui.max_dec_frame_buffering = ctx->max_b_depth + 1; + sps->vui.max_num_reorder_frames = base_ctx->max_b_depth; + sps->vui.max_dec_frame_buffering = base_ctx->max_b_depth + 1; pps->nal_unit_header.nal_ref_idc = 3; pps->nal_unit_header.nal_unit_type = H264_NAL_PPS; @@ -535,9 +536,9 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) *vseq = (VAEncSequenceParameterBufferH264) { .seq_parameter_set_id = sps->seq_parameter_set_id, .level_idc = sps->level_idc, - .intra_period = ctx->gop_size, - .intra_idr_period = ctx->gop_size, - .ip_period = ctx->b_per_p + 1, + .intra_period = base_ctx->gop_size, + .intra_idr_period = base_ctx->gop_size, + .ip_period = base_ctx->b_per_p + 1, .bits_per_second = ctx->va_bit_rate, .max_num_ref_frames = sps->max_num_ref_frames, @@ -619,14 +620,15 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) } static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeH264Context *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeH264Picture *hpic = pic->priv_data; - VAAPIEncodePicture *prev = pic->prev; + FFHWBaseEncodePicture *prev = pic->prev; VAAPIEncodeH264Picture *hprev = prev ? prev->priv_data : NULL; - VAEncPictureParameterBufferH264 *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferH264 *vpic = vaapi_pic->codec_picture_params; int i, j = 0; if (pic->type == FF_HW_PICTURE_TYPE_IDR) { @@ -662,7 +664,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, hpic->pic_order_cnt *= 2; } - hpic->dpb_delay = pic->display_order - pic->encode_order + ctx->max_b_depth; + hpic->dpb_delay = pic->display_order - pic->encode_order + base_ctx->max_b_depth; hpic->cpb_delay = pic->encode_order - hpic->last_idr_frame; if (priv->aud) { @@ -699,7 +701,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, priv->sei_recovery_point = (H264RawSEIRecoveryPoint) { .recovery_frame_cnt = 0, .exact_match_flag = 1, - .broken_link_flag = ctx->b_per_p > 0, + .broken_link_flag = base_ctx->b_per_p > 0, }; priv->sei_needed |= SEI_RECOVERY_POINT; @@ -722,7 +724,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, } vpic->CurrPic = (VAPictureH264) { - .picture_id = pic->recon_surface, + .picture_id = vaapi_pic->recon_surface, .frame_idx = hpic->frame_num, .flags = 0, .TopFieldOrderCnt = hpic->pic_order_cnt, @@ -730,14 +732,14 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, }; for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) { for (i = 0; i < pic->nb_refs[k]; i++) { - VAAPIEncodePicture *ref = pic->refs[k][i]; + FFHWBaseEncodePicture *ref = pic->refs[k][i]; VAAPIEncodeH264Picture *href; av_assert0(ref && ref->encode_order < pic->encode_order); href = ref->priv_data; vpic->ReferenceFrames[j++] = (VAPictureH264) { - .picture_id = ref->recon_surface, + .picture_id = ((VAAPIEncodePicture *)ref)->recon_surface, .frame_idx = href->frame_num, .flags = 
VA_PICTURE_H264_SHORT_TERM_REFERENCE, .TopFieldOrderCnt = href->pic_order_cnt, @@ -753,7 +755,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, }; } - vpic->coded_buf = pic->output_buffer; + vpic->coded_buf = vaapi_pic->output_buffer; vpic->frame_num = hpic->frame_num; @@ -764,12 +766,13 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, } static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, - VAAPIEncodePicture *pic, + VAAPIEncodePicture *vaapi_pic, VAAPIEncodePicture **rpl0, VAAPIEncodePicture **rpl1, int *rpl_size) { - VAAPIEncodePicture *prev; + FFHWBaseEncodePicture *pic = &vaapi_pic->base; + FFHWBaseEncodePicture *prev; VAAPIEncodeH264Picture *hp, *hn, *hc; int i, j, n = 0; @@ -783,17 +786,17 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, if (pic->type == FF_HW_PICTURE_TYPE_P) { for (j = n; j > 0; j--) { - hc = rpl0[j - 1]->priv_data; + hc = rpl0[j - 1]->base.priv_data; av_assert0(hc->frame_num != hn->frame_num); if (hc->frame_num > hn->frame_num) break; rpl0[j] = rpl0[j - 1]; } - rpl0[j] = prev->dpb[i]; + rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i]; } else if (pic->type == FF_HW_PICTURE_TYPE_B) { for (j = n; j > 0; j--) { - hc = rpl0[j - 1]->priv_data; + hc = rpl0[j - 1]->base.priv_data; av_assert0(hc->pic_order_cnt != hp->pic_order_cnt); if (hc->pic_order_cnt < hp->pic_order_cnt) { if (hn->pic_order_cnt > hp->pic_order_cnt || @@ -805,10 +808,10 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } rpl0[j] = rpl0[j - 1]; } - rpl0[j] = prev->dpb[i]; + rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i]; for (j = n; j > 0; j--) { - hc = rpl1[j - 1]->priv_data; + hc = rpl1[j - 1]->base.priv_data; av_assert0(hc->pic_order_cnt != hp->pic_order_cnt); if (hc->pic_order_cnt > hp->pic_order_cnt) { if (hn->pic_order_cnt < hp->pic_order_cnt || @@ -820,7 +823,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } rpl1[j] = rpl1[j - 1]; } - rpl1[j] = prev->dpb[i]; + rpl1[j] = (VAAPIEncodePicture *)prev->dpb[i]; } ++n; @@ -840,7 +843,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Default RefPicList0 for fn=%d/poc=%d:", hp->frame_num, hp->pic_order_cnt); for (i = 0; i < n; i++) { - hn = rpl0[i]->priv_data; + hn = rpl0[i]->base.priv_data; av_log(avctx, AV_LOG_DEBUG, " fn=%d/poc=%d", hn->frame_num, hn->pic_order_cnt); } @@ -850,7 +853,7 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Default RefPicList1 for fn=%d/poc=%d:", hp->frame_num, hp->pic_order_cnt); for (i = 0; i < n; i++) { - hn = rpl1[i]->priv_data; + hn = rpl1[i]->base.priv_data; av_log(avctx, AV_LOG_DEBUG, " fn=%d/poc=%d", hn->frame_num, hn->pic_order_cnt); } @@ -861,16 +864,17 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic, + VAAPIEncodePicture *vaapi_pic, VAAPIEncodeSlice *slice) { VAAPIEncodeH264Context *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeH264Picture *hpic = pic->priv_data; - VAAPIEncodePicture *prev = pic->prev; + FFHWBaseEncodePicture *prev = pic->prev; H264RawSPS *sps = &priv->raw_sps; H264RawPPS *pps = &priv->raw_pps; H264RawSliceHeader *sh = &priv->raw_slice.header; - VAEncPictureParameterBufferH264 *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferH264 *vpic = 
vaapi_pic->codec_picture_params; VAEncSliceParameterBufferH264 *vslice = slice->codec_slice_params; int i, j; @@ -903,7 +907,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, sh->slice_qp_delta = priv->fixed_qp_idr - (pps->pic_init_qp_minus26 + 26); if (pic->is_reference && pic->type != FF_HW_PICTURE_TYPE_IDR) { - VAAPIEncodePicture *discard_list[MAX_DPB_SIZE]; + FFHWBaseEncodePicture *discard_list[MAX_DPB_SIZE]; int discard = 0, keep = 0; // Discard everything which is in the DPB of the previous frame but @@ -944,14 +948,14 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, VAAPIEncodeH264Picture *href; int n; - vaapi_encode_h264_default_ref_pic_list(avctx, pic, + vaapi_encode_h264_default_ref_pic_list(avctx, vaapi_pic, def_l0, def_l1, &n); if (pic->type == FF_HW_PICTURE_TYPE_P) { int need_rplm = 0; for (i = 0; i < pic->nb_refs[0]; i++) { av_assert0(pic->refs[0][i]); - if (pic->refs[0][i] != def_l0[i]) + if (pic->refs[0][i] != (FFHWBaseEncodePicture *)def_l0[i]) need_rplm = 1; } @@ -982,7 +986,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, av_assert0(pic->refs[0][i]); href = pic->refs[0][i]->priv_data; av_assert0(href->pic_order_cnt < hpic->pic_order_cnt); - if (pic->refs[0][i] != def_l0[n0]) + if (pic->refs[0][i] != (FFHWBaseEncodePicture *)def_l0[n0]) need_rplm_l0 = 1; ++n0; } @@ -991,7 +995,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, av_assert0(pic->refs[1][i]); href = pic->refs[1][i]->priv_data; av_assert0(href->pic_order_cnt > hpic->pic_order_cnt); - if (pic->refs[1][i] != def_l1[n1]) + if (pic->refs[1][i] != (FFHWBaseEncodePicture *)def_l1[n1]) need_rplm_l1 = 1; ++n1; } @@ -1380,7 +1384,7 @@ const FFCodec ff_h264_vaapi_encoder = { .p.id = AV_CODEC_ID_H264, .priv_data_size = sizeof(VAAPIEncodeH264Context), .init = &vaapi_encode_h264_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_h264_close, .p.priv_class = &vaapi_encode_h264_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index e65c91304a..6befe3426e 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -259,6 +259,7 @@ fail: static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = avctx->priv_data; H265RawVPS *vps = &priv->raw_vps; @@ -340,7 +341,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) ptl->general_max_420chroma_constraint_flag = chroma_format <= 1; ptl->general_max_monochrome_constraint_flag = chroma_format == 0; - ptl->general_intra_constraint_flag = ctx->gop_size == 1; + ptl->general_intra_constraint_flag = base_ctx->gop_size == 1; ptl->general_one_picture_only_constraint_flag = 0; ptl->general_lower_bit_rate_constraint_flag = 1; @@ -353,7 +354,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) level = ff_h265_guess_level(ptl, avctx->bit_rate, ctx->surface_width, ctx->surface_height, ctx->nb_slices, ctx->tile_rows, ctx->tile_cols, - (ctx->b_per_p > 0) + 1); + (base_ctx->b_per_p > 0) + 1); if (level) { av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name); ptl->general_level_idc = level->level_idc; @@ -367,8 +368,8 @@ static int 
vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) } vps->vps_sub_layer_ordering_info_present_flag = 0; - vps->vps_max_dec_pic_buffering_minus1[0] = ctx->max_b_depth + 1; - vps->vps_max_num_reorder_pics[0] = ctx->max_b_depth; + vps->vps_max_dec_pic_buffering_minus1[0] = base_ctx->max_b_depth + 1; + vps->vps_max_num_reorder_pics[0] = base_ctx->max_b_depth; vps->vps_max_latency_increase_plus1[0] = 0; vps->vps_max_layer_id = 0; @@ -642,9 +643,9 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) .general_level_idc = vps->profile_tier_level.general_level_idc, .general_tier_flag = vps->profile_tier_level.general_tier_flag, - .intra_period = ctx->gop_size, - .intra_idr_period = ctx->gop_size, - .ip_period = ctx->b_per_p + 1, + .intra_period = base_ctx->gop_size, + .intra_idr_period = base_ctx->gop_size, + .ip_period = base_ctx->b_per_p + 1, .bits_per_second = ctx->va_bit_rate, .pic_width_in_luma_samples = sps->pic_width_in_luma_samples, @@ -755,14 +756,15 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) } static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = avctx->priv_data; + FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeH265Picture *hpic = pic->priv_data; - VAAPIEncodePicture *prev = pic->prev; + FFHWBaseEncodePicture *prev = pic->prev; VAAPIEncodeH265Picture *hprev = prev ? prev->priv_data : NULL; - VAEncPictureParameterBufferHEVC *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferHEVC *vpic = vaapi_pic->codec_picture_params; int i, j = 0; if (pic->type == FF_HW_PICTURE_TYPE_IDR) { @@ -787,13 +789,13 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, hpic->slice_type = HEVC_SLICE_P; hpic->pic_type = 1; } else { - VAAPIEncodePicture *irap_ref; + FFHWBaseEncodePicture *irap_ref; av_assert0(pic->refs[0][0] && pic->refs[1][0]); for (irap_ref = pic; irap_ref; irap_ref = irap_ref->refs[1][0]) { if (irap_ref->type == FF_HW_PICTURE_TYPE_I) break; } - if (pic->b_depth == ctx->max_b_depth) { + if (pic->b_depth == base_ctx->max_b_depth) { hpic->slice_nal_unit = irap_ref ? HEVC_NAL_RASL_N : HEVC_NAL_TRAIL_N; } else { @@ -909,21 +911,21 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, } vpic->decoded_curr_pic = (VAPictureHEVC) { - .picture_id = pic->recon_surface, + .picture_id = vaapi_pic->recon_surface, .pic_order_cnt = hpic->pic_order_cnt, .flags = 0, }; for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) { for (i = 0; i < pic->nb_refs[k]; i++) { - VAAPIEncodePicture *ref = pic->refs[k][i]; + FFHWBaseEncodePicture *ref = pic->refs[k][i]; VAAPIEncodeH265Picture *href; av_assert0(ref && ref->encode_order < pic->encode_order); href = ref->priv_data; vpic->reference_frames[j++] = (VAPictureHEVC) { - .picture_id = ref->recon_surface, + .picture_id = ((VAAPIEncodePicture *)ref)->recon_surface, .pic_order_cnt = href->pic_order_cnt, .flags = (ref->display_order < pic->display_order ? 
VA_PICTURE_HEVC_RPS_ST_CURR_BEFORE : 0) | @@ -940,7 +942,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, }; } - vpic->coded_buf = pic->output_buffer; + vpic->coded_buf = vaapi_pic->output_buffer; vpic->nal_unit_type = hpic->slice_nal_unit; @@ -970,16 +972,17 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, } static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic, + VAAPIEncodePicture *vaapi_pic, VAAPIEncodeSlice *slice) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeH265Picture *hpic = pic->priv_data; const H265RawSPS *sps = &priv->raw_sps; const H265RawPPS *pps = &priv->raw_pps; H265RawSliceHeader *sh = &priv->raw_slice.header; - VAEncPictureParameterBufferHEVC *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferHEVC *vpic = vaapi_pic->codec_picture_params; VAEncSliceParameterBufferHEVC *vslice = slice->codec_slice_params; int i; @@ -996,7 +999,7 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, sh->slice_type = hpic->slice_type; - if (sh->slice_type == HEVC_SLICE_P && ctx->p_to_gpb) + if (sh->slice_type == HEVC_SLICE_P && base_ctx->p_to_gpb) sh->slice_type = HEVC_SLICE_B; sh->slice_pic_order_cnt_lsb = hpic->pic_order_cnt & @@ -1140,7 +1143,7 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, .slice_tc_offset_div2 = sh->slice_tc_offset_div2, .slice_fields.bits = { - .last_slice_of_pic_flag = slice->index == pic->nb_slices - 1, + .last_slice_of_pic_flag = slice->index == vaapi_pic->nb_slices - 1, .dependent_slice_segment_flag = sh->dependent_slice_segment_flag, .colour_plane_id = sh->colour_plane_id, .slice_temporal_mvp_enabled_flag = @@ -1171,7 +1174,7 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, av_assert0(pic->type == FF_HW_PICTURE_TYPE_P || pic->type == FF_HW_PICTURE_TYPE_B); vslice->ref_pic_list0[0] = vpic->reference_frames[0]; - if (ctx->p_to_gpb && pic->type == FF_HW_PICTURE_TYPE_P) + if (base_ctx->p_to_gpb && pic->type == FF_HW_PICTURE_TYPE_P) // Reference for GPB B-frame, L0 == L1 vslice->ref_pic_list1[0] = vpic->reference_frames[0]; } @@ -1181,7 +1184,7 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, vslice->ref_pic_list1[0] = vpic->reference_frames[1]; } - if (pic->type == FF_HW_PICTURE_TYPE_P && ctx->p_to_gpb) { + if (pic->type == FF_HW_PICTURE_TYPE_P && base_ctx->p_to_gpb) { vslice->slice_type = HEVC_SLICE_B; for (i = 0; i < FF_ARRAY_ELEMS(vslice->ref_pic_list0); i++) { vslice->ref_pic_list1[i].picture_id = vslice->ref_pic_list0[i].picture_id; @@ -1494,7 +1497,7 @@ const FFCodec ff_hevc_vaapi_encoder = { .p.id = AV_CODEC_ID_HEVC, .priv_data_size = sizeof(VAAPIEncodeH265Context), .init = &vaapi_encode_h265_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_h265_close, .p.priv_class = &vaapi_encode_h265_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index 9e533e9fbe..4c31881e6a 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -220,12 +220,13 @@ static int vaapi_encode_mjpeg_write_extra_buffer(AVCodecContext *avctx, } static int 
vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { VAAPIEncodeMJPEGContext *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; JPEGRawFrameHeader *fh = &priv->frame_header; JPEGRawScanHeader *sh = &priv->scan.header; - VAEncPictureParameterBufferJPEG *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferJPEG *vpic = vaapi_pic->codec_picture_params; const AVPixFmtDescriptor *desc; const uint8_t components_rgb[3] = { 'R', 'G', 'B' }; const uint8_t components_yuv[3] = { 1, 2, 3 }; @@ -377,8 +378,8 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, *vpic = (VAEncPictureParameterBufferJPEG) { - .reconstructed_picture = pic->recon_surface, - .coded_buf = pic->output_buffer, + .reconstructed_picture = vaapi_pic->recon_surface, + .coded_buf = vaapi_pic->output_buffer, .picture_width = fh->X, .picture_height = fh->Y, @@ -406,7 +407,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, vpic->quantiser_table_selector[i] = fh->Tq[i]; } - pic->nb_slices = 1; + vaapi_pic->nb_slices = 1; return 0; } @@ -572,7 +573,7 @@ const FFCodec ff_mjpeg_vaapi_encoder = { .p.id = AV_CODEC_ID_MJPEG, .priv_data_size = sizeof(VAAPIEncodeMJPEGContext), .init = &vaapi_encode_mjpeg_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_mjpeg_close, .p.priv_class = &vaapi_encode_mjpeg_class, .p.capabilities = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DR1 | diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c index 69784f9ba6..c20196fb48 100644 --- a/libavcodec/vaapi_encode_mpeg2.c +++ b/libavcodec/vaapi_encode_mpeg2.c @@ -166,6 +166,7 @@ fail: static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeMPEG2Context *priv = avctx->priv_data; MPEG2RawSequenceHeader *sh = &priv->sequence_header; @@ -281,7 +282,7 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) se->bit_rate_extension = priv->bit_rate >> 18; se->vbv_buffer_size_extension = priv->vbv_buffer_size >> 10; - se->low_delay = ctx->b_per_p == 0; + se->low_delay = base_ctx->b_per_p == 0; se->frame_rate_extension_n = ext_n; se->frame_rate_extension_d = ext_d; @@ -353,8 +354,8 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) *vseq = (VAEncSequenceParameterBufferMPEG2) { - .intra_period = ctx->gop_size, - .ip_period = ctx->b_per_p + 1, + .intra_period = base_ctx->gop_size, + .ip_period = base_ctx->b_per_p + 1, .picture_width = avctx->width, .picture_height = avctx->height, @@ -417,12 +418,13 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) } static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { VAAPIEncodeMPEG2Context *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; MPEG2RawPictureHeader *ph = &priv->picture_header; MPEG2RawPictureCodingExtension *pce = &priv->picture_coding_extension.data.picture_coding; - VAEncPictureParameterBufferMPEG2 *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferMPEG2 *vpic = vaapi_pic->codec_picture_params; if (pic->type == FF_HW_PICTURE_TYPE_IDR || pic->type == FF_HW_PICTURE_TYPE_I) { ph->temporal_reference = 0; @@ -448,8 +450,8 @@ 
static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, pce->f_code[1][1] = 15; } - vpic->reconstructed_picture = pic->recon_surface; - vpic->coded_buf = pic->output_buffer; + vpic->reconstructed_picture = vaapi_pic->recon_surface; + vpic->coded_buf = vaapi_pic->output_buffer; switch (pic->type) { case FF_HW_PICTURE_TYPE_IDR: @@ -458,12 +460,12 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, break; case FF_HW_PICTURE_TYPE_P: vpic->picture_type = VAEncPictureTypePredictive; - vpic->forward_reference_picture = pic->refs[0][0]->recon_surface; + vpic->forward_reference_picture = ((VAAPIEncodePicture *)pic->refs[0][0])->recon_surface; break; case FF_HW_PICTURE_TYPE_B: vpic->picture_type = VAEncPictureTypeBidirectional; - vpic->forward_reference_picture = pic->refs[0][0]->recon_surface; - vpic->backward_reference_picture = pic->refs[1][0]->recon_surface; + vpic->forward_reference_picture = ((VAAPIEncodePicture *)pic->refs[0][0])->recon_surface; + vpic->backward_reference_picture = ((VAAPIEncodePicture *)pic->refs[1][0])->recon_surface; break; default: av_assert0(0 && "invalid picture type"); @@ -479,11 +481,12 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, } static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic, - VAAPIEncodeSlice *slice) + VAAPIEncodePicture *vaapi_pic, + VAAPIEncodeSlice *slice) { - VAAPIEncodeMPEG2Context *priv = avctx->priv_data; - VAEncSliceParameterBufferMPEG2 *vslice = slice->codec_slice_params; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; + VAAPIEncodeMPEG2Context *priv = avctx->priv_data; + VAEncSliceParameterBufferMPEG2 *vslice = slice->codec_slice_params; int qp; vslice->macroblock_address = slice->block_start; @@ -695,7 +698,7 @@ const FFCodec ff_mpeg2_vaapi_encoder = { .p.id = AV_CODEC_ID_MPEG2VIDEO, .priv_data_size = sizeof(VAAPIEncodeMPEG2Context), .init = &vaapi_encode_mpeg2_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_mpeg2_close, .p.priv_class = &vaapi_encode_mpeg2_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c index 1f2de050d2..a95f9c5447 100644 --- a/libavcodec/vaapi_encode_vp8.c +++ b/libavcodec/vaapi_encode_vp8.c @@ -52,6 +52,7 @@ typedef struct VAAPIEncodeVP8Context { static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAEncSequenceParameterBufferVP8 *vseq = ctx->codec_sequence_params; @@ -66,22 +67,23 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx) if (!(ctx->va_rc_mode & VA_RC_CQP)) { vseq->bits_per_second = ctx->va_bit_rate; - vseq->intra_period = ctx->gop_size; + vseq->intra_period = base_ctx->gop_size; } return 0; } static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeVP8Context *priv = avctx->priv_data; - VAEncPictureParameterBufferVP8 *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferVP8 *vpic = vaapi_pic->codec_picture_params; int i; - vpic->reconstructed_frame = pic->recon_surface; + vpic->reconstructed_frame = vaapi_pic->recon_surface; - vpic->coded_buf = pic->output_buffer; + vpic->coded_buf = 
vaapi_pic->output_buffer; switch (pic->type) { case FF_HW_PICTURE_TYPE_IDR: @@ -101,7 +103,7 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, vpic->ref_last_frame = vpic->ref_gf_frame = vpic->ref_arf_frame = - pic->refs[0][0]->recon_surface; + ((VAAPIEncodePicture *)pic->refs[0][0])->recon_surface; break; default: av_assert0(0 && "invalid picture type"); @@ -145,7 +147,7 @@ static int vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx, memset(&quant, 0, sizeof(quant)); - if (pic->type == FF_HW_PICTURE_TYPE_P) + if (pic->base.type == FF_HW_PICTURE_TYPE_P) q = priv->q_index_p; else q = priv->q_index_i; @@ -250,7 +252,7 @@ const FFCodec ff_vp8_vaapi_encoder = { .p.id = AV_CODEC_ID_VP8, .priv_data_size = sizeof(VAAPIEncodeVP8Context), .init = &vaapi_encode_vp8_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &ff_vaapi_encode_close, .p.priv_class = &vaapi_encode_vp8_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c index 690e4a6bd1..a737a0cdd7 100644 --- a/libavcodec/vaapi_encode_vp9.c +++ b/libavcodec/vaapi_encode_vp9.c @@ -53,6 +53,7 @@ typedef struct VAAPIEncodeVP9Context { static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAEncSequenceParameterBufferVP9 *vseq = ctx->codec_sequence_params; VAEncPictureParameterBufferVP9 *vpic = ctx->codec_picture_params; @@ -64,7 +65,7 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx) if (!(ctx->va_rc_mode & VA_RC_CQP)) { vseq->bits_per_second = ctx->va_bit_rate; - vseq->intra_period = ctx->gop_size; + vseq->intra_period = base_ctx->gop_size; } vpic->frame_width_src = avctx->width; @@ -76,17 +77,18 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx) } static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *vaapi_pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeVP9Context *priv = avctx->priv_data; + const FFHWBaseEncodePicture *pic = &vaapi_pic->base; VAAPIEncodeVP9Picture *hpic = pic->priv_data; - VAEncPictureParameterBufferVP9 *vpic = pic->codec_picture_params; + VAEncPictureParameterBufferVP9 *vpic = vaapi_pic->codec_picture_params; int i; int num_tile_columns; - vpic->reconstructed_frame = pic->recon_surface; - vpic->coded_buf = pic->output_buffer; + vpic->reconstructed_frame = vaapi_pic->recon_surface; + vpic->coded_buf = vaapi_pic->output_buffer; // Maximum width of a tile in units of superblocks is MAX_TILE_WIDTH_B64(64) // So the number of tile columns is related to the width of the picture. @@ -107,7 +109,7 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, VAAPIEncodeVP9Picture *href = pic->refs[0][0]->priv_data; av_assert0(href->slot == 0 || href->slot == 1); - if (ctx->max_b_depth > 0) { + if (base_ctx->max_b_depth > 0) { hpic->slot = !href->slot; vpic->refresh_frame_flags = 1 << hpic->slot | 0xfc; } else { @@ -127,7 +129,7 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, av_assert0(href0->slot < pic->b_depth + 1 && href1->slot < pic->b_depth + 1); - if (pic->b_depth == ctx->max_b_depth) { + if (pic->b_depth == base_ctx->max_b_depth) { // Unreferenced frame. 
vpic->refresh_frame_flags = 0x00; hpic->slot = 8; @@ -159,11 +161,11 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) { for (int j = 0; j < pic->nb_refs[i]; j++) { - VAAPIEncodePicture *ref_pic = pic->refs[i][j]; + FFHWBaseEncodePicture *ref_pic = pic->refs[i][j]; int slot; slot = ((VAAPIEncodeVP9Picture*)ref_pic->priv_data)->slot; av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE); - vpic->reference_frames[slot] = ref_pic->recon_surface; + vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface; } } @@ -307,7 +309,7 @@ const FFCodec ff_vp9_vaapi_encoder = { .p.id = AV_CODEC_ID_VP9, .priv_data_size = sizeof(VAAPIEncodeVP9Context), .init = &vaapi_encode_vp9_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &ff_vaapi_encode_close, .p.priv_class = &vaapi_encode_vp9_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | From patchwork Tue May 28 15:47:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 49327 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a59:8f0d:0:b0:460:55fa:d5ed with SMTP id i13csp55028vqu; Tue, 28 May 2024 08:52:17 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCWvnMS1PuICajJemXav+9Cor1P0v6pr5KmvKjXMWV0pRfjwyr4veRyxTPgfd1fQL9jvCuuLw3d9Jt6TDaRkI6lPAM/g+EAzLC2kTg== X-Google-Smtp-Source: AGHT+IEf2dy5KmnWir1CnAklCehjY9XP3frT4nv2vE2RZEuTwRb10ncx61hGB19ZC/fEfqmyHzHk X-Received: by 2002:a17:907:930e:b0:a5a:6f4f:e54e with SMTP id a640c23a62f3a-a626501ca62mr963640166b.65.1716911537654; Tue, 28 May 2024 08:52:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1716911537; cv=none; d=google.com; s=arc-20160816; b=Nzi6unZkXoXERBUz8YzgBUwJg1ZA3EK/h9r/i1ZB1S87U6/b/uwmRrKsHpwXS7lvNa A5kgqvfJ0ciV7spEkIZB8TrtRoWshqmYLN9XHyY3Mdkg9K8A/0EC8hwchBY5cWn5wjdZ SHtRNHmokc+o5RNN5r19AGsIdPqN1PpntIqu0SYg2hwKMWxgt9oUYKAXqgq/4BjfUzqQ WDMuyLx74dhqD+TicxP7PYKaGjI4T265/DR4AEbJ7HbllKPbPkw4p4jGJ0zuoNwUqDaU Xj9QXa/WOQO+g1yyVSGn+rmDi6yDMHsOFqQI7I1klp8V+Af23a8tYpbFtOZZAxnWqZZk xC9A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to :message-id:date:to:from:dkim-signature:delivered-to; bh=Jr16H/ljDlWyI+0xloUlJTAKwBenefBs/qEySfrqIis=; fh=CX/m9qTuMDwrotqtm4RkAOJT6yXlKL2vcfWDitFPXZs=; b=PZW2TDFu1xJPFygyQA7Z6633VSKCUR7LdWeJNNcLgUcOuSmYlxAYCfvMgtgdyW2jID LyTZgGgqcfAf7ngLXaSvCzaNgsMrvNnGjGRty58+QLAsXSj4AeInktHb90gl1/hdHdxC KKwuK7SHFxtq5fFjd817ziraTj1OGo0BRoHb+AILmsyMGi65duXfRKbKPiBU01mkId7n airPt7TClj86cMrp25zeRC0Pr+1ZtkN5ErjwGXkfqa3hpyl4dDjroJ78ucq6zPyeXUDi xm3WyC/gA/cu1Is/4SK/Kxo9u+UgZzv0IDTn/xDnw8n0Ufirg1CxVAdw1Gm5m9w88W0m mjxQ==; dara=google.com ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@intel.com header.s=Intel header.b=Huc5fBkI; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
[79.124.17.100])
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:58 +0800
Message-ID: <20240528154807.1151-7-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 07/15] avcodec/vaapi_encode: extract the init and close function to base layer
Cc: Tong Wu

From: Tong Wu

Related parameters such as device context, frame context are also moved to base layer.
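For reference, a minimal sketch of how a hardware-specific backend is expected to wrap the new helpers; the example_hw_encode_* names are hypothetical and only ff_hw_base_encode_init()/ff_hw_base_encode_close() and FFHWBaseEncodeContext come from this series:

#include "avcodec.h"
#include "hw_base_encode.h"

static int example_hw_encode_init(AVCodecContext *avctx)
{
    int err;

    /* Allocates the base context's scratch frame and tail packet, and takes
     * references on avctx->hw_frames_ctx and its device context. */
    err = ff_hw_base_encode_init(avctx);
    if (err < 0)
        return err;

    /* API-specific setup continues here, e.g. deriving the native device
     * handle from the base context's device->hwctx. */
    return 0;
}

static int example_hw_encode_close(AVCodecContext *avctx)
{
    /* Frees the frame, tail packet and encode FIFO, and unrefs the
     * device/input/reconstructed frame contexts. */
    return ff_hw_base_encode_close(avctx);
}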
Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 49 ++++++++++++++++++ libavcodec/hw_base_encode.h | 17 +++++++ libavcodec/vaapi_encode.c | 90 +++++++++++---------------------- libavcodec/vaapi_encode.h | 10 ---- libavcodec/vaapi_encode_av1.c | 2 +- libavcodec/vaapi_encode_h264.c | 2 +- libavcodec/vaapi_encode_h265.c | 2 +- libavcodec/vaapi_encode_mjpeg.c | 6 ++- 8 files changed, 102 insertions(+), 76 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 16afaa37be..c4789380b6 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -592,3 +592,52 @@ end: return 0; } + +int ff_hw_base_encode_init(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + + ctx->frame = av_frame_alloc(); + if (!ctx->frame) + return AVERROR(ENOMEM); + + if (!avctx->hw_frames_ctx) { + av_log(avctx, AV_LOG_ERROR, "A hardware frames reference is " + "required to associate the encoding device.\n"); + return AVERROR(EINVAL); + } + + ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx); + if (!ctx->input_frames_ref) + return AVERROR(ENOMEM); + + ctx->input_frames = (AVHWFramesContext *)ctx->input_frames_ref->data; + + ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref); + if (!ctx->device_ref) + return AVERROR(ENOMEM); + + ctx->device = (AVHWDeviceContext *)ctx->device_ref->data; + + ctx->tail_pkt = av_packet_alloc(); + if (!ctx->tail_pkt) + return AVERROR(ENOMEM); + + return 0; +} + +int ff_hw_base_encode_close(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + + av_fifo_freep2(&ctx->encode_fifo); + + av_frame_free(&ctx->frame); + av_packet_free(&ctx->tail_pkt); + + av_buffer_unref(&ctx->device_ref); + av_buffer_unref(&ctx->input_frames_ref); + av_buffer_unref(&ctx->recon_frames_ref); + + return 0; +} diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index 81e6c87036..a062009860 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -19,6 +19,7 @@ #ifndef AVCODEC_HW_BASE_ENCODE_H #define AVCODEC_HW_BASE_ENCODE_H +#include "libavutil/hwcontext.h" #include "libavutil/fifo.h" #define MAX_DPB_SIZE 16 @@ -119,6 +120,18 @@ typedef struct FFHWBaseEncodeContext { // Hardware-specific hooks. const struct FFHWEncodePictureOperation *op; + // The hardware device context. + AVBufferRef *device_ref; + AVHWDeviceContext *device; + + // The hardware frame context containing the input frames. + AVBufferRef *input_frames_ref; + AVHWFramesContext *input_frames; + + // The hardware frame context containing the reconstructed frames. + AVBufferRef *recon_frames_ref; + AVHWFramesContext *recon_frames; + // Current encoding window, in display (input) order. FFHWBaseEncodePicture *pic_start, *pic_end; // The next picture to use as the previous reference picture in @@ -185,6 +198,10 @@ typedef struct FFHWBaseEncodeContext { int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); +int ff_hw_base_encode_init(AVCodecContext *avctx); + +int ff_hw_base_encode_close(AVCodecContext *avctx); + #define HW_BASE_ENCODE_COMMON_OPTIONS \ { "async_depth", "Maximum processing parallelism. 
" \ "Increase this to improve single channel performance.", \ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 1055fca0b1..1b3bab2c14 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -314,7 +314,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic->input_surface); - err = av_hwframe_get_buffer(ctx->recon_frames_ref, base_pic->recon_image, 0); + err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0); if (err < 0) { err = AVERROR(ENOMEM); goto fail; @@ -996,9 +996,10 @@ static const VAEntrypoint vaapi_encode_entrypoints_low_power[] = { static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAProfile *va_profiles = NULL; - VAEntrypoint *va_entrypoints = NULL; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAProfile *va_profiles = NULL; + VAEntrypoint *va_entrypoints = NULL; VAStatus vas; const VAEntrypoint *usable_entrypoints; const VAAPIEncodeProfile *profile; @@ -1021,10 +1022,10 @@ static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx) usable_entrypoints = vaapi_encode_entrypoints_normal; } - desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); if (!desc) { av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n", - ctx->input_frames->sw_format); + base_ctx->input_frames->sw_format); return AVERROR(EINVAL); } depth = desc->comp[0].depth; @@ -2131,20 +2132,21 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj) static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; AVVAAPIHWConfig *hwconfig = NULL; AVHWFramesConstraints *constraints = NULL; enum AVPixelFormat recon_format; int err, i; - hwconfig = av_hwdevice_hwconfig_alloc(ctx->device_ref); + hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref); if (!hwconfig) { err = AVERROR(ENOMEM); goto fail; } hwconfig->config_id = ctx->va_config; - constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref, + constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref, hwconfig); if (!constraints) { err = AVERROR(ENOMEM); @@ -2157,9 +2159,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) recon_format = AV_PIX_FMT_NONE; if (constraints->valid_sw_formats) { for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) { - if (ctx->input_frames->sw_format == + if (base_ctx->input_frames->sw_format == constraints->valid_sw_formats[i]) { - recon_format = ctx->input_frames->sw_format; + recon_format = base_ctx->input_frames->sw_format; break; } } @@ -2170,7 +2172,7 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) } } else { // No idea what to use; copy input format. 
- recon_format = ctx->input_frames->sw_format; + recon_format = base_ctx->input_frames->sw_format; } av_log(avctx, AV_LOG_DEBUG, "Using %s as format of " "reconstructed frames.\n", av_get_pix_fmt_name(recon_format)); @@ -2191,19 +2193,19 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) av_freep(&hwconfig); av_hwframe_constraints_free(&constraints); - ctx->recon_frames_ref = av_hwframe_ctx_alloc(ctx->device_ref); - if (!ctx->recon_frames_ref) { + base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); + if (!base_ctx->recon_frames_ref) { err = AVERROR(ENOMEM); goto fail; } - ctx->recon_frames = (AVHWFramesContext*)ctx->recon_frames_ref->data; + base_ctx->recon_frames = (AVHWFramesContext*)base_ctx->recon_frames_ref->data; - ctx->recon_frames->format = AV_PIX_FMT_VAAPI; - ctx->recon_frames->sw_format = recon_format; - ctx->recon_frames->width = ctx->surface_width; - ctx->recon_frames->height = ctx->surface_height; + base_ctx->recon_frames->format = AV_PIX_FMT_VAAPI; + base_ctx->recon_frames->sw_format = recon_format; + base_ctx->recon_frames->width = ctx->surface_width; + base_ctx->recon_frames->height = ctx->surface_height; - err = av_hwframe_ctx_init(ctx->recon_frames_ref); + err = av_hwframe_ctx_init(base_ctx->recon_frames_ref); if (err < 0) { av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed " "frame context: %d.\n", err); @@ -2235,44 +2237,16 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) VAStatus vas; int err; + err = ff_hw_base_encode_init(avctx); + if (err < 0) + goto fail; + ctx->va_config = VA_INVALID_ID; ctx->va_context = VA_INVALID_ID; base_ctx->op = &vaapi_op; - /* If you add something that can fail above this av_frame_alloc(), - * modify ff_vaapi_encode_close() accordingly. 
*/ - base_ctx->frame = av_frame_alloc(); - if (!base_ctx->frame) { - return AVERROR(ENOMEM); - } - - if (!avctx->hw_frames_ctx) { - av_log(avctx, AV_LOG_ERROR, "A hardware frames reference is " - "required to associate the encoding device.\n"); - return AVERROR(EINVAL); - } - - ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx); - if (!ctx->input_frames_ref) { - err = AVERROR(ENOMEM); - goto fail; - } - ctx->input_frames = (AVHWFramesContext*)ctx->input_frames_ref->data; - - ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref); - if (!ctx->device_ref) { - err = AVERROR(ENOMEM); - goto fail; - } - ctx->device = (AVHWDeviceContext*)ctx->device_ref->data; - ctx->hwctx = ctx->device->hwctx; - - base_ctx->tail_pkt = av_packet_alloc(); - if (!base_ctx->tail_pkt) { - err = AVERROR(ENOMEM); - goto fail; - } + ctx->hwctx = base_ctx->device->hwctx; err = vaapi_encode_profile_entrypoint(avctx); if (err < 0) @@ -2339,7 +2313,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) if (err < 0) goto fail; - recon_hwctx = ctx->recon_frames->hwctx; + recon_hwctx = base_ctx->recon_frames->hwctx; vas = vaCreateContext(ctx->hwctx->display, ctx->va_config, ctx->surface_width, ctx->surface_height, VA_PROGRESSIVE, @@ -2467,16 +2441,10 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) ctx->va_config = VA_INVALID_ID; } - av_frame_free(&base_ctx->frame); - av_packet_free(&base_ctx->tail_pkt); - av_freep(&ctx->codec_sequence_params); av_freep(&ctx->codec_picture_params); - av_fifo_freep2(&base_ctx->encode_fifo); - av_buffer_unref(&ctx->recon_frames_ref); - av_buffer_unref(&ctx->input_frames_ref); - av_buffer_unref(&ctx->device_ref); + ff_hw_base_encode_close(avctx); return 0; } diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 5e442d9c23..d5d7d27a07 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -219,18 +219,8 @@ typedef struct VAAPIEncodeContext { VAConfigID va_config; VAContextID va_context; - AVBufferRef *device_ref; - AVHWDeviceContext *device; AVVAAPIDeviceContext *hwctx; - // The hardware frame context containing the input frames. - AVBufferRef *input_frames_ref; - AVHWFramesContext *input_frames; - - // The hardware frame context containing the reconstructed frames. - AVBufferRef *recon_frames_ref; - AVHWFramesContext *recon_frames; - // Pool of (reusable) bitstream output buffers. 
struct FFRefStructPool *output_buffer_pool; diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index 1c9235acd9..a590f4cd8b 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -373,7 +373,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) memset(sh_obu, 0, sizeof(*sh_obu)); sh_obu->header.obu_type = AV1_OBU_SEQUENCE_HEADER; - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); sh->seq_profile = avctx->profile; diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index 2f91655d18..eacfb12706 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -308,7 +308,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) memset(sps, 0, sizeof(*sps)); memset(pps, 0, sizeof(*pps)); - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); if (desc->nb_components == 1 || desc->log2_chroma_w != 1 || desc->log2_chroma_h != 1) { av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format " diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index 6befe3426e..509251c97a 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -278,7 +278,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) memset(pps, 0, sizeof(*pps)); - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); if (desc->nb_components == 1) { chroma_format = 0; diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index 4c31881e6a..48dc677bec 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -222,6 +222,7 @@ static int vaapi_encode_mjpeg_write_extra_buffer(AVCodecContext *avctx, static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *vaapi_pic) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeMJPEGContext *priv = avctx->priv_data; const FFHWBaseEncodePicture *pic = &vaapi_pic->base; JPEGRawFrameHeader *fh = &priv->frame_header; @@ -235,7 +236,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, av_assert0(pic->type == FF_HW_PICTURE_TYPE_IDR); - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); if (desc->flags & AV_PIX_FMT_FLAG_RGB) components = components_rgb; @@ -437,10 +438,11 @@ static int vaapi_encode_mjpeg_init_slice_params(AVCodecContext *avctx, static av_cold int vaapi_encode_mjpeg_get_encoder_caps(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; const AVPixFmtDescriptor *desc; - desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); ctx->surface_width = FFALIGN(avctx->width, 8 << desc->log2_chroma_w); From patchwork Tue May 28 15:47:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 49321 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a59:8f0d:0:b0:460:55fa:d5ed with SMTP id i13csp54265vqu; Tue, 28 May 2024 08:51:00 -0700 
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:47:59 +0800
Message-ID: <20240528154807.1151-8-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 08/15] avcodec/vaapi_encode: extract gop configuration and two options to base layer
Cc: Tong Wu

From: Tong Wu

idr_interval and desired_b_depth are moved to HW_BASE_ENCODE_COMMON_OPTIONS.
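A rough sketch of the intended call pattern for the relocated GOP setup; the reference counts and flag combination below are assumptions standing in for whatever the API-specific layer queries from its driver:

#include "avcodec.h"
#include "hw_base_encode.h"

static int example_init_gop(AVCodecContext *avctx)
{
    /* Assumed driver capabilities; a real backend queries these. */
    uint32_t ref_l0 = 2, ref_l1 = 1;
    int prediction_pre_only = 0;

    /* The FF_HW_FLAG_* set normally comes from the codec-specific table. */
    return ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1,
                                         FF_HW_FLAG_B_PICTURES |
                                         FF_HW_FLAG_NON_IDR_KEY_PICTURES,
                                         prediction_pre_only);
}

On success the helper derives gop_size, p_per_i, b_per_p, max_b_depth, gop_per_idr and closed_gop in FFHWBaseEncodeContext from avctx->gop_size, avctx->max_b_frames and the relocated idr_interval/desired_b_depth options.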
Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 54 +++++++++++++++++++++++++++++++++++++ libavcodec/hw_base_encode.h | 19 +++++++++++++ libavcodec/vaapi_encode.c | 52 +++-------------------------------- libavcodec/vaapi_encode.h | 16 ----------- 4 files changed, 77 insertions(+), 64 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index c4789380b6..56dc015e2e 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -593,6 +593,60 @@ end: return 0; } +int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, + int flags, int prediction_pre_only) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + + if (flags & FF_HW_FLAG_INTRA_ONLY || avctx->gop_size <= 1) { + av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); + ctx->gop_size = 1; + } else if (ref_l0 < 1) { + av_log(avctx, AV_LOG_ERROR, "Driver does not support any " + "reference frames.\n"); + return AVERROR(EINVAL); + } else if (!(flags & FF_HW_FLAG_B_PICTURES) || ref_l1 < 1 || + avctx->max_b_frames < 1 || prediction_pre_only) { + if (ctx->p_to_gpb) + av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " + "(supported references: %d / %d).\n", + ref_l0, ref_l1); + else + av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames " + "(supported references: %d / %d).\n", ref_l0, ref_l1); + ctx->gop_size = avctx->gop_size; + ctx->p_per_i = INT_MAX; + ctx->b_per_p = 0; + } else { + if (ctx->p_to_gpb) + av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " + "(supported references: %d / %d).\n", + ref_l0, ref_l1); + else + av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames " + "(supported references: %d / %d).\n", ref_l0, ref_l1); + ctx->gop_size = avctx->gop_size; + ctx->p_per_i = INT_MAX; + ctx->b_per_p = avctx->max_b_frames; + if (flags & FF_HW_FLAG_B_PICTURE_REFERENCES) { + ctx->max_b_depth = FFMIN(ctx->desired_b_depth, + av_log2(ctx->b_per_p) + 1); + } else { + ctx->max_b_depth = 1; + } + } + + if (flags & FF_HW_FLAG_NON_IDR_KEY_PICTURES) { + ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); + ctx->gop_per_idr = ctx->idr_interval + 1; + } else { + ctx->closed_gop = 1; + ctx->gop_per_idr = 1; + } + + return 0; +} + int ff_hw_base_encode_init(AVCodecContext *avctx) { FFHWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index a062009860..c7e535d35d 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -120,6 +120,14 @@ typedef struct FFHWBaseEncodeContext { // Hardware-specific hooks. const struct FFHWEncodePictureOperation *op; + // Global options. + + // Number of I frames between IDR frames. + int idr_interval; + + // Desired B frame reference depth. + int desired_b_depth; + // The hardware device context. 
AVBufferRef *device_ref; AVHWDeviceContext *device; @@ -198,11 +206,22 @@ typedef struct FFHWBaseEncodeContext { int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); +int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, + int flags, int prediction_pre_only); + int ff_hw_base_encode_init(AVCodecContext *avctx); int ff_hw_base_encode_close(AVCodecContext *avctx); #define HW_BASE_ENCODE_COMMON_OPTIONS \ + { "idr_interval", \ + "Distance (in I-frames) between key frames", \ + OFFSET(common.base.idr_interval), AV_OPT_TYPE_INT, \ + { .i64 = 0 }, 0, INT_MAX, FLAGS }, \ + { "b_depth", \ + "Maximum B-frame reference depth", \ + OFFSET(common.base.desired_b_depth), AV_OPT_TYPE_INT, \ + { .i64 = 1 }, 1, INT_MAX, FLAGS }, \ { "async_depth", "Maximum processing parallelism. " \ "Increase this to improve single channel performance.", \ OFFSET(common.base.async_depth), AV_OPT_TYPE_INT, \ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 1b3bab2c14..c29493e2c8 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -1638,7 +1638,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) VAStatus vas; VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames }; uint32_t ref_l0, ref_l1; - int prediction_pre_only; + int prediction_pre_only, err; vas = vaGetConfigAttributes(ctx->hwctx->display, ctx->va_profile, @@ -1702,53 +1702,9 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) } #endif - if (ctx->codec->flags & FF_HW_FLAG_INTRA_ONLY || - avctx->gop_size <= 1) { - av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); - base_ctx->gop_size = 1; - } else if (ref_l0 < 1) { - av_log(avctx, AV_LOG_ERROR, "Driver does not support any " - "reference frames.\n"); - return AVERROR(EINVAL); - } else if (!(ctx->codec->flags & FF_HW_FLAG_B_PICTURES) || - ref_l1 < 1 || avctx->max_b_frames < 1 || - prediction_pre_only) { - if (base_ctx->p_to_gpb) - av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " - "(supported references: %d / %d).\n", - ref_l0, ref_l1); - else - av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames " - "(supported references: %d / %d).\n", ref_l0, ref_l1); - base_ctx->gop_size = avctx->gop_size; - base_ctx->p_per_i = INT_MAX; - base_ctx->b_per_p = 0; - } else { - if (base_ctx->p_to_gpb) - av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " - "(supported references: %d / %d).\n", - ref_l0, ref_l1); - else - av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames " - "(supported references: %d / %d).\n", ref_l0, ref_l1); - base_ctx->gop_size = avctx->gop_size; - base_ctx->p_per_i = INT_MAX; - base_ctx->b_per_p = avctx->max_b_frames; - if (ctx->codec->flags & FF_HW_FLAG_B_PICTURE_REFERENCES) { - base_ctx->max_b_depth = FFMIN(ctx->desired_b_depth, - av_log2(base_ctx->b_per_p) + 1); - } else { - base_ctx->max_b_depth = 1; - } - } - - if (ctx->codec->flags & FF_HW_FLAG_NON_IDR_KEY_PICTURES) { - base_ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); - base_ctx->gop_per_idr = ctx->idr_interval + 1; - } else { - base_ctx->closed_gop = 1; - base_ctx->gop_per_idr = 1; - } + err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, prediction_pre_only); + if (err < 0) + return err; return 0; } diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index d5d7d27a07..088141ca76 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -151,17 +151,9 @@ typedef struct VAAPIEncodeContext { 
// Codec-specific hooks. const struct VAAPIEncodeType *codec; - // Global options. - // Use low power encoding mode. int low_power; - // Number of I frames between IDR frames. - int idr_interval; - - // Desired B frame reference depth. - int desired_b_depth; - // Max Frame Size int max_frame_size; @@ -371,14 +363,6 @@ int ff_vaapi_encode_close(AVCodecContext *avctx); "may not support all encoding features)", \ OFFSET(common.low_power), AV_OPT_TYPE_BOOL, \ { .i64 = 0 }, 0, 1, FLAGS }, \ - { "idr_interval", \ - "Distance (in I-frames) between IDR frames", \ - OFFSET(common.idr_interval), AV_OPT_TYPE_INT, \ - { .i64 = 0 }, 0, INT_MAX, FLAGS }, \ - { "b_depth", \ - "Maximum B-frame reference depth", \ - OFFSET(common.desired_b_depth), AV_OPT_TYPE_INT, \ - { .i64 = 1 }, 1, INT_MAX, FLAGS }, \ { "max_frame_size", \ "Maximum frame size (in bytes)",\ OFFSET(common.max_frame_size), AV_OPT_TYPE_INT, \ From patchwork Tue May 28 15:48:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 49323 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a59:8f0d:0:b0:460:55fa:d5ed with SMTP id i13csp54512vqu; Tue, 28 May 2024 08:51:25 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCWe3MxFQy19w+ef3xTndXZEfNNam+p/U+FKZw/2//oj7huJwRqdDe/CQARePyqtsSlBfKmdPjURHptAaviw1//hIG+FMp5B7SME+g== X-Google-Smtp-Source: AGHT+IEt9rjUlbMY+D9zyepveS2uS4B/F5p1dLNs8v8YW9WBGMHiDNrWCpnhjoEb3TqB8hwPr0Hi X-Received: by 2002:a17:907:76d6:b0:a63:3c57:5806 with SMTP id a640c23a62f3a-a633c575ccbmr169051066b.32.1716911484716; Tue, 28 May 2024 08:51:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1716911484; cv=none; d=google.com; s=arc-20160816; b=yzczrKT/4r9iqnwsYNesG52yNagmLHAEtsw3mTqluuqc45QFkn2w8Rc88M1Zl6Ziqv C3OshxcsjDYnK8TJBHMhZS3OL520ygaeWfDpv7VwdJLq1jKCamuSRdnYlJi/DgyBXGE3 IKBU7KFocsYJuo3GDaeCgYm1PiMdMZZ8zfeoLoq5cNNL0HHv6/FXAUw7ibscKVVZQGgY TkjF/6TTz6QF6AZgtsrwuSv9sK1IaMvFQWuFklswJTkFyKmJFFrpQr6pFGILEUjuH2r7 +VDHDIhjRttuKNDfNxtxmll1vEqDSpla9asOKDkLCRGJKyY/RAOw4GQfBqXtEcbSPJZh Jb8Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to :message-id:date:to:from:dkim-signature:delivered-to; bh=uXb3O7w5acGKwowr+0GutHEvlHuTtXbbLVyyJPUyzJ8=; fh=CX/m9qTuMDwrotqtm4RkAOJT6yXlKL2vcfWDitFPXZs=; b=aoKjJ6GI3hgSqyiCdISQHMntnaiaZnV1GrndHxtecLwEe1uUlx2yg92aODg695TsML jjIQBdnyc7GZ3RW2ORrWgBRMQZn+Xpv/sDr81HojjTtJVBx9kcQuhHMmnDiU/VxfDZyX MPy7fZ4fqJF3UuvlrxhW+FyMYn4eKBO77yGgre51o7iELLdmo2u1jovDlGBY4pSF9ETn iIZwI3m8THfZAZ2SP/ulyXeEZ9ALikzbEjkaDXbfU9cZOTS6KlISd9/wNYnvO4QT6//X i6AV3jb/p/d7+iV2Vw4nLTxeHlgj9uBoYUG5wAoyCwv9FEp7nvNjFdbAloUYmwDbczut UkGg==; dara=google.com ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@intel.com header.s=Intel header.b=m0QHRO4m; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
[79.124.17.100])
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:00 +0800
Message-ID: <20240528154807.1151-9-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 09/15] avcodec/vaapi_encode: extract set_output_property to base layer
Cc: Tong Wu

From: Tong Wu

Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 40 +++++++++++++++++++++++++++++++++ libavcodec/hw_base_encode.h | 3 +++ libavcodec/vaapi_encode.c | 44 ++----------------------------------- 3 files changed, 45 insertions(+), 42 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 56dc015e2e..25fcfdbb5e 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -488,6
+488,46 @@ fail: return err; } +int ff_hw_base_encode_set_output_property(AVCodecContext *avctx, + FFHWBaseEncodePicture *pic, + AVPacket *pkt, int flag_no_delay) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + + if (pic->type == FF_HW_PICTURE_TYPE_IDR) + pkt->flags |= AV_PKT_FLAG_KEY; + + pkt->pts = pic->pts; + pkt->duration = pic->duration; + + // for no-delay encoders this is handled in generic codec + if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY && + avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { + pkt->opaque = pic->opaque; + pkt->opaque_ref = pic->opaque_ref; + pic->opaque_ref = NULL; + } + + if (flag_no_delay) { + pkt->dts = pkt->pts; + return 0; + } + + if (ctx->output_delay == 0) { + pkt->dts = pkt->pts; + } else if (pic->encode_order < ctx->decode_delay) { + if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff) + pkt->dts = INT64_MIN; + else + pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff; + } else { + pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) % + (3 * ctx->output_delay + ctx->async_depth)]; + } + + return 0; +} + int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt) { FFHWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index c7e535d35d..f097d826a7 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -204,6 +204,9 @@ typedef struct FFHWBaseEncodeContext { AVPacket *tail_pkt; } FFHWBaseEncodeContext; +int ff_hw_base_encode_set_output_property(AVCodecContext *avctx, FFHWBaseEncodePicture *pic, + AVPacket *pkt, int flag_no_delay); + int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index c29493e2c8..4193f3838f 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -660,47 +660,6 @@ fail_at_end: return err; } -static int vaapi_encode_set_output_property(AVCodecContext *avctx, - FFHWBaseEncodePicture *pic, - AVPacket *pkt) -{ - FFHWBaseEncodeContext *base_ctx = avctx->priv_data; - VAAPIEncodeContext *ctx = avctx->priv_data; - - if (pic->type == FF_HW_PICTURE_TYPE_IDR) - pkt->flags |= AV_PKT_FLAG_KEY; - - pkt->pts = pic->pts; - pkt->duration = pic->duration; - - // for no-delay encoders this is handled in generic codec - if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY && - avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { - pkt->opaque = pic->opaque; - pkt->opaque_ref = pic->opaque_ref; - pic->opaque_ref = NULL; - } - - if (ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY) { - pkt->dts = pkt->pts; - return 0; - } - - if (base_ctx->output_delay == 0) { - pkt->dts = pkt->pts; - } else if (pic->encode_order < base_ctx->decode_delay) { - if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff) - pkt->dts = INT64_MIN; - else - pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff; - } else { - pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) % - (3 * base_ctx->output_delay + base_ctx->async_depth)]; - } - - return 0; -} - static int vaapi_encode_get_coded_buffer_size(AVCodecContext *avctx, VABufferID buf_id) { VAAPIEncodeContext *ctx = avctx->priv_data; @@ -852,7 +811,8 @@ static int vaapi_encode_output(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n", base_pic->display_order, base_pic->encode_order); - 
vaapi_encode_set_output_property(avctx, (FFHWBaseEncodePicture*)pic, pkt_ptr); + ff_hw_base_encode_set_output_property(avctx, (FFHWBaseEncodePicture*)base_pic, pkt_ptr, + ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY); end: ff_refstruct_unref(&pic->output_buffer_ref); From patchwork Tue May 28 15:48:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 49324 Delivered-To: ffmpegpatchwork2@gmail.com Received: by 2002:a59:8f0d:0:b0:460:55fa:d5ed with SMTP id i13csp54615vqu; Tue, 28 May 2024 08:51:35 -0700 (PDT) X-Forwarded-Encrypted: i=2; AJvYcCXYK6gmGz418YaBoRpxVNMUkfW19O2NGB++wzzKEhfMgrjBLkEy1RJjRpRbJPvHdcLd20XHUoyLY6hsJBwZDrNgc06U2Ef8bywHjA== X-Google-Smtp-Source: AGHT+IGtPaYh9kMDaXAd9FfV9sD3th5uEa0NzE9qcJv4rtEPW5V0z3zCl76IBYSV4PYWCAvI/XdJ X-Received: by 2002:a17:906:280a:b0:a59:dd90:baeb with SMTP id a640c23a62f3a-a6264f0ef79mr958070366b.48.1716911494915; Tue, 28 May 2024 08:51:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1716911494; cv=none; d=google.com; s=arc-20160816; b=SttE7rIrPptqf/2Lw4+byujybLIYpLG7r4ZTO2ggnIU++lhQWiCRW4hK5ONfUah00c RI8w2sivNi5aItThscHpW5WHZVLFUmJFRBoxWSLET4KyFX+52gtt+1WHIRU/d2zo7m0G pHyB23o+yyJ2NBU+bBnH8BV3kbjjpXy7vgUfiGPHQPD5PQD94cZ1lI65PSsSRFceEHn7 nRraX0u9vD0hux/Vu3qq3UWQcKwZEtCsx53T6V4PWW4wZUhhI3acHbdRHrZ0NT0Ay8EG BzM2rKNuyuJ0AvKQFj24DAwbJsAkX8mo/JANFKoWxlrpwI2t4gTftLnWlriEBqx+neyl KkxA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:content-transfer-encoding:cc:reply-to :list-subscribe:list-help:list-post:list-archive:list-unsubscribe :list-id:precedence:subject:mime-version:references:in-reply-to :message-id:date:to:from:dkim-signature:delivered-to; bh=4xMFEexlgaQrp0HZkcfzPskxR5mI4SrV9waT2flA/xw=; fh=CX/m9qTuMDwrotqtm4RkAOJT6yXlKL2vcfWDitFPXZs=; b=WD5B18iyQ9I0ZxL+jee8SI1B4cyd1tTJ7u/Fy9rt9uIlmym/ObCKbFm6q80JVTdesP aqv4iZGX9knhFwRBAVJE1vys9Hg+HCP5ho5z7cE84uKW0kpUqJfZd45AXu1XrFE55zJv Abx60sLC19cEDYy4ZmfliuOYqoR4PVvh5PjXAZPugkZvR5NogaPBPELpeyQFIpsIYdpD kf7TKI2pW1n1q9FJ4vUqFSdLfqxVOQeQkCK8+SL4i9Va6vTKKz30qGe22772WEmbjqYl M7a6G/ZBZo7QBrlInE1ExOSXWu2SytWMxtdpajJr6GhAIj2CiXSZuZazBX4hWLDgJf3R RGBA==; dara=google.com ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@intel.com header.s=Intel header.b=Ro9oMBDE; spf=pass (google.com: domain of ffmpeg-devel-bounces@ffmpeg.org designates 79.124.17.100 as permitted sender) smtp.mailfrom=ffmpeg-devel-bounces@ffmpeg.org Return-Path: Received: from ffbox0-bg.mplayerhq.hu (ffbox0-bg.ffmpeg.org. 
[79.124.17.100])
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:01 +0800
Message-ID: <20240528154807.1151-10-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 10/15] avcodec/vaapi_encode: extract a get_recon_format function to base layer
Cc: Tong Wu

From: Tong Wu

Surface size and block size parameters are also moved to base layer.
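A sketch of the expected caller, assuming surface_width/surface_height have already been filled in on the base context; the function name and the api_hwconfig parameter are placeholders for the API-specific config handle (VAAPI passes the AVVAAPIHWConfig it allocated):

#include "libavutil/pixdesc.h"
#include "avcodec.h"
#include "hw_base_encode.h"

static int example_pick_recon_format(AVCodecContext *avctx, const void *api_hwconfig)
{
    enum AVPixelFormat recon_format;
    int err;

    /* Matches the input sw_format against the hwframe constraints for
     * api_hwconfig and validates surface_width/surface_height. */
    err = ff_hw_base_get_recon_format(avctx, api_hwconfig, &recon_format);
    if (err < 0)
        return err;

    av_log(avctx, AV_LOG_DEBUG, "Reconstructed frames will use %s.\n",
           av_get_pix_fmt_name(recon_format));

    /* recon_format is then set as the sw_format of the reconstructed-frames
     * hwframe context. */
    return 0;
}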
Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 58 +++++++++++++++++++++++ libavcodec/hw_base_encode.h | 12 +++++ libavcodec/vaapi_encode.c | 81 ++++++++------------------------- libavcodec/vaapi_encode.h | 10 ---- libavcodec/vaapi_encode_av1.c | 10 ++-- libavcodec/vaapi_encode_h264.c | 11 +++-- libavcodec/vaapi_encode_h265.c | 25 +++++----- libavcodec/vaapi_encode_mjpeg.c | 5 +- libavcodec/vaapi_encode_vp9.c | 6 +-- 9 files changed, 118 insertions(+), 100 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 25fcfdbb5e..31046bd73e 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -687,6 +687,64 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32 return 0; } +int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt) +{ + FFHWBaseEncodeContext *ctx = avctx->priv_data; + AVHWFramesConstraints *constraints = NULL; + enum AVPixelFormat recon_format; + int err, i; + + constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref, + hwconfig); + if (!constraints) { + err = AVERROR(ENOMEM); + goto fail; + } + + // Probably we can use the input surface format as the surface format + // of the reconstructed frames. If not, we just pick the first (only?) + // format in the valid list and hope that it all works. + recon_format = AV_PIX_FMT_NONE; + if (constraints->valid_sw_formats) { + for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) { + if (ctx->input_frames->sw_format == + constraints->valid_sw_formats[i]) { + recon_format = ctx->input_frames->sw_format; + break; + } + } + if (recon_format == AV_PIX_FMT_NONE) { + // No match. Just use the first in the supported list and + // hope for the best. + recon_format = constraints->valid_sw_formats[0]; + } + } else { + // No idea what to use; copy input format. + recon_format = ctx->input_frames->sw_format; + } + av_log(avctx, AV_LOG_DEBUG, "Using %s as format of " + "reconstructed frames.\n", av_get_pix_fmt_name(recon_format)); + + if (ctx->surface_width < constraints->min_width || + ctx->surface_height < constraints->min_height || + ctx->surface_width > constraints->max_width || + ctx->surface_height > constraints->max_height) { + av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at " + "size %dx%d (constraints: width %d-%d height %d-%d).\n", + ctx->surface_width, ctx->surface_height, + constraints->min_width, constraints->max_width, + constraints->min_height, constraints->max_height); + err = AVERROR(EINVAL); + goto fail; + } + + *fmt = recon_format; + err = 0; +fail: + av_hwframe_constraints_free(&constraints); + return err; +} + int ff_hw_base_encode_init(AVCodecContext *avctx) { FFHWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index f097d826a7..f5414e2c28 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -128,6 +128,16 @@ typedef struct FFHWBaseEncodeContext { // Desired B frame reference depth. int desired_b_depth; + // The required size of surfaces. This is probably the input + // size (AVCodecContext.width|height) aligned up to whatever + // block size is required by the codec. + int surface_width; + int surface_height; + + // The block size for slice calculations. + int slice_block_width; + int slice_block_height; + // The hardware device context. 
AVBufferRef *device_ref; AVHWDeviceContext *device; @@ -212,6 +222,8 @@ int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, int flags, int prediction_pre_only); +int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt); + int ff_hw_base_encode_init(AVCodecContext *avctx); int ff_hw_base_encode_close(AVCodecContext *avctx); diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 4193f3838f..d96f146b28 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -1777,6 +1777,7 @@ static av_cold int vaapi_encode_init_tile_slice_structure(AVCodecContext *avctx, static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx) { + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAConfigAttrib attr[3] = { { VAConfigAttribEncMaxSlices }, { VAConfigAttribEncSliceStructure }, @@ -1796,12 +1797,12 @@ static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx) return 0; } - av_assert0(ctx->slice_block_height > 0 && ctx->slice_block_width > 0); + av_assert0(base_ctx->slice_block_height > 0 && base_ctx->slice_block_width > 0); - ctx->slice_block_rows = (avctx->height + ctx->slice_block_height - 1) / - ctx->slice_block_height; - ctx->slice_block_cols = (avctx->width + ctx->slice_block_width - 1) / - ctx->slice_block_width; + ctx->slice_block_rows = (avctx->height + base_ctx->slice_block_height - 1) / + base_ctx->slice_block_height; + ctx->slice_block_cols = (avctx->width + base_ctx->slice_block_width - 1) / + base_ctx->slice_block_width; if (avctx->slices <= 1 && !ctx->tile_rows && !ctx->tile_cols) { ctx->nb_slices = 1; @@ -2023,7 +2024,8 @@ static void vaapi_encode_free_output_buffer(FFRefStructOpaque opaque, static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj) { AVCodecContext *avctx = opaque.nc; - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; VABufferID *buffer_id = obj; VAStatus vas; @@ -2033,7 +2035,7 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj) // bound on that. vas = vaCreateBuffer(ctx->hwctx->display, ctx->va_context, VAEncCodedBufferType, - 3 * ctx->surface_width * ctx->surface_height + + 3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16), 1, 0, buffer_id); if (vas != VA_STATUS_SUCCESS) { av_log(avctx, AV_LOG_ERROR, "Failed to create bitstream " @@ -2051,9 +2053,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) FFHWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; AVVAAPIHWConfig *hwconfig = NULL; - AVHWFramesConstraints *constraints = NULL; enum AVPixelFormat recon_format; - int err, i; + int err; hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref); if (!hwconfig) { @@ -2062,52 +2063,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) } hwconfig->config_id = ctx->va_config; - constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref, - hwconfig); - if (!constraints) { - err = AVERROR(ENOMEM); - goto fail; - } - - // Probably we can use the input surface format as the surface format - // of the reconstructed frames. If not, we just pick the first (only?) - // format in the valid list and hope that it all works. 
- recon_format = AV_PIX_FMT_NONE; - if (constraints->valid_sw_formats) { - for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) { - if (base_ctx->input_frames->sw_format == - constraints->valid_sw_formats[i]) { - recon_format = base_ctx->input_frames->sw_format; - break; - } - } - if (recon_format == AV_PIX_FMT_NONE) { - // No match. Just use the first in the supported list and - // hope for the best. - recon_format = constraints->valid_sw_formats[0]; - } - } else { - // No idea what to use; copy input format. - recon_format = base_ctx->input_frames->sw_format; - } - av_log(avctx, AV_LOG_DEBUG, "Using %s as format of " - "reconstructed frames.\n", av_get_pix_fmt_name(recon_format)); - - if (ctx->surface_width < constraints->min_width || - ctx->surface_height < constraints->min_height || - ctx->surface_width > constraints->max_width || - ctx->surface_height > constraints->max_height) { - av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at " - "size %dx%d (constraints: width %d-%d height %d-%d).\n", - ctx->surface_width, ctx->surface_height, - constraints->min_width, constraints->max_width, - constraints->min_height, constraints->max_height); - err = AVERROR(EINVAL); + err = ff_hw_base_get_recon_format(avctx, (const void*)hwconfig, &recon_format); + if (err < 0) goto fail; - } - - av_freep(&hwconfig); - av_hwframe_constraints_free(&constraints); base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); if (!base_ctx->recon_frames_ref) { @@ -2118,8 +2076,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) base_ctx->recon_frames->format = AV_PIX_FMT_VAAPI; base_ctx->recon_frames->sw_format = recon_format; - base_ctx->recon_frames->width = ctx->surface_width; - base_ctx->recon_frames->height = ctx->surface_height; + base_ctx->recon_frames->width = base_ctx->surface_width; + base_ctx->recon_frames->height = base_ctx->surface_height; err = av_hwframe_ctx_init(base_ctx->recon_frames_ref); if (err < 0) { @@ -2131,7 +2089,6 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) err = 0; fail: av_freep(&hwconfig); - av_hwframe_constraints_free(&constraints); return err; } @@ -2174,11 +2131,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) goto fail; } else { // Assume 16x16 blocks. - ctx->surface_width = FFALIGN(avctx->width, 16); - ctx->surface_height = FFALIGN(avctx->height, 16); + base_ctx->surface_width = FFALIGN(avctx->width, 16); + base_ctx->surface_height = FFALIGN(avctx->height, 16); if (ctx->codec->flags & FF_HW_FLAG_SLICE_CONTROL) { - ctx->slice_block_width = 16; - ctx->slice_block_height = 16; + base_ctx->slice_block_width = 16; + base_ctx->slice_block_height = 16; } } @@ -2231,7 +2188,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) recon_hwctx = base_ctx->recon_frames->hwctx; vas = vaCreateContext(ctx->hwctx->display, ctx->va_config, - ctx->surface_width, ctx->surface_height, + base_ctx->surface_width, base_ctx->surface_height, VA_PROGRESSIVE, recon_hwctx->surface_ids, recon_hwctx->nb_surfaces, diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 088141ca76..8650255680 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -171,16 +171,6 @@ typedef struct VAAPIEncodeContext { // Desired packed headers. unsigned int desired_packed_headers; - // The required size of surfaces. This is probably the input - // size (AVCodecContext.width|height) aligned up to whatever - // block size is required by the codec. 
- int surface_width; - int surface_height; - - // The block size for slice calculations. - int slice_block_width; - int slice_block_height; - // Everything above this point must be set before calling // ff_vaapi_encode_init(). diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index a590f4cd8b..ea4c391e54 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -112,12 +112,12 @@ static void vaapi_encode_av1_trace_write_log(void *ctx, static av_cold int vaapi_encode_av1_get_encoder_caps(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodeAV1Context *priv = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeAV1Context *priv = avctx->priv_data; // Surfaces must be aligned to superblock boundaries. - ctx->surface_width = FFALIGN(avctx->width, priv->use_128x128_superblock ? 128 : 64); - ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64); + base_ctx->surface_width = FFALIGN(avctx->width, priv->use_128x128_superblock ? 128 : 64); + base_ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64); return 0; } @@ -425,7 +425,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) framerate = 0; level = ff_av1_guess_level(avctx->bit_rate, priv->tier, - ctx->surface_width, ctx->surface_height, + base_ctx->surface_width, base_ctx->surface_height, priv->tile_rows * priv->tile_cols, priv->tile_cols, framerate); if (level) { diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index eacfb12706..d20d8298db 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -1205,8 +1205,9 @@ static const VAAPIEncodeType vaapi_encode_type_h264 = { static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodeH264Context *priv = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAAPIEncodeH264Context *priv = avctx->priv_data; ctx->codec = &vaapi_encode_type_h264; @@ -1254,10 +1255,10 @@ static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx) VA_ENC_PACKED_HEADER_SLICE | // Slice headers. VA_ENC_PACKED_HEADER_MISC; // SEI. 
- ctx->surface_width = FFALIGN(avctx->width, 16); - ctx->surface_height = FFALIGN(avctx->height, 16); + base_ctx->surface_width = FFALIGN(avctx->width, 16); + base_ctx->surface_height = FFALIGN(avctx->height, 16); - ctx->slice_block_height = ctx->slice_block_width = 16; + base_ctx->slice_block_height = base_ctx->slice_block_width = 16; if (priv->qp > 0) ctx->explicit_qp = priv->qp; diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index 509251c97a..e1748e4419 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -352,7 +352,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) const H265LevelDescriptor *level; level = ff_h265_guess_level(ptl, avctx->bit_rate, - ctx->surface_width, ctx->surface_height, + base_ctx->surface_width, base_ctx->surface_height, ctx->nb_slices, ctx->tile_rows, ctx->tile_cols, (base_ctx->b_per_p > 0) + 1); if (level) { @@ -410,18 +410,18 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) sps->chroma_format_idc = chroma_format; sps->separate_colour_plane_flag = 0; - sps->pic_width_in_luma_samples = ctx->surface_width; - sps->pic_height_in_luma_samples = ctx->surface_height; + sps->pic_width_in_luma_samples = base_ctx->surface_width; + sps->pic_height_in_luma_samples = base_ctx->surface_height; - if (avctx->width != ctx->surface_width || - avctx->height != ctx->surface_height) { + if (avctx->width != base_ctx->surface_width || + avctx->height != base_ctx->surface_height) { sps->conformance_window_flag = 1; sps->conf_win_left_offset = 0; sps->conf_win_right_offset = - (ctx->surface_width - avctx->width) >> desc->log2_chroma_w; + (base_ctx->surface_width - avctx->width) >> desc->log2_chroma_w; sps->conf_win_top_offset = 0; sps->conf_win_bottom_offset = - (ctx->surface_height - avctx->height) >> desc->log2_chroma_h; + (base_ctx->surface_height - avctx->height) >> desc->log2_chroma_h; } else { sps->conformance_window_flag = 0; } @@ -1197,11 +1197,12 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodeH265Context *priv = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeH265Context *priv = avctx->priv_data; #if VA_CHECK_VERSION(1, 13, 0) { + VAAPIEncodeContext *ctx = avctx->priv_data; VAConfigAttribValEncHEVCBlockSizes block_size; VAConfigAttrib attr; VAStatus vas; @@ -1249,10 +1250,10 @@ static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx) "min CB size %dx%d.\n", priv->ctu_size, priv->ctu_size, priv->min_cb_size, priv->min_cb_size); - ctx->surface_width = FFALIGN(avctx->width, priv->min_cb_size); - ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size); + base_ctx->surface_width = FFALIGN(avctx->width, priv->min_cb_size); + base_ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size); - ctx->slice_block_width = ctx->slice_block_height = priv->ctu_size; + base_ctx->slice_block_width = base_ctx->slice_block_height = priv->ctu_size; return 0; } diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index 48dc677bec..5d51a0b2b9 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -439,14 +439,13 @@ static int vaapi_encode_mjpeg_init_slice_params(AVCodecContext *avctx, static av_cold int vaapi_encode_mjpeg_get_encoder_caps(AVCodecContext *avctx) { FFHWBaseEncodeContext *base_ctx = 
avctx->priv_data; - VAAPIEncodeContext *ctx = avctx->priv_data; const AVPixFmtDescriptor *desc; desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); - ctx->surface_width = FFALIGN(avctx->width, 8 << desc->log2_chroma_w); - ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h); + base_ctx->surface_width = FFALIGN(avctx->width, 8 << desc->log2_chroma_w); + base_ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h); return 0; } diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c index a737a0cdd7..96acd414af 100644 --- a/libavcodec/vaapi_encode_vp9.c +++ b/libavcodec/vaapi_encode_vp9.c @@ -190,11 +190,11 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, static av_cold int vaapi_encode_vp9_get_encoder_caps(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; // Surfaces must be aligned to 64x64 superblock boundaries. - ctx->surface_width = FFALIGN(avctx->width, 64); - ctx->surface_height = FFALIGN(avctx->height, 64); + base_ctx->surface_width = FFALIGN(avctx->width, 64); + base_ctx->surface_height = FFALIGN(avctx->height, 64); return 0; }
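As a rough illustration of how the shared helpers are meant to be reused outside VAAPI, a minimal sketch follows; it is not part of the series. Only ff_hw_base_get_recon_format() and the FFHWBaseEncodeContext fields (device_ref, recon_frames_ref, recon_frames, surface_width/height) come from the patches; the backend name, the NULL hwconfig (standing in for a device-specific config) and AV_PIX_FMT_D3D12 are illustrative choices.

/* Sketch: a non-VAAPI backend setting up its reconstructed-frame pool
 * through the new base layer. */
static int example_encode_create_recon_frames(AVCodecContext *avctx)
{
    FFHWBaseEncodeContext *base_ctx = avctx->priv_data;
    enum AVPixelFormat recon_format;
    int err;

    /* Let the base layer query the hwframe constraints and pick a
     * software format for the reconstructed pictures. */
    err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format);
    if (err < 0)
        return err;

    base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
    if (!base_ctx->recon_frames_ref)
        return AVERROR(ENOMEM);
    base_ctx->recon_frames = (AVHWFramesContext *)base_ctx->recon_frames_ref->data;

    base_ctx->recon_frames->format    = AV_PIX_FMT_D3D12; /* backend's hw pixfmt */
    base_ctx->recon_frames->sw_format = recon_format;
    base_ctx->recon_frames->width     = base_ctx->surface_width;
    base_ctx->recon_frames->height    = base_ctx->surface_height;

    return av_hwframe_ctx_init(base_ctx->recon_frames_ref);
}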
From patchwork Tue May 28 15:48:02 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49316
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:02 +0800
Message-ID: <20240528154807.1151-11-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 11/15] avcodec/vaapi_encode: extract a free function to base layer
Cc: Tong Wu
From: Tong Wu

Signed-off-by: Tong Wu
---
libavcodec/hw_base_encode.c | 11 +++++++++++ libavcodec/hw_base_encode.h | 2 ++ libavcodec/vaapi_encode.c | 6 +----- 3 files changed, 14 insertions(+), 5 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 31046bd73e..92f69bb78c 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -745,6 +745,17 @@ fail: return err; } +int
ff_hw_base_encode_free(FFHWBaseEncodePicture *pic) +{ + av_frame_free(&pic->input_image); + av_frame_free(&pic->recon_image); + + av_buffer_unref(&pic->opaque_ref); + av_freep(&pic->priv_data); + + return 0; +} + int ff_hw_base_encode_init(AVCodecContext *avctx) { FFHWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index f5414e2c28..15ef3d7ac6 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -224,6 +224,8 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32 int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt); +int ff_hw_base_encode_free(FFHWBaseEncodePicture *pic); + int ff_hw_base_encode_init(AVCodecContext *avctx); int ff_hw_base_encode_close(AVCodecContext *avctx); diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index d96f146b28..b35a23e852 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -878,17 +878,13 @@ static int vaapi_encode_free(AVCodecContext *avctx, av_freep(&pic->slices[i].codec_slice_params); } - av_frame_free(&base_pic->input_image); - av_frame_free(&base_pic->recon_image); - - av_buffer_unref(&base_pic->opaque_ref); + ff_hw_base_encode_free(base_pic); av_freep(&pic->param_buffers); av_freep(&pic->slices); // Output buffer should already be destroyed. av_assert0(pic->output_buffer == VA_INVALID_ID); - av_freep(&base_pic->priv_data); av_freep(&pic->codec_picture_params); av_freep(&pic->roi);
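The helper now owns the frame references shared by all backends. A minimal sketch of the intended call pattern, assuming a hypothetical ExampleEncodePicture wrapper (only ff_hw_base_encode_free() comes from this patch): the backend's per-picture free callback releases its own allocations first, then chains into the base layer.

typedef struct ExampleEncodePicture {
    FFHWBaseEncodePicture base;  /* kept as the first member so the cast below works */
    void *codec_picture_params;  /* backend-owned allocation (hypothetical) */
} ExampleEncodePicture;

static int example_encode_free(AVCodecContext *avctx,
                               FFHWBaseEncodePicture *base_pic)
{
    ExampleEncodePicture *pic = (ExampleEncodePicture *)base_pic;

    /* Backend-specific state first ... */
    av_freep(&pic->codec_picture_params);

    /* ... then the shared input/recon frames, opaque_ref and priv_data. */
    ff_hw_base_encode_free(base_pic);

    av_free(pic);
    return 0;
}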
From patchwork Tue May 28 15:48:03 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49325
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:03 +0800
Message-ID: <20240528154807.1151-12-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 12/15] avutil/hwcontext_d3d12va: add Flags for resource creation
Cc: Tong Wu
From: Tong Wu

A Flags field is added to support different resource creation.
Signed-off-by: Tong Wu
---
doc/APIchanges | 3 +++ libavutil/hwcontext_d3d12va.c | 2 +- libavutil/hwcontext_d3d12va.h | 8 ++++++++ libavutil/version.h | 2 +- 4 files changed, 13 insertions(+), 2 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 60f056b863..0c3d0221e8 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -2,6 +2,9 @@ The last version increases of all libraries were on 2024-03-07 API changes, most recent first: +2024-01-xx - xxxxxxxxxx - lavu 59.21.100 - hwcontext_d3d12va.h + Add AVD3D12VAFramesContext.flags + 2024-05-23 - xxxxxxxxxx - lavu 59.20.100 - channel_layout.h Add av_channel_layout_ambisonic_order(). diff --git a/libavutil/hwcontext_d3d12va.c b/libavutil/hwcontext_d3d12va.c index cfc016315d..6507cf69c1 100644 --- a/libavutil/hwcontext_d3d12va.c +++ b/libavutil/hwcontext_d3d12va.c @@ -247,7 +247,7 @@ static AVBufferRef *d3d12va_pool_alloc(void *opaque, size_t size) .Format = hwctx->format, .SampleDesc = {.Count = 1, .Quality = 0 }, .Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN, - .Flags = D3D12_RESOURCE_FLAG_NONE, + .Flags = hwctx->flags, }; frame = av_mallocz(sizeof(AVD3D12VAFrame)); diff --git a/libavutil/hwcontext_d3d12va.h b/libavutil/hwcontext_d3d12va.h index ff06e6f2ef..212a6a6146 100644 --- a/libavutil/hwcontext_d3d12va.h +++ b/libavutil/hwcontext_d3d12va.h @@ -129,6 +129,14 @@ typedef struct AVD3D12VAFramesContext { * If unset, will be automatically set. */ DXGI_FORMAT format; + + /** + * Options for working with resources. + * If unset, this will be D3D12_RESOURCE_FLAG_NONE. + * + * @see https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_resource_flags + */ + D3D12_RESOURCE_FLAGS flags; } AVD3D12VAFramesContext; #endif /* AVUTIL_HWCONTEXT_D3D12VA_H */ diff --git a/libavutil/version.h b/libavutil/version.h index 9c7146c228..9d08d56884 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 59 -#define LIBAVUTIL_VERSION_MINOR 20 +#define LIBAVUTIL_VERSION_MINOR 21 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \
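With this change an application that allocates its own D3D12 frames context can opt into non-default resource flags. A minimal usage sketch, assuming NV12 surfaces and using D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET purely as an example value (the default behaviour stays D3D12_RESOURCE_FLAG_NONE, as before):

#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_d3d12va.h>

static AVBufferRef *alloc_d3d12_frames(AVBufferRef *device_ref, int width, int height)
{
    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
    AVHWFramesContext *frames;
    AVD3D12VAFramesContext *hwctx;

    if (!frames_ref)
        return NULL;

    frames = (AVHWFramesContext *)frames_ref->data;
    frames->format    = AV_PIX_FMT_D3D12;
    frames->sw_format = AV_PIX_FMT_NV12;
    frames->width     = width;
    frames->height    = height;

    hwctx = frames->hwctx;
    /* Request render-target capable textures instead of the previously
     * hard-coded D3D12_RESOURCE_FLAG_NONE. */
    hwctx->flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

    if (av_hwframe_ctx_init(frames_ref) < 0)
        av_buffer_unref(&frames_ref);

    return frames_ref;
}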
From patchwork Tue May 28 15:48:04 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49326
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:04 +0800
Message-ID: <20240528154807.1151-13-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 13/15] avcodec: add D3D12VA hardware HEVC encoder
Cc: Tong Wu
From: Tong Wu

This implementation is based on D3D12 Video Encoding Spec: https://microsoft.github.io/DirectX-Specs/d3d/D3D12VideoEncoding.html

Sample command line for transcoding: ffmpeg.exe -hwaccel d3d12va -hwaccel_output_format d3d12 -i input.mp4 -c:v hevc_d3d12va output.mp4

Signed-off-by: Tong Wu
---
configure | 6 + libavcodec/Makefile | 5 +- libavcodec/allcodecs.c | 1 + libavcodec/d3d12va_encode.c | 1558 ++++++++++++++++++++++++++++++ libavcodec/d3d12va_encode.h | 334 +++++++ libavcodec/d3d12va_encode_hevc.c | 1007 +++++++++++++++++++ 6 files changed, 2910 insertions(+), 1 deletion(-) create mode 100644 libavcodec/d3d12va_encode.c create mode 100644 libavcodec/d3d12va_encode.h create mode 100644 libavcodec/d3d12va_encode_hevc.c diff --git a/configure b/configure index b16722d83d..127d68e60c 100755 --- a/configure +++ b/configure @@ -2551,6 +2551,7 @@ CONFIG_EXTRA=" cbs_mpeg2 cbs_vp8 cbs_vp9 + d3d12va_encode deflate_wrapper dirac_parse dnn @@ -3287,6 +3288,7 @@ wmv3_vaapi_hwaccel_select="vc1_vaapi_hwaccel" wmv3_vdpau_hwaccel_select="vc1_vdpau_hwaccel" # hardware-accelerated codecs +d3d12va_encode_deps="d3d12va ID3D12VideoEncoder d3d12_encoder_feature" mediafoundation_deps="mftransform_h MFCreateAlignedMemoryBuffer" omx_deps="libdl pthreads" omx_rpi_select="omx" @@ -3354,6 +3356,7 @@ h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m" hevc_amf_encoder_deps="amf" hevc_cuvid_decoder_deps="cuvid" hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf" +hevc_d3d12va_encoder_select="cbs_h265 d3d12va_encode" hevc_mediacodec_decoder_deps="mediacodec" hevc_mediacodec_decoder_select="hevc_mp4toannexb_bsf hevc_parser" hevc_mediacodec_encoder_deps="mediacodec" @@ -6725,6 +6728,9 @@ check_type "windows.h d3d11.h" "ID3D11VideoDecoder" check_type "windows.h d3d11.h" "ID3D11VideoContext" check_type "windows.h d3d12.h" "ID3D12Device" check_type "windows.h d3d12video.h" "ID3D12VideoDecoder" +check_type "windows.h d3d12video.h" "ID3D12VideoEncoder" +test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_VIDEO feature = D3D12_FEATURE_VIDEO_ENCODER_CODEC" && \ +test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req" && enable d3d12_encoder_feature check_type "windows.h" "DPI_AWARENESS_CONTEXT" -D_WIN32_WINNT=0x0A00 check_type "d3d9.h dxva2api.h" DXVA2_ConfigPictureDecode -D_WIN32_WINNT=0x0602 check_func_headers mfapi.h MFCreateAlignedMemoryBuffer -lmfplat diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 998f6b7e12..6c4500ce6d 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -86,6 +86,7 @@ OBJS-$(CONFIG_CBS_JPEG) += cbs_jpeg.o OBJS-$(CONFIG_CBS_MPEG2) += cbs_mpeg2.o OBJS-$(CONFIG_CBS_VP8) += cbs_vp8.o vp8data.o OBJS-$(CONFIG_CBS_VP9) += cbs_vp9.o +OBJS-$(CONFIG_D3D12VA_ENCODE) += d3d12va_encode.o hw_base_encode.o OBJS-$(CONFIG_DEFLATE_WRAPPER) += zlib_wrapper.o OBJS-$(CONFIG_DOVI_RPUDEC) += dovi_rpu.o dovi_rpudec.o OBJS-$(CONFIG_DOVI_RPUENC) += dovi_rpu.o dovi_rpuenc.o @@ -436,6 +437,8 @@ OBJS-$(CONFIG_HEVC_DECODER) += hevcdec.o hevc_mvs.o \ h274.o
aom_film_grain.o OBJS-$(CONFIG_HEVC_AMF_ENCODER) += amfenc_hevc.o OBJS-$(CONFIG_HEVC_CUVID_DECODER) += cuviddec.o +OBJS-$(CONFIG_HEVC_D3D12VA_ENCODER) += d3d12va_encode_hevc.o h265_profile_level.o \ + h2645data.o OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o OBJS-$(CONFIG_HEVC_MEDIACODEC_ENCODER) += mediacodecenc.o OBJS-$(CONFIG_HEVC_MF_ENCODER) += mfenc.o mf_utils.o @@ -1265,7 +1268,7 @@ SKIPHEADERS += %_tablegen.h \ SKIPHEADERS-$(CONFIG_AMF) += amfenc.h SKIPHEADERS-$(CONFIG_D3D11VA) += d3d11va.h dxva2_internal.h -SKIPHEADERS-$(CONFIG_D3D12VA) += d3d12va_decode.h +SKIPHEADERS-$(CONFIG_D3D12VA) += d3d12va_decode.h d3d12va_encode.h SKIPHEADERS-$(CONFIG_DXVA2) += dxva2.h dxva2_internal.h SKIPHEADERS-$(CONFIG_JNI) += ffjni.h SKIPHEADERS-$(CONFIG_LCMS2) += fflcms2.h diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index b102a8069e..463ffbbd08 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -848,6 +848,7 @@ extern const FFCodec ff_h264_vaapi_encoder; extern const FFCodec ff_h264_videotoolbox_encoder; extern const FFCodec ff_hevc_amf_encoder; extern const FFCodec ff_hevc_cuvid_decoder; +extern const FFCodec ff_hevc_d3d12va_encoder; extern const FFCodec ff_hevc_mediacodec_decoder; extern const FFCodec ff_hevc_mediacodec_encoder; extern const FFCodec ff_hevc_mf_encoder; diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c new file mode 100644 index 0000000000..0fbf8eb07c --- /dev/null +++ b/libavcodec/d3d12va_encode.c @@ -0,0 +1,1558 @@ +/* + * Direct3D 12 HW acceleration video encoder + * + * Copyright (c) 2024 Intel Corporation + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/avassert.h" +#include "libavutil/common.h" +#include "libavutil/internal.h" +#include "libavutil/log.h" +#include "libavutil/mem.h" +#include "libavutil/pixdesc.h" +#include "libavutil/hwcontext_d3d12va_internal.h" +#include "libavutil/hwcontext_d3d12va.h" + +#include "avcodec.h" +#include "d3d12va_encode.h" +#include "encode.h" + +const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[] = { + HW_CONFIG_ENCODER_FRAMES(D3D12, D3D12VA), + NULL, +}; + +static int d3d12va_fence_completion(AVD3D12VASyncContext *psync_ctx) +{ + uint64_t completion = ID3D12Fence_GetCompletedValue(psync_ctx->fence); + if (completion < psync_ctx->fence_value) { + if (FAILED(ID3D12Fence_SetEventOnCompletion(psync_ctx->fence, psync_ctx->fence_value, psync_ctx->event))) + return AVERROR(EINVAL); + + WaitForSingleObjectEx(psync_ctx->event, INFINITE, FALSE); + } + + return 0; +} + +static int d3d12va_sync_with_gpu(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + + DX_CHECK(ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value)); + return d3d12va_fence_completion(&ctx->sync_ctx); + +fail: + return AVERROR(EINVAL); +} + +typedef struct CommandAllocator { + ID3D12CommandAllocator *command_allocator; + uint64_t fence_value; +} CommandAllocator; + +static int d3d12va_get_valid_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator **ppAllocator) +{ + HRESULT hr; + D3D12VAEncodeContext *ctx = avctx->priv_data; + CommandAllocator allocator; + + if (av_fifo_peek(ctx->allocator_queue, &allocator, 1, 0) >= 0) { + uint64_t completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence); + if (completion >= allocator.fence_value) { + *ppAllocator = allocator.command_allocator; + av_fifo_read(ctx->allocator_queue, &allocator, 1); + return 0; + } + } + + hr = ID3D12Device_CreateCommandAllocator(ctx->hwctx->device, D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE, + &IID_ID3D12CommandAllocator, (void **)ppAllocator); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create a new command allocator!\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static int d3d12va_discard_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator *pAllocator, uint64_t fence_value) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + + CommandAllocator allocator = { + .command_allocator = pAllocator, + .fence_value = fence_value, + }; + + av_fifo_write(ctx->allocator_queue, &allocator, 1); + + return 0; +} + +static int d3d12va_encode_wait(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *base_pic = &pic->base; + uint64_t completion; + + av_assert0(base_pic->encode_issued); + + if (base_pic->encode_complete) { + // Already waited for this picture. 
+ return 0; + } + + completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence); + if (completion < pic->fence_value) { + if (FAILED(ID3D12Fence_SetEventOnCompletion(ctx->sync_ctx.fence, pic->fence_value, + ctx->sync_ctx.event))) + return AVERROR(EINVAL); + + WaitForSingleObjectEx(ctx->sync_ctx.event, INFINITE, FALSE); + } + + av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" " + "(input surface %p).\n", base_pic->display_order, + base_pic->encode_order, pic->input_surface->texture); + + av_frame_free(&base_pic->input_image); + + base_pic->encode_complete = 1; + return 0; +} + +static int d3d12va_encode_create_metadata_buffers(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + int width = sizeof(D3D12_VIDEO_ENCODER_OUTPUT_METADATA) + sizeof(D3D12_VIDEO_ENCODER_FRAME_SUBREGION_METADATA); + D3D12_HEAP_PROPERTIES encoded_meta_props = { .Type = D3D12_HEAP_TYPE_DEFAULT }, resolved_meta_props; + D3D12_HEAP_TYPE resolved_heap_type = D3D12_HEAP_TYPE_READBACK; + HRESULT hr; + + D3D12_RESOURCE_DESC meta_desc = { + .Dimension = D3D12_RESOURCE_DIMENSION_BUFFER, + .Alignment = 0, + .Width = ctx->req.MaxEncoderOutputMetadataBufferSize, + .Height = 1, + .DepthOrArraySize = 1, + .MipLevels = 1, + .Format = DXGI_FORMAT_UNKNOWN, + .SampleDesc = { .Count = 1, .Quality = 0 }, + .Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR, + .Flags = D3D12_RESOURCE_FLAG_NONE, + }; + + hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &encoded_meta_props, D3D12_HEAP_FLAG_NONE, + &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL, + &IID_ID3D12Resource, (void **)&pic->encoded_metadata); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create metadata buffer.\n"); + return AVERROR_UNKNOWN; + } + + ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &resolved_meta_props, 0, resolved_heap_type); + + meta_desc.Width = width; + + hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &resolved_meta_props, D3D12_HEAP_FLAG_NONE, + &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL, + &IID_ID3D12Resource, (void **)&pic->resolved_metadata); + + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create output metadata buffer.\n"); + return AVERROR_UNKNOWN; + } + + return 0; +} + +static int d3d12va_encode_issue(AVCodecContext *avctx, + const FFHWBaseEncodePicture *base_pic) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx; + D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic; + int err, i, j; + HRESULT hr; + char data[MAX_PARAM_BUFFER_SIZE]; + void *ptr; + size_t bit_len; + ID3D12CommandAllocator *command_allocator = NULL; + ID3D12VideoEncodeCommandList2 *cmd_list = ctx->command_list; + D3D12_RESOURCE_BARRIER barriers[32] = { 0 }; + D3D12_VIDEO_ENCODE_REFERENCE_FRAMES d3d12_refs = { 0 }; + + D3D12_VIDEO_ENCODER_ENCODEFRAME_INPUT_ARGUMENTS input_args = { + .SequenceControlDesc = { + .Flags = D3D12_VIDEO_ENCODER_SEQUENCE_CONTROL_FLAG_NONE, + .IntraRefreshConfig = { 0 }, + .RateControl = ctx->rc, + .PictureTargetResolution = ctx->resolution, + .SelectedLayoutMode = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME, + .FrameSubregionsLayoutData = { 0 }, + .CodecGopSequence = ctx->gop, + }, + .pInputFrame = pic->input_surface->texture, + .InputFrameSubresource = 0, + }; + + D3D12_VIDEO_ENCODER_ENCODEFRAME_OUTPUT_ARGUMENTS output_args = { 0 }; + + 
D3D12_VIDEO_ENCODER_RESOLVE_METADATA_INPUT_ARGUMENTS input_metadata = { + .EncoderCodec = ctx->codec->d3d12_codec, + .EncoderProfile = ctx->profile->d3d12_profile, + .EncoderInputFormat = frames_hwctx->format, + .EncodedPictureEffectiveResolution = ctx->resolution, + }; + + D3D12_VIDEO_ENCODER_RESOLVE_METADATA_OUTPUT_ARGUMENTS output_metadata = { 0 }; + + memset(data, 0, sizeof(data)); + + av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" " + "as type %s.\n", base_pic->display_order, base_pic->encode_order, + ff_hw_base_encode_get_pictype_name(base_pic->type)); + if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) { + av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n"); + } else { + av_log(avctx, AV_LOG_DEBUG, "L0 refers to"); + for (i = 0; i < base_pic->nb_refs[0]; i++) { + av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, + base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order); + } + av_log(avctx, AV_LOG_DEBUG, ".\n"); + + if (base_pic->nb_refs[1]) { + av_log(avctx, AV_LOG_DEBUG, "L1 refers to"); + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, + base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order); + } + av_log(avctx, AV_LOG_DEBUG, ".\n"); + } + } + + av_assert0(!base_pic->encode_issued); + for (i = 0; i < base_pic->nb_refs[0]; i++) { + av_assert0(base_pic->refs[0][i]); + av_assert0(base_pic->refs[0][i]->encode_issued); + } + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_assert0(base_pic->refs[1][i]); + av_assert0(base_pic->refs[1][i]->encode_issued); + } + + av_log(avctx, AV_LOG_DEBUG, "Input surface is %p.\n", pic->input_surface->texture); + + err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0); + if (err < 0) { + err = AVERROR(ENOMEM); + goto fail; + } + + pic->recon_surface = (AVD3D12VAFrame *)base_pic->recon_image->data[0]; + av_log(avctx, AV_LOG_DEBUG, "Recon surface is %p.\n", + pic->recon_surface->texture); + + pic->output_buffer_ref = av_buffer_pool_get(ctx->output_buffer_pool); + if (!pic->output_buffer_ref) { + err = AVERROR(ENOMEM); + goto fail; + } + pic->output_buffer = (ID3D12Resource *)pic->output_buffer_ref->data; + av_log(avctx, AV_LOG_DEBUG, "Output buffer is %p.\n", + pic->output_buffer); + + err = d3d12va_encode_create_metadata_buffers(avctx, pic); + if (err < 0) + goto fail; + + if (ctx->codec->init_picture_params) { + err = ctx->codec->init_picture_params(avctx, pic); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to initialise picture " + "parameters: %d.\n", err); + goto fail; + } + } + + if (base_pic->type == FF_HW_PICTURE_TYPE_IDR) { + if (ctx->codec->write_sequence_header) { + bit_len = 8 * sizeof(data); + err = ctx->codec->write_sequence_header(avctx, data, &bit_len); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to write per-sequence " + "header: %d.\n", err); + goto fail; + } + } + + pic->header_size = (int)bit_len / 8; + pic->header_size = pic->header_size % ctx->req.CompressedBitstreamBufferAccessAlignment ? 
+ FFALIGN(pic->header_size, ctx->req.CompressedBitstreamBufferAccessAlignment) : + pic->header_size; + + hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&ptr); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + memcpy(ptr, data, pic->header_size); + ID3D12Resource_Unmap(pic->output_buffer, 0, NULL); + } + + d3d12_refs.NumTexture2Ds = base_pic->nb_refs[0] + base_pic->nb_refs[1]; + if (d3d12_refs.NumTexture2Ds) { + d3d12_refs.ppTexture2Ds = av_calloc(d3d12_refs.NumTexture2Ds, + sizeof(*d3d12_refs.ppTexture2Ds)); + if (!d3d12_refs.ppTexture2Ds) { + err = AVERROR(ENOMEM); + goto fail; + } + + i = 0; + for (j = 0; j < base_pic->nb_refs[0]; j++) + d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[0][j])->recon_surface->texture; + for (j = 0; j < base_pic->nb_refs[1]; j++) + d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[1][j])->recon_surface->texture; + } + + input_args.PictureControlDesc.IntraRefreshFrameIndex = 0; + if (base_pic->is_reference) + input_args.PictureControlDesc.Flags |= D3D12_VIDEO_ENCODER_PICTURE_CONTROL_FLAG_USED_AS_REFERENCE_PICTURE; + + input_args.PictureControlDesc.PictureControlCodecData = pic->pic_ctl; + input_args.PictureControlDesc.ReferenceFrames = d3d12_refs; + input_args.CurrentFrameBitstreamMetadataSize = pic->header_size; + + output_args.Bitstream.pBuffer = pic->output_buffer; + output_args.Bitstream.FrameStartOffset = pic->header_size; + output_args.ReconstructedPicture.pReconstructedPicture = pic->recon_surface->texture; + output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0; + output_args.EncoderOutputMetadata.pBuffer = pic->encoded_metadata; + output_args.EncoderOutputMetadata.Offset = 0; + + input_metadata.HWLayoutMetadata.pBuffer = pic->encoded_metadata; + input_metadata.HWLayoutMetadata.Offset = 0; + + output_metadata.ResolvedLayoutMetadata.pBuffer = pic->resolved_metadata; + output_metadata.ResolvedLayoutMetadata.Offset = 0; + + err = d3d12va_get_valid_command_allocator(avctx, &command_allocator); + if (err < 0) + goto fail; + + hr = ID3D12CommandAllocator_Reset(command_allocator); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12VideoEncodeCommandList2_Reset(cmd_list, command_allocator); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + +#define TRANSITION_BARRIER(res, before, after) \ + (D3D12_RESOURCE_BARRIER) { \ + .Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION, \ + .Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE, \ + .Transition = { \ + .pResource = res, \ + .Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, \ + .StateBefore = before, \ + .StateAfter = after, \ + }, \ + } + + barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ); + barriers[1] = TRANSITION_BARRIER(pic->output_buffer, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers); + + if (d3d12_refs.NumTexture2Ds) { + D3D12_RESOURCE_BARRIER refs_barriers[3]; + + for (i = 0; i < 
d3d12_refs.NumTexture2Ds; i++) + refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i], + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds, + refs_barriers); + } + + ID3D12VideoEncodeCommandList2_EncodeFrame(cmd_list, ctx->encoder, ctx->encoder_heap, + &input_args, &output_args); + + barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 1, &barriers[3]); + + ID3D12VideoEncodeCommandList2_ResolveEncoderOutputMetadata(cmd_list, &input_metadata, &output_metadata); + + if (d3d12_refs.NumTexture2Ds) { + D3D12_RESOURCE_BARRIER refs_barriers[3]; + + for (i = 0; i < d3d12_refs.NumTexture2Ds; i++) + refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i], + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ, + D3D12_RESOURCE_STATE_COMMON); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds, + refs_barriers); + } + + barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ, + D3D12_RESOURCE_STATE_COMMON); + barriers[1] = TRANSITION_BARRIER(pic->output_buffer, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_COMMON); + barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_COMMON); + barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ, + D3D12_RESOURCE_STATE_COMMON); + barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_COMMON); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers); + + hr = ID3D12VideoEncodeCommandList2_Close(cmd_list); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12CommandQueue_Wait(ctx->command_queue, pic->input_surface->sync_ctx.fence, + pic->input_surface->sync_ctx.fence_value); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list); + + hr = ID3D12CommandQueue_Signal(ctx->command_queue, pic->input_surface->sync_ctx.fence, + ++pic->input_surface->sync_ctx.fence_value); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value); + if (err < 0) + goto fail; + + pic->fence_value = ctx->sync_ctx.fence_value; + + if (d3d12_refs.ppTexture2Ds) + av_freep(&d3d12_refs.ppTexture2Ds); + + return 0; + +fail: + if (command_allocator) + d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value); + + if (d3d12_refs.ppTexture2Ds) + av_freep(&d3d12_refs.ppTexture2Ds); + + if (ctx->codec->free_picture_params) + ctx->codec->free_picture_params(pic); + + av_buffer_unref(&pic->output_buffer_ref); + pic->output_buffer = NULL; + D3D12_OBJECT_RELEASE(pic->encoded_metadata); + D3D12_OBJECT_RELEASE(pic->resolved_metadata); + return err; +} + +static int d3d12va_encode_discard(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + FFHWBaseEncodePicture *base_pic = &pic->base; + d3d12va_encode_wait(avctx, pic); + 
+ if (pic->output_buffer_ref) { + av_log(avctx, AV_LOG_DEBUG, "Discard output for pic " + "%"PRId64"/%"PRId64".\n", + base_pic->display_order, base_pic->encode_order); + + av_buffer_unref(&pic->output_buffer_ref); + pic->output_buffer = NULL; + } + + D3D12_OBJECT_RELEASE(pic->encoded_metadata); + D3D12_OBJECT_RELEASE(pic->resolved_metadata); + + return 0; +} + +static int d3d12va_encode_free_rc_params(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + + switch (ctx->rc.Mode) + { + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP: + av_freep(&ctx->rc.ConfigParams.pConfiguration_CQP); + break; + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR: + av_freep(&ctx->rc.ConfigParams.pConfiguration_CBR); + break; + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR: + av_freep(&ctx->rc.ConfigParams.pConfiguration_VBR); + break; + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR: + av_freep(&ctx->rc.ConfigParams.pConfiguration_QVBR); + break; + default: + break; + } + + return 0; +} + +static FFHWBaseEncodePicture *d3d12va_encode_alloc(AVCodecContext *avctx, + const AVFrame *frame) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodePicture *pic; + + pic = av_mallocz(sizeof(*pic)); + if (!pic) + return NULL; + + if (ctx->codec->picture_priv_data_size > 0) { + pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size); + if (!pic->base.priv_data) { + av_freep(&pic); + return NULL; + } + } + + pic->input_surface = (AVD3D12VAFrame *)frame->data[0]; + + return &pic->base; +} + +static int d3d12va_encode_free(AVCodecContext *avctx, + FFHWBaseEncodePicture *base_pic) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic; + + if (base_pic->encode_issued) + d3d12va_encode_discard(avctx, pic); + + if (ctx->codec->free_picture_params) + ctx->codec->free_picture_params(pic); + + ff_hw_base_encode_free(base_pic); + + av_free(pic); + + return 0; +} + +static int d3d12va_encode_get_buffer_size(AVCodecContext *avctx, + D3D12VAEncodePicture *pic, size_t *size) +{ + D3D12_VIDEO_ENCODER_OUTPUT_METADATA *meta = NULL; + uint8_t *data; + HRESULT hr; + int err; + + hr = ID3D12Resource_Map(pic->resolved_metadata, 0, NULL, (void **)&data); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + return err; + } + + meta = (D3D12_VIDEO_ENCODER_OUTPUT_METADATA *)data; + + if (meta->EncodeErrorFlags != D3D12_VIDEO_ENCODER_ENCODE_ERROR_FLAG_NO_ERROR) { + av_log(avctx, AV_LOG_ERROR, "Encode failed %"PRIu64"\n", meta->EncodeErrorFlags); + err = AVERROR(EINVAL); + return err; + } + + if (meta->EncodedBitstreamWrittenBytesCount == 0) { + av_log(avctx, AV_LOG_ERROR, "No bytes were written to encoded bitstream\n"); + err = AVERROR(EINVAL); + return err; + } + + *size = meta->EncodedBitstreamWrittenBytesCount; + + ID3D12Resource_Unmap(pic->resolved_metadata, 0, NULL); + + return 0; +} + +static int d3d12va_encode_get_coded_data(AVCodecContext *avctx, + D3D12VAEncodePicture *pic, AVPacket *pkt) +{ + int err; + uint8_t *ptr, *mapped_data; + size_t total_size = 0; + HRESULT hr; + + err = d3d12va_encode_get_buffer_size(avctx, pic, &total_size); + if (err < 0) + goto end; + + total_size += pic->header_size; + av_log(avctx, AV_LOG_DEBUG, "Output buffer size %"PRId64"\n", total_size); + + hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&mapped_data); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto end; + } + + err = ff_get_encode_buffer(avctx, pkt, total_size, 0); + if (err < 0) + goto end; + ptr = pkt->data; + + memcpy(ptr, 
mapped_data, total_size); + + ID3D12Resource_Unmap(pic->output_buffer, 0, NULL); + +end: + av_buffer_unref(&pic->output_buffer_ref); + pic->output_buffer = NULL; + return err; +} + +static int d3d12va_encode_output(AVCodecContext *avctx, + const FFHWBaseEncodePicture *base_pic, AVPacket *pkt) +{ + D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic; + AVPacket *pkt_ptr = pkt; + int err; + + err = d3d12va_encode_wait(avctx, pic); + if (err < 0) + return err; + + err = d3d12va_encode_get_coded_data(avctx, pic, pkt); + if (err < 0) + return err; + + av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n", + base_pic->display_order, base_pic->encode_order); + + ff_hw_base_encode_set_output_property(avctx, (FFHWBaseEncodePicture *)base_pic, pkt_ptr, 0); + + return 0; +} + +static int d3d12va_encode_set_profile(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + const D3D12VAEncodeProfile *profile; + const AVPixFmtDescriptor *desc; + int i, depth; + + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); + if (!desc) { + av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n", + base_ctx->input_frames->sw_format); + return AVERROR(EINVAL); + } + + depth = desc->comp[0].depth; + for (i = 1; i < desc->nb_components; i++) { + if (desc->comp[i].depth != depth) { + av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%s).\n", + desc->name); + return AVERROR(EINVAL); + } + } + av_log(avctx, AV_LOG_VERBOSE, "Input surface format is %s.\n", + desc->name); + + av_assert0(ctx->codec->profiles); + for (i = 0; (ctx->codec->profiles[i].av_profile != + AV_PROFILE_UNKNOWN); i++) { + profile = &ctx->codec->profiles[i]; + if (depth != profile->depth || + desc->nb_components != profile->nb_components) + continue; + if (desc->nb_components > 1 && + (desc->log2_chroma_w != profile->log2_chroma_w || + desc->log2_chroma_h != profile->log2_chroma_h)) + continue; + if (avctx->profile != profile->av_profile && + avctx->profile != AV_PROFILE_UNKNOWN) + continue; + + ctx->profile = profile; + break; + } + if (!ctx->profile) { + av_log(avctx, AV_LOG_ERROR, "No usable encoding profile found.\n"); + return AVERROR(ENOSYS); + } + + avctx->profile = profile->av_profile; + return 0; +} + +static const D3D12VAEncodeRCMode d3d12va_encode_rc_modes[] = { + // Bitrate Quality + // | Maxrate | HRD/VBV + { 0 }, // | | | | + { RC_MODE_CQP, "CQP", 0, 0, 1, 0, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP }, + { RC_MODE_CBR, "CBR", 1, 0, 0, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR }, + { RC_MODE_VBR, "VBR", 1, 1, 0, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR }, + { RC_MODE_QVBR, "QVBR", 1, 1, 1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR }, +}; + +static int check_rate_control_support(AVCodecContext *avctx, const D3D12VAEncodeRCMode *rc_mode) +{ + HRESULT hr; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12_FEATURE_DATA_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_rc_mode = { + .Codec = ctx->codec->d3d12_codec, + }; + + if (!rc_mode->d3d12_mode) + return 0; + + d3d12_rc_mode.IsSupported = 0; + d3d12_rc_mode.RateControlMode = rc_mode->d3d12_mode; + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, + D3D12_FEATURE_VIDEO_ENCODER_RATE_CONTROL_MODE, + &d3d12_rc_mode, sizeof(d3d12_rc_mode)); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to check rate control support.\n"); + return 0; + } + + return d3d12_rc_mode.IsSupported; +} + +static int d3d12va_encode_init_rate_control(AVCodecContext *avctx) +{ 
+ D3D12VAEncodeContext *ctx = avctx->priv_data; + int64_t rc_target_bitrate; + int64_t rc_peak_bitrate; + int rc_quality; + int64_t hrd_buffer_size; + int64_t hrd_initial_buffer_fullness; + int fr_num, fr_den; + const D3D12VAEncodeRCMode *rc_mode; + + // Rate control mode selection: + // * If the user has set a mode explicitly with the rc_mode option, + // use it and fail if it is not available. + // * If an explicit QP option has been set, use CQP. + // * If the codec is CQ-only, use CQP. + // * If the QSCALE avcodec option is set, use CQP. + // * If bitrate and quality are both set, try QVBR. + // * If quality is set, try CQP. + // * If bitrate and maxrate are set and have the same value, try CBR. + // * If a bitrate is set, try VBR, then CBR. + // * If no bitrate is set, try CQP. + +#define TRY_RC_MODE(mode, fail) do { \ + rc_mode = &d3d12va_encode_rc_modes[mode]; \ + if (!(rc_mode->d3d12_mode && check_rate_control_support(avctx, rc_mode))) { \ + if (fail) { \ + av_log(avctx, AV_LOG_ERROR, "Driver does not support %s " \ + "RC mode.\n", rc_mode->name); \ + return AVERROR(EINVAL); \ + } \ + av_log(avctx, AV_LOG_DEBUG, "Driver does not support %s " \ + "RC mode.\n", rc_mode->name); \ + rc_mode = NULL; \ + } else { \ + goto rc_mode_found; \ + } \ + } while (0) + + if (ctx->explicit_rc_mode) + TRY_RC_MODE(ctx->explicit_rc_mode, 1); + + if (ctx->explicit_qp) + TRY_RC_MODE(RC_MODE_CQP, 1); + + if (ctx->codec->flags & FF_HW_FLAG_CONSTANT_QUALITY_ONLY) + TRY_RC_MODE(RC_MODE_CQP, 1); + + if (avctx->flags & AV_CODEC_FLAG_QSCALE) + TRY_RC_MODE(RC_MODE_CQP, 1); + + if (avctx->bit_rate > 0 && avctx->global_quality > 0) + TRY_RC_MODE(RC_MODE_QVBR, 0); + + if (avctx->global_quality > 0) { + TRY_RC_MODE(RC_MODE_CQP, 0); + } + + if (avctx->bit_rate > 0 && avctx->rc_max_rate == avctx->bit_rate) + TRY_RC_MODE(RC_MODE_CBR, 0); + + if (avctx->bit_rate > 0) { + TRY_RC_MODE(RC_MODE_VBR, 0); + TRY_RC_MODE(RC_MODE_CBR, 0); + } else { + TRY_RC_MODE(RC_MODE_CQP, 0); + } + + av_log(avctx, AV_LOG_ERROR, "Driver does not support any " + "RC mode compatible with selected options.\n"); + return AVERROR(EINVAL); + +rc_mode_found: + if (rc_mode->bitrate) { + if (avctx->bit_rate <= 0) { + av_log(avctx, AV_LOG_ERROR, "Bitrate must be set for %s " + "RC mode.\n", rc_mode->name); + return AVERROR(EINVAL); + } + + if (rc_mode->maxrate) { + if (avctx->rc_max_rate > 0) { + if (avctx->rc_max_rate < avctx->bit_rate) { + av_log(avctx, AV_LOG_ERROR, "Invalid bitrate settings: " + "bitrate (%"PRId64") must not be greater than " + "maxrate (%"PRId64").\n", avctx->bit_rate, + avctx->rc_max_rate); + return AVERROR(EINVAL); + } + rc_target_bitrate = avctx->bit_rate; + rc_peak_bitrate = avctx->rc_max_rate; + } else { + // We only have a target bitrate, but this mode requires + // that a maximum rate be supplied as well. Since the + // user does not want this to be a constraint, arbitrarily + // pick a maximum rate of double the target rate. 
+ rc_target_bitrate = avctx->bit_rate; + rc_peak_bitrate = 2 * avctx->bit_rate; + } + } else { + if (avctx->rc_max_rate > avctx->bit_rate) { + av_log(avctx, AV_LOG_WARNING, "Max bitrate is ignored " + "in %s RC mode.\n", rc_mode->name); + } + rc_target_bitrate = avctx->bit_rate; + rc_peak_bitrate = 0; + } + } else { + rc_target_bitrate = 0; + rc_peak_bitrate = 0; + } + + if (rc_mode->quality) { + if (ctx->explicit_qp) { + rc_quality = ctx->explicit_qp; + } else if (avctx->global_quality > 0) { + if (avctx->flags & AV_CODEC_FLAG_QSCALE) + rc_quality = avctx->global_quality / FF_QP2LAMBDA; + else + rc_quality = avctx->global_quality; + } else { + rc_quality = ctx->codec->default_quality; + av_log(avctx, AV_LOG_WARNING, "No quality level set; " + "using default (%d).\n", rc_quality); + } + } else { + rc_quality = 0; + } + + if (rc_mode->hrd) { + if (avctx->rc_buffer_size) + hrd_buffer_size = avctx->rc_buffer_size; + else if (avctx->rc_max_rate > 0) + hrd_buffer_size = avctx->rc_max_rate; + else + hrd_buffer_size = avctx->bit_rate; + if (avctx->rc_initial_buffer_occupancy) { + if (avctx->rc_initial_buffer_occupancy > hrd_buffer_size) { + av_log(avctx, AV_LOG_ERROR, "Invalid RC buffer settings: " + "must have initial buffer size (%d) <= " + "buffer size (%"PRId64").\n", + avctx->rc_initial_buffer_occupancy, hrd_buffer_size); + return AVERROR(EINVAL); + } + hrd_initial_buffer_fullness = avctx->rc_initial_buffer_occupancy; + } else { + hrd_initial_buffer_fullness = hrd_buffer_size * 3 / 4; + } + } else { + if (avctx->rc_buffer_size || avctx->rc_initial_buffer_occupancy) { + av_log(avctx, AV_LOG_WARNING, "Buffering settings are ignored " + "in %s RC mode.\n", rc_mode->name); + } + + hrd_buffer_size = 0; + hrd_initial_buffer_fullness = 0; + } + + if (rc_target_bitrate > UINT32_MAX || + hrd_buffer_size > UINT32_MAX || + hrd_initial_buffer_fullness > UINT32_MAX) { + av_log(avctx, AV_LOG_ERROR, "RC parameters of 2^32 or " + "greater are not supported by D3D12.\n"); + return AVERROR(EINVAL); + } + + ctx->rc_quality = rc_quality; + + av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name); + + if (rc_mode->quality) + av_log(avctx, AV_LOG_VERBOSE, "RC quality: %d.\n", rc_quality); + + if (rc_mode->hrd) { + av_log(avctx, AV_LOG_VERBOSE, "RC buffer: %"PRId64" bits, " + "initial fullness %"PRId64" bits.\n", + hrd_buffer_size, hrd_initial_buffer_fullness); + } + + if (avctx->framerate.num > 0 && avctx->framerate.den > 0) + av_reduce(&fr_num, &fr_den, + avctx->framerate.num, avctx->framerate.den, 65535); + else + av_reduce(&fr_num, &fr_den, + avctx->time_base.den, avctx->time_base.num, 65535); + + av_log(avctx, AV_LOG_VERBOSE, "RC framerate: %d/%d (%.2f fps).\n", + fr_num, fr_den, (double)fr_num / fr_den); + + ctx->rc.Flags = D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_NONE; + ctx->rc.TargetFrameRate.Numerator = fr_num; + ctx->rc.TargetFrameRate.Denominator = fr_den; + ctx->rc.Mode = rc_mode->d3d12_mode; + + switch (rc_mode->mode) { + case RC_MODE_CQP: + // cqp ConfigParams will be updated in ctx->codec->configure. 
+ break; + + case RC_MODE_CBR: + D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR *cbr_ctl; + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR); + cbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!cbr_ctl) + return AVERROR(ENOMEM); + + cbr_ctl->TargetBitRate = rc_target_bitrate; + cbr_ctl->VBVCapacity = hrd_buffer_size; + cbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES; + + if (avctx->qmin > 0 || avctx->qmax > 0) { + cbr_ctl->MinQP = avctx->qmin; + cbr_ctl->MaxQP = avctx->qmax; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE; + } + + ctx->rc.ConfigParams.pConfiguration_CBR = cbr_ctl; + break; + + case RC_MODE_VBR: + D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR *vbr_ctl; + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR); + vbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!vbr_ctl) + return AVERROR(ENOMEM); + + vbr_ctl->TargetAvgBitRate = rc_target_bitrate; + vbr_ctl->PeakBitRate = rc_peak_bitrate; + vbr_ctl->VBVCapacity = hrd_buffer_size; + vbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES; + + if (avctx->qmin > 0 || avctx->qmax > 0) { + vbr_ctl->MinQP = avctx->qmin; + vbr_ctl->MaxQP = avctx->qmax; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE; + } + + ctx->rc.ConfigParams.pConfiguration_VBR = vbr_ctl; + break; + + case RC_MODE_QVBR: + D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR *qvbr_ctl; + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR); + qvbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!qvbr_ctl) + return AVERROR(ENOMEM); + + qvbr_ctl->TargetAvgBitRate = rc_target_bitrate; + qvbr_ctl->PeakBitRate = rc_peak_bitrate; + qvbr_ctl->ConstantQualityTarget = rc_quality; + + if (avctx->qmin > 0 || avctx->qmax > 0) { + qvbr_ctl->MinQP = avctx->qmin; + qvbr_ctl->MaxQP = avctx->qmax; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE; + } + + ctx->rc.ConfigParams.pConfiguration_QVBR = qvbr_ctl; + break; + + default: + break; + } + return 0; +} + +static int d3d12va_encode_init_gop_structure(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + uint32_t ref_l0, ref_l1; + int err; + HRESULT hr; + D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT support; + union { + D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_H264 h264; + D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_HEVC hevc; + } codec_support; + + support.NodeIndex = 0; + support.Codec = ctx->codec->d3d12_codec; + support.Profile = ctx->profile->d3d12_profile; + + switch (ctx->codec->d3d12_codec) { + case D3D12_VIDEO_ENCODER_CODEC_H264: + support.PictureSupport.DataSize = sizeof(codec_support.h264); + support.PictureSupport.pH264Support = &codec_support.h264; + break; + + case D3D12_VIDEO_ENCODER_CODEC_HEVC: + support.PictureSupport.DataSize = sizeof(codec_support.hevc); + support.PictureSupport.pHEVCSupport = &codec_support.hevc; + break; + + default: + av_assert0(0); + } + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT, + &support, sizeof(support)); + if (FAILED(hr)) + return AVERROR(EINVAL); + + if (support.IsSupported) { + switch (ctx->codec->d3d12_codec) { + case D3D12_VIDEO_ENCODER_CODEC_H264: + ref_l0 = 
FFMIN(support.PictureSupport.pH264Support->MaxL0ReferencesForP, + support.PictureSupport.pH264Support->MaxL1ReferencesForB); + ref_l1 = support.PictureSupport.pH264Support->MaxL1ReferencesForB; + break; + + case D3D12_VIDEO_ENCODER_CODEC_HEVC: + ref_l0 = FFMIN(support.PictureSupport.pHEVCSupport->MaxL0ReferencesForP, + support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB); + ref_l1 = support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB; + break; + + default: + av_assert0(0); + } + } else { + ref_l0 = ref_l1 = 0; + } + + if (ref_l0 > 0 && ref_l1 > 0 && ctx->bi_not_empty) { + base_ctx->p_to_gpb = 1; + av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, " + "replacing them with B-frames.\n"); + } + + err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, 0); + if (err < 0) + return err; + + return 0; +} + +static int d3d12va_create_encoder(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx; + HRESULT hr; + + D3D12_VIDEO_ENCODER_DESC desc = { + .NodeMask = 0, + .Flags = D3D12_VIDEO_ENCODER_FLAG_NONE, + .EncodeCodec = ctx->codec->d3d12_codec, + .EncodeProfile = ctx->profile->d3d12_profile, + .InputFormat = frames_hwctx->format, + .CodecConfiguration = ctx->codec_conf, + .MaxMotionEstimationPrecision = D3D12_VIDEO_ENCODER_MOTION_ESTIMATION_PRECISION_MODE_MAXIMUM, + }; + + hr = ID3D12VideoDevice3_CreateVideoEncoder(ctx->video_device3, &desc, &IID_ID3D12VideoEncoder, + (void **)&ctx->encoder); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create encoder.\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static int d3d12va_create_encoder_heap(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + HRESULT hr; + + D3D12_VIDEO_ENCODER_HEAP_DESC desc = { + .NodeMask = 0, + .Flags = D3D12_VIDEO_ENCODER_FLAG_NONE, + .EncodeCodec = ctx->codec->d3d12_codec, + .EncodeProfile = ctx->profile->d3d12_profile, + .EncodeLevel = ctx->level, + .ResolutionsListCount = 1, + .pResolutionList = &ctx->resolution, + }; + + hr = ID3D12VideoDevice3_CreateVideoEncoderHeap(ctx->video_device3, &desc, + &IID_ID3D12VideoEncoderHeap, (void **)&ctx->encoder_heap); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create encoder heap.\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static void d3d12va_encode_free_buffer(void *opaque, uint8_t *data) +{ + ID3D12Resource *pResource; + + pResource = (ID3D12Resource *)data; + D3D12_OBJECT_RELEASE(pResource); +} + +static AVBufferRef *d3d12va_encode_alloc_output_buffer(void *opaque, size_t size) +{ + AVCodecContext *avctx = opaque; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + ID3D12Resource *pResource = NULL; + HRESULT hr; + AVBufferRef *ref; + D3D12_HEAP_PROPERTIES heap_props; + D3D12_HEAP_TYPE heap_type = D3D12_HEAP_TYPE_READBACK; + + D3D12_RESOURCE_DESC desc = { + .Dimension = D3D12_RESOURCE_DIMENSION_BUFFER, + .Alignment = 0, + .Width = FFALIGN(3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16), + D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT), + .Height = 1, + .DepthOrArraySize = 1, + .MipLevels = 1, + .Format = DXGI_FORMAT_UNKNOWN, + .SampleDesc = { .Count = 1, .Quality = 0 }, + .Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR, + .Flags = D3D12_RESOURCE_FLAG_NONE, + }; + + ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &heap_props, 0, heap_type); + + 
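+    // The output buffer lives in a READBACK heap, so the coded bitstream can
+    // be mapped and copied on the CPU once the encode command has completed.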
hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &heap_props, D3D12_HEAP_FLAG_NONE, + &desc, D3D12_RESOURCE_STATE_COMMON, NULL, &IID_ID3D12Resource, + (void **)&pResource); + + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create d3d12 buffer.\n"); + return NULL; + } + + ref = av_buffer_create((uint8_t *)(uintptr_t)pResource, + sizeof(pResource), + &d3d12va_encode_free_buffer, + avctx, AV_BUFFER_FLAG_READONLY); + if (!ref) { + D3D12_OBJECT_RELEASE(pResource); + return NULL; + } + + return ref; +} + +static int d3d12va_encode_prepare_output_buffers(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + AVD3D12VAFramesContext *frames_ctx = base_ctx->input_frames->hwctx; + HRESULT hr; + + ctx->req.NodeIndex = 0; + ctx->req.Codec = ctx->codec->d3d12_codec; + ctx->req.Profile = ctx->profile->d3d12_profile; + ctx->req.InputFormat = frames_ctx->format; + ctx->req.PictureTargetResolution = ctx->resolution; + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, + D3D12_FEATURE_VIDEO_ENCODER_RESOURCE_REQUIREMENTS, + &ctx->req, sizeof(ctx->req)); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to check encoder resource requirements support.\n"); + return AVERROR(EINVAL); + } + + if (!ctx->req.IsSupported) { + av_log(avctx, AV_LOG_ERROR, "Encoder resource requirements unsupported.\n"); + return AVERROR(EINVAL); + } + + ctx->output_buffer_pool = av_buffer_pool_init2(sizeof(ID3D12Resource *), avctx, + &d3d12va_encode_alloc_output_buffer, NULL); + if (!ctx->output_buffer_pool) + return AVERROR(ENOMEM); + + return 0; +} + +static int d3d12va_encode_create_command_objects(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + ID3D12CommandAllocator *command_allocator = NULL; + int err; + HRESULT hr; + + D3D12_COMMAND_QUEUE_DESC queue_desc = { + .Type = D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE, + .Priority = 0, + .Flags = D3D12_COMMAND_QUEUE_FLAG_NONE, + .NodeMask = 0, + }; + + ctx->allocator_queue = av_fifo_alloc2(D3D12VA_VIDEO_ENC_ASYNC_DEPTH, + sizeof(CommandAllocator), AV_FIFO_FLAG_AUTO_GROW); + if (!ctx->allocator_queue) + return AVERROR(ENOMEM); + + hr = ID3D12Device_CreateFence(ctx->hwctx->device, 0, D3D12_FENCE_FLAG_NONE, + &IID_ID3D12Fence, (void **)&ctx->sync_ctx.fence); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create fence(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + ctx->sync_ctx.event = CreateEvent(NULL, FALSE, FALSE, NULL); + if (!ctx->sync_ctx.event) + goto fail; + + err = d3d12va_get_valid_command_allocator(avctx, &command_allocator); + if (err < 0) + goto fail; + + hr = ID3D12Device_CreateCommandQueue(ctx->hwctx->device, &queue_desc, + &IID_ID3D12CommandQueue, (void **)&ctx->command_queue); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create command queue(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12Device_CreateCommandList(ctx->hwctx->device, 0, queue_desc.Type, + command_allocator, NULL, &IID_ID3D12CommandList, + (void **)&ctx->command_list); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create command list(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12VideoEncodeCommandList2_Close(ctx->command_list); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to close the command list(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, 
(ID3D12CommandList **)&ctx->command_list); + + err = d3d12va_sync_with_gpu(avctx); + if (err < 0) + goto fail; + + err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value); + if (err < 0) + goto fail; + + return 0; + +fail: + D3D12_OBJECT_RELEASE(command_allocator); + return err; +} + +static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + AVD3D12VAFramesContext *hwctx; + enum AVPixelFormat recon_format; + int err; + + err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format); + if (err < 0) + return err; + + base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); + if (!base_ctx->recon_frames_ref) + return AVERROR(ENOMEM); + + base_ctx->recon_frames = (AVHWFramesContext *)base_ctx->recon_frames_ref->data; + hwctx = (AVD3D12VAFramesContext *)base_ctx->recon_frames->hwctx; + + base_ctx->recon_frames->format = AV_PIX_FMT_D3D12; + base_ctx->recon_frames->sw_format = recon_format; + base_ctx->recon_frames->width = base_ctx->surface_width; + base_ctx->recon_frames->height = base_ctx->surface_height; + + hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY | + D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE; + + err = av_hwframe_ctx_init(base_ctx->recon_frames_ref); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed " + "frame context: %d.\n", err); + return err; + } + + return 0; +} + +static const FFHWEncodePictureOperation d3d12va_type = { + .alloc = &d3d12va_encode_alloc, + + .issue = &d3d12va_encode_issue, + + .output = &d3d12va_encode_output, + + .free = &d3d12va_encode_free, +}; + +int ff_d3d12va_encode_init(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12_FEATURE_DATA_VIDEO_FEATURE_AREA_SUPPORT support = { 0 }; + int err; + HRESULT hr; + + err = ff_hw_base_encode_init(avctx); + if (err < 0) + goto fail; + + base_ctx->op = &d3d12va_type; + + ctx->hwctx = base_ctx->device->hwctx; + + ctx->resolution.Width = base_ctx->input_frames->width; + ctx->resolution.Height = base_ctx->input_frames->height; + + hr = ID3D12Device_QueryInterface(ctx->hwctx->device, &IID_ID3D12Device3, (void **)&ctx->device3); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "ID3D12Device3 interface is not supported.\n"); + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12Device3_QueryInterface(ctx->device3, &IID_ID3D12VideoDevice3, (void **)&ctx->video_device3); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "ID3D12VideoDevice3 interface is not supported.\n"); + err = AVERROR_UNKNOWN; + goto fail; + } + + if (FAILED(ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_FEATURE_AREA_SUPPORT, + &support, sizeof(support))) && !support.VideoEncodeSupport) { + av_log(avctx, AV_LOG_ERROR, "D3D12 video device has no video encoder support.\n"); + err = AVERROR(EINVAL); + goto fail; + } + + err = d3d12va_encode_set_profile(avctx); + if (err < 0) + goto fail; + + err = d3d12va_encode_init_rate_control(avctx); + if (err < 0) + goto fail; + + if (ctx->codec->get_encoder_caps) { + err = ctx->codec->get_encoder_caps(avctx); + if (err < 0) + goto fail; + } + + err = d3d12va_encode_init_gop_structure(avctx); + if (err < 0) + goto fail; + + if (!(ctx->codec->flags & FF_HW_FLAG_SLICE_CONTROL) && avctx->slices > 0) { + av_log(avctx, AV_LOG_WARNING, "Multiple slices were requested " + "but this codec does not support controlling slices.\n"); + } + + err = 
d3d12va_encode_create_command_objects(avctx); + if (err < 0) + goto fail; + + err = d3d12va_encode_create_recon_frames(avctx); + if (err < 0) + goto fail; + + err = d3d12va_encode_prepare_output_buffers(avctx); + if (err < 0) + goto fail; + + if (ctx->codec->configure) { + err = ctx->codec->configure(avctx); + if (err < 0) + goto fail; + } + + if (ctx->codec->init_sequence_params) { + err = ctx->codec->init_sequence_params(avctx); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Codec sequence initialisation " + "failed: %d.\n", err); + goto fail; + } + } + + if (ctx->codec->set_level) { + err = ctx->codec->set_level(avctx); + if (err < 0) + goto fail; + } + + base_ctx->output_delay = base_ctx->b_per_p; + base_ctx->decode_delay = base_ctx->max_b_depth; + + err = d3d12va_create_encoder(avctx); + if (err < 0) + goto fail; + + err = d3d12va_create_encoder_heap(avctx); + if (err < 0) + goto fail; + + base_ctx->async_encode = 1; + base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, + sizeof(D3D12VAEncodePicture *), 0); + if (!base_ctx->encode_fifo) + return AVERROR(ENOMEM); + + return 0; + +fail: + return err; +} + +int ff_d3d12va_encode_close(AVCodecContext *avctx) +{ + int num_allocator = 0; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + FFHWBaseEncodePicture *pic, *next; + CommandAllocator allocator; + + if (!base_ctx->frame) + return 0; + + for (pic = base_ctx->pic_start; pic; pic = next) { + next = pic->next; + d3d12va_encode_free(avctx, pic); + } + + d3d12va_encode_free_rc_params(avctx); + + av_buffer_pool_uninit(&ctx->output_buffer_pool); + + D3D12_OBJECT_RELEASE(ctx->command_list); + D3D12_OBJECT_RELEASE(ctx->command_queue); + + if (ctx->allocator_queue) { + while (av_fifo_read(ctx->allocator_queue, &allocator, 1) >= 0) { + num_allocator++; + D3D12_OBJECT_RELEASE(allocator.command_allocator); + } + + av_log(avctx, AV_LOG_VERBOSE, "Total number of command allocators reused: %d\n", num_allocator); + } + + av_fifo_freep2(&ctx->allocator_queue); + + D3D12_OBJECT_RELEASE(ctx->sync_ctx.fence); + if (ctx->sync_ctx.event) + CloseHandle(ctx->sync_ctx.event); + + D3D12_OBJECT_RELEASE(ctx->encoder_heap); + D3D12_OBJECT_RELEASE(ctx->encoder); + D3D12_OBJECT_RELEASE(ctx->video_device3); + D3D12_OBJECT_RELEASE(ctx->device3); + + ff_hw_base_encode_close(avctx); + + return 0; +} diff --git a/libavcodec/d3d12va_encode.h b/libavcodec/d3d12va_encode.h new file mode 100644 index 0000000000..f355261f66 --- /dev/null +++ b/libavcodec/d3d12va_encode.h @@ -0,0 +1,334 @@ +/* + * Direct3D 12 HW acceleration video encoder + * + * Copyright (c) 2024 Intel Corporation + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef AVCODEC_D3D12VA_ENCODE_H +#define AVCODEC_D3D12VA_ENCODE_H + +#include "libavutil/fifo.h" +#include "libavutil/hwcontext.h" +#include "libavutil/hwcontext_d3d12va_internal.h" +#include "libavutil/hwcontext_d3d12va.h" +#include "avcodec.h" +#include "internal.h" +#include "hwconfig.h" +#include "hw_base_encode.h" + +struct D3D12VAEncodeType; + +extern const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[]; + +#define MAX_PARAM_BUFFER_SIZE 4096 +#define D3D12VA_VIDEO_ENC_ASYNC_DEPTH 8 + +typedef struct D3D12VAEncodePicture { + FFHWBaseEncodePicture base; + + int header_size; + + AVD3D12VAFrame *input_surface; + AVD3D12VAFrame *recon_surface; + + AVBufferRef *output_buffer_ref; + ID3D12Resource *output_buffer; + + ID3D12Resource *encoded_metadata; + ID3D12Resource *resolved_metadata; + + D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA pic_ctl; + + int fence_value; +} D3D12VAEncodePicture; + +typedef struct D3D12VAEncodeProfile { + /** + * lavc profile value (AV_PROFILE_*). + */ + int av_profile; + + /** + * Supported bit depth. + */ + int depth; + + /** + * Number of components. + */ + int nb_components; + + /** + * Chroma subsampling in width dimension. + */ + int log2_chroma_w; + + /** + * Chroma subsampling in height dimension. + */ + int log2_chroma_h; + + /** + * D3D12 profile value. + */ + D3D12_VIDEO_ENCODER_PROFILE_DESC d3d12_profile; +} D3D12VAEncodeProfile; + +enum { + RC_MODE_AUTO, + RC_MODE_CQP, + RC_MODE_CBR, + RC_MODE_VBR, + RC_MODE_QVBR, + RC_MODE_MAX = RC_MODE_QVBR, +}; + + +typedef struct D3D12VAEncodeRCMode { + /** + * Mode from above enum (RC_MODE_*). + */ + int mode; + + /** + * Name. + * + */ + const char *name; + + /** + * Uses bitrate parameters. + * + */ + int bitrate; + + /** + * Supports maxrate distinct from bitrate. + * + */ + int maxrate; + + /** + * Uses quality value. + * + */ + int quality; + + /** + * Supports HRD/VBV parameters. + * + */ + int hrd; + + /** + * D3D12 mode value. + */ + D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_mode; +} D3D12VAEncodeRCMode; + +typedef struct D3D12VAEncodeContext { + FFHWBaseEncodeContext base; + + /** + * Codec-specific hooks. + */ + const struct D3D12VAEncodeType *codec; + + /** + * Explicitly set RC mode (otherwise attempt to pick from + * available modes). + */ + int explicit_rc_mode; + + /** + * Explicitly-set QP, for use with the "qp" options. + * (Forces CQP mode when set, overriding everything else.) + */ + int explicit_qp; + + /** + * RC quality level - meaning depends on codec and RC mode. + * In CQP mode this sets the fixed quantiser value. + */ + int rc_quality; + + /** + * Chosen encoding profile details. + */ + const D3D12VAEncodeProfile *profile; + + AVD3D12VADeviceContext *hwctx; + + /** + * ID3D12Device3 interface. + */ + ID3D12Device3 *device3; + + /** + * ID3D12VideoDevice3 interface. + */ + ID3D12VideoDevice3 *video_device3; + + /** + * Pool of (reusable) bitstream output buffers. + */ + AVBufferPool *output_buffer_pool; + + /** + * D3D12 video encoder. + */ + AVBufferRef *encoder_ref; + + ID3D12VideoEncoder *encoder; + + /** + * D3D12 video encoder heap. + */ + ID3D12VideoEncoderHeap *encoder_heap; + + /** + * A cached queue for reusing the D3D12 command allocators. 
+ * + * @see https://learn.microsoft.com/en-us/windows/win32/direct3d12/recording-command-lists-and-bundles#id3d12commandallocator + */ + AVFifo *allocator_queue; + + /** + * D3D12 command queue. + */ + ID3D12CommandQueue *command_queue; + + /** + * D3D12 video encode command list. + */ + ID3D12VideoEncodeCommandList2 *command_list; + + /** + * The sync context used to sync command queue. + */ + AVD3D12VASyncContext sync_ctx; + + /** + * The bi_not_empty feature. + */ + int bi_not_empty; + + /** + * D3D12_FEATURE structures. + */ + D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req; + + D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOLUTION_SUPPORT_LIMITS res_limits; + + /** + * D3D12_VIDEO_ENCODER structures. + */ + D3D12_VIDEO_ENCODER_PICTURE_RESOLUTION_DESC resolution; + + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION codec_conf; + + D3D12_VIDEO_ENCODER_RATE_CONTROL rc; + + D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE gop; + + D3D12_VIDEO_ENCODER_LEVEL_SETTING level; +} D3D12VAEncodeContext; + +typedef struct D3D12VAEncodeType { + /** + * List of supported profiles. + */ + const D3D12VAEncodeProfile *profiles; + + /** + * D3D12 codec name. + */ + D3D12_VIDEO_ENCODER_CODEC d3d12_codec; + + /** + * Codec feature flags. + */ + int flags; + + /** + * Default quality for this codec - used as quantiser or RC quality + * factor depending on RC mode. + */ + int default_quality; + + /** + * Query codec configuration and determine encode parameters like + * block sizes for surface alignment and slices. If not set, assume + * that all blocks are 16x16 and that surfaces should be aligned to match + * this. + */ + int (*get_encoder_caps)(AVCodecContext *avctx); + + /** + * Perform any extra codec-specific configuration. + */ + int (*configure)(AVCodecContext *avctx); + + /** + * Set codec-specific level setting. + */ + int (*set_level)(AVCodecContext *avctx); + + /** + * The size of any private data structure associated with each + * picture (can be zero if not required). + */ + size_t picture_priv_data_size; + + /** + * Fill the corresponding parameters. + */ + int (*init_sequence_params)(AVCodecContext *avctx); + + int (*init_picture_params)(AVCodecContext *avctx, + D3D12VAEncodePicture *pic); + + void (*free_picture_params)(D3D12VAEncodePicture *pic); + + /** + * Write the packed header data to the provided buffer. 
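+     * The data_len argument is measured in bits: on input it gives the size
+     * of the available buffer, on return the number of valid bits written.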
+ */ + int (*write_sequence_header)(AVCodecContext *avctx, + char *data, size_t *data_len); +} D3D12VAEncodeType; + +int ff_d3d12va_encode_init(AVCodecContext *avctx); +int ff_d3d12va_encode_close(AVCodecContext *avctx); + +#define D3D12VA_ENCODE_RC_MODE(name, desc) \ + { #name, desc, 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_ ## name }, \ + 0, 0, FLAGS, .unit = "rc_mode" } +#define D3D12VA_ENCODE_RC_OPTIONS \ + { "rc_mode",\ + "Set rate control mode", \ + OFFSET(common.explicit_rc_mode), AV_OPT_TYPE_INT, \ + { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit = "rc_mode" }, \ + { "auto", "Choose mode automatically based on other parameters", \ + 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \ + D3D12VA_ENCODE_RC_MODE(CQP, "Constant-quality"), \ + D3D12VA_ENCODE_RC_MODE(CBR, "Constant-bitrate"), \ + D3D12VA_ENCODE_RC_MODE(VBR, "Variable-bitrate"), \ + D3D12VA_ENCODE_RC_MODE(QVBR, "Quality-defined variable-bitrate") + +#endif /* AVCODEC_D3D12VA_ENCODE_H */ diff --git a/libavcodec/d3d12va_encode_hevc.c b/libavcodec/d3d12va_encode_hevc.c new file mode 100644 index 0000000000..24823b3c56 --- /dev/null +++ b/libavcodec/d3d12va_encode_hevc.c @@ -0,0 +1,1007 @@ +/* + * Direct3D 12 HW acceleration video encoder + * + * Copyright (c) 2024 Intel Corporation + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "libavutil/opt.h" +#include "libavutil/common.h" +#include "libavutil/mem.h" +#include "libavutil/pixdesc.h" +#include "libavutil/hwcontext_d3d12va_internal.h" + +#include "avcodec.h" +#include "cbs.h" +#include "cbs_h265.h" +#include "h2645data.h" +#include "h265_profile_level.h" +#include "codec_internal.h" +#include "d3d12va_encode.h" + +typedef struct D3D12VAEncodeHEVCPicture { + int pic_order_cnt; + int64_t last_idr_frame; +} D3D12VAEncodeHEVCPicture; + +typedef struct D3D12VAEncodeHEVCContext { + D3D12VAEncodeContext common; + + // User options. + int qp; + int profile; + int tier; + int level; + + // Writer structures. 
+ H265RawVPS raw_vps; + H265RawSPS raw_sps; + H265RawPPS raw_pps; + + CodedBitstreamContext *cbc; + CodedBitstreamFragment current_access_unit; +} D3D12VAEncodeHEVCContext; + +typedef struct D3D12VAEncodeHEVCLevel { + int level; + D3D12_VIDEO_ENCODER_LEVELS_HEVC d3d12_level; +} D3D12VAEncodeHEVCLevel; + +static const D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_config_support_sets[] = +{ + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 3, + 3, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 0, + 0, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 2, + 2, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 2, + 2, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 4, + 4, + }, +}; + +static const D3D12VAEncodeHEVCLevel hevc_levels[] = { + { 30, D3D12_VIDEO_ENCODER_LEVELS_HEVC_1 }, + { 60, D3D12_VIDEO_ENCODER_LEVELS_HEVC_2 }, + { 63, D3D12_VIDEO_ENCODER_LEVELS_HEVC_21 }, + { 90, D3D12_VIDEO_ENCODER_LEVELS_HEVC_3 }, + { 93, D3D12_VIDEO_ENCODER_LEVELS_HEVC_31 }, + { 120, D3D12_VIDEO_ENCODER_LEVELS_HEVC_4 }, + { 123, D3D12_VIDEO_ENCODER_LEVELS_HEVC_41 }, + { 150, D3D12_VIDEO_ENCODER_LEVELS_HEVC_5 }, + { 153, D3D12_VIDEO_ENCODER_LEVELS_HEVC_51 }, + { 156, D3D12_VIDEO_ENCODER_LEVELS_HEVC_52 }, + { 180, D3D12_VIDEO_ENCODER_LEVELS_HEVC_6 }, + { 183, D3D12_VIDEO_ENCODER_LEVELS_HEVC_61 }, + { 186, D3D12_VIDEO_ENCODER_LEVELS_HEVC_62 }, +}; + +static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN; +static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main10 = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN10; + +#define D3D_PROFILE_DESC(name) \ + { sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), { .pHEVCProfile = (D3D12_VIDEO_ENCODER_PROFILE_HEVC *)&profile_ ## name } } +static const D3D12VAEncodeProfile d3d12va_encode_hevc_profiles[] = { + { AV_PROFILE_HEVC_MAIN, 8, 3, 1, 1, D3D_PROFILE_DESC(main) }, + { AV_PROFILE_HEVC_MAIN_10, 10, 3, 1, 1, D3D_PROFILE_DESC(main10) }, + { AV_PROFILE_UNKNOWN }, +}; + +static uint8_t d3d12va_encode_hevc_map_cusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE cusize) +{ + switch (cusize) { + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8: return 8; + case 
D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_16x16: return 16; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32: return 32; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64: return 64; + default: av_assert0(0); + } + return 0; +} + +static uint8_t d3d12va_encode_hevc_map_tusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE tusize) +{ + switch (tusize) { + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4: return 4; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_8x8: return 8; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_16x16: return 16; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32: return 32; + default: av_assert0(0); + } + return 0; +} + +static int d3d12va_encode_hevc_write_access_unit(AVCodecContext *avctx, + char *data, size_t *data_len, + CodedBitstreamFragment *au) +{ + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + int err; + + err = ff_cbs_write_fragment_data(priv->cbc, au); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to write packed header.\n"); + return err; + } + + if (*data_len < 8 * au->data_size - au->data_bit_padding) { + av_log(avctx, AV_LOG_ERROR, "Access unit too large: " + "%zu < %zu.\n", *data_len, + 8 * au->data_size - au->data_bit_padding); + return AVERROR(ENOSPC); + } + + memcpy(data, au->data, au->data_size); + *data_len = 8 * au->data_size - au->data_bit_padding; + + return 0; +} + +static int d3d12va_encode_hevc_add_nal(AVCodecContext *avctx, + CodedBitstreamFragment *au, + void *nal_unit) +{ + H265RawNALUnitHeader *header = nal_unit; + int err; + + err = ff_cbs_insert_unit_content(au, -1, + header->nal_unit_type, nal_unit, NULL); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to add NAL unit: " + "type = %d.\n", header->nal_unit_type); + return err; + } + + return 0; +} + +static int d3d12va_encode_hevc_write_sequence_header(AVCodecContext *avctx, + char *data, size_t *data_len) +{ + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + CodedBitstreamFragment *au = &priv->current_access_unit; + int err; + + err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_vps); + if (err < 0) + goto fail; + + err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_sps); + if (err < 0) + goto fail; + + err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_pps); + if (err < 0) + goto fail; + + err = d3d12va_encode_hevc_write_access_unit(avctx, data, data_len, au); +fail: + ff_cbs_fragment_reset(au); + return err; + +} + +static int d3d12va_encode_hevc_init_sequence_params(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + AVD3D12VAFramesContext *hwctx = base_ctx->input_frames->hwctx; + H265RawVPS *vps = &priv->raw_vps; + H265RawSPS *sps = &priv->raw_sps; + H265RawPPS *pps = &priv->raw_pps; + H265RawProfileTierLevel *ptl = &vps->profile_tier_level; + H265RawVUI *vui = &sps->vui; + D3D12_VIDEO_ENCODER_PROFILE_HEVC profile = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN; + D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC level = { 0 }; + const AVPixFmtDescriptor *desc; + uint8_t min_cu_size, max_cu_size, min_tu_size, max_tu_size; + int chroma_format, bit_depth; + HRESULT hr; + int i; + + D3D12_FEATURE_DATA_VIDEO_ENCODER_SUPPORT support = { + .NodeIndex = 0, + .Codec = D3D12_VIDEO_ENCODER_CODEC_HEVC, + .InputFormat = hwctx->format, + .RateControl = ctx->rc, + .IntraRefresh = D3D12_VIDEO_ENCODER_INTRA_REFRESH_MODE_NONE, + 
.SubregionFrameEncoding = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME, + .ResolutionsListCount = 1, + .pResolutionList = &ctx->resolution, + .CodecGopSequence = ctx->gop, + .MaxReferenceFramesInDPB = MAX_DPB_SIZE - 1, + .CodecConfiguration = ctx->codec_conf, + .SuggestedProfile.DataSize = sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), + .SuggestedProfile.pHEVCProfile = &profile, + .SuggestedLevel.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC), + .SuggestedLevel.pHEVCLevelSetting = &level, + .pResolutionDependentSupport = &ctx->res_limits, + }; + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_SUPPORT, + &support, sizeof(support)); + + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to check encoder support(%lx).\n", (long)hr); + return AVERROR(EINVAL); + } + + if (!(support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_GENERAL_SUPPORT_OK)) { + av_log(avctx, AV_LOG_ERROR, "Driver does not support some request features. %#x\n", + support.ValidationFlags); + return AVERROR(EINVAL); + } + + if (support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_RECONSTRUCTED_FRAMES_REQUIRE_TEXTURE_ARRAYS) { + av_log(avctx, AV_LOG_ERROR, "D3D12 video encode on this device requires texture array support, " + "but it's not implemented.\n"); + return AVERROR_PATCHWELCOME; + } + + memset(vps, 0, sizeof(*vps)); + memset(sps, 0, sizeof(*sps)); + memset(pps, 0, sizeof(*pps)); + + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); + av_assert0(desc); + if (desc->nb_components == 1) { + chroma_format = 0; + } else { + if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1) { + chroma_format = 1; + } else if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 0) { + chroma_format = 2; + } else if (desc->log2_chroma_w == 0 && desc->log2_chroma_h == 0) { + chroma_format = 3; + } else { + av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format " + "%s is not supported.\n", desc->name); + return AVERROR(EINVAL); + } + } + bit_depth = desc->comp[0].depth; + + min_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MinLumaCodingUnitSize); + max_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MaxLumaCodingUnitSize); + min_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MinLumaTransformUnitSize); + max_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MaxLumaTransformUnitSize); + + // VPS + + vps->nal_unit_header = (H265RawNALUnitHeader) { + .nal_unit_type = HEVC_NAL_VPS, + .nuh_layer_id = 0, + .nuh_temporal_id_plus1 = 1, + }; + + vps->vps_video_parameter_set_id = 0; + + vps->vps_base_layer_internal_flag = 1; + vps->vps_base_layer_available_flag = 1; + vps->vps_max_layers_minus1 = 0; + vps->vps_max_sub_layers_minus1 = 0; + vps->vps_temporal_id_nesting_flag = 1; + + ptl->general_profile_space = 0; + ptl->general_profile_idc = avctx->profile; + ptl->general_tier_flag = priv->tier; + + ptl->general_profile_compatibility_flag[ptl->general_profile_idc] = 1; + + ptl->general_progressive_source_flag = 1; + ptl->general_interlaced_source_flag = 0; + ptl->general_non_packed_constraint_flag = 1; + ptl->general_frame_only_constraint_flag = 1; + + ptl->general_max_14bit_constraint_flag = bit_depth <= 14; + ptl->general_max_12bit_constraint_flag = bit_depth <= 12; + ptl->general_max_10bit_constraint_flag = bit_depth <= 10; + ptl->general_max_8bit_constraint_flag = bit_depth == 8; + + ptl->general_max_422chroma_constraint_flag = chroma_format <= 2; + 
ptl->general_max_420chroma_constraint_flag = chroma_format <= 1; + ptl->general_max_monochrome_constraint_flag = chroma_format == 0; + + ptl->general_intra_constraint_flag = base_ctx->gop_size == 1; + ptl->general_one_picture_only_constraint_flag = 0; + + ptl->general_lower_bit_rate_constraint_flag = 1; + + if (avctx->level != FF_LEVEL_UNKNOWN) { + ptl->general_level_idc = avctx->level; + } else { + const H265LevelDescriptor *level; + + level = ff_h265_guess_level(ptl, avctx->bit_rate, + base_ctx->surface_width, base_ctx->surface_height, + 1, 1, 1, (base_ctx->b_per_p > 0) + 1); + if (level) { + av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name); + ptl->general_level_idc = level->level_idc; + } else { + av_log(avctx, AV_LOG_VERBOSE, "Stream will not conform to " + "any normal level; using level 8.5.\n"); + ptl->general_level_idc = 255; + // The tier flag must be set in level 8.5. + ptl->general_tier_flag = 1; + } + avctx->level = ptl->general_level_idc; + } + + vps->vps_sub_layer_ordering_info_present_flag = 0; + vps->vps_max_dec_pic_buffering_minus1[0] = base_ctx->max_b_depth + 1; + vps->vps_max_num_reorder_pics[0] = base_ctx->max_b_depth; + vps->vps_max_latency_increase_plus1[0] = 0; + + vps->vps_max_layer_id = 0; + vps->vps_num_layer_sets_minus1 = 0; + vps->layer_id_included_flag[0][0] = 1; + + vps->vps_timing_info_present_flag = 1; + if (avctx->framerate.num > 0 && avctx->framerate.den > 0) { + vps->vps_num_units_in_tick = avctx->framerate.den; + vps->vps_time_scale = avctx->framerate.num; + vps->vps_poc_proportional_to_timing_flag = 1; + vps->vps_num_ticks_poc_diff_one_minus1 = 0; + } else { + vps->vps_num_units_in_tick = avctx->time_base.num; + vps->vps_time_scale = avctx->time_base.den; + vps->vps_poc_proportional_to_timing_flag = 0; + } + vps->vps_num_hrd_parameters = 0; + + // SPS + + sps->nal_unit_header = (H265RawNALUnitHeader) { + .nal_unit_type = HEVC_NAL_SPS, + .nuh_layer_id = 0, + .nuh_temporal_id_plus1 = 1, + }; + + sps->sps_video_parameter_set_id = vps->vps_video_parameter_set_id; + + sps->sps_max_sub_layers_minus1 = vps->vps_max_sub_layers_minus1; + sps->sps_temporal_id_nesting_flag = vps->vps_temporal_id_nesting_flag; + + sps->profile_tier_level = vps->profile_tier_level; + + sps->sps_seq_parameter_set_id = 0; + + sps->chroma_format_idc = chroma_format; + sps->separate_colour_plane_flag = 0; + + av_assert0(ctx->res_limits.SubregionBlockPixelsSize % min_cu_size == 0); + + sps->pic_width_in_luma_samples = FFALIGN(base_ctx->surface_width, + ctx->res_limits.SubregionBlockPixelsSize); + sps->pic_height_in_luma_samples = FFALIGN(base_ctx->surface_height, + ctx->res_limits.SubregionBlockPixelsSize); + + if (avctx->width != sps->pic_width_in_luma_samples || + avctx->height != sps->pic_height_in_luma_samples) { + sps->conformance_window_flag = 1; + sps->conf_win_left_offset = 0; + sps->conf_win_right_offset = + (sps->pic_width_in_luma_samples - avctx->width) >> desc->log2_chroma_w; + sps->conf_win_top_offset = 0; + sps->conf_win_bottom_offset = + (sps->pic_height_in_luma_samples - avctx->height) >> desc->log2_chroma_h; + } else { + sps->conformance_window_flag = 0; + } + + sps->bit_depth_luma_minus8 = bit_depth - 8; + sps->bit_depth_chroma_minus8 = bit_depth - 8; + + sps->log2_max_pic_order_cnt_lsb_minus4 = ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4; + + sps->sps_sub_layer_ordering_info_present_flag = + vps->vps_sub_layer_ordering_info_present_flag; + for (i = 0; i <= sps->sps_max_sub_layers_minus1; i++) { + 
sps->sps_max_dec_pic_buffering_minus1[i] = + vps->vps_max_dec_pic_buffering_minus1[i]; + sps->sps_max_num_reorder_pics[i] = + vps->vps_max_num_reorder_pics[i]; + sps->sps_max_latency_increase_plus1[i] = + vps->vps_max_latency_increase_plus1[i]; + } + + sps->log2_min_luma_coding_block_size_minus3 = (uint8_t)(av_log2(min_cu_size) - 3); + sps->log2_diff_max_min_luma_coding_block_size = (uint8_t)(av_log2(max_cu_size) - av_log2(min_cu_size)); + sps->log2_min_luma_transform_block_size_minus2 = (uint8_t)(av_log2(min_tu_size) - 2); + sps->log2_diff_max_min_luma_transform_block_size = (uint8_t)(av_log2(max_tu_size) - av_log2(min_tu_size)); + + sps->max_transform_hierarchy_depth_inter = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_inter; + sps->max_transform_hierarchy_depth_intra = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_intra; + + sps->amp_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION); + sps->sample_adaptive_offset_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER); + sps->sps_temporal_mvp_enabled_flag = 0; + sps->pcm_enabled_flag = 0; + + sps->vui_parameters_present_flag = 1; + + if (avctx->sample_aspect_ratio.num != 0 && + avctx->sample_aspect_ratio.den != 0) { + int num, den, i; + av_reduce(&num, &den, avctx->sample_aspect_ratio.num, + avctx->sample_aspect_ratio.den, 65535); + for (i = 0; i < FF_ARRAY_ELEMS(ff_h2645_pixel_aspect); i++) { + if (num == ff_h2645_pixel_aspect[i].num && + den == ff_h2645_pixel_aspect[i].den) { + vui->aspect_ratio_idc = i; + break; + } + } + if (i >= FF_ARRAY_ELEMS(ff_h2645_pixel_aspect)) { + vui->aspect_ratio_idc = 255; + vui->sar_width = num; + vui->sar_height = den; + } + vui->aspect_ratio_info_present_flag = 1; + } + + // Unspecified video format, from table E-2. 
+ vui->video_format = 5; + vui->video_full_range_flag = + avctx->color_range == AVCOL_RANGE_JPEG; + vui->colour_primaries = avctx->color_primaries; + vui->transfer_characteristics = avctx->color_trc; + vui->matrix_coefficients = avctx->colorspace; + if (avctx->color_primaries != AVCOL_PRI_UNSPECIFIED || + avctx->color_trc != AVCOL_TRC_UNSPECIFIED || + avctx->colorspace != AVCOL_SPC_UNSPECIFIED) + vui->colour_description_present_flag = 1; + if (avctx->color_range != AVCOL_RANGE_UNSPECIFIED || + vui->colour_description_present_flag) + vui->video_signal_type_present_flag = 1; + + if (avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED) { + vui->chroma_loc_info_present_flag = 1; + vui->chroma_sample_loc_type_top_field = + vui->chroma_sample_loc_type_bottom_field = + avctx->chroma_sample_location - 1; + } + + vui->vui_timing_info_present_flag = 1; + vui->vui_num_units_in_tick = vps->vps_num_units_in_tick; + vui->vui_time_scale = vps->vps_time_scale; + vui->vui_poc_proportional_to_timing_flag = vps->vps_poc_proportional_to_timing_flag; + vui->vui_num_ticks_poc_diff_one_minus1 = vps->vps_num_ticks_poc_diff_one_minus1; + vui->vui_hrd_parameters_present_flag = 0; + + vui->bitstream_restriction_flag = 1; + vui->motion_vectors_over_pic_boundaries_flag = 1; + vui->restricted_ref_pic_lists_flag = 1; + vui->max_bytes_per_pic_denom = 0; + vui->max_bits_per_min_cu_denom = 0; + vui->log2_max_mv_length_horizontal = 15; + vui->log2_max_mv_length_vertical = 15; + + // PPS + + pps->nal_unit_header = (H265RawNALUnitHeader) { + .nal_unit_type = HEVC_NAL_PPS, + .nuh_layer_id = 0, + .nuh_temporal_id_plus1 = 1, + }; + + pps->pps_pic_parameter_set_id = 0; + pps->pps_seq_parameter_set_id = sps->sps_seq_parameter_set_id; + + pps->cabac_init_present_flag = 1; + + pps->num_ref_idx_l0_default_active_minus1 = 0; + pps->num_ref_idx_l1_default_active_minus1 = 0; + + pps->init_qp_minus26 = 0; + + pps->transform_skip_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING); + + // cu_qp_delta always required to be 1 in https://github.com/microsoft/DirectX-Specs/blob/master/d3d/D3D12VideoEncoding.md + pps->cu_qp_delta_enabled_flag = 1; + + pps->diff_cu_qp_delta_depth = 0; + + pps->pps_slice_chroma_qp_offsets_present_flag = 1; + + pps->tiles_enabled_flag = 0; // no tiling in D3D12 + + pps->pps_loop_filter_across_slices_enabled_flag = !(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES); + pps->deblocking_filter_control_present_flag = 1; + + return 0; +} + +static int d3d12va_encode_hevc_get_encoder_caps(AVCodecContext *avctx) +{ + int i; + HRESULT hr; + uint8_t min_cu_size, max_cu_size; + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC *config; + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_caps; + + D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT codec_caps = { + .NodeIndex = 0, + .Codec = D3D12_VIDEO_ENCODER_CODEC_HEVC, + .Profile = ctx->profile->d3d12_profile, + .CodecSupportLimits.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC), + }; + + for (i = 0; i < FF_ARRAY_ELEMS(hevc_config_support_sets); i++) { + hevc_caps = hevc_config_support_sets[i]; + codec_caps.CodecSupportLimits.pHEVCSupport = &hevc_caps; + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, 
D3D12_FEATURE_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT, + &codec_caps, sizeof(codec_caps)); + if (SUCCEEDED(hr) && codec_caps.IsSupported) + break; + } + + if (i == FF_ARRAY_ELEMS(hevc_config_support_sets)) { + av_log(avctx, AV_LOG_ERROR, "Unsupported codec configuration\n"); + return AVERROR(EINVAL); + } + + ctx->codec_conf.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC); + ctx->codec_conf.pHEVCConfig = av_mallocz(ctx->codec_conf.DataSize); + if (!ctx->codec_conf.pHEVCConfig) + return AVERROR(ENOMEM); + + config = ctx->codec_conf.pHEVCConfig; + + config->ConfigurationFlags = D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_NONE; + config->MinLumaCodingUnitSize = hevc_caps.MinLumaCodingUnitSize; + config->MaxLumaCodingUnitSize = hevc_caps.MaxLumaCodingUnitSize; + config->MinLumaTransformUnitSize = hevc_caps.MinLumaTransformUnitSize; + config->MaxLumaTransformUnitSize = hevc_caps.MaxLumaTransformUnitSize; + config->max_transform_hierarchy_depth_inter = hevc_caps.max_transform_hierarchy_depth_inter; + config->max_transform_hierarchy_depth_intra = hevc_caps.max_transform_hierarchy_depth_intra; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_SUPPORT || + hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_REQUIRED) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_SAO_FILTER_SUPPORT) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_DISABLING_LOOP_FILTER_ACROSS_SLICES_SUPPORT) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_TRANSFORM_SKIP_SUPPORT) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_P_FRAMES_IMPLEMENTED_AS_LOW_DELAY_B_FRAMES) + ctx->bi_not_empty = 1; + + // block sizes + min_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MinLumaCodingUnitSize); + max_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MaxLumaCodingUnitSize); + + av_log(avctx, AV_LOG_VERBOSE, "Using CTU size %dx%d, " + "min CB size %dx%d.\n", max_cu_size, max_cu_size, + min_cu_size, min_cu_size); + + base_ctx->surface_width = FFALIGN(avctx->width, min_cu_size); + base_ctx->surface_height = FFALIGN(avctx->height, min_cu_size); + + return 0; +} + +static int d3d12va_encode_hevc_configure(AVCodecContext *avctx) +{ + FFHWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + int fixed_qp_idr, fixed_qp_p, fixed_qp_b; + int err; + + err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx); + if (err < 0) + return err; + + // Rate control + if (ctx->rc.Mode == D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP) { + D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP *cqp_ctl; + fixed_qp_p = av_clip(ctx->rc_quality, 1, 51); + if (avctx->i_quant_factor > 0.0) + fixed_qp_idr = av_clip((avctx->i_quant_factor * fixed_qp_p + + avctx->i_quant_offset) + 0.5, 1, 51); + else + fixed_qp_idr = fixed_qp_p; + if 
(avctx->b_quant_factor > 0.0)
+            fixed_qp_b = av_clip((avctx->b_quant_factor * fixed_qp_p +
+                                  avctx->b_quant_offset) + 0.5, 1, 51);
+        else
+            fixed_qp_b = fixed_qp_p;
+
+        av_log(avctx, AV_LOG_DEBUG, "Using fixed QP = "
+               "%d / %d / %d for IDR- / P- / B-frames.\n",
+               fixed_qp_idr, fixed_qp_p, fixed_qp_b);
+
+        ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP);
+        cqp_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
+        if (!cqp_ctl)
+            return AVERROR(ENOMEM);
+
+        cqp_ctl->ConstantQP_FullIntracodedFrame = fixed_qp_idr;
+        cqp_ctl->ConstantQP_InterPredictedFrame_PrevRefOnly = fixed_qp_p;
+        cqp_ctl->ConstantQP_InterPredictedFrame_BiDirectionalRef = fixed_qp_b;
+
+        ctx->rc.ConfigParams.pConfiguration_CQP = cqp_ctl;
+    }
+
+    // GOP
+    ctx->gop.DataSize = sizeof(D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE_HEVC);
+    ctx->gop.pHEVCGroupOfPictures = av_mallocz(ctx->gop.DataSize);
+    if (!ctx->gop.pHEVCGroupOfPictures)
+        return AVERROR(ENOMEM);
+
+    ctx->gop.pHEVCGroupOfPictures->GOPLength = base_ctx->gop_size;
+    ctx->gop.pHEVCGroupOfPictures->PPicturePeriod = base_ctx->b_per_p + 1;
+    // Power of 2
+    if ((base_ctx->gop_size & (base_ctx->gop_size - 1)) == 0)
+        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
+            FFMAX(av_log2(base_ctx->gop_size) - 4, 0);
+    else
+        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
+            FFMAX(av_log2(base_ctx->gop_size) - 3, 0);
+
+    return 0;
+}
+
+static int d3d12va_encode_hevc_set_level(AVCodecContext *avctx)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+    int i;
+
+    ctx->level.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC);
+    ctx->level.pHEVCLevelSetting = av_mallocz(ctx->level.DataSize);
+    if (!ctx->level.pHEVCLevelSetting)
+        return AVERROR(ENOMEM);
+
+    for (i = 0; i < FF_ARRAY_ELEMS(hevc_levels); i++) {
+        if (avctx->level == hevc_levels[i].level) {
+            ctx->level.pHEVCLevelSetting->Level = hevc_levels[i].d3d12_level;
+            break;
+        }
+    }
+
+    if (i == FF_ARRAY_ELEMS(hevc_levels)) {
+        av_log(avctx, AV_LOG_ERROR, "Invalid level %d.\n", avctx->level);
+        return AVERROR(EINVAL);
+    }
+
+    ctx->level.pHEVCLevelSetting->Tier = priv->raw_vps.profile_tier_level.general_tier_flag == 0 ?
+                                         D3D12_VIDEO_ENCODER_TIER_HEVC_MAIN :
+                                         D3D12_VIDEO_ENCODER_TIER_HEVC_HIGH;
+
+    return 0;
+}
+
+static void d3d12va_encode_hevc_free_picture_params(D3D12VAEncodePicture *pic)
+{
+    if (!pic->pic_ctl.pHEVCPicData)
+        return;
+
+    av_freep(&pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames);
+    av_freep(&pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames);
+    av_freep(&pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors);
+    av_freep(&pic->pic_ctl.pHEVCPicData);
+}
+
+static int d3d12va_encode_hevc_init_picture_params(AVCodecContext *avctx,
+                                                   D3D12VAEncodePicture *pic)
+{
+    FFHWBaseEncodePicture *base_pic = &pic->base;
+    D3D12VAEncodeHEVCPicture *hpic = base_pic->priv_data;
+    FFHWBaseEncodePicture *prev = base_pic->prev;
+    D3D12VAEncodeHEVCPicture *hprev = prev ?
prev->priv_data : NULL; + D3D12_VIDEO_ENCODER_REFERENCE_PICTURE_DESCRIPTOR_HEVC *pd = NULL; + UINT *ref_list0 = NULL, *ref_list1 = NULL; + int i, idx = 0; + + pic->pic_ctl.DataSize = sizeof(D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA_HEVC); + pic->pic_ctl.pHEVCPicData = av_mallocz(pic->pic_ctl.DataSize); + if (!pic->pic_ctl.pHEVCPicData) + return AVERROR(ENOMEM); + + if (base_pic->type == FF_HW_PICTURE_TYPE_IDR) { + av_assert0(base_pic->display_order == base_pic->encode_order); + hpic->last_idr_frame = base_pic->display_order; + } else { + av_assert0(prev); + hpic->last_idr_frame = hprev->last_idr_frame; + } + hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame; + + switch(base_pic->type) { + case FF_HW_PICTURE_TYPE_IDR: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_IDR_FRAME; + break; + case FF_HW_PICTURE_TYPE_I: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_I_FRAME; + break; + case FF_HW_PICTURE_TYPE_P: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_P_FRAME; + break; + case FF_HW_PICTURE_TYPE_B: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_B_FRAME; + break; + default: + av_assert0(0 && "invalid picture type"); + } + + pic->pic_ctl.pHEVCPicData->slice_pic_parameter_set_id = 0; + pic->pic_ctl.pHEVCPicData->PictureOrderCountNumber = hpic->pic_order_cnt; + + if (base_pic->type == FF_HW_PICTURE_TYPE_P || base_pic->type == FF_HW_PICTURE_TYPE_B) { + pd = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*pd)); + if (!pd) + return AVERROR(ENOMEM); + + ref_list0 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list0)); + if (!ref_list0) + return AVERROR(ENOMEM); + + pic->pic_ctl.pHEVCPicData->List0ReferenceFramesCount = base_pic->nb_refs[0]; + for (i = 0; i < base_pic->nb_refs[0]; i++) { + FFHWBaseEncodePicture *ref = base_pic->refs[0][i]; + D3D12VAEncodeHEVCPicture *href; + + av_assert0(ref && ref->encode_order < base_pic->encode_order); + href = ref->priv_data; + + ref_list0[i] = idx; + pd[idx].ReconstructedPictureResourceIndex = idx; + pd[idx].IsRefUsedByCurrentPic = TRUE; + pd[idx].PictureOrderCountNumber = href->pic_order_cnt; + idx++; + } + } + + if (base_pic->type == FF_HW_PICTURE_TYPE_B) { + ref_list1 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list1)); + if (!ref_list1) + return AVERROR(ENOMEM); + + pic->pic_ctl.pHEVCPicData->List1ReferenceFramesCount = base_pic->nb_refs[1]; + for (i = 0; i < base_pic->nb_refs[1]; i++) { + FFHWBaseEncodePicture *ref = base_pic->refs[1][i]; + D3D12VAEncodeHEVCPicture *href; + + av_assert0(ref && ref->encode_order < base_pic->encode_order); + href = ref->priv_data; + + ref_list1[i] = idx; + pd[idx].ReconstructedPictureResourceIndex = idx; + pd[idx].IsRefUsedByCurrentPic = TRUE; + pd[idx].PictureOrderCountNumber = href->pic_order_cnt; + idx++; + } + } + + pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames = ref_list0; + pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames = ref_list1; + pic->pic_ctl.pHEVCPicData->ReferenceFramesReconPictureDescriptorsCount = idx; + pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors = pd; + + return 0; +} + +static const D3D12VAEncodeType d3d12va_encode_type_hevc = { + .profiles = d3d12va_encode_hevc_profiles, + + .d3d12_codec = D3D12_VIDEO_ENCODER_CODEC_HEVC, + + .flags = FF_HW_FLAG_B_PICTURES | + FF_HW_FLAG_B_PICTURE_REFERENCES | + FF_HW_FLAG_NON_IDR_KEY_PICTURES, + + .default_quality = 25, + + .get_encoder_caps = &d3d12va_encode_hevc_get_encoder_caps, + + .configure = 
+
+    .set_level = &d3d12va_encode_hevc_set_level,
+
+    .picture_priv_data_size = sizeof(D3D12VAEncodeHEVCPicture),
+
+    .init_sequence_params = &d3d12va_encode_hevc_init_sequence_params,
+
+    .init_picture_params = &d3d12va_encode_hevc_init_picture_params,
+
+    .free_picture_params = &d3d12va_encode_hevc_free_picture_params,
+
+    .write_sequence_header = &d3d12va_encode_hevc_write_sequence_header,
+};
+
+static int d3d12va_encode_hevc_init(AVCodecContext *avctx)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+
+    ctx->codec = &d3d12va_encode_type_hevc;
+
+    if (avctx->profile == AV_PROFILE_UNKNOWN)
+        avctx->profile = priv->profile;
+    if (avctx->level == FF_LEVEL_UNKNOWN)
+        avctx->level = priv->level;
+
+    if (avctx->level != FF_LEVEL_UNKNOWN && avctx->level & ~0xff) {
+        av_log(avctx, AV_LOG_ERROR, "Invalid level %d: must fit "
+               "in 8-bit unsigned integer.\n", avctx->level);
+        return AVERROR(EINVAL);
+    }
+
+    if (priv->qp > 0)
+        ctx->explicit_qp = priv->qp;
+
+    return ff_d3d12va_encode_init(avctx);
+}
+
+static int d3d12va_encode_hevc_close(AVCodecContext *avctx)
+{
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+
+    ff_cbs_fragment_free(&priv->current_access_unit);
+    ff_cbs_close(&priv->cbc);
+
+    av_freep(&priv->common.codec_conf.pHEVCConfig);
+    av_freep(&priv->common.gop.pHEVCGroupOfPictures);
+    av_freep(&priv->common.level.pHEVCLevelSetting);
+
+    return ff_d3d12va_encode_close(avctx);
+}
+
+#define OFFSET(x) offsetof(D3D12VAEncodeHEVCContext, x)
+#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
+static const AVOption d3d12va_encode_hevc_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
+    D3D12VA_ENCODE_RC_OPTIONS,
+
+    { "qp", "Constant QP (for P-frames; scaled by qfactor/qoffset for I/B)",
+      OFFSET(qp), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 52, FLAGS },
+
+    { "profile", "Set profile (general_profile_idc)",
+      OFFSET(profile), AV_OPT_TYPE_INT,
+      { .i64 = AV_PROFILE_UNKNOWN }, AV_PROFILE_UNKNOWN, 0xff, FLAGS, "profile" },
+
+#define PROFILE(name, value)  name, NULL, 0, AV_OPT_TYPE_CONST, \
+    { .i64 = value }, 0, 0, FLAGS, "profile"
+    { PROFILE("main",   AV_PROFILE_HEVC_MAIN) },
+    { PROFILE("main10", AV_PROFILE_HEVC_MAIN_10) },
+#undef PROFILE
+
+    { "tier", "Set tier (general_tier_flag)",
+      OFFSET(tier), AV_OPT_TYPE_INT,
+      { .i64 = 0 }, 0, 1, FLAGS, "tier" },
+    { "main", NULL, 0, AV_OPT_TYPE_CONST,
+      { .i64 = 0 }, 0, 0, FLAGS, "tier" },
+    { "high", NULL, 0, AV_OPT_TYPE_CONST,
+      { .i64 = 1 }, 0, 0, FLAGS, "tier" },
+
+    { "level", "Set level (general_level_idc)",
+      OFFSET(level), AV_OPT_TYPE_INT,
+      { .i64 = FF_LEVEL_UNKNOWN }, FF_LEVEL_UNKNOWN, 0xff, FLAGS, "level" },
+
+#define LEVEL(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \
+    { .i64 = value }, 0, 0, FLAGS, "level"
+    { LEVEL("1",   30) },
+    { LEVEL("2",   60) },
+    { LEVEL("2.1", 63) },
+    { LEVEL("3",   90) },
+    { LEVEL("3.1", 93) },
+    { LEVEL("4",   120) },
+    { LEVEL("4.1", 123) },
+    { LEVEL("5",   150) },
+    { LEVEL("5.1", 153) },
+    { LEVEL("5.2", 156) },
+    { LEVEL("6",   180) },
+    { LEVEL("6.1", 183) },
+    { LEVEL("6.2", 186) },
+#undef LEVEL
+
+    { NULL },
+};
+
+static const FFCodecDefault d3d12va_encode_hevc_defaults[] = {
+    { "b", "0" },
+    { "bf", "2" },
+    { "g", "120" },
+    { "i_qfactor", "1" },
+    { "i_qoffset", "0" },
+    { "b_qfactor", "1" },
+    { "b_qoffset", "0" },
+    { "qmin", "-1" },
+    { "qmax", "-1" },
+    { NULL },
+};
+
+static const AVClass d3d12va_encode_hevc_class = {
+    .class_name = "hevc_d3d12va",
+    .item_name = av_default_item_name,
+    .option = d3d12va_encode_hevc_options,
+    .version = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_hevc_d3d12va_encoder = {
+    .p.name = "hevc_d3d12va",
+    CODEC_LONG_NAME("D3D12VA hevc encoder"),
+    .p.type = AVMEDIA_TYPE_VIDEO,
+    .p.id = AV_CODEC_ID_HEVC,
+    .priv_data_size = sizeof(D3D12VAEncodeHEVCContext),
+    .init = &d3d12va_encode_hevc_init,
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
+    .close = &d3d12va_encode_hevc_close,
+    .p.priv_class = &d3d12va_encode_hevc_class,
+    .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
+                      AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
+    .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
+                     FF_CODEC_CAP_INIT_CLEANUP,
+    .defaults = d3d12va_encode_hevc_defaults,
+    .p.pix_fmts = (const enum AVPixelFormat[]) {
+        AV_PIX_FMT_D3D12,
+        AV_PIX_FMT_NONE,
+    },
+    .hw_configs = ff_d3d12va_encode_hw_configs,
+    .p.wrapper_name = "d3d12va",
+};

From patchwork Tue May 28 15:48:05 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49328
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:05 +0800
Message-ID: <20240528154807.1151-14-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 14/15] Changelog: add D3D12VA HEVC encoder changelog

From: Tong Wu

Signed-off-by: Tong Wu
---
 Changelog | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Changelog b/Changelog
index 12770e4296..c5d57b3813 100644
--- a/Changelog
+++ b/Changelog
@@ -11,7 +11,7 @@ version <next>:
 - vf_scale2ref deprecated
 - qsv_params option added for QSV encoders
 - VVC decoder compatible with DVB test content
-
+- D3D12VA HEVC encoder
 
 version 7.0:
 - DXV DXT1 encoder

From patchwork Tue May 28 15:48:06 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 49329
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 28 May 2024 23:48:06 +0800
Message-ID: <20240528154807.1151-15-tong1.wu@intel.com>
In-Reply-To: <20240528154807.1151-1-tong1.wu@intel.com>
References: <20240528154807.1151-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v12 15/15] avcodec/hw_base_encode: add avctx pointer for FFHWBaseEncodeContext

From: Tong Wu

An avctx pointer is added to FFHWBaseEncodeContext. This is to make FFHWBaseEncodeContext a standalone component for ff_hw_base_* functions. This patch also removes some unnecessary AVCodecContext arguments.
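
To illustrate the new calling convention, here is a minimal sketch of how a hardware backend is expected to wire up the base context after this change. The MyHWEncodeContext type and the my_hw_encode_* callbacks are hypothetical names used only for this example; the ff_hw_base_* signatures follow the diff below.

    #include "hw_base_encode.h"

    /* Hypothetical backend context; the base context is embedded at the
     * start, as the VAAPI and D3D12VA encode contexts do. */
    typedef struct MyHWEncodeContext {
        FFHWBaseEncodeContext base;
        /* backend-specific state would follow */
    } MyHWEncodeContext;

    static int my_hw_encode_init(AVCodecContext *avctx)
    {
        MyHWEncodeContext *priv = avctx->priv_data;
        FFHWBaseEncodeContext *base_ctx = &priv->base;
        int err;

        /* The base context is now passed explicitly; it records avctx
         * internally so later ff_hw_base_* helpers can log through it. */
        err = ff_hw_base_encode_init(avctx, base_ctx);
        if (err < 0)
            return err;

        /* e.g. ff_hw_base_get_recon_format(base_ctx, NULL, &recon_format); */
        return 0;
    }

    static int my_hw_encode_close(AVCodecContext *avctx)
    {
        MyHWEncodeContext *priv = avctx->priv_data;

        /* Closing needs only the base context after this patch. */
        return ff_hw_base_encode_close(&priv->base);
    }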
Signed-off-by: Tong Wu
---
 libavcodec/d3d12va_encode.c |  6 +++---
 libavcodec/hw_base_encode.c | 31 +++++++++++++------------------
 libavcodec/hw_base_encode.h |  8 +++++---
 libavcodec/vaapi_encode.c   |  6 +++---
 4 files changed, 24 insertions(+), 27 deletions(-)

diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c
index 0fbf8eb07c..6d3a53c6ca 100644
--- a/libavcodec/d3d12va_encode.c
+++ b/libavcodec/d3d12va_encode.c
@@ -1351,7 +1351,7 @@ static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx)
     enum AVPixelFormat recon_format;
     int err;
 
-    err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format);
+    err = ff_hw_base_get_recon_format(base_ctx, NULL, &recon_format);
     if (err < 0)
         return err;
 
@@ -1398,7 +1398,7 @@ int ff_d3d12va_encode_init(AVCodecContext *avctx)
     int err;
     HRESULT hr;
 
-    err = ff_hw_base_encode_init(avctx);
+    err = ff_hw_base_encode_init(avctx, base_ctx);
     if (err < 0)
         goto fail;
 
@@ -1552,7 +1552,7 @@ int ff_d3d12va_encode_close(AVCodecContext *avctx)
     D3D12_OBJECT_RELEASE(ctx->video_device3);
     D3D12_OBJECT_RELEASE(ctx->device3);
 
-    ff_hw_base_encode_close(avctx);
+    ff_hw_base_encode_close(base_ctx);
 
     return 0;
 }
diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index 92f69bb78c..88efdf672c 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -94,14 +94,13 @@ static void hw_base_encode_remove_refs(FFHWBaseEncodePicture *pic, int level)
     pic->ref_removed[level] = 1;
 }
 
-static void hw_base_encode_set_b_pictures(AVCodecContext *avctx,
+static void hw_base_encode_set_b_pictures(FFHWBaseEncodeContext *ctx,
                                           FFHWBaseEncodePicture *start,
                                           FFHWBaseEncodePicture *end,
                                           FFHWBaseEncodePicture *prev,
                                           int current_depth,
                                           FFHWBaseEncodePicture **last)
 {
-    FFHWBaseEncodeContext *ctx = avctx->priv_data;
     FFHWBaseEncodePicture *pic, *next, *ref;
     int i, len;
 
@@ -148,20 +147,19 @@ static void hw_base_encode_set_b_pictures(AVCodecContext *avctx,
         hw_base_encode_add_ref(pic, ref, 0, 1, 0);
 
         if (i > 1)
-            hw_base_encode_set_b_pictures(avctx, start, pic, pic,
+            hw_base_encode_set_b_pictures(ctx, start, pic, pic,
                                           current_depth + 1, &next);
         else
             next = pic;
 
-        hw_base_encode_set_b_pictures(avctx, pic, end, next,
+        hw_base_encode_set_b_pictures(ctx, pic, end, next,
                                       current_depth + 1, last);
     }
 }
 
-static void hw_base_encode_add_next_prev(AVCodecContext *avctx,
+static void hw_base_encode_add_next_prev(FFHWBaseEncodeContext *ctx,
                                          FFHWBaseEncodePicture *pic)
 {
-    FFHWBaseEncodeContext *ctx = avctx->priv_data;
     int i;
 
     if (!pic)
@@ -333,12 +331,12 @@ static int hw_base_encode_pick_next(AVCodecContext *avctx,
     }
 
     if (b_counter > 0) {
-        hw_base_encode_set_b_pictures(avctx, start, pic, pic, 1,
+        hw_base_encode_set_b_pictures(ctx, start, pic, pic, 1,
                                       &prev);
     } else {
         prev = pic;
     }
-    hw_base_encode_add_next_prev(avctx, prev);
+    hw_base_encode_add_next_prev(ctx, prev);
 
     return 0;
 }
@@ -687,9 +685,9 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32
     return 0;
 }
 
-int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt)
+int ff_hw_base_get_recon_format(FFHWBaseEncodeContext *ctx, const void *hwconfig,
+                                enum AVPixelFormat *fmt)
 {
-    FFHWBaseEncodeContext *ctx = avctx->priv_data;
     AVHWFramesConstraints *constraints = NULL;
     enum AVPixelFormat recon_format;
     int err, i;
@@ -722,14 +720,14 @@ int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enu
         // No idea what to use; copy input format.
         recon_format = ctx->input_frames->sw_format;
     }
-    av_log(avctx, AV_LOG_DEBUG, "Using %s as format of "
+    av_log(ctx->avctx, AV_LOG_DEBUG, "Using %s as format of "
            "reconstructed frames.\n", av_get_pix_fmt_name(recon_format));
 
     if (ctx->surface_width < constraints->min_width ||
         ctx->surface_height < constraints->min_height ||
         ctx->surface_width > constraints->max_width ||
         ctx->surface_height > constraints->max_height) {
-        av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at "
+        av_log(ctx->avctx, AV_LOG_ERROR, "Hardware does not support encoding at "
                "size %dx%d (constraints: width %d-%d height %d-%d).\n",
                ctx->surface_width, ctx->surface_height,
                constraints->min_width, constraints->max_width,
@@ -756,10 +754,9 @@ int ff_hw_base_encode_free(FFHWBaseEncodePicture *pic)
     return 0;
 }
 
-int ff_hw_base_encode_init(AVCodecContext *avctx)
+int ff_hw_base_encode_init(AVCodecContext *avctx, FFHWBaseEncodeContext *ctx)
 {
-    FFHWBaseEncodeContext *ctx = avctx->priv_data;
-
+    ctx->avctx = avctx;
     ctx->frame = av_frame_alloc();
     if (!ctx->frame)
         return AVERROR(ENOMEM);
@@ -789,10 +786,8 @@ int ff_hw_base_encode_init(AVCodecContext *avctx)
     return 0;
 }
 
-int ff_hw_base_encode_close(AVCodecContext *avctx)
+int ff_hw_base_encode_close(FFHWBaseEncodeContext *ctx)
 {
-    FFHWBaseEncodeContext *ctx = avctx->priv_data;
-
     av_fifo_freep2(&ctx->encode_fifo);
     av_frame_free(&ctx->frame);
 
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 15ef3d7ac6..13c1fc0f69 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -116,6 +116,7 @@ typedef struct FFHWEncodePictureOperation {
 
 typedef struct FFHWBaseEncodeContext {
     const AVClass *class;
+    AVCodecContext *avctx;
 
     // Hardware-specific hooks.
     const struct FFHWEncodePictureOperation *op;
@@ -222,13 +223,14 @@ int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
 int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1,
                                   int flags, int prediction_pre_only);
 
-int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt);
+int ff_hw_base_get_recon_format(FFHWBaseEncodeContext *ctx, const void *hwconfig,
+                                enum AVPixelFormat *fmt);
 
 int ff_hw_base_encode_free(FFHWBaseEncodePicture *pic);
 
-int ff_hw_base_encode_init(AVCodecContext *avctx);
+int ff_hw_base_encode_init(AVCodecContext *avctx, FFHWBaseEncodeContext *ctx);
 
-int ff_hw_base_encode_close(AVCodecContext *avctx);
+int ff_hw_base_encode_close(FFHWBaseEncodeContext *ctx);
 
 #define HW_BASE_ENCODE_COMMON_OPTIONS \
     { "idr_interval", \
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index b35a23e852..0693e77548 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -2059,7 +2059,7 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     }
     hwconfig->config_id = ctx->va_config;
 
-    err = ff_hw_base_get_recon_format(avctx, (const void*)hwconfig, &recon_format);
+    err = ff_hw_base_get_recon_format(base_ctx, (const void*)hwconfig, &recon_format);
     if (err < 0)
         goto fail;
 
@@ -2106,7 +2106,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
     VAStatus vas;
     int err;
 
-    err = ff_hw_base_encode_init(avctx);
+    err = ff_hw_base_encode_init(avctx, base_ctx);
     if (err < 0)
         goto fail;
 
@@ -2313,7 +2313,7 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx)
     av_freep(&ctx->codec_sequence_params);
     av_freep(&ctx->codec_picture_params);
 
-    ff_hw_base_encode_close(avctx);
+    ff_hw_base_encode_close(base_ctx);
 
     return 0;
 }