From patchwork Thu Apr 18 08:58:55 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48136
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:58:55 +0800
Message-ID: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 01/15] avcodec/vaapi_encode: introduce a base layer for vaapi encode

From: Tong Wu

Since VAAPI and the future D3D12VA implementation may share some common parameters, a base layer encode context is introduced as the VAAPI context's base.
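[Editorial note, not part of the patch] A minimal, self-contained sketch of the layering this series builds towards: the backend context embeds the shared base context as its first member, so the base layer and the VAAPI code can view the same avctx->priv_data through either type (as later patches in this set do with base_ctx/ctx). Struct bodies are trimmed and the field vaapi_only_field is hypothetical.

/* Editorial sketch, not FFmpeg code. */
#include <stdio.h>

typedef struct HWBaseEncodeContext {
    int async_depth;               /* state shared by VAAPI and future D3D12VA */
} HWBaseEncodeContext;

typedef struct VAAPIEncodeContext {
    HWBaseEncodeContext base;      /* must stay the first member */
    int vaapi_only_field;          /* hypothetical backend-specific state */
} VAAPIEncodeContext;

int main(void)
{
    VAAPIEncodeContext vaapi = { .base.async_depth = 2, .vaapi_only_field = 1 };
    void *priv_data = &vaapi;                   /* stands in for avctx->priv_data */
    HWBaseEncodeContext *base_ctx = priv_data;  /* view used by the base layer    */
    VAAPIEncodeContext  *ctx      = priv_data;  /* view used by the VAAPI backend */

    printf("async_depth=%d vaapi_only_field=%d\n",
           base_ctx->async_depth, ctx->vaapi_only_field);
    return 0;
}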
Signed-off-by: Tong Wu
---
 libavcodec/hw_base_encode.h | 52 +++++++++++++++++++++++++++++++++++++
 libavcodec/vaapi_encode.h   | 36 ++++---------------------
 2 files changed, 57 insertions(+), 31 deletions(-)
 create mode 100644 libavcodec/hw_base_encode.h

diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
new file mode 100644
index 0000000000..3d1974bba3
--- /dev/null
+++ b/libavcodec/hw_base_encode.h
@@ -0,0 +1,52 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_HW_BASE_ENCODE_H
+#define AVCODEC_HW_BASE_ENCODE_H
+
+#define MAX_DPB_SIZE 16
+#define MAX_PICTURE_REFERENCES 2
+#define MAX_REORDER_DELAY 16
+#define MAX_ASYNC_DEPTH 64
+#define MAX_REFERENCE_LIST_NUM 2
+
+enum {
+    PICTURE_TYPE_IDR = 0,
+    PICTURE_TYPE_I   = 1,
+    PICTURE_TYPE_P   = 2,
+    PICTURE_TYPE_B   = 3,
+};
+
+enum {
+    // Codec supports controlling the subdivision of pictures into slices.
+    FLAG_SLICE_CONTROL         = 1 << 0,
+    // Codec only supports constant quality (no rate control).
+    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
+    // Codec is intra-only.
+    FLAG_INTRA_ONLY            = 1 << 2,
+    // Codec supports B-pictures.
+    FLAG_B_PICTURES            = 1 << 3,
+    // Codec supports referencing B-pictures.
+    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
+    // Codec supports non-IDR key pictures (that is, key pictures do
+    // not necessarily empty the DPB).
+    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
+};
+
+#endif /* AVCODEC_HW_BASE_ENCODE_H */
+
diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h
index 0eed9691ca..5e18f374a7 100644
--- a/libavcodec/vaapi_encode.h
+++ b/libavcodec/vaapi_encode.h
@@ -33,34 +33,27 @@
 #include "avcodec.h"
 #include "hwconfig.h"
+#include "hw_base_encode.h"
 
 struct VAAPIEncodeType;
 struct VAAPIEncodePicture;
 
+// Codec output packet without timestamp delay, which means the
+// output packet has same PTS and DTS.
+#define FLAG_TIMESTAMP_NO_DELAY 1 << 6
+
 enum {
     MAX_CONFIG_ATTRIBUTES  = 4,
     MAX_GLOBAL_PARAMS      = 4,
-    MAX_DPB_SIZE           = 16,
-    MAX_PICTURE_REFERENCES = 2,
-    MAX_REORDER_DELAY      = 16,
     MAX_PARAM_BUFFER_SIZE  = 1024,
     // A.4.1: table A.6 allows at most 22 tile rows for any level.
     MAX_TILE_ROWS          = 22,
     // A.4.1: table A.6 allows at most 20 tile columns for any level.
     MAX_TILE_COLS          = 20,
-    MAX_ASYNC_DEPTH        = 64,
-    MAX_REFERENCE_LIST_NUM = 2,
 };
 
 extern const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[];
 
-enum {
-    PICTURE_TYPE_IDR = 0,
-    PICTURE_TYPE_I   = 1,
-    PICTURE_TYPE_P   = 2,
-    PICTURE_TYPE_B   = 3,
-};
-
 typedef struct VAAPIEncodeSlice {
     int index;
     int row_start;
@@ -397,25 +390,6 @@ typedef struct VAAPIEncodeContext {
     AVPacket        *tail_pkt;
 } VAAPIEncodeContext;
 
-enum {
-    // Codec supports controlling the subdivision of pictures into slices.
-    FLAG_SLICE_CONTROL         = 1 << 0,
-    // Codec only supports constant quality (no rate control).
-    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
-    // Codec is intra-only.
-    FLAG_INTRA_ONLY            = 1 << 2,
-    // Codec supports B-pictures.
-    FLAG_B_PICTURES            = 1 << 3,
-    // Codec supports referencing B-pictures.
-    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
-    // Codec supports non-IDR key pictures (that is, key pictures do
-    // not necessarily empty the DPB).
-    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
-    // Codec output packet without timestamp delay, which means the
-    // output packet has same PTS and DTS.
-    FLAG_TIMESTAMP_NO_DELAY    = 1 << 6,
-};
-
 typedef struct VAAPIEncodeType {
     // List of supported profiles and corresponding VAAPI profiles.
     // (Must end with AV_PROFILE_UNKNOWN.)

From patchwork Thu Apr 18 08:58:56 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48137
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:58:56 +0800
Message-ID: <20240418085910.547-2-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 02/15] avcodec/vaapi_encode: add async_depth to common options

From: Tong Wu

Signed-off-by: Tong Wu
---
 libavcodec/hw_base_encode.h     | 14 +++++++++++++-
 libavcodec/vaapi_encode.c       | 13 ++++++++-----
 libavcodec/vaapi_encode.h       | 10 ++--------
 libavcodec/vaapi_encode_av1.c   |  1 +
 libavcodec/vaapi_encode_h264.c  |  1 +
 libavcodec/vaapi_encode_h265.c  |  1 +
 libavcodec/vaapi_encode_mjpeg.c |  1 +
 libavcodec/vaapi_encode_mpeg2.c |  1 +
 libavcodec/vaapi_encode_vp8.c   |  1 +
 libavcodec/vaapi_encode_vp9.c   |  1 +
 10 files changed, 30 insertions(+), 14 deletions(-)
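[Editorial note, not part of the patch] The new HW_BASE_ENCODE_COMMON_OPTIONS macro in the diff below addresses the option with OFFSET(common.base.async_depth). A standalone sketch of the struct nesting that makes such an offset valid, assuming the per-codec private context keeps VAAPIEncodeContext as its "common" member (struct bodies reduced to the relevant fields):

/* Editorial sketch, not FFmpeg code. */
#include <stddef.h>
#include <stdio.h>

typedef struct HWBaseEncodeContext    { int async_depth; } HWBaseEncodeContext;
typedef struct VAAPIEncodeContext     { HWBaseEncodeContext base; } VAAPIEncodeContext;
typedef struct VAAPIEncodeH264Context { VAAPIEncodeContext common; } VAAPIEncodeH264Context;

int main(void)
{
    /* Roughly what OFFSET(common.base.async_depth) expands to in a codec's
     * AVOption table: a byte offset into its private context. */
    printf("async_depth offset: %zu\n",
           offsetof(VAAPIEncodeH264Context, common.base.async_depth));
    return 0;
}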
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 3d1974bba3..5272f2836d 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -48,5 +48,17 @@ enum {
     FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
 };
 
-#endif /* AVCODEC_HW_BASE_ENCODE_H */
+typedef struct HWBaseEncodeContext {
+    const AVClass *class;
+
+    // Max number of frame buffered in encoder.
+    int async_depth;
+} HWBaseEncodeContext;
+#define HW_BASE_ENCODE_COMMON_OPTIONS \
+    { "async_depth", "Maximum processing parallelism. " \
+      "Increase this to improve single channel performance.", \
+      OFFSET(common.base.async_depth), AV_OPT_TYPE_INT, \
+      { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }
+
+#endif /* AVCODEC_HW_BASE_ENCODE_H */

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index f54b2579ec..9373512417 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -669,7 +669,8 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx,
                                             VAAPIEncodePicture *pic,
                                             AVPacket *pkt)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext *ctx = avctx->priv_data;
 
     if (pic->type == PICTURE_TYPE_IDR)
         pkt->flags |= AV_PKT_FLAG_KEY;
@@ -699,7 +700,7 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx,
         pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff;
     } else {
         pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) %
-                                (3 * ctx->output_delay + ctx->async_depth)];
+                                (3 * ctx->output_delay + base_ctx->async_depth)];
     }
 
     return 0;
@@ -1320,6 +1321,7 @@ static int vaapi_encode_check_frame(AVCodecContext *avctx,
 
 static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
 {
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext *ctx = avctx->priv_data;
     VAAPIEncodePicture *pic;
     int err;
@@ -1365,7 +1367,7 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
         ctx->dts_pts_diff = pic->pts - ctx->first_pts;
     if (ctx->output_delay > 0)
         ctx->ts_ring[ctx->input_order %
-                     (3 * ctx->output_delay + ctx->async_depth)] = pic->pts;
+                     (3 * ctx->output_delay + base_ctx->async_depth)] = pic->pts;
 
     pic->display_order = ctx->input_order;
     ++ctx->input_order;
@@ -2773,7 +2775,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
 
 av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext *ctx = avctx->priv_data;
     AVVAAPIFramesContext *recon_hwctx = NULL;
     VAStatus vas;
     int err;
@@ -2966,7 +2969,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
     vas = vaSyncBuffer(ctx->hwctx->display, VA_INVALID_ID, 0);
     if (vas != VA_STATUS_ERROR_UNIMPLEMENTED) {
         ctx->has_sync_buffer_func = 1;
-        ctx->encode_fifo = av_fifo_alloc2(ctx->async_depth,
+        ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth,
                                           sizeof(VAAPIEncodePicture *),
                                           0);
         if (!ctx->encode_fifo)

diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h
index 5e18f374a7..02410c72ec 100644
--- a/libavcodec/vaapi_encode.h
+++ b/libavcodec/vaapi_encode.h
@@ -186,7 +186,8 @@ typedef struct VAAPIEncodeRCMode {
 } VAAPIEncodeRCMode;
 
 typedef struct VAAPIEncodeContext {
-    const AVClass *class;
+    // Base context.
+    HWBaseEncodeContext base;
 
     // Codec-specific hooks.
     const struct VAAPIEncodeType *codec;
@@ -373,8 +374,6 @@ typedef struct VAAPIEncodeContext {
     int             has_sync_buffer_func;
     // Store buffered pic
     AVFifo          *encode_fifo;
-    // Max number of frame buffered in encoder.
-    int             async_depth;
 
     /** Head data for current output pkt, used only for AV1. */
     //void  *header_data;
@@ -490,11 +489,6 @@ int ff_vaapi_encode_close(AVCodecContext *avctx);
       "Maximum B-frame reference depth", \
       OFFSET(common.desired_b_depth), AV_OPT_TYPE_INT, \
       { .i64 = 1 }, 1, INT_MAX, FLAGS }, \
-    { "async_depth", "Maximum processing parallelism. " \
-      "Increase this to improve single channel performance. This option " \
-      "doesn't work if driver doesn't implement vaSyncBuffer function.", \
-      OFFSET(common.async_depth), AV_OPT_TYPE_INT, \
-      { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }, \
     { "max_frame_size", \
       "Maximum frame size (in bytes)",\
       OFFSET(common.max_frame_size), AV_OPT_TYPE_INT, \

diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c
index 02a31b894d..f4b4586583 100644
--- a/libavcodec/vaapi_encode_av1.c
+++ b/libavcodec/vaapi_encode_av1.c
@@ -864,6 +864,7 @@ static av_cold int vaapi_encode_av1_close(AVCodecContext *avctx)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 
 static const AVOption vaapi_encode_av1_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
     { "profile", "Set profile (seq_profile)",

diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
index d656b1020f..ebb1760cd3 100644
--- a/libavcodec/vaapi_encode_h264.c
+++ b/libavcodec/vaapi_encode_h264.c
@@ -1276,6 +1276,7 @@ static av_cold int vaapi_encode_h264_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeH264Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_h264_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
index 2f59161346..77bd5e31af 100644
--- a/libavcodec/vaapi_encode_h265.c
+++ b/libavcodec/vaapi_encode_h265.c
@@ -1394,6 +1394,7 @@ static av_cold int vaapi_encode_h265_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeH265Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_h265_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c
index c17747e3a9..fb5c0d34c6 100644
--- a/libavcodec/vaapi_encode_mjpeg.c
+++ b/libavcodec/vaapi_encode_mjpeg.c
@@ -540,6 +540,7 @@ static av_cold int vaapi_encode_mjpeg_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeMJPEGContext, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_mjpeg_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     { "jfif", "Include JFIF header",

diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c
index c9b16fbcfc..d0980c52b0 100644
--- a/libavcodec/vaapi_encode_mpeg2.c
+++ b/libavcodec/vaapi_encode_mpeg2.c
@@ -639,6 +639,7 @@ static av_cold int vaapi_encode_mpeg2_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeMPEG2Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_mpeg2_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c
index 8a557b967e..4e284f86e2 100644
--- a/libavcodec/vaapi_encode_vp8.c
+++ b/libavcodec/vaapi_encode_vp8.c
@@ -216,6 +216,7 @@ static av_cold int vaapi_encode_vp8_init(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeVP8Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_vp8_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c
index c2a8dec71b..88f951652c 100644
--- a/libavcodec/vaapi_encode_vp9.c
+++ b/libavcodec/vaapi_encode_vp9.c
@@ -273,6 +273,7 @@ static av_cold int vaapi_encode_vp9_init(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeVP9Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_vp9_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
From patchwork Thu Apr 18 08:58:57 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48138
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:58:57 +0800
Message-ID: <20240418085910.547-3-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 03/15] avcodec/vaapi_encode: add picture type name to base

From: Tong Wu

Signed-off-by: Tong Wu
---
 libavcodec/hw_base_encode.h | 5 +++++
 libavcodec/vaapi_encode.c   | 4 +---
 2 files changed, 6 insertions(+), 3 deletions(-)
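[Editorial note, not part of the patch] The helper added below maps a PICTURE_TYPE_* value to a printable name by indexing a fixed table, so it is only meaningful for the four values defined in patch 01. A standalone illustration with the same logic (not the FFmpeg build; the local function name is illustrative):

/* Editorial sketch mirroring the new helper. */
#include <stdio.h>

enum { PICTURE_TYPE_IDR = 0, PICTURE_TYPE_I = 1, PICTURE_TYPE_P = 2, PICTURE_TYPE_B = 3 };

static inline const char *get_pictype_name(const int type)
{
    const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
    return picture_type_name[type];
}

int main(void)
{
    printf("as type %s\n", get_pictype_name(PICTURE_TYPE_B));
    return 0;
}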
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 5272f2836d..a578db8c06 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -25,6 +25,11 @@
 #define MAX_ASYNC_DEPTH 64
 #define MAX_REFERENCE_LIST_NUM 2
 
+static inline const char *ff_hw_base_encode_get_pictype_name(const int type) {
+    const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
+    return picture_type_name[type];
+}
+
 enum {
     PICTURE_TYPE_IDR = 0,
     PICTURE_TYPE_I   = 1,

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 9373512417..2d22e4bd85 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -38,8 +38,6 @@ const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[] = {
     NULL,
 };
 
-static const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
-
 static int vaapi_encode_make_packed_header(AVCodecContext *avctx,
                                            VAAPIEncodePicture *pic,
                                            int type, char *data, size_t bit_len)
@@ -277,7 +275,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
     av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" "
            "as type %s.\n", pic->display_order, pic->encode_order,
-           picture_type_name[pic->type]);
+           ff_hw_base_encode_get_pictype_name(pic->type));
     if (pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0) {
         av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n");
     } else {

From patchwork Thu Apr 18 08:58:58 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48139
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:58:58 +0800
Message-ID: <20240418085910.547-4-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 04/15] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc

From: Tong Wu

When allocating the VAAPIEncodePicture, pic->input_surface can be initialized right in place. This simplifies the send_frame logic and prepares for moving vaapi_encode_send_frame to the base layer.
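[Editorial note, not part of the patch] For context: FFmpeg documents that for AV_PIX_FMT_VAAPI hardware frames, AVFrame.data[3] carries the VASurfaceID, which is why the allocator can take the surface straight from the input frame. A compile-only sketch of that extraction (the helper name is illustrative; the cast matches the one moved into vaapi_encode_alloc below):

/* Editorial sketch, not FFmpeg code. */
#include <stdint.h>
#include <va/va.h>
#include <libavutil/frame.h>

static VASurfaceID input_surface_from_frame(const AVFrame *frame)
{
    /* For AV_PIX_FMT_VAAPI frames, data[3] holds the VASurfaceID. */
    return (VASurfaceID)(uintptr_t)frame->data[3];
}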
Signed-off-by: Tong Wu
---
 libavcodec/vaapi_encode.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 2d22e4bd85..227cccae64 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -878,7 +878,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx,
     return 0;
 }
 
-static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx)
+static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
+                                              const AVFrame *frame)
 {
     VAAPIEncodeContext *ctx = avctx->priv_data;
     VAAPIEncodePicture *pic;
@@ -895,7 +896,7 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx)
         }
     }
 
-    pic->input_surface = VA_INVALID_ID;
+    pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3];
     pic->recon_surface = VA_INVALID_ID;
     pic->output_buffer = VA_INVALID_ID;
 
@@ -1332,7 +1333,7 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
     if (err < 0)
         return err;
 
-    pic = vaapi_encode_alloc(avctx);
+    pic = vaapi_encode_alloc(avctx, frame);
     if (!pic)
         return AVERROR(ENOMEM);
 
@@ -1345,7 +1346,6 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
     if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I)
         pic->force_idr = 1;
 
-    pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3];
     pic->pts = frame->pts;
     pic->duration = frame->duration;
 
From patchwork Thu Apr 18 08:58:59 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48140
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:58:59 +0800
Message-ID: <20240418085910.547-5-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 05/15] avcodec/vaapi_encode: move the dpb logic from VAAPI to base layer

From: Tong Wu

Move the receive_packet function to the base layer. This requires adding *alloc, *issue, *output, *free as hardware callbacks. HWBaseEncodePicture is introduced as the base layer structure.
The related parameters in VAAPIEncodeContext are also extracted to HWBaseEncodeContext. The DPB management logic can then be fully extracted to the base layer as-is.

Signed-off-by: Tong Wu
---
 libavcodec/Makefile             |   2 +-
 libavcodec/hw_base_encode.c     | 600 ++++++++++++++++++++++++
 libavcodec/hw_base_encode.h     | 123 +++++
 libavcodec/vaapi_encode.c       | 793 +++++---------------------------
 libavcodec/vaapi_encode.h       | 102 +---
 libavcodec/vaapi_encode_av1.c   |  51 +-
 libavcodec/vaapi_encode_h264.c  | 176 +++----
 libavcodec/vaapi_encode_h265.c  | 121 ++---
 libavcodec/vaapi_encode_mjpeg.c |   7 +-
 libavcodec/vaapi_encode_mpeg2.c |  47 +-
 libavcodec/vaapi_encode_vp8.c   |  18 +-
 libavcodec/vaapi_encode_vp9.c   |  54 +--
 12 files changed, 1097 insertions(+), 997 deletions(-)
 create mode 100644 libavcodec/hw_base_encode.c

diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 7f6de4470e..a2174dcb2f 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -162,7 +162,7 @@ OBJS-$(CONFIG_STARTCODE) += startcode.o
 OBJS-$(CONFIG_TEXTUREDSP) += texturedsp.o
 OBJS-$(CONFIG_TEXTUREDSPENC) += texturedspenc.o
 OBJS-$(CONFIG_TPELDSP) += tpeldsp.o
-OBJS-$(CONFIG_VAAPI_ENCODE) += vaapi_encode.o
+OBJS-$(CONFIG_VAAPI_ENCODE) += vaapi_encode.o hw_base_encode.o
 OBJS-$(CONFIG_AV1_AMF_ENCODER) += amfenc_av1.o
 OBJS-$(CONFIG_VC1DSP) += vc1dsp.o
 OBJS-$(CONFIG_VIDEODSP) += videodsp.o

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
new file mode 100644
index 0000000000..1d9a255f69
--- /dev/null
+++ b/libavcodec/hw_base_encode.c
@@ -0,0 +1,600 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/avassert.h" +#include "libavutil/common.h" +#include "libavutil/internal.h" +#include "libavutil/log.h" +#include "libavutil/mem.h" +#include "libavutil/pixdesc.h" + +#include "encode.h" +#include "avcodec.h" +#include "hw_base_encode.h" + +static void hw_base_encode_add_ref(AVCodecContext *avctx, + HWBaseEncodePicture *pic, + HWBaseEncodePicture *target, + int is_ref, int in_dpb, int prev) +{ + int refs = 0; + + if (is_ref) { + av_assert0(pic != target); + av_assert0(pic->nb_refs[0] < MAX_PICTURE_REFERENCES && + pic->nb_refs[1] < MAX_PICTURE_REFERENCES); + if (target->display_order < pic->display_order) + pic->refs[0][pic->nb_refs[0]++] = target; + else + pic->refs[1][pic->nb_refs[1]++] = target; + ++refs; + } + + if (in_dpb) { + av_assert0(pic->nb_dpb_pics < MAX_DPB_SIZE); + pic->dpb[pic->nb_dpb_pics++] = target; + ++refs; + } + + if (prev) { + av_assert0(!pic->prev); + pic->prev = target; + ++refs; + } + + target->ref_count[0] += refs; + target->ref_count[1] += refs; +} + +static void hw_base_encode_remove_refs(AVCodecContext *avctx, + HWBaseEncodePicture *pic, + int level) +{ + int i; + + if (pic->ref_removed[level]) + return; + + for (i = 0; i < pic->nb_refs[0]; i++) { + av_assert0(pic->refs[0][i]); + --pic->refs[0][i]->ref_count[level]; + av_assert0(pic->refs[0][i]->ref_count[level] >= 0); + } + + for (i = 0; i < pic->nb_refs[1]; i++) { + av_assert0(pic->refs[1][i]); + --pic->refs[1][i]->ref_count[level]; + av_assert0(pic->refs[1][i]->ref_count[level] >= 0); + } + + for (i = 0; i < pic->nb_dpb_pics; i++) { + av_assert0(pic->dpb[i]); + --pic->dpb[i]->ref_count[level]; + av_assert0(pic->dpb[i]->ref_count[level] >= 0); + } + + av_assert0(pic->prev || pic->type == PICTURE_TYPE_IDR); + if (pic->prev) { + --pic->prev->ref_count[level]; + av_assert0(pic->prev->ref_count[level] >= 0); + } + + pic->ref_removed[level] = 1; +} + +static void hw_base_encode_set_b_pictures(AVCodecContext *avctx, + HWBaseEncodePicture *start, + HWBaseEncodePicture *end, + HWBaseEncodePicture *prev, + int current_depth, + HWBaseEncodePicture **last) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic, *next, *ref; + int i, len; + + av_assert0(start && end && start != end && start->next != end); + + // If we are at the maximum depth then encode all pictures as + // non-referenced B-pictures. Also do this if there is exactly one + // picture left, since there will be nothing to reference it. + if (current_depth == ctx->max_b_depth || start->next->next == end) { + for (pic = start->next; pic; pic = pic->next) { + if (pic == end) + break; + pic->type = PICTURE_TYPE_B; + pic->b_depth = current_depth; + + hw_base_encode_add_ref(avctx, pic, start, 1, 1, 0); + hw_base_encode_add_ref(avctx, pic, end, 1, 1, 0); + hw_base_encode_add_ref(avctx, pic, prev, 0, 0, 1); + + for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) + hw_base_encode_add_ref(avctx, pic, ref, 0, 1, 0); + } + *last = prev; + + } else { + // Split the current list at the midpoint with a referenced + // B-picture, then descend into each side separately. 
+ len = 0; + for (pic = start->next; pic != end; pic = pic->next) + ++len; + for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++); + + pic->type = PICTURE_TYPE_B; + pic->b_depth = current_depth; + + pic->is_reference = 1; + + hw_base_encode_add_ref(avctx, pic, pic, 0, 1, 0); + hw_base_encode_add_ref(avctx, pic, start, 1, 1, 0); + hw_base_encode_add_ref(avctx, pic, end, 1, 1, 0); + hw_base_encode_add_ref(avctx, pic, prev, 0, 0, 1); + + for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) + hw_base_encode_add_ref(avctx, pic, ref, 0, 1, 0); + + if (i > 1) + hw_base_encode_set_b_pictures(avctx, start, pic, pic, + current_depth + 1, &next); + else + next = pic; + + hw_base_encode_set_b_pictures(avctx, pic, end, next, + current_depth + 1, last); + } +} + +static void hw_base_encode_add_next_prev(AVCodecContext *avctx, + HWBaseEncodePicture *pic) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + int i; + + if (!pic) + return; + + if (pic->type == PICTURE_TYPE_IDR) { + for (i = 0; i < ctx->nb_next_prev; i++) { + --ctx->next_prev[i]->ref_count[0]; + ctx->next_prev[i] = NULL; + } + ctx->next_prev[0] = pic; + ++pic->ref_count[0]; + ctx->nb_next_prev = 1; + + return; + } + + if (ctx->nb_next_prev < MAX_PICTURE_REFERENCES) { + ctx->next_prev[ctx->nb_next_prev++] = pic; + ++pic->ref_count[0]; + } else { + --ctx->next_prev[0]->ref_count[0]; + for (i = 0; i < MAX_PICTURE_REFERENCES - 1; i++) + ctx->next_prev[i] = ctx->next_prev[i + 1]; + ctx->next_prev[i] = pic; + ++pic->ref_count[0]; + } +} + +static int hw_base_encode_pick_next(AVCodecContext *avctx, + HWBaseEncodePicture **pic_out) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic = NULL, *prev = NULL, *next, *start; + int i, b_counter, closed_gop_end; + + // If there are any B-frames already queued, the next one to encode + // is the earliest not-yet-issued frame for which all references are + // available. + for (pic = ctx->pic_start; pic; pic = pic->next) { + if (pic->encode_issued) + continue; + if (pic->type != PICTURE_TYPE_B) + continue; + for (i = 0; i < pic->nb_refs[0]; i++) { + if (!pic->refs[0][i]->encode_issued) + break; + } + if (i != pic->nb_refs[0]) + continue; + + for (i = 0; i < pic->nb_refs[1]; i++) { + if (!pic->refs[1][i]->encode_issued) + break; + } + if (i == pic->nb_refs[1]) + break; + } + + if (pic) { + av_log(avctx, AV_LOG_DEBUG, "Pick B-picture at depth %d to " + "encode next.\n", pic->b_depth); + *pic_out = pic; + return 0; + } + + // Find the B-per-Pth available picture to become the next picture + // on the top layer. + start = NULL; + b_counter = 0; + closed_gop_end = ctx->closed_gop || + ctx->idr_counter == ctx->gop_per_idr; + for (pic = ctx->pic_start; pic; pic = next) { + next = pic->next; + if (pic->encode_issued) { + start = pic; + continue; + } + // If the next available picture is force-IDR, encode it to start + // a new GOP immediately. + if (pic->force_idr) + break; + if (b_counter == ctx->b_per_p) + break; + // If this picture ends a closed GOP or starts a new GOP then it + // needs to be in the top layer. + if (ctx->gop_counter + b_counter + closed_gop_end >= ctx->gop_size) + break; + // If the picture after this one is force-IDR, we need to encode + // this one in the top layer. + if (next && next->force_idr) + break; + ++b_counter; + } + + // At the end of the stream the last picture must be in the top layer. 
+ if (!pic && ctx->end_of_stream) { + --b_counter; + pic = ctx->pic_end; + if (pic->encode_complete) + return AVERROR_EOF; + else if (pic->encode_issued) + return AVERROR(EAGAIN); + } + + if (!pic) { + av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " + "need more input for reference pictures.\n"); + return AVERROR(EAGAIN); + } + if (ctx->input_order <= ctx->decode_delay && !ctx->end_of_stream) { + av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " + "need more input for timestamps.\n"); + return AVERROR(EAGAIN); + } + + if (pic->force_idr) { + av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to " + "encode next.\n"); + pic->type = PICTURE_TYPE_IDR; + ctx->idr_counter = 1; + ctx->gop_counter = 1; + + } else if (ctx->gop_counter + b_counter >= ctx->gop_size) { + if (ctx->idr_counter == ctx->gop_per_idr) { + av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to " + "encode next.\n"); + pic->type = PICTURE_TYPE_IDR; + ctx->idr_counter = 1; + } else { + av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to " + "encode next.\n"); + pic->type = PICTURE_TYPE_I; + ++ctx->idr_counter; + } + ctx->gop_counter = 1; + + } else { + if (ctx->gop_counter + b_counter + closed_gop_end == ctx->gop_size) { + av_log(avctx, AV_LOG_DEBUG, "Pick group-end P-picture to " + "encode next.\n"); + } else { + av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to " + "encode next.\n"); + } + pic->type = PICTURE_TYPE_P; + av_assert0(start); + ctx->gop_counter += 1 + b_counter; + } + pic->is_reference = 1; + *pic_out = pic; + + hw_base_encode_add_ref(avctx, pic, pic, 0, 1, 0); + if (pic->type != PICTURE_TYPE_IDR) { + // TODO: apply both previous and forward multi reference for all vaapi encoders. + // And L0/L1 reference frame number can be set dynamically through query + // VAConfigAttribEncMaxRefFrames attribute. + if (avctx->codec_id == AV_CODEC_ID_AV1) { + for (i = 0; i < ctx->nb_next_prev; i++) + hw_base_encode_add_ref(avctx, pic, ctx->next_prev[i], + pic->type == PICTURE_TYPE_P, + b_counter > 0, 0); + } else + hw_base_encode_add_ref(avctx, pic, start, + pic->type == PICTURE_TYPE_P, + b_counter > 0, 0); + + hw_base_encode_add_ref(avctx, pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1); + } + + if (b_counter > 0) { + hw_base_encode_set_b_pictures(avctx, start, pic, pic, 1, + &prev); + } else { + prev = pic; + } + hw_base_encode_add_next_prev(avctx, prev); + + return 0; +} + +static int hw_base_encode_clear_old(AVCodecContext *avctx) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic, *prev, *next; + + av_assert0(ctx->pic_start); + + // Remove direct references once each picture is complete. + for (pic = ctx->pic_start; pic; pic = pic->next) { + if (pic->encode_complete && pic->next) + hw_base_encode_remove_refs(avctx, pic, 0); + } + + // Remove indirect references once a picture has no direct references. + for (pic = ctx->pic_start; pic; pic = pic->next) { + if (pic->encode_complete && pic->ref_count[0] == 0) + hw_base_encode_remove_refs(avctx, pic, 1); + } + + // Clear out all complete pictures with no remaining references. 
+ prev = NULL; + for (pic = ctx->pic_start; pic; pic = next) { + next = pic->next; + if (pic->encode_complete && pic->ref_count[1] == 0) { + av_assert0(pic->ref_removed[0] && pic->ref_removed[1]); + if (prev) + prev->next = next; + else + ctx->pic_start = next; + ctx->op->free(avctx, pic); + } else { + prev = pic; + } + } + + return 0; +} + +static int hw_base_encode_check_frame(AVCodecContext *avctx, + const AVFrame *frame) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + + if ((frame->crop_top || frame->crop_bottom || + frame->crop_left || frame->crop_right) && !ctx->crop_warned) { + av_log(avctx, AV_LOG_WARNING, "Cropping information on input " + "frames ignored due to lack of API support.\n"); + ctx->crop_warned = 1; + } + + if (!ctx->roi_allowed) { + AVFrameSideData *sd = + av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST); + + if (sd && !ctx->roi_warned) { + av_log(avctx, AV_LOG_WARNING, "ROI side data on input " + "frames ignored due to lack of driver support.\n"); + ctx->roi_warned = 1; + } + } + + return 0; +} + +static int hw_base_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic; + int err; + + if (frame) { + av_log(avctx, AV_LOG_DEBUG, "Input frame: %ux%u (%"PRId64").\n", + frame->width, frame->height, frame->pts); + + err = hw_base_encode_check_frame(avctx, frame); + if (err < 0) + return err; + + pic = ctx->op->alloc(avctx, frame); + if (!pic) + return AVERROR(ENOMEM); + + pic->input_image = av_frame_alloc(); + if (!pic->input_image) { + err = AVERROR(ENOMEM); + goto fail; + } + + pic->recon_image = av_frame_alloc(); + if (!pic->recon_image) { + err = AVERROR(ENOMEM); + goto fail; + } + + if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I) + pic->force_idr = 1; + + pic->pts = frame->pts; + pic->duration = frame->duration; + + if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { + err = av_buffer_replace(&pic->opaque_ref, frame->opaque_ref); + if (err < 0) + goto fail; + + pic->opaque = frame->opaque; + } + + av_frame_move_ref(pic->input_image, frame); + + if (ctx->input_order == 0) + ctx->first_pts = pic->pts; + if (ctx->input_order == ctx->decode_delay) + ctx->dts_pts_diff = pic->pts - ctx->first_pts; + if (ctx->output_delay > 0) + ctx->ts_ring[ctx->input_order % + (3 * ctx->output_delay + ctx->async_depth)] = pic->pts; + + pic->display_order = ctx->input_order; + ++ctx->input_order; + + if (ctx->pic_start) { + ctx->pic_end->next = pic; + ctx->pic_end = pic; + } else { + ctx->pic_start = pic; + ctx->pic_end = pic; + } + + } else { + ctx->end_of_stream = 1; + + // Fix timestamps if we hit end-of-stream before the initial decode + // delay has elapsed. + if (ctx->input_order < ctx->decode_delay) + ctx->dts_pts_diff = ctx->pic_end->pts - ctx->first_pts; + } + + return 0; + +fail: + ctx->op->free(avctx, pic); + return err; +} + +int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic = NULL; + AVFrame *frame = ctx->frame; + int err; + +start: + /** if no B frame before repeat P frame, sent repeat P frame out. 
*/ + if (ctx->tail_pkt->size) { + for (HWBaseEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) { + if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts) + break; + else if (!tmp->next) { + av_packet_move_ref(pkt, ctx->tail_pkt); + goto end; + } + } + } + + err = ff_encode_get_frame(avctx, frame); + if (err < 0 && err != AVERROR_EOF) + return err; + + if (err == AVERROR_EOF) + frame = NULL; + + if (!(ctx->op && ctx->op->alloc && ctx->op->issue && + ctx->op->output && ctx->op->free)) { + err = AVERROR(EINVAL); + return err; + } + + err = hw_base_encode_send_frame(avctx, frame); + if (err < 0) + return err; + + if (!ctx->pic_start) { + if (ctx->end_of_stream) + return AVERROR_EOF; + else + return AVERROR(EAGAIN); + } + + if (ctx->async_encode) { + if (av_fifo_can_write(ctx->encode_fifo)) { + err = hw_base_encode_pick_next(avctx, &pic); + if (!err) { + av_assert0(pic); + pic->encode_order = ctx->encode_order + + av_fifo_can_read(ctx->encode_fifo); + err = ctx->op->issue(avctx, pic); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); + return err; + } + pic->encode_issued = 1; + av_fifo_write(ctx->encode_fifo, &pic, 1); + } + } + + if (!av_fifo_can_read(ctx->encode_fifo)) + return err; + + // More frames can be buffered + if (av_fifo_can_write(ctx->encode_fifo) && !ctx->end_of_stream) + return AVERROR(EAGAIN); + + av_fifo_read(ctx->encode_fifo, &pic, 1); + ctx->encode_order = pic->encode_order + 1; + } else { + err = hw_base_encode_pick_next(avctx, &pic); + if (err < 0) + return err; + av_assert0(pic); + + pic->encode_order = ctx->encode_order++; + + err = ctx->op->issue(avctx, pic); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); + return err; + } + + pic->encode_issued = 1; + } + + err = ctx->op->output(avctx, pic, pkt); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Output failed: %d.\n", err); + return err; + } + + ctx->output_order = pic->encode_order; + hw_base_encode_clear_old(avctx); + + /** loop to get an available pkt in encoder flushing. */ + if (ctx->end_of_stream && !pkt->size) + goto start; + +end: + if (pkt->size) + av_log(avctx, AV_LOG_DEBUG, "Output packet: pts %"PRId64", dts %"PRId64", " + "size %d bytes.\n", pkt->pts, pkt->dts, pkt->size); + + return 0; +} diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index a578db8c06..b5b676b9a8 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -19,6 +19,8 @@ #ifndef AVCODEC_HW_BASE_ENCODE_H #define AVCODEC_HW_BASE_ENCODE_H +#include "libavutil/fifo.h" + #define MAX_DPB_SIZE 16 #define MAX_PICTURE_REFERENCES 2 #define MAX_REORDER_DELAY 16 @@ -53,13 +55,134 @@ enum { FLAG_NON_IDR_KEY_PICTURES = 1 << 5, }; +typedef struct HWBaseEncodePicture { + struct HWBaseEncodePicture *next; + + int64_t display_order; + int64_t encode_order; + int64_t pts; + int64_t duration; + int force_idr; + + void *opaque; + AVBufferRef *opaque_ref; + + int type; + int b_depth; + int encode_issued; + int encode_complete; + + AVFrame *input_image; + AVFrame *recon_image; + + void *priv_data; + + // Whether this picture is a reference picture. + int is_reference; + + // The contents of the DPB after this picture has been decoded. + // This will contain the picture itself if it is a reference picture, + // but not if it isn't. + int nb_dpb_pics; + struct HWBaseEncodePicture *dpb[MAX_DPB_SIZE]; + // The reference pictures used in decoding this picture. If they are + // used by later pictures they will also appear in the DPB. 
ref[0][] for + // previous reference frames. ref[1][] for future reference frames. + int nb_refs[MAX_REFERENCE_LIST_NUM]; + struct HWBaseEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES]; + // The previous reference picture in encode order. Must be in at least + // one of the reference list and DPB list. + struct HWBaseEncodePicture *prev; + // Reference count for other pictures referring to this one through + // the above pointers, directly from incomplete pictures and indirectly + // through completed pictures. + int ref_count[2]; + int ref_removed[2]; +} HWBaseEncodePicture; + +typedef struct HWEncodePictureOperation { + // Alloc memory for the picture structure. + HWBaseEncodePicture * (*alloc)(AVCodecContext *avctx, const AVFrame *frame); + // Issue the picture structure, which will send the frame surface to HW Encode API. + int (*issue)(AVCodecContext *avctx, const HWBaseEncodePicture *base_pic); + // Get the output AVPacket. + int (*output)(AVCodecContext *avctx, const HWBaseEncodePicture *base_pic, AVPacket *pkt); + // Free the picture structure. + int (*free)(AVCodecContext *avctx, HWBaseEncodePicture *base_pic); +} HWEncodePictureOperation; + typedef struct HWBaseEncodeContext { const AVClass *class; + // Hardware-specific hooks. + const struct HWEncodePictureOperation *op; + + // Current encoding window, in display (input) order. + HWBaseEncodePicture *pic_start, *pic_end; + // The next picture to use as the previous reference picture in + // encoding order. Order from small to large in encoding order. + HWBaseEncodePicture *next_prev[MAX_PICTURE_REFERENCES]; + int nb_next_prev; + + // Next input order index (display order). + int64_t input_order; + // Number of frames that output is behind input. + int64_t output_delay; + // Next encode order index. + int64_t encode_order; + // Number of frames decode output will need to be delayed. + int64_t decode_delay; + // Next output order index (in encode order). + int64_t output_order; + + // Timestamp handling. + int64_t first_pts; + int64_t dts_pts_diff; + int64_t ts_ring[MAX_REORDER_DELAY * 3 + + MAX_ASYNC_DEPTH]; + + // Frame type decision. + int gop_size; + int closed_gop; + int gop_per_idr; + int p_per_i; + int max_b_depth; + int b_per_p; + int force_idr; + int idr_counter; + int gop_counter; + int end_of_stream; + int p_to_gpb; + + // Whether the driver supports ROI at all. + int roi_allowed; + + // The encoder does not support cropping information, so warn about + // it the first time we encounter any nonzero crop fields. + int crop_warned; + // If the driver does not support ROI then warn the first time we + // encounter a frame with ROI side data. + int roi_warned; + + // The frame to be filled with data. + AVFrame *frame; + + // Whether the HW supports sync buffer function. + // If supported, encode_fifo/async_depth will be used together. + // Used for output buffer synchronization. + int async_encode; + + // Store buffered pic. + AVFifo *encode_fifo; // Max number of frame buffered in encoder. int async_depth; + + /** Tail data of a pic, now only used for av1 repeat frame header. */ + AVPacket *tail_pkt; } HWBaseEncodeContext; +int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); + #define HW_BASE_ENCODE_COMMON_OPTIONS \ { "async_depth", "Maximum processing parallelism. 
" \ "Increase this to improve single channel performance.", \ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 227cccae64..18966596e1 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -138,22 +138,26 @@ static int vaapi_encode_make_misc_param_buffer(AVCodecContext *avctx, static int vaapi_encode_wait(AVCodecContext *avctx, VAAPIEncodePicture *pic) { +#if VA_CHECK_VERSION(1, 9, 0) + HWBaseEncodeContext *base_ctx = avctx->priv_data; +#endif VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture*)pic; VAStatus vas; - av_assert0(pic->encode_issued); + av_assert0(base_pic->encode_issued); - if (pic->encode_complete) { + if (base_pic->encode_complete) { // Already waited for this picture. return 0; } av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" " - "(input surface %#x).\n", pic->display_order, - pic->encode_order, pic->input_surface); + "(input surface %#x).\n", base_pic->display_order, + base_pic->encode_order, pic->input_surface); #if VA_CHECK_VERSION(1, 9, 0) - if (ctx->has_sync_buffer_func) { + if (base_ctx->async_encode) { vas = vaSyncBuffer(ctx->hwctx->display, pic->output_buffer, VA_TIMEOUT_INFINITE); @@ -174,9 +178,9 @@ static int vaapi_encode_wait(AVCodecContext *avctx, } // Input is definitely finished with now. - av_frame_free(&pic->input_image); + av_frame_free(&base_pic->input_image); - pic->encode_complete = 1; + base_pic->encode_complete = 1; return 0; } @@ -263,9 +267,11 @@ static int vaapi_encode_make_tile_slice(AVCodecContext *avctx, } static int vaapi_encode_issue(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + const HWBaseEncodePicture *base_pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic; VAAPIEncodeSlice *slice; VAStatus vas; int err, i; @@ -274,52 +280,46 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_unused AVFrameSideData *sd; av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" " - "as type %s.\n", pic->display_order, pic->encode_order, - ff_hw_base_encode_get_pictype_name(pic->type)); - if (pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0) { + "as type %s.\n", base_pic->display_order, base_pic->encode_order, + ff_hw_base_encode_get_pictype_name(base_pic->type)); + if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) { av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n"); } else { av_log(avctx, AV_LOG_DEBUG, "L0 refers to"); - for (i = 0; i < pic->nb_refs[0]; i++) { + for (i = 0; i < base_pic->nb_refs[0]; i++) { av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, - pic->refs[0][i]->display_order, pic->refs[0][i]->encode_order); + base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order); } av_log(avctx, AV_LOG_DEBUG, ".\n"); - if (pic->nb_refs[1]) { + if (base_pic->nb_refs[1]) { av_log(avctx, AV_LOG_DEBUG, "L1 refers to"); - for (i = 0; i < pic->nb_refs[1]; i++) { + for (i = 0; i < base_pic->nb_refs[1]; i++) { av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, - pic->refs[1][i]->display_order, pic->refs[1][i]->encode_order); + base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order); } av_log(avctx, AV_LOG_DEBUG, ".\n"); } } - av_assert0(!pic->encode_issued); - for (i = 0; i < pic->nb_refs[0]; i++) { - av_assert0(pic->refs[0][i]); - av_assert0(pic->refs[0][i]->encode_issued); + av_assert0(!base_pic->encode_issued); + for (i = 0; i < 
base_pic->nb_refs[0]; i++) { + av_assert0(base_pic->refs[0][i]); + av_assert0(base_pic->refs[0][i]->encode_issued); } - for (i = 0; i < pic->nb_refs[1]; i++) { - av_assert0(pic->refs[1][i]); - av_assert0(pic->refs[1][i]->encode_issued); + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_assert0(base_pic->refs[1][i]); + av_assert0(base_pic->refs[1][i]->encode_issued); } av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic->input_surface); - pic->recon_image = av_frame_alloc(); - if (!pic->recon_image) { - err = AVERROR(ENOMEM); - goto fail; - } - - err = av_hwframe_get_buffer(ctx->recon_frames_ref, pic->recon_image, 0); + err = av_hwframe_get_buffer(ctx->recon_frames_ref, base_pic->recon_image, 0); if (err < 0) { err = AVERROR(ENOMEM); goto fail; } - pic->recon_surface = (VASurfaceID)(uintptr_t)pic->recon_image->data[3]; + pic->recon_surface = (VASurfaceID)(uintptr_t)base_pic->recon_image->data[3]; av_log(avctx, AV_LOG_DEBUG, "Recon surface is %#x.\n", pic->recon_surface); pic->output_buffer_ref = ff_refstruct_pool_get(ctx->output_buffer_pool); @@ -343,7 +343,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, pic->nb_param_buffers = 0; - if (pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) { + if (base_pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) { err = vaapi_encode_make_param_buffer(avctx, pic, VAEncSequenceParameterBufferType, ctx->codec_sequence_params, @@ -352,7 +352,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, goto fail; } - if (pic->type == PICTURE_TYPE_IDR) { + if (base_pic->type == PICTURE_TYPE_IDR) { for (i = 0; i < ctx->nb_global_params; i++) { err = vaapi_encode_make_misc_param_buffer(avctx, pic, ctx->global_params_type[i], @@ -389,7 +389,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } #endif - if (pic->type == PICTURE_TYPE_IDR) { + if (base_pic->type == PICTURE_TYPE_IDR) { if (ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SEQUENCE && ctx->codec->write_sequence_header) { bit_len = 8 * sizeof(data); @@ -529,9 +529,9 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } #if VA_CHECK_VERSION(1, 0, 0) - sd = av_frame_get_side_data(pic->input_image, + sd = av_frame_get_side_data(base_pic->input_image, AV_FRAME_DATA_REGIONS_OF_INTEREST); - if (sd && ctx->roi_allowed) { + if (sd && base_ctx->roi_allowed) { const AVRegionOfInterest *roi; uint32_t roi_size; VAEncMiscParameterBufferROI param_roi; @@ -542,11 +542,11 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_assert0(roi_size && sd->size % roi_size == 0); nb_roi = sd->size / roi_size; if (nb_roi > ctx->roi_max_regions) { - if (!ctx->roi_warned) { + if (!base_ctx->roi_warned) { av_log(avctx, AV_LOG_WARNING, "More ROIs set than " "supported by driver (%d > %d).\n", nb_roi, ctx->roi_max_regions); - ctx->roi_warned = 1; + base_ctx->roi_warned = 1; } nb_roi = ctx->roi_max_regions; } @@ -639,8 +639,6 @@ static int vaapi_encode_issue(AVCodecContext *avctx, } } - pic->encode_issued = 1; - return 0; fail_with_picture: @@ -657,14 +655,13 @@ fail_at_end: av_freep(&pic->param_buffers); av_freep(&pic->slices); av_freep(&pic->roi); - av_frame_free(&pic->recon_image); ff_refstruct_unref(&pic->output_buffer_ref); pic->output_buffer = VA_INVALID_ID; return err; } static int vaapi_encode_set_output_property(AVCodecContext *avctx, - VAAPIEncodePicture *pic, + HWBaseEncodePicture *pic, AVPacket *pkt) { HWBaseEncodeContext *base_ctx = avctx->priv_data; @@ -689,16 +686,16 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx, return 0; } - if 
(ctx->output_delay == 0) { + if (base_ctx->output_delay == 0) { pkt->dts = pkt->pts; - } else if (pic->encode_order < ctx->decode_delay) { - if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff) + } else if (pic->encode_order < base_ctx->decode_delay) { + if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff) pkt->dts = INT64_MIN; else - pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff; + pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff; } else { - pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) % - (3 * ctx->output_delay + base_ctx->async_depth)]; + pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) % + (3 * base_ctx->output_delay + base_ctx->async_depth)]; } return 0; @@ -817,9 +814,11 @@ end: } static int vaapi_encode_output(AVCodecContext *avctx, - VAAPIEncodePicture *pic, AVPacket *pkt) + const HWBaseEncodePicture *base_pic, AVPacket *pkt) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic; AVPacket *pkt_ptr = pkt; int err; @@ -832,17 +831,17 @@ static int vaapi_encode_output(AVCodecContext *avctx, ctx->coded_buffer_ref = ff_refstruct_ref(pic->output_buffer_ref); if (pic->tail_size) { - if (ctx->tail_pkt->size) { + if (base_ctx->tail_pkt->size) { err = AVERROR_BUG; goto end; } - err = ff_get_encode_buffer(avctx, ctx->tail_pkt, pic->tail_size, 0); + err = ff_get_encode_buffer(avctx, base_ctx->tail_pkt, pic->tail_size, 0); if (err < 0) goto end; - memcpy(ctx->tail_pkt->data, pic->tail_data, pic->tail_size); - pkt_ptr = ctx->tail_pkt; + memcpy(base_ctx->tail_pkt->data, pic->tail_data, pic->tail_size); + pkt_ptr = base_ctx->tail_pkt; } } else { err = vaapi_encode_get_coded_data(avctx, pic, pkt); @@ -851,9 +850,9 @@ static int vaapi_encode_output(AVCodecContext *avctx, } av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n", - pic->display_order, pic->encode_order); + base_pic->display_order, base_pic->encode_order); - vaapi_encode_set_output_property(avctx, pic, pkt_ptr); + vaapi_encode_set_output_property(avctx, (HWBaseEncodePicture*)pic, pkt_ptr); end: ff_refstruct_unref(&pic->output_buffer_ref); @@ -864,12 +863,14 @@ end: static int vaapi_encode_discard(AVCodecContext *avctx, VAAPIEncodePicture *pic) { + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture*)pic; + vaapi_encode_wait(avctx, pic); if (pic->output_buffer_ref) { av_log(avctx, AV_LOG_DEBUG, "Discard output for pic " "%"PRId64"/%"PRId64".\n", - pic->display_order, pic->encode_order); + base_pic->display_order, base_pic->encode_order); ff_refstruct_unref(&pic->output_buffer_ref); pic->output_buffer = VA_INVALID_ID; @@ -878,8 +879,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx, return 0; } -static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, - const AVFrame *frame) +static HWBaseEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, + const AVFrame *frame) { VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodePicture *pic; @@ -889,8 +890,8 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, return NULL; if (ctx->codec->picture_priv_data_size > 0) { - pic->priv_data = av_mallocz(ctx->codec->picture_priv_data_size); - if (!pic->priv_data) { + pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size); + if (!pic->base.priv_data) { av_freep(&pic); return NULL; } @@ -900,15 +901,16 @@ 
static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx, pic->recon_surface = VA_INVALID_ID; pic->output_buffer = VA_INVALID_ID; - return pic; + return (HWBaseEncodePicture*)pic; } static int vaapi_encode_free(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + HWBaseEncodePicture *base_pic) { + VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic; int i; - if (pic->encode_issued) + if (base_pic->encode_issued) vaapi_encode_discard(avctx, pic); if (pic->slices) { @@ -916,17 +918,17 @@ static int vaapi_encode_free(AVCodecContext *avctx, av_freep(&pic->slices[i].codec_slice_params); } - av_frame_free(&pic->input_image); - av_frame_free(&pic->recon_image); + av_frame_free(&base_pic->input_image); + av_frame_free(&base_pic->recon_image); - av_buffer_unref(&pic->opaque_ref); + av_buffer_unref(&base_pic->opaque_ref); av_freep(&pic->param_buffers); av_freep(&pic->slices); // Output buffer should already be destroyed. av_assert0(pic->output_buffer == VA_INVALID_ID); - av_freep(&pic->priv_data); + av_freep(&base_pic->priv_data); av_freep(&pic->codec_picture_params); av_freep(&pic->roi); @@ -935,564 +937,6 @@ static int vaapi_encode_free(AVCodecContext *avctx, return 0; } -static void vaapi_encode_add_ref(AVCodecContext *avctx, - VAAPIEncodePicture *pic, - VAAPIEncodePicture *target, - int is_ref, int in_dpb, int prev) -{ - int refs = 0; - - if (is_ref) { - av_assert0(pic != target); - av_assert0(pic->nb_refs[0] < MAX_PICTURE_REFERENCES && - pic->nb_refs[1] < MAX_PICTURE_REFERENCES); - if (target->display_order < pic->display_order) - pic->refs[0][pic->nb_refs[0]++] = target; - else - pic->refs[1][pic->nb_refs[1]++] = target; - ++refs; - } - - if (in_dpb) { - av_assert0(pic->nb_dpb_pics < MAX_DPB_SIZE); - pic->dpb[pic->nb_dpb_pics++] = target; - ++refs; - } - - if (prev) { - av_assert0(!pic->prev); - pic->prev = target; - ++refs; - } - - target->ref_count[0] += refs; - target->ref_count[1] += refs; -} - -static void vaapi_encode_remove_refs(AVCodecContext *avctx, - VAAPIEncodePicture *pic, - int level) -{ - int i; - - if (pic->ref_removed[level]) - return; - - for (i = 0; i < pic->nb_refs[0]; i++) { - av_assert0(pic->refs[0][i]); - --pic->refs[0][i]->ref_count[level]; - av_assert0(pic->refs[0][i]->ref_count[level] >= 0); - } - - for (i = 0; i < pic->nb_refs[1]; i++) { - av_assert0(pic->refs[1][i]); - --pic->refs[1][i]->ref_count[level]; - av_assert0(pic->refs[1][i]->ref_count[level] >= 0); - } - - for (i = 0; i < pic->nb_dpb_pics; i++) { - av_assert0(pic->dpb[i]); - --pic->dpb[i]->ref_count[level]; - av_assert0(pic->dpb[i]->ref_count[level] >= 0); - } - - av_assert0(pic->prev || pic->type == PICTURE_TYPE_IDR); - if (pic->prev) { - --pic->prev->ref_count[level]; - av_assert0(pic->prev->ref_count[level] >= 0); - } - - pic->ref_removed[level] = 1; -} - -static void vaapi_encode_set_b_pictures(AVCodecContext *avctx, - VAAPIEncodePicture *start, - VAAPIEncodePicture *end, - VAAPIEncodePicture *prev, - int current_depth, - VAAPIEncodePicture **last) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic, *next, *ref; - int i, len; - - av_assert0(start && end && start != end && start->next != end); - - // If we are at the maximum depth then encode all pictures as - // non-referenced B-pictures. Also do this if there is exactly one - // picture left, since there will be nothing to reference it. 
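
The split performed here (and by its hw_base_encode counterpart added above) — a referenced B-picture at the midpoint, then recursion into each half until max_b_depth is reached or a single frame remains — can be illustrated with a small standalone toy. Names and the driver code are hypothetical, not part of this patch:

    #include <stdio.h>

    #define MAX_B_DEPTH 2

    /* Frames lo+1 .. hi-1 are B-frames between two already-issued anchors.
     * Print which of them become referenced B-pictures and at what depth,
     * in the order the midpoint split visits them. */
    static void split(int lo, int hi, int depth)
    {
        int len = hi - lo - 1, mid, i;

        if (len <= 0)
            return;
        if (depth == MAX_B_DEPTH || len == 1) {
            for (i = lo + 1; i < hi; i++)
                printf("B%d(depth %d, non-ref) ", i, depth);
            return;
        }
        mid = lo + (len + 1) / 2;   /* referenced B-picture at this level */
        printf("B%d(depth %d, ref) ", mid, depth);
        split(lo, mid, depth + 1);
        split(mid, hi, depth + 1);
    }

    int main(void)
    {
        /* Display order P0 B1 B2 B3 P4, i.e. b_per_p = 3: */
        printf("P0 P4 ");
        split(0, 4, 1);
        printf("\n");
        /* -> P0 P4 B2(depth 1, ref) B1(depth 2, non-ref) B3(depth 2, non-ref) */
        return 0;
    }

For this simple case the printed order also matches the order in which pick_next() would issue the frames: the depth-1 B-picture first, then the non-referenced B-pictures around it.
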
- if (current_depth == ctx->max_b_depth || start->next->next == end) { - for (pic = start->next; pic; pic = pic->next) { - if (pic == end) - break; - pic->type = PICTURE_TYPE_B; - pic->b_depth = current_depth; - - vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, end, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, prev, 0, 0, 1); - - for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) - vaapi_encode_add_ref(avctx, pic, ref, 0, 1, 0); - } - *last = prev; - - } else { - // Split the current list at the midpoint with a referenced - // B-picture, then descend into each side separately. - len = 0; - for (pic = start->next; pic != end; pic = pic->next) - ++len; - for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++); - - pic->type = PICTURE_TYPE_B; - pic->b_depth = current_depth; - - pic->is_reference = 1; - - vaapi_encode_add_ref(avctx, pic, pic, 0, 1, 0); - vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, end, 1, 1, 0); - vaapi_encode_add_ref(avctx, pic, prev, 0, 0, 1); - - for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0]) - vaapi_encode_add_ref(avctx, pic, ref, 0, 1, 0); - - if (i > 1) - vaapi_encode_set_b_pictures(avctx, start, pic, pic, - current_depth + 1, &next); - else - next = pic; - - vaapi_encode_set_b_pictures(avctx, pic, end, next, - current_depth + 1, last); - } -} - -static void vaapi_encode_add_next_prev(AVCodecContext *avctx, - VAAPIEncodePicture *pic) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - int i; - - if (!pic) - return; - - if (pic->type == PICTURE_TYPE_IDR) { - for (i = 0; i < ctx->nb_next_prev; i++) { - --ctx->next_prev[i]->ref_count[0]; - ctx->next_prev[i] = NULL; - } - ctx->next_prev[0] = pic; - ++pic->ref_count[0]; - ctx->nb_next_prev = 1; - - return; - } - - if (ctx->nb_next_prev < MAX_PICTURE_REFERENCES) { - ctx->next_prev[ctx->nb_next_prev++] = pic; - ++pic->ref_count[0]; - } else { - --ctx->next_prev[0]->ref_count[0]; - for (i = 0; i < MAX_PICTURE_REFERENCES - 1; i++) - ctx->next_prev[i] = ctx->next_prev[i + 1]; - ctx->next_prev[i] = pic; - ++pic->ref_count[0]; - } -} - -static int vaapi_encode_pick_next(AVCodecContext *avctx, - VAAPIEncodePicture **pic_out) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic = NULL, *prev = NULL, *next, *start; - int i, b_counter, closed_gop_end; - - // If there are any B-frames already queued, the next one to encode - // is the earliest not-yet-issued frame for which all references are - // available. - for (pic = ctx->pic_start; pic; pic = pic->next) { - if (pic->encode_issued) - continue; - if (pic->type != PICTURE_TYPE_B) - continue; - for (i = 0; i < pic->nb_refs[0]; i++) { - if (!pic->refs[0][i]->encode_issued) - break; - } - if (i != pic->nb_refs[0]) - continue; - - for (i = 0; i < pic->nb_refs[1]; i++) { - if (!pic->refs[1][i]->encode_issued) - break; - } - if (i == pic->nb_refs[1]) - break; - } - - if (pic) { - av_log(avctx, AV_LOG_DEBUG, "Pick B-picture at depth %d to " - "encode next.\n", pic->b_depth); - *pic_out = pic; - return 0; - } - - // Find the B-per-Pth available picture to become the next picture - // on the top layer. - start = NULL; - b_counter = 0; - closed_gop_end = ctx->closed_gop || - ctx->idr_counter == ctx->gop_per_idr; - for (pic = ctx->pic_start; pic; pic = next) { - next = pic->next; - if (pic->encode_issued) { - start = pic; - continue; - } - // If the next available picture is force-IDR, encode it to start - // a new GOP immediately. 
- if (pic->force_idr) - break; - if (b_counter == ctx->b_per_p) - break; - // If this picture ends a closed GOP or starts a new GOP then it - // needs to be in the top layer. - if (ctx->gop_counter + b_counter + closed_gop_end >= ctx->gop_size) - break; - // If the picture after this one is force-IDR, we need to encode - // this one in the top layer. - if (next && next->force_idr) - break; - ++b_counter; - } - - // At the end of the stream the last picture must be in the top layer. - if (!pic && ctx->end_of_stream) { - --b_counter; - pic = ctx->pic_end; - if (pic->encode_complete) - return AVERROR_EOF; - else if (pic->encode_issued) - return AVERROR(EAGAIN); - } - - if (!pic) { - av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " - "need more input for reference pictures.\n"); - return AVERROR(EAGAIN); - } - if (ctx->input_order <= ctx->decode_delay && !ctx->end_of_stream) { - av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - " - "need more input for timestamps.\n"); - return AVERROR(EAGAIN); - } - - if (pic->force_idr) { - av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to " - "encode next.\n"); - pic->type = PICTURE_TYPE_IDR; - ctx->idr_counter = 1; - ctx->gop_counter = 1; - - } else if (ctx->gop_counter + b_counter >= ctx->gop_size) { - if (ctx->idr_counter == ctx->gop_per_idr) { - av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to " - "encode next.\n"); - pic->type = PICTURE_TYPE_IDR; - ctx->idr_counter = 1; - } else { - av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to " - "encode next.\n"); - pic->type = PICTURE_TYPE_I; - ++ctx->idr_counter; - } - ctx->gop_counter = 1; - - } else { - if (ctx->gop_counter + b_counter + closed_gop_end == ctx->gop_size) { - av_log(avctx, AV_LOG_DEBUG, "Pick group-end P-picture to " - "encode next.\n"); - } else { - av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to " - "encode next.\n"); - } - pic->type = PICTURE_TYPE_P; - av_assert0(start); - ctx->gop_counter += 1 + b_counter; - } - pic->is_reference = 1; - *pic_out = pic; - - vaapi_encode_add_ref(avctx, pic, pic, 0, 1, 0); - if (pic->type != PICTURE_TYPE_IDR) { - // TODO: apply both previous and forward multi reference for all vaapi encoders. - // And L0/L1 reference frame number can be set dynamically through query - // VAConfigAttribEncMaxRefFrames attribute. - if (avctx->codec_id == AV_CODEC_ID_AV1) { - for (i = 0; i < ctx->nb_next_prev; i++) - vaapi_encode_add_ref(avctx, pic, ctx->next_prev[i], - pic->type == PICTURE_TYPE_P, - b_counter > 0, 0); - } else - vaapi_encode_add_ref(avctx, pic, start, - pic->type == PICTURE_TYPE_P, - b_counter > 0, 0); - - vaapi_encode_add_ref(avctx, pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1); - } - - if (b_counter > 0) { - vaapi_encode_set_b_pictures(avctx, start, pic, pic, 1, - &prev); - } else { - prev = pic; - } - vaapi_encode_add_next_prev(avctx, prev); - - return 0; -} - -static int vaapi_encode_clear_old(AVCodecContext *avctx) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic, *prev, *next; - - av_assert0(ctx->pic_start); - - // Remove direct references once each picture is complete. - for (pic = ctx->pic_start; pic; pic = pic->next) { - if (pic->encode_complete && pic->next) - vaapi_encode_remove_refs(avctx, pic, 0); - } - - // Remove indirect references once a picture has no direct references. 
- for (pic = ctx->pic_start; pic; pic = pic->next) { - if (pic->encode_complete && pic->ref_count[0] == 0) - vaapi_encode_remove_refs(avctx, pic, 1); - } - - // Clear out all complete pictures with no remaining references. - prev = NULL; - for (pic = ctx->pic_start; pic; pic = next) { - next = pic->next; - if (pic->encode_complete && pic->ref_count[1] == 0) { - av_assert0(pic->ref_removed[0] && pic->ref_removed[1]); - if (prev) - prev->next = next; - else - ctx->pic_start = next; - vaapi_encode_free(avctx, pic); - } else { - prev = pic; - } - } - - return 0; -} - -static int vaapi_encode_check_frame(AVCodecContext *avctx, - const AVFrame *frame) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - - if ((frame->crop_top || frame->crop_bottom || - frame->crop_left || frame->crop_right) && !ctx->crop_warned) { - av_log(avctx, AV_LOG_WARNING, "Cropping information on input " - "frames ignored due to lack of API support.\n"); - ctx->crop_warned = 1; - } - - if (!ctx->roi_allowed) { - AVFrameSideData *sd = - av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST); - - if (sd && !ctx->roi_warned) { - av_log(avctx, AV_LOG_WARNING, "ROI side data on input " - "frames ignored due to lack of driver support.\n"); - ctx->roi_warned = 1; - } - } - - return 0; -} - -static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame) -{ - HWBaseEncodeContext *base_ctx = avctx->priv_data; - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic; - int err; - - if (frame) { - av_log(avctx, AV_LOG_DEBUG, "Input frame: %ux%u (%"PRId64").\n", - frame->width, frame->height, frame->pts); - - err = vaapi_encode_check_frame(avctx, frame); - if (err < 0) - return err; - - pic = vaapi_encode_alloc(avctx, frame); - if (!pic) - return AVERROR(ENOMEM); - - pic->input_image = av_frame_alloc(); - if (!pic->input_image) { - err = AVERROR(ENOMEM); - goto fail; - } - - if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I) - pic->force_idr = 1; - - pic->pts = frame->pts; - pic->duration = frame->duration; - - if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { - err = av_buffer_replace(&pic->opaque_ref, frame->opaque_ref); - if (err < 0) - goto fail; - - pic->opaque = frame->opaque; - } - - av_frame_move_ref(pic->input_image, frame); - - if (ctx->input_order == 0) - ctx->first_pts = pic->pts; - if (ctx->input_order == ctx->decode_delay) - ctx->dts_pts_diff = pic->pts - ctx->first_pts; - if (ctx->output_delay > 0) - ctx->ts_ring[ctx->input_order % - (3 * ctx->output_delay + base_ctx->async_depth)] = pic->pts; - - pic->display_order = ctx->input_order; - ++ctx->input_order; - - if (ctx->pic_start) { - ctx->pic_end->next = pic; - ctx->pic_end = pic; - } else { - ctx->pic_start = pic; - ctx->pic_end = pic; - } - - } else { - ctx->end_of_stream = 1; - - // Fix timestamps if we hit end-of-stream before the initial decode - // delay has elapsed. - if (ctx->input_order < ctx->decode_delay) - ctx->dts_pts_diff = ctx->pic_end->pts - ctx->first_pts; - } - - return 0; - -fail: - vaapi_encode_free(avctx, pic); - return err; -} - -int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt) -{ - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic = NULL; - AVFrame *frame = ctx->frame; - int err; - -start: - /** if no B frame before repeat P frame, sent repeat P frame out. 
*/ - if (ctx->tail_pkt->size) { - for (VAAPIEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) { - if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts) - break; - else if (!tmp->next) { - av_packet_move_ref(pkt, ctx->tail_pkt); - goto end; - } - } - } - - err = ff_encode_get_frame(avctx, frame); - if (err < 0 && err != AVERROR_EOF) - return err; - - if (err == AVERROR_EOF) - frame = NULL; - - err = vaapi_encode_send_frame(avctx, frame); - if (err < 0) - return err; - - if (!ctx->pic_start) { - if (ctx->end_of_stream) - return AVERROR_EOF; - else - return AVERROR(EAGAIN); - } - - if (ctx->has_sync_buffer_func) { - if (av_fifo_can_write(ctx->encode_fifo)) { - err = vaapi_encode_pick_next(avctx, &pic); - if (!err) { - av_assert0(pic); - pic->encode_order = ctx->encode_order + - av_fifo_can_read(ctx->encode_fifo); - err = vaapi_encode_issue(avctx, pic); - if (err < 0) { - av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); - return err; - } - av_fifo_write(ctx->encode_fifo, &pic, 1); - } - } - - if (!av_fifo_can_read(ctx->encode_fifo)) - return err; - - // More frames can be buffered - if (av_fifo_can_write(ctx->encode_fifo) && !ctx->end_of_stream) - return AVERROR(EAGAIN); - - av_fifo_read(ctx->encode_fifo, &pic, 1); - ctx->encode_order = pic->encode_order + 1; - } else { - err = vaapi_encode_pick_next(avctx, &pic); - if (err < 0) - return err; - av_assert0(pic); - - pic->encode_order = ctx->encode_order++; - - err = vaapi_encode_issue(avctx, pic); - if (err < 0) { - av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err); - return err; - } - } - - err = vaapi_encode_output(avctx, pic, pkt); - if (err < 0) { - av_log(avctx, AV_LOG_ERROR, "Output failed: %d.\n", err); - return err; - } - - ctx->output_order = pic->encode_order; - vaapi_encode_clear_old(avctx); - - /** loop to get an available pkt in encoder flushing. 
*/ - if (ctx->end_of_stream && !pkt->size) - goto start; - -end: - if (pkt->size) - av_log(avctx, AV_LOG_DEBUG, "Output packet: pts %"PRId64", dts %"PRId64", " - "size %d bytes.\n", pkt->pts, pkt->dts, pkt->size); - - return 0; -} - static av_cold void vaapi_encode_add_global_param(AVCodecContext *avctx, int type, void *buffer, size_t size) { @@ -2188,7 +1632,8 @@ static av_cold int vaapi_encode_init_max_frame_size(AVCodecContext *avctx) static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; VAStatus vas; VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames }; uint32_t ref_l0, ref_l1; @@ -2211,7 +1656,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) ref_l1 = attr.value >> 16 & 0xffff; } - ctx->p_to_gpb = 0; + base_ctx->p_to_gpb = 0; prediction_pre_only = 0; #if VA_CHECK_VERSION(1, 9, 0) @@ -2247,7 +1692,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) if (attr.value & VA_PREDICTION_DIRECTION_BI_NOT_EMPTY) { if (ref_l0 > 0 && ref_l1 > 0) { - ctx->p_to_gpb = 1; + base_ctx->p_to_gpb = 1; av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, " "replacing them with B-frames.\n"); } @@ -2259,7 +1704,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) if (ctx->codec->flags & FLAG_INTRA_ONLY || avctx->gop_size <= 1) { av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); - ctx->gop_size = 1; + base_ctx->gop_size = 1; } else if (ref_l0 < 1) { av_log(avctx, AV_LOG_ERROR, "Driver does not support any " "reference frames.\n"); @@ -2267,41 +1712,41 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) } else if (!(ctx->codec->flags & FLAG_B_PICTURES) || ref_l1 < 1 || avctx->max_b_frames < 1 || prediction_pre_only) { - if (ctx->p_to_gpb) + if (base_ctx->p_to_gpb) av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); else av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); - ctx->gop_size = avctx->gop_size; - ctx->p_per_i = INT_MAX; - ctx->b_per_p = 0; + base_ctx->gop_size = avctx->gop_size; + base_ctx->p_per_i = INT_MAX; + base_ctx->b_per_p = 0; } else { - if (ctx->p_to_gpb) + if (base_ctx->p_to_gpb) av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); else av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames " "(supported references: %d / %d).\n", ref_l0, ref_l1); - ctx->gop_size = avctx->gop_size; - ctx->p_per_i = INT_MAX; - ctx->b_per_p = avctx->max_b_frames; + base_ctx->gop_size = avctx->gop_size; + base_ctx->p_per_i = INT_MAX; + base_ctx->b_per_p = avctx->max_b_frames; if (ctx->codec->flags & FLAG_B_PICTURE_REFERENCES) { - ctx->max_b_depth = FFMIN(ctx->desired_b_depth, - av_log2(ctx->b_per_p) + 1); + base_ctx->max_b_depth = FFMIN(ctx->desired_b_depth, + av_log2(base_ctx->b_per_p) + 1); } else { - ctx->max_b_depth = 1; + base_ctx->max_b_depth = 1; } } if (ctx->codec->flags & FLAG_NON_IDR_KEY_PICTURES) { - ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); - ctx->gop_per_idr = ctx->idr_interval + 1; + base_ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); + base_ctx->gop_per_idr = ctx->idr_interval + 1; } else { - ctx->closed_gop = 1; - ctx->gop_per_idr = 1; + base_ctx->closed_gop = 1; + 
base_ctx->gop_per_idr = 1; } return 0; @@ -2614,7 +2059,8 @@ static av_cold int vaapi_encode_init_quality(AVCodecContext *avctx) static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx) { #if VA_CHECK_VERSION(1, 0, 0) - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; VAStatus vas; VAConfigAttrib attr = { VAConfigAttribEncROI }; @@ -2629,14 +2075,14 @@ static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx) } if (attr.value == VA_ATTRIB_NOT_SUPPORTED) { - ctx->roi_allowed = 0; + base_ctx->roi_allowed = 0; } else { VAConfigAttribValEncROI roi = { .value = attr.value, }; ctx->roi_max_regions = roi.bits.num_roi_regions; - ctx->roi_allowed = ctx->roi_max_regions > 0 && + base_ctx->roi_allowed = ctx->roi_max_regions > 0 && (ctx->va_rc_mode == VA_RC_CQP || roi.bits.roi_rc_qp_delta_support); } @@ -2771,6 +2217,16 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) return err; } +static const HWEncodePictureOperation vaapi_op = { + .alloc = &vaapi_encode_alloc, + + .issue = &vaapi_encode_issue, + + .output = &vaapi_encode_output, + + .free = &vaapi_encode_free, +}; + av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) { HWBaseEncodeContext *base_ctx = avctx->priv_data; @@ -2782,10 +2238,12 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) ctx->va_config = VA_INVALID_ID; ctx->va_context = VA_INVALID_ID; + base_ctx->op = &vaapi_op; + /* If you add something that can fail above this av_frame_alloc(), * modify ff_vaapi_encode_close() accordingly. */ - ctx->frame = av_frame_alloc(); - if (!ctx->frame) { + base_ctx->frame = av_frame_alloc(); + if (!base_ctx->frame) { return AVERROR(ENOMEM); } @@ -2810,8 +2268,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) ctx->device = (AVHWDeviceContext*)ctx->device_ref->data; ctx->hwctx = ctx->device->hwctx; - ctx->tail_pkt = av_packet_alloc(); - if (!ctx->tail_pkt) { + base_ctx->tail_pkt = av_packet_alloc(); + if (!base_ctx->tail_pkt) { err = AVERROR(ENOMEM); goto fail; } @@ -2910,8 +2368,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) goto fail; } - ctx->output_delay = ctx->b_per_p; - ctx->decode_delay = ctx->max_b_depth; + base_ctx->output_delay = base_ctx->b_per_p; + base_ctx->decode_delay = base_ctx->max_b_depth; if (ctx->codec->sequence_params_size > 0) { ctx->codec_sequence_params = @@ -2966,11 +2424,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) // check vaSyncBuffer function vas = vaSyncBuffer(ctx->hwctx->display, VA_INVALID_ID, 0); if (vas != VA_STATUS_ERROR_UNIMPLEMENTED) { - ctx->has_sync_buffer_func = 1; - ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, - sizeof(VAAPIEncodePicture *), - 0); - if (!ctx->encode_fifo) + base_ctx->async_encode = 1; + base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, + sizeof(VAAPIEncodePicture*), + 0); + if (!base_ctx->encode_fifo) return AVERROR(ENOMEM); } #endif @@ -2983,15 +2441,16 @@ fail: av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodePicture *pic, *next; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic, *next; /* We check ctx->frame to know whether ff_vaapi_encode_init() * has been called and va_config/va_context initialized. 
*/ - if (!ctx->frame) + if (!base_ctx->frame) return 0; - for (pic = ctx->pic_start; pic; pic = next) { + for (pic = base_ctx->pic_start; pic; pic = next) { next = pic->next; vaapi_encode_free(avctx, pic); } @@ -3008,12 +2467,12 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) ctx->va_config = VA_INVALID_ID; } - av_frame_free(&ctx->frame); - av_packet_free(&ctx->tail_pkt); + av_frame_free(&base_ctx->frame); + av_packet_free(&base_ctx->tail_pkt); av_freep(&ctx->codec_sequence_params); av_freep(&ctx->codec_picture_params); - av_fifo_freep2(&ctx->encode_fifo); + av_fifo_freep2(&base_ctx->encode_fifo); av_buffer_unref(&ctx->recon_frames_ref); av_buffer_unref(&ctx->input_frames_ref); diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 02410c72ec..13ccad8e47 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -29,7 +29,6 @@ #include "libavutil/hwcontext.h" #include "libavutil/hwcontext_vaapi.h" -#include "libavutil/fifo.h" #include "avcodec.h" #include "hwconfig.h" @@ -64,16 +63,7 @@ typedef struct VAAPIEncodeSlice { } VAAPIEncodeSlice; typedef struct VAAPIEncodePicture { - struct VAAPIEncodePicture *next; - - int64_t display_order; - int64_t encode_order; - int64_t pts; - int64_t duration; - int force_idr; - - void *opaque; - AVBufferRef *opaque_ref; + HWBaseEncodePicture base; #if VA_CHECK_VERSION(1, 0, 0) // ROI regions. @@ -82,15 +72,7 @@ typedef struct VAAPIEncodePicture { void *roi; #endif - int type; - int b_depth; - int encode_issued; - int encode_complete; - - AVFrame *input_image; VASurfaceID input_surface; - - AVFrame *recon_image; VASurfaceID recon_surface; int nb_param_buffers; @@ -100,31 +82,8 @@ typedef struct VAAPIEncodePicture { VABufferID *output_buffer_ref; VABufferID output_buffer; - void *priv_data; void *codec_picture_params; - // Whether this picture is a reference picture. - int is_reference; - - // The contents of the DPB after this picture has been decoded. - // This will contain the picture itself if it is a reference picture, - // but not if it isn't. - int nb_dpb_pics; - struct VAAPIEncodePicture *dpb[MAX_DPB_SIZE]; - // The reference pictures used in decoding this picture. If they are - // used by later pictures they will also appear in the DPB. ref[0][] for - // previous reference frames. ref[1][] for future reference frames. - int nb_refs[MAX_REFERENCE_LIST_NUM]; - struct VAAPIEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES]; - // The previous reference picture in encode order. Must be in at least - // one of the reference list and DPB list. - struct VAAPIEncodePicture *prev; - // Reference count for other pictures referring to this one through - // the above pointers, directly from incomplete pictures and indirectly - // through completed pictures. - int ref_count[2]; - int ref_removed[2]; - int nb_slices; VAAPIEncodeSlice *slices; @@ -298,30 +257,6 @@ typedef struct VAAPIEncodeContext { // structure (VAEncPictureParameterBuffer*). void *codec_picture_params; - // Current encoding window, in display (input) order. - VAAPIEncodePicture *pic_start, *pic_end; - // The next picture to use as the previous reference picture in - // encoding order. Order from small to large in encoding order. - VAAPIEncodePicture *next_prev[MAX_PICTURE_REFERENCES]; - int nb_next_prev; - - // Next input order index (display order). - int64_t input_order; - // Number of frames that output is behind input. - int64_t output_delay; - // Next encode order index. 
- int64_t encode_order; - // Number of frames decode output will need to be delayed. - int64_t decode_delay; - // Next output order index (in encode order). - int64_t output_order; - - // Timestamp handling. - int64_t first_pts; - int64_t dts_pts_diff; - int64_t ts_ring[MAX_REORDER_DELAY * 3 + - MAX_ASYNC_DEPTH]; - // Slice structure. int slice_block_rows; int slice_block_cols; @@ -340,41 +275,12 @@ typedef struct VAAPIEncodeContext { // Location of the i-th tile row boundary. int row_bd[MAX_TILE_ROWS + 1]; - // Frame type decision. - int gop_size; - int closed_gop; - int gop_per_idr; - int p_per_i; - int max_b_depth; - int b_per_p; - int force_idr; - int idr_counter; - int gop_counter; - int end_of_stream; - int p_to_gpb; - - // Whether the driver supports ROI at all. - int roi_allowed; // Maximum number of regions supported by the driver. int roi_max_regions; // Quantisation range for offset calculations. Set by codec-specific // code, as it may change based on parameters. int roi_quant_range; - // The encoder does not support cropping information, so warn about - // it the first time we encounter any nonzero crop fields. - int crop_warned; - // If the driver does not support ROI then warn the first time we - // encounter a frame with ROI side data. - int roi_warned; - - AVFrame *frame; - - // Whether the driver support vaSyncBuffer - int has_sync_buffer_func; - // Store buffered pic - AVFifo *encode_fifo; - /** Head data for current output pkt, used only for AV1. */ //void *header_data; //size_t header_data_size; @@ -384,9 +290,6 @@ typedef struct VAAPIEncodeContext { * This is a RefStruct reference. */ VABufferID *coded_buffer_ref; - - /** Tail data of a pic, now only used for av1 repeat frame header. */ - AVPacket *tail_pkt; } VAAPIEncodeContext; typedef struct VAAPIEncodeType { @@ -468,9 +371,6 @@ typedef struct VAAPIEncodeType { char *data, size_t *data_len); } VAAPIEncodeType; - -int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); - int ff_vaapi_encode_init(AVCodecContext *avctx); int ff_vaapi_encode_close(AVCodecContext *avctx); diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index f4b4586583..393b479f99 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -357,6 +357,7 @@ static int vaapi_encode_av1_write_sequence_header(AVCodecContext *avctx, static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeAV1Context *priv = avctx->priv_data; AV1RawOBU *sh_obu = &priv->sh; @@ -438,8 +439,8 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) vseq->seq_level_idx = sh->seq_level_idx[0]; vseq->seq_tier = sh->seq_tier[0]; vseq->order_hint_bits_minus_1 = sh->order_hint_bits_minus_1; - vseq->intra_period = ctx->gop_size; - vseq->ip_period = ctx->b_per_p + 1; + vseq->intra_period = base_ctx->gop_size; + vseq->ip_period = base_ctx->b_per_p + 1; vseq->seq_fields.bits.enable_order_hint = sh->enable_order_hint; @@ -466,12 +467,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, { VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeAV1Context *priv = avctx->priv_data; - VAAPIEncodeAV1Picture *hpic = pic->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeAV1Picture *hpic = base_pic->priv_data; AV1RawOBU *fh_obu = &priv->fh; AV1RawFrameHeader *fh = &fh_obu->obu.frame.header; VAEncPictureParameterBufferAV1 *vpic = 
pic->codec_picture_params; CodedBitstreamFragment *obu = &priv->current_obu; - VAAPIEncodePicture *ref; + HWBaseEncodePicture *ref; VAAPIEncodeAV1Picture *href; int slot, i; int ret; @@ -480,24 +482,24 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, memset(fh_obu, 0, sizeof(*fh_obu)); pic->nb_slices = priv->tile_groups; - pic->non_independent_frame = pic->encode_order < pic->display_order; + pic->non_independent_frame = base_pic->encode_order < base_pic->display_order; fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER; fh_obu->header.obu_has_size_field = 1; - switch (pic->type) { + switch (base_pic->type) { case PICTURE_TYPE_IDR: - av_assert0(pic->nb_refs[0] == 0 || pic->nb_refs[1]); + av_assert0(base_pic->nb_refs[0] == 0 || base_pic->nb_refs[1]); fh->frame_type = AV1_FRAME_KEY; fh->refresh_frame_flags = 0xFF; fh->base_q_idx = priv->q_idx_idr; hpic->slot = 0; - hpic->last_idr_frame = pic->display_order; + hpic->last_idr_frame = base_pic->display_order; break; case PICTURE_TYPE_P: - av_assert0(pic->nb_refs[0]); + av_assert0(base_pic->nb_refs[0]); fh->frame_type = AV1_FRAME_INTER; fh->base_q_idx = priv->q_idx_p; - ref = pic->refs[0][pic->nb_refs[0] - 1]; + ref = base_pic->refs[0][base_pic->nb_refs[0] - 1]; href = ref->priv_data; hpic->slot = !href->slot; hpic->last_idr_frame = href->last_idr_frame; @@ -512,8 +514,8 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->ref_frame_ctrl_l0.fields.search_idx0 = AV1_REF_FRAME_LAST; /** set the 2nd nearest frame in L0 as Golden frame. */ - if (pic->nb_refs[0] > 1) { - ref = pic->refs[0][pic->nb_refs[0] - 2]; + if (base_pic->nb_refs[0] > 1) { + ref = base_pic->refs[0][base_pic->nb_refs[0] - 2]; href = ref->priv_data; fh->ref_frame_idx[3] = href->slot; fh->ref_order_hint[href->slot] = ref->display_order - href->last_idr_frame; @@ -521,7 +523,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, } break; case PICTURE_TYPE_B: - av_assert0(pic->nb_refs[0] && pic->nb_refs[1]); + av_assert0(base_pic->nb_refs[0] && base_pic->nb_refs[1]); fh->frame_type = AV1_FRAME_INTER; fh->base_q_idx = priv->q_idx_b; fh->refresh_frame_flags = 0x0; @@ -534,7 +536,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->ref_frame_ctrl_l0.fields.search_idx0 = AV1_REF_FRAME_LAST; vpic->ref_frame_ctrl_l1.fields.search_idx0 = AV1_REF_FRAME_BWDREF; - ref = pic->refs[0][pic->nb_refs[0] - 1]; + ref = base_pic->refs[0][base_pic->nb_refs[0] - 1]; href = ref->priv_data; hpic->last_idr_frame = href->last_idr_frame; fh->primary_ref_frame = href->slot; @@ -543,7 +545,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, fh->ref_frame_idx[i] = href->slot; } - ref = pic->refs[1][pic->nb_refs[1] - 1]; + ref = base_pic->refs[1][base_pic->nb_refs[1] - 1]; href = ref->priv_data; fh->ref_order_hint[href->slot] = ref->display_order - href->last_idr_frame; for (i = AV1_REF_FRAME_GOLDEN; i < AV1_REFS_PER_FRAME; i++) { @@ -554,13 +556,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, av_assert0(0 && "invalid picture type"); } - fh->show_frame = pic->display_order <= pic->encode_order; + fh->show_frame = base_pic->display_order <= base_pic->encode_order; fh->showable_frame = fh->frame_type != AV1_FRAME_KEY; fh->frame_width_minus_1 = avctx->width - 1; fh->frame_height_minus_1 = avctx->height - 1; fh->render_width_minus_1 = fh->frame_width_minus_1; fh->render_height_minus_1 = fh->frame_height_minus_1; - fh->order_hint = pic->display_order - 
hpic->last_idr_frame; + fh->order_hint = base_pic->display_order - hpic->last_idr_frame; fh->tile_cols = priv->tile_cols; fh->tile_rows = priv->tile_rows; fh->tile_cols_log2 = priv->tile_cols_log2; @@ -626,13 +628,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->reference_frames[i] = VA_INVALID_SURFACE; for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) { - for (int j = 0; j < pic->nb_refs[i]; j++) { - VAAPIEncodePicture *ref_pic = pic->refs[i][j]; + for (int j = 0; j < base_pic->nb_refs[i]; j++) { + HWBaseEncodePicture *ref_pic = base_pic->refs[i][j]; slot = ((VAAPIEncodeAV1Picture*)ref_pic->priv_data)->slot; av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE); - vpic->reference_frames[slot] = ref_pic->recon_surface; + vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface; } } @@ -653,7 +655,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx, vpic->bit_offset_cdef_params = priv->cdef_start_offset; vpic->size_in_bits_cdef_params = priv->cdef_param_size; vpic->size_in_bits_frame_hdr_obu = priv->fh_data_len; - vpic->byte_offset_frame_hdr_obu_size = (((pic->type == PICTURE_TYPE_IDR) ? + vpic->byte_offset_frame_hdr_obu_size = (((base_pic->type == PICTURE_TYPE_IDR) ? priv->sh_data_len / 8 : 0) + (fh_obu->header.obu_extension_flag ? 2 : 1)); @@ -695,14 +697,15 @@ static int vaapi_encode_av1_write_picture_header(AVCodecContext *avctx, CodedBitstreamAV1Context *cbctx = priv->cbc->priv_data; AV1RawOBU *fh_obu = &priv->fh; AV1RawFrameHeader *rep_fh = &fh_obu->obu.frame_header; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; VAAPIEncodeAV1Picture *href; int ret = 0; pic->tail_size = 0; /** Pack repeat frame header. */ - if (pic->display_order > pic->encode_order) { + if (base_pic->display_order > base_pic->encode_order) { memset(fh_obu, 0, sizeof(*fh_obu)); - href = pic->refs[0][pic->nb_refs[0] - 1]->priv_data; + href = base_pic->refs[0][base_pic->nb_refs[0] - 1]->priv_data; fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER; fh_obu->header.obu_has_size_field = 1; @@ -937,7 +940,7 @@ const FFCodec ff_av1_vaapi_encoder = { .p.id = AV_CODEC_ID_AV1, .priv_data_size = sizeof(VAAPIEncodeAV1Context), .init = &vaapi_encode_av1_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_av1_close, .p.priv_class = &vaapi_encode_av1_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index ebb1760cd3..412e392ac4 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -233,7 +233,7 @@ static int vaapi_encode_h264_write_extra_header(AVCodecContext *avctx, goto fail; } if (priv->sei_needed & SEI_TIMING) { - if (pic->type == PICTURE_TYPE_IDR) { + if (pic->base.type == PICTURE_TYPE_IDR) { err = ff_cbs_sei_add_message(priv->cbc, au, 1, SEI_TYPE_BUFFERING_PERIOD, &priv->sei_buffering_period, NULL); @@ -295,6 +295,7 @@ fail: static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeH264Context *priv = avctx->priv_data; H264RawSPS *sps = &priv->raw_sps; @@ -326,18 +327,18 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) sps->constraint_set1_flag = 1; if (avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10) - 
sps->constraint_set3_flag = ctx->gop_size == 1; + sps->constraint_set3_flag = base_ctx->gop_size == 1; if (avctx->profile == AV_PROFILE_H264_MAIN || avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10) { sps->constraint_set4_flag = 1; - sps->constraint_set5_flag = ctx->b_per_p == 0; + sps->constraint_set5_flag = base_ctx->b_per_p == 0; } - if (ctx->gop_size == 1) + if (base_ctx->gop_size == 1) priv->dpb_frames = 0; else - priv->dpb_frames = 1 + ctx->max_b_depth; + priv->dpb_frames = 1 + base_ctx->max_b_depth; if (avctx->level != AV_LEVEL_UNKNOWN) { sps->level_idc = avctx->level; @@ -374,7 +375,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) sps->bit_depth_chroma_minus8 = bit_depth - 8; sps->log2_max_frame_num_minus4 = 4; - sps->pic_order_cnt_type = ctx->max_b_depth ? 0 : 2; + sps->pic_order_cnt_type = base_ctx->max_b_depth ? 0 : 2; if (sps->pic_order_cnt_type == 0) { sps->log2_max_pic_order_cnt_lsb_minus4 = 4; } @@ -501,8 +502,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) sps->vui.motion_vectors_over_pic_boundaries_flag = 1; sps->vui.log2_max_mv_length_horizontal = 15; sps->vui.log2_max_mv_length_vertical = 15; - sps->vui.max_num_reorder_frames = ctx->max_b_depth; - sps->vui.max_dec_frame_buffering = ctx->max_b_depth + 1; + sps->vui.max_num_reorder_frames = base_ctx->max_b_depth; + sps->vui.max_dec_frame_buffering = base_ctx->max_b_depth + 1; pps->nal_unit_header.nal_ref_idc = 3; pps->nal_unit_header.nal_unit_type = H264_NAL_PPS; @@ -535,9 +536,9 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) *vseq = (VAEncSequenceParameterBufferH264) { .seq_parameter_set_id = sps->seq_parameter_set_id, .level_idc = sps->level_idc, - .intra_period = ctx->gop_size, - .intra_idr_period = ctx->gop_size, - .ip_period = ctx->b_per_p + 1, + .intra_period = base_ctx->gop_size, + .intra_idr_period = base_ctx->gop_size, + .ip_period = base_ctx->b_per_p + 1, .bits_per_second = ctx->va_bit_rate, .max_num_ref_frames = sps->max_num_ref_frames, @@ -621,19 +622,20 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeH264Context *priv = avctx->priv_data; - VAAPIEncodeH264Picture *hpic = pic->priv_data; - VAAPIEncodePicture *prev = pic->prev; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeH264Picture *hpic = base_pic->priv_data; + HWBaseEncodePicture *prev = base_pic->prev; VAAPIEncodeH264Picture *hprev = prev ? prev->priv_data : NULL; VAEncPictureParameterBufferH264 *vpic = pic->codec_picture_params; int i, j = 0; - if (pic->type == PICTURE_TYPE_IDR) { - av_assert0(pic->display_order == pic->encode_order); + if (base_pic->type == PICTURE_TYPE_IDR) { + av_assert0(base_pic->display_order == base_pic->encode_order); hpic->frame_num = 0; - hpic->last_idr_frame = pic->display_order; + hpic->last_idr_frame = base_pic->display_order; hpic->idr_pic_id = hprev ? 
hprev->idr_pic_id + 1 : 0; hpic->primary_pic_type = 0; @@ -646,10 +648,10 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, hpic->last_idr_frame = hprev->last_idr_frame; hpic->idr_pic_id = hprev->idr_pic_id; - if (pic->type == PICTURE_TYPE_I) { + if (base_pic->type == PICTURE_TYPE_I) { hpic->slice_type = 7; hpic->primary_pic_type = 0; - } else if (pic->type == PICTURE_TYPE_P) { + } else if (base_pic->type == PICTURE_TYPE_P) { hpic->slice_type = 5; hpic->primary_pic_type = 1; } else { @@ -657,13 +659,13 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, hpic->primary_pic_type = 2; } } - hpic->pic_order_cnt = pic->display_order - hpic->last_idr_frame; + hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame; if (priv->raw_sps.pic_order_cnt_type == 2) { hpic->pic_order_cnt *= 2; } - hpic->dpb_delay = pic->display_order - pic->encode_order + ctx->max_b_depth; - hpic->cpb_delay = pic->encode_order - hpic->last_idr_frame; + hpic->dpb_delay = base_pic->display_order - base_pic->encode_order + base_ctx->max_b_depth; + hpic->cpb_delay = base_pic->encode_order - hpic->last_idr_frame; if (priv->aud) { priv->aud_needed = 1; @@ -679,7 +681,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, priv->sei_needed = 0; - if (priv->sei & SEI_IDENTIFIER && pic->encode_order == 0) + if (priv->sei & SEI_IDENTIFIER && base_pic->encode_order == 0) priv->sei_needed |= SEI_IDENTIFIER; #if !CONFIG_VAAPI_1 if (ctx->va_rc_mode == VA_RC_CBR) @@ -695,11 +697,11 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, priv->sei_needed |= SEI_TIMING; } - if (priv->sei & SEI_RECOVERY_POINT && pic->type == PICTURE_TYPE_I) { + if (priv->sei & SEI_RECOVERY_POINT && base_pic->type == PICTURE_TYPE_I) { priv->sei_recovery_point = (H264RawSEIRecoveryPoint) { .recovery_frame_cnt = 0, .exact_match_flag = 1, - .broken_link_flag = ctx->b_per_p > 0, + .broken_link_flag = base_ctx->b_per_p > 0, }; priv->sei_needed |= SEI_RECOVERY_POINT; @@ -709,7 +711,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, int err; size_t sei_a53cc_len; av_freep(&priv->sei_a53cc_data); - err = ff_alloc_a53_sei(pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len); + err = ff_alloc_a53_sei(base_pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len); if (err < 0) return err; if (priv->sei_a53cc_data != NULL) { @@ -729,15 +731,15 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, .BottomFieldOrderCnt = hpic->pic_order_cnt, }; for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) { - for (i = 0; i < pic->nb_refs[k]; i++) { - VAAPIEncodePicture *ref = pic->refs[k][i]; + for (i = 0; i < base_pic->nb_refs[k]; i++) { + HWBaseEncodePicture *ref = base_pic->refs[k][i]; VAAPIEncodeH264Picture *href; - av_assert0(ref && ref->encode_order < pic->encode_order); + av_assert0(ref && ref->encode_order < base_pic->encode_order); href = ref->priv_data; vpic->ReferenceFrames[j++] = (VAPictureH264) { - .picture_id = ref->recon_surface, + .picture_id = ((VAAPIEncodePicture *)ref)->recon_surface, .frame_idx = href->frame_num, .flags = VA_PICTURE_H264_SHORT_TERM_REFERENCE, .TopFieldOrderCnt = href->pic_order_cnt, @@ -757,8 +759,8 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx, vpic->frame_num = hpic->frame_num; - vpic->pic_fields.bits.idr_pic_flag = (pic->type == PICTURE_TYPE_IDR); - vpic->pic_fields.bits.reference_pic_flag = pic->is_reference; + vpic->pic_fields.bits.idr_pic_flag = 
(base_pic->type == PICTURE_TYPE_IDR); + vpic->pic_fields.bits.reference_pic_flag = base_pic->is_reference; return 0; } @@ -769,31 +771,32 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, VAAPIEncodePicture **rpl1, int *rpl_size) { - VAAPIEncodePicture *prev; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + HWBaseEncodePicture *prev; VAAPIEncodeH264Picture *hp, *hn, *hc; int i, j, n = 0; - prev = pic->prev; + prev = base_pic->prev; av_assert0(prev); - hp = pic->priv_data; + hp = base_pic->priv_data; - for (i = 0; i < pic->prev->nb_dpb_pics; i++) { + for (i = 0; i < base_pic->prev->nb_dpb_pics; i++) { hn = prev->dpb[i]->priv_data; av_assert0(hn->frame_num < hp->frame_num); - if (pic->type == PICTURE_TYPE_P) { + if (base_pic->type == PICTURE_TYPE_P) { for (j = n; j > 0; j--) { - hc = rpl0[j - 1]->priv_data; + hc = rpl0[j - 1]->base.priv_data; av_assert0(hc->frame_num != hn->frame_num); if (hc->frame_num > hn->frame_num) break; rpl0[j] = rpl0[j - 1]; } - rpl0[j] = prev->dpb[i]; + rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i]; - } else if (pic->type == PICTURE_TYPE_B) { + } else if (base_pic->type == PICTURE_TYPE_B) { for (j = n; j > 0; j--) { - hc = rpl0[j - 1]->priv_data; + hc = rpl0[j - 1]->base.priv_data; av_assert0(hc->pic_order_cnt != hp->pic_order_cnt); if (hc->pic_order_cnt < hp->pic_order_cnt) { if (hn->pic_order_cnt > hp->pic_order_cnt || @@ -805,10 +808,10 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } rpl0[j] = rpl0[j - 1]; } - rpl0[j] = prev->dpb[i]; + rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i]; for (j = n; j > 0; j--) { - hc = rpl1[j - 1]->priv_data; + hc = rpl1[j - 1]->base.priv_data; av_assert0(hc->pic_order_cnt != hp->pic_order_cnt); if (hc->pic_order_cnt > hp->pic_order_cnt) { if (hn->pic_order_cnt < hp->pic_order_cnt || @@ -820,13 +823,13 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, } rpl1[j] = rpl1[j - 1]; } - rpl1[j] = prev->dpb[i]; + rpl1[j] = (VAAPIEncodePicture *)prev->dpb[i]; } ++n; } - if (pic->type == PICTURE_TYPE_B) { + if (base_pic->type == PICTURE_TYPE_B) { for (i = 0; i < n; i++) { if (rpl0[i] != rpl1[i]) break; @@ -835,22 +838,22 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx, FFSWAP(VAAPIEncodePicture*, rpl1[0], rpl1[1]); } - if (pic->type == PICTURE_TYPE_P || - pic->type == PICTURE_TYPE_B) { + if (base_pic->type == PICTURE_TYPE_P || + base_pic->type == PICTURE_TYPE_B) { av_log(avctx, AV_LOG_DEBUG, "Default RefPicList0 for fn=%d/poc=%d:", hp->frame_num, hp->pic_order_cnt); for (i = 0; i < n; i++) { - hn = rpl0[i]->priv_data; + hn = rpl0[i]->base.priv_data; av_log(avctx, AV_LOG_DEBUG, " fn=%d/poc=%d", hn->frame_num, hn->pic_order_cnt); } av_log(avctx, AV_LOG_DEBUG, "\n"); } - if (pic->type == PICTURE_TYPE_B) { + if (base_pic->type == PICTURE_TYPE_B) { av_log(avctx, AV_LOG_DEBUG, "Default RefPicList1 for fn=%d/poc=%d:", hp->frame_num, hp->pic_order_cnt); for (i = 0; i < n; i++) { - hn = rpl1[i]->priv_data; + hn = rpl1[i]->base.priv_data; av_log(avctx, AV_LOG_DEBUG, " fn=%d/poc=%d", hn->frame_num, hn->pic_order_cnt); } @@ -865,8 +868,9 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, VAAPIEncodeSlice *slice) { VAAPIEncodeH264Context *priv = avctx->priv_data; - VAAPIEncodeH264Picture *hpic = pic->priv_data; - VAAPIEncodePicture *prev = pic->prev; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeH264Picture *hpic = base_pic->priv_data; + HWBaseEncodePicture *prev = base_pic->prev; 
H264RawSPS *sps = &priv->raw_sps; H264RawPPS *pps = &priv->raw_pps; H264RawSliceHeader *sh = &priv->raw_slice.header; @@ -874,12 +878,12 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, VAEncSliceParameterBufferH264 *vslice = slice->codec_slice_params; int i, j; - if (pic->type == PICTURE_TYPE_IDR) { + if (base_pic->type == PICTURE_TYPE_IDR) { sh->nal_unit_header.nal_unit_type = H264_NAL_IDR_SLICE; sh->nal_unit_header.nal_ref_idc = 3; } else { sh->nal_unit_header.nal_unit_type = H264_NAL_SLICE; - sh->nal_unit_header.nal_ref_idc = pic->is_reference; + sh->nal_unit_header.nal_ref_idc = base_pic->is_reference; } sh->first_mb_in_slice = slice->block_start; @@ -895,25 +899,25 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, sh->direct_spatial_mv_pred_flag = 1; - if (pic->type == PICTURE_TYPE_B) + if (base_pic->type == PICTURE_TYPE_B) sh->slice_qp_delta = priv->fixed_qp_b - (pps->pic_init_qp_minus26 + 26); - else if (pic->type == PICTURE_TYPE_P) + else if (base_pic->type == PICTURE_TYPE_P) sh->slice_qp_delta = priv->fixed_qp_p - (pps->pic_init_qp_minus26 + 26); else sh->slice_qp_delta = priv->fixed_qp_idr - (pps->pic_init_qp_minus26 + 26); - if (pic->is_reference && pic->type != PICTURE_TYPE_IDR) { - VAAPIEncodePicture *discard_list[MAX_DPB_SIZE]; + if (base_pic->is_reference && base_pic->type != PICTURE_TYPE_IDR) { + HWBaseEncodePicture *discard_list[MAX_DPB_SIZE]; int discard = 0, keep = 0; // Discard everything which is in the DPB of the previous frame but // not in the DPB of this one. for (i = 0; i < prev->nb_dpb_pics; i++) { - for (j = 0; j < pic->nb_dpb_pics; j++) { - if (prev->dpb[i] == pic->dpb[j]) + for (j = 0; j < base_pic->nb_dpb_pics; j++) { + if (prev->dpb[i] == base_pic->dpb[j]) break; } - if (j == pic->nb_dpb_pics) { + if (j == base_pic->nb_dpb_pics) { discard_list[discard] = prev->dpb[i]; ++discard; } else { @@ -939,7 +943,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, // If the intended references are not the first entries of RefPicListN // by default, use ref-pic-list-modification to move them there. 
- if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) { + if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) { VAAPIEncodePicture *def_l0[MAX_DPB_SIZE], *def_l1[MAX_DPB_SIZE]; VAAPIEncodeH264Picture *href; int n; @@ -947,19 +951,19 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, vaapi_encode_h264_default_ref_pic_list(avctx, pic, def_l0, def_l1, &n); - if (pic->type == PICTURE_TYPE_P) { + if (base_pic->type == PICTURE_TYPE_P) { int need_rplm = 0; - for (i = 0; i < pic->nb_refs[0]; i++) { - av_assert0(pic->refs[0][i]); - if (pic->refs[0][i] != def_l0[i]) + for (i = 0; i < base_pic->nb_refs[0]; i++) { + av_assert0(base_pic->refs[0][i]); + if (base_pic->refs[0][i] != (HWBaseEncodePicture *)def_l0[i]) need_rplm = 1; } sh->ref_pic_list_modification_flag_l0 = need_rplm; if (need_rplm) { int pic_num = hpic->frame_num; - for (i = 0; i < pic->nb_refs[0]; i++) { - href = pic->refs[0][i]->priv_data; + for (i = 0; i < base_pic->nb_refs[0]; i++) { + href = base_pic->refs[0][i]->priv_data; av_assert0(href->frame_num != pic_num); if (href->frame_num < pic_num) { sh->rplm_l0[i].modification_of_pic_nums_idc = 0; @@ -978,20 +982,20 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, } else { int need_rplm_l0 = 0, need_rplm_l1 = 0; int n0 = 0, n1 = 0; - for (i = 0; i < pic->nb_refs[0]; i++) { - av_assert0(pic->refs[0][i]); - href = pic->refs[0][i]->priv_data; + for (i = 0; i < base_pic->nb_refs[0]; i++) { + av_assert0(base_pic->refs[0][i]); + href = base_pic->refs[0][i]->priv_data; av_assert0(href->pic_order_cnt < hpic->pic_order_cnt); - if (pic->refs[0][i] != def_l0[n0]) + if (base_pic->refs[0][i] != (HWBaseEncodePicture *)def_l0[n0]) need_rplm_l0 = 1; ++n0; } - for (i = 0; i < pic->nb_refs[1]; i++) { - av_assert0(pic->refs[1][i]); - href = pic->refs[1][i]->priv_data; + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_assert0(base_pic->refs[1][i]); + href = base_pic->refs[1][i]->priv_data; av_assert0(href->pic_order_cnt > hpic->pic_order_cnt); - if (pic->refs[1][i] != def_l1[n1]) + if (base_pic->refs[1][i] != (HWBaseEncodePicture *)def_l1[n1]) need_rplm_l1 = 1; ++n1; } @@ -999,8 +1003,8 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, sh->ref_pic_list_modification_flag_l0 = need_rplm_l0; if (need_rplm_l0) { int pic_num = hpic->frame_num; - for (i = j = 0; i < pic->nb_refs[0]; i++) { - href = pic->refs[0][i]->priv_data; + for (i = j = 0; i < base_pic->nb_refs[0]; i++) { + href = base_pic->refs[0][i]->priv_data; av_assert0(href->frame_num != pic_num); if (href->frame_num < pic_num) { sh->rplm_l0[j].modification_of_pic_nums_idc = 0; @@ -1021,8 +1025,8 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, sh->ref_pic_list_modification_flag_l1 = need_rplm_l1; if (need_rplm_l1) { int pic_num = hpic->frame_num; - for (i = j = 0; i < pic->nb_refs[1]; i++) { - href = pic->refs[1][i]->priv_data; + for (i = j = 0; i < base_pic->nb_refs[1]; i++) { + href = base_pic->refs[1][i]->priv_data; av_assert0(href->frame_num != pic_num); if (href->frame_num < pic_num) { sh->rplm_l1[j].modification_of_pic_nums_idc = 0; @@ -1062,15 +1066,15 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx, vslice->RefPicList1[i].flags = VA_PICTURE_H264_INVALID; } - if (pic->nb_refs[0]) { + if (base_pic->nb_refs[0]) { // Backward reference for P- or B-frame. 
- av_assert0(pic->type == PICTURE_TYPE_P || - pic->type == PICTURE_TYPE_B); + av_assert0(base_pic->type == PICTURE_TYPE_P || + base_pic->type == PICTURE_TYPE_B); vslice->RefPicList0[0] = vpic->ReferenceFrames[0]; } - if (pic->nb_refs[1]) { + if (base_pic->nb_refs[1]) { // Forward reference for B-frame. - av_assert0(pic->type == PICTURE_TYPE_B); + av_assert0(base_pic->type == PICTURE_TYPE_B); vslice->RefPicList1[0] = vpic->ReferenceFrames[1]; } @@ -1380,7 +1384,7 @@ const FFCodec ff_h264_vaapi_encoder = { .p.id = AV_CODEC_ID_H264, .priv_data_size = sizeof(VAAPIEncodeH264Context), .init = &vaapi_encode_h264_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_h264_close, .p.priv_class = &vaapi_encode_h264_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index 77bd5e31af..bea35d7ca8 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -259,6 +259,7 @@ fail: static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = avctx->priv_data; H265RawVPS *vps = &priv->raw_vps; @@ -340,7 +341,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) ptl->general_max_420chroma_constraint_flag = chroma_format <= 1; ptl->general_max_monochrome_constraint_flag = chroma_format == 0; - ptl->general_intra_constraint_flag = ctx->gop_size == 1; + ptl->general_intra_constraint_flag = base_ctx->gop_size == 1; ptl->general_one_picture_only_constraint_flag = 0; ptl->general_lower_bit_rate_constraint_flag = 1; @@ -353,7 +354,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) level = ff_h265_guess_level(ptl, avctx->bit_rate, ctx->surface_width, ctx->surface_height, ctx->nb_slices, ctx->tile_rows, ctx->tile_cols, - (ctx->b_per_p > 0) + 1); + (base_ctx->b_per_p > 0) + 1); if (level) { av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name); ptl->general_level_idc = level->level_idc; @@ -367,8 +368,8 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) } vps->vps_sub_layer_ordering_info_present_flag = 0; - vps->vps_max_dec_pic_buffering_minus1[0] = ctx->max_b_depth + 1; - vps->vps_max_num_reorder_pics[0] = ctx->max_b_depth; + vps->vps_max_dec_pic_buffering_minus1[0] = base_ctx->max_b_depth + 1; + vps->vps_max_num_reorder_pics[0] = base_ctx->max_b_depth; vps->vps_max_latency_increase_plus1[0] = 0; vps->vps_max_layer_id = 0; @@ -642,9 +643,9 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) .general_level_idc = vps->profile_tier_level.general_level_idc, .general_tier_flag = vps->profile_tier_level.general_tier_flag, - .intra_period = ctx->gop_size, - .intra_idr_period = ctx->gop_size, - .ip_period = ctx->b_per_p + 1, + .intra_period = base_ctx->gop_size, + .intra_idr_period = base_ctx->gop_size, + .ip_period = base_ctx->b_per_p + 1, .bits_per_second = ctx->va_bit_rate, .pic_width_in_luma_samples = sps->pic_width_in_luma_samples, @@ -757,18 +758,19 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *pic) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = 
avctx->priv_data; - VAAPIEncodeH265Picture *hpic = pic->priv_data; - VAAPIEncodePicture *prev = pic->prev; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeH265Picture *hpic = base_pic->priv_data; + HWBaseEncodePicture *prev = base_pic->prev; VAAPIEncodeH265Picture *hprev = prev ? prev->priv_data : NULL; VAEncPictureParameterBufferHEVC *vpic = pic->codec_picture_params; int i, j = 0; - if (pic->type == PICTURE_TYPE_IDR) { - av_assert0(pic->display_order == pic->encode_order); + if (base_pic->type == PICTURE_TYPE_IDR) { + av_assert0(base_pic->display_order == base_pic->encode_order); - hpic->last_idr_frame = pic->display_order; + hpic->last_idr_frame = base_pic->display_order; hpic->slice_nal_unit = HEVC_NAL_IDR_W_RADL; hpic->slice_type = HEVC_SLICE_I; @@ -777,23 +779,23 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, av_assert0(prev); hpic->last_idr_frame = hprev->last_idr_frame; - if (pic->type == PICTURE_TYPE_I) { + if (base_pic->type == PICTURE_TYPE_I) { hpic->slice_nal_unit = HEVC_NAL_CRA_NUT; hpic->slice_type = HEVC_SLICE_I; hpic->pic_type = 0; - } else if (pic->type == PICTURE_TYPE_P) { - av_assert0(pic->refs[0]); + } else if (base_pic->type == PICTURE_TYPE_P) { + av_assert0(base_pic->refs[0]); hpic->slice_nal_unit = HEVC_NAL_TRAIL_R; hpic->slice_type = HEVC_SLICE_P; hpic->pic_type = 1; } else { - VAAPIEncodePicture *irap_ref; - av_assert0(pic->refs[0][0] && pic->refs[1][0]); - for (irap_ref = pic; irap_ref; irap_ref = irap_ref->refs[1][0]) { + HWBaseEncodePicture *irap_ref; + av_assert0(base_pic->refs[0][0] && base_pic->refs[1][0]); + for (irap_ref = base_pic; irap_ref; irap_ref = irap_ref->refs[1][0]) { if (irap_ref->type == PICTURE_TYPE_I) break; } - if (pic->b_depth == ctx->max_b_depth) { + if (base_pic->b_depth == base_ctx->max_b_depth) { hpic->slice_nal_unit = irap_ref ? HEVC_NAL_RASL_N : HEVC_NAL_TRAIL_N; } else { @@ -804,7 +806,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, hpic->pic_type = 2; } } - hpic->pic_order_cnt = pic->display_order - hpic->last_idr_frame; + hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame; if (priv->aud) { priv->aud_needed = 1; @@ -826,9 +828,9 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, // may force an IDR frame on the output where the medadata gets // changed on the input frame. 
if ((priv->sei & SEI_MASTERING_DISPLAY) && - (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) { + (base_pic->type == PICTURE_TYPE_I || base_pic->type == PICTURE_TYPE_IDR)) { AVFrameSideData *sd = - av_frame_get_side_data(pic->input_image, + av_frame_get_side_data(base_pic->input_image, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA); if (sd) { @@ -874,9 +876,9 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, } if ((priv->sei & SEI_CONTENT_LIGHT_LEVEL) && - (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) { + (base_pic->type == PICTURE_TYPE_I || base_pic->type == PICTURE_TYPE_IDR)) { AVFrameSideData *sd = - av_frame_get_side_data(pic->input_image, + av_frame_get_side_data(base_pic->input_image, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL); if (sd) { @@ -896,7 +898,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, int err; size_t sei_a53cc_len; av_freep(&priv->sei_a53cc_data); - err = ff_alloc_a53_sei(pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len); + err = ff_alloc_a53_sei(base_pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len); if (err < 0) return err; if (priv->sei_a53cc_data != NULL) { @@ -915,19 +917,19 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, }; for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) { - for (i = 0; i < pic->nb_refs[k]; i++) { - VAAPIEncodePicture *ref = pic->refs[k][i]; + for (i = 0; i < base_pic->nb_refs[k]; i++) { + HWBaseEncodePicture *ref = base_pic->refs[k][i]; VAAPIEncodeH265Picture *href; - av_assert0(ref && ref->encode_order < pic->encode_order); + av_assert0(ref && ref->encode_order < base_pic->encode_order); href = ref->priv_data; vpic->reference_frames[j++] = (VAPictureHEVC) { - .picture_id = ref->recon_surface, + .picture_id = ((VAAPIEncodePicture *)ref)->recon_surface, .pic_order_cnt = href->pic_order_cnt, - .flags = (ref->display_order < pic->display_order ? + .flags = (ref->display_order < base_pic->display_order ? VA_PICTURE_HEVC_RPS_ST_CURR_BEFORE : 0) | - (ref->display_order > pic->display_order ? + (ref->display_order > base_pic->display_order ? 
VA_PICTURE_HEVC_RPS_ST_CURR_AFTER : 0), }; } @@ -944,8 +946,8 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx, vpic->nal_unit_type = hpic->slice_nal_unit; - vpic->pic_fields.bits.reference_pic_flag = pic->is_reference; - switch (pic->type) { + vpic->pic_fields.bits.reference_pic_flag = base_pic->is_reference; + switch (base_pic->type) { case PICTURE_TYPE_IDR: vpic->pic_fields.bits.idr_pic_flag = 1; vpic->pic_fields.bits.coding_type = 1; @@ -973,9 +975,10 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, VAAPIEncodePicture *pic, VAAPIEncodeSlice *slice) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = avctx->priv_data; - VAAPIEncodeH265Picture *hpic = pic->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeH265Picture *hpic = base_pic->priv_data; const H265RawSPS *sps = &priv->raw_sps; const H265RawPPS *pps = &priv->raw_pps; H265RawSliceHeader *sh = &priv->raw_slice.header; @@ -996,13 +999,13 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, sh->slice_type = hpic->slice_type; - if (sh->slice_type == HEVC_SLICE_P && ctx->p_to_gpb) + if (sh->slice_type == HEVC_SLICE_P && base_ctx->p_to_gpb) sh->slice_type = HEVC_SLICE_B; sh->slice_pic_order_cnt_lsb = hpic->pic_order_cnt & (1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4)) - 1; - if (pic->type != PICTURE_TYPE_IDR) { + if (base_pic->type != PICTURE_TYPE_IDR) { H265RawSTRefPicSet *rps; const VAAPIEncodeH265Picture *strp; int rps_poc[MAX_DPB_SIZE]; @@ -1016,33 +1019,33 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, rps_pics = 0; for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) { - for (j = 0; j < pic->nb_refs[i]; j++) { - strp = pic->refs[i][j]->priv_data; + for (j = 0; j < base_pic->nb_refs[i]; j++) { + strp = base_pic->refs[i][j]->priv_data; rps_poc[rps_pics] = strp->pic_order_cnt; rps_used[rps_pics] = 1; ++rps_pics; } } - for (i = 0; i < pic->nb_dpb_pics; i++) { - if (pic->dpb[i] == pic) + for (i = 0; i < base_pic->nb_dpb_pics; i++) { + if (base_pic->dpb[i] == base_pic) continue; - for (j = 0; j < pic->nb_refs[0]; j++) { - if (pic->dpb[i] == pic->refs[0][j]) + for (j = 0; j < base_pic->nb_refs[0]; j++) { + if (base_pic->dpb[i] == base_pic->refs[0][j]) break; } - if (j < pic->nb_refs[0]) + if (j < base_pic->nb_refs[0]) continue; - for (j = 0; j < pic->nb_refs[1]; j++) { - if (pic->dpb[i] == pic->refs[1][j]) + for (j = 0; j < base_pic->nb_refs[1]; j++) { + if (base_pic->dpb[i] == base_pic->refs[1][j]) break; } - if (j < pic->nb_refs[1]) + if (j < base_pic->nb_refs[1]) continue; - strp = pic->dpb[i]->priv_data; + strp = base_pic->dpb[i]->priv_data; rps_poc[rps_pics] = strp->pic_order_cnt; rps_used[rps_pics] = 0; ++rps_pics; @@ -1109,9 +1112,9 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, sh->slice_sao_luma_flag = sh->slice_sao_chroma_flag = sps->sample_adaptive_offset_enabled_flag; - if (pic->type == PICTURE_TYPE_B) + if (base_pic->type == PICTURE_TYPE_B) sh->slice_qp_delta = priv->fixed_qp_b - (pps->init_qp_minus26 + 26); - else if (pic->type == PICTURE_TYPE_P) + else if (base_pic->type == PICTURE_TYPE_P) sh->slice_qp_delta = priv->fixed_qp_p - (pps->init_qp_minus26 + 26); else sh->slice_qp_delta = priv->fixed_qp_idr - (pps->init_qp_minus26 + 26); @@ -1166,22 +1169,22 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, vslice->ref_pic_list1[i].flags = VA_PICTURE_HEVC_INVALID; } - if 
(pic->nb_refs[0]) { + if (base_pic->nb_refs[0]) { // Backward reference for P- or B-frame. - av_assert0(pic->type == PICTURE_TYPE_P || - pic->type == PICTURE_TYPE_B); + av_assert0(base_pic->type == PICTURE_TYPE_P || + base_pic->type == PICTURE_TYPE_B); vslice->ref_pic_list0[0] = vpic->reference_frames[0]; - if (ctx->p_to_gpb && pic->type == PICTURE_TYPE_P) + if (base_ctx->p_to_gpb && base_pic->type == PICTURE_TYPE_P) // Reference for GPB B-frame, L0 == L1 vslice->ref_pic_list1[0] = vpic->reference_frames[0]; } - if (pic->nb_refs[1]) { + if (base_pic->nb_refs[1]) { // Forward reference for B-frame. - av_assert0(pic->type == PICTURE_TYPE_B); + av_assert0(base_pic->type == PICTURE_TYPE_B); vslice->ref_pic_list1[0] = vpic->reference_frames[1]; } - if (pic->type == PICTURE_TYPE_P && ctx->p_to_gpb) { + if (base_pic->type == PICTURE_TYPE_P && base_ctx->p_to_gpb) { vslice->slice_type = HEVC_SLICE_B; for (i = 0; i < FF_ARRAY_ELEMS(vslice->ref_pic_list0); i++) { vslice->ref_pic_list1[i].picture_id = vslice->ref_pic_list0[i].picture_id; @@ -1494,7 +1497,7 @@ const FFCodec ff_hevc_vaapi_encoder = { .p.id = AV_CODEC_ID_HEVC, .priv_data_size = sizeof(VAAPIEncodeH265Context), .init = &vaapi_encode_h265_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_h265_close, .p.priv_class = &vaapi_encode_h265_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index fb5c0d34c6..8ca1be192a 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -223,6 +223,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *pic) { VAAPIEncodeMJPEGContext *priv = avctx->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; JPEGRawFrameHeader *fh = &priv->frame_header; JPEGRawScanHeader *sh = &priv->scan.header; VAEncPictureParameterBufferJPEG *vpic = pic->codec_picture_params; @@ -232,7 +233,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, const uint8_t *components; int t, i, quant_scale, len; - av_assert0(pic->type == PICTURE_TYPE_IDR); + av_assert0(base_pic->type == PICTURE_TYPE_IDR); desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); av_assert0(desc); @@ -261,7 +262,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, // JFIF header. 
if (priv->jfif) { JPEGRawApplicationData *app = &priv->jfif_header; - AVRational sar = pic->input_image->sample_aspect_ratio; + AVRational sar = base_pic->input_image->sample_aspect_ratio; int sar_w, sar_h; PutByteContext pbc; @@ -572,7 +573,7 @@ const FFCodec ff_mjpeg_vaapi_encoder = { .p.id = AV_CODEC_ID_MJPEG, .priv_data_size = sizeof(VAAPIEncodeMJPEGContext), .init = &vaapi_encode_mjpeg_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_mjpeg_close, .p.priv_class = &vaapi_encode_mjpeg_class, .p.capabilities = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DR1 | diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c index d0980c52b0..f36251aa2b 100644 --- a/libavcodec/vaapi_encode_mpeg2.c +++ b/libavcodec/vaapi_encode_mpeg2.c @@ -166,6 +166,7 @@ fail: static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeMPEG2Context *priv = avctx->priv_data; MPEG2RawSequenceHeader *sh = &priv->sequence_header; @@ -281,7 +282,7 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) se->bit_rate_extension = priv->bit_rate >> 18; se->vbv_buffer_size_extension = priv->vbv_buffer_size >> 10; - se->low_delay = ctx->b_per_p == 0; + se->low_delay = base_ctx->b_per_p == 0; se->frame_rate_extension_n = ext_n; se->frame_rate_extension_d = ext_d; @@ -353,8 +354,8 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) *vseq = (VAEncSequenceParameterBufferMPEG2) { - .intra_period = ctx->gop_size, - .ip_period = ctx->b_per_p + 1, + .intra_period = base_ctx->gop_size, + .ip_period = base_ctx->b_per_p + 1, .picture_width = avctx->width, .picture_height = avctx->height, @@ -417,30 +418,31 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx) } static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic) + VAAPIEncodePicture *pic) { VAAPIEncodeMPEG2Context *priv = avctx->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; MPEG2RawPictureHeader *ph = &priv->picture_header; MPEG2RawPictureCodingExtension *pce = &priv->picture_coding_extension.data.picture_coding; VAEncPictureParameterBufferMPEG2 *vpic = pic->codec_picture_params; - if (pic->type == PICTURE_TYPE_IDR || pic->type == PICTURE_TYPE_I) { + if (base_pic->type == PICTURE_TYPE_IDR || base_pic->type == PICTURE_TYPE_I) { ph->temporal_reference = 0; ph->picture_coding_type = 1; - priv->last_i_frame = pic->display_order; + priv->last_i_frame = base_pic->display_order; } else { - ph->temporal_reference = pic->display_order - priv->last_i_frame; - ph->picture_coding_type = pic->type == PICTURE_TYPE_B ? 3 : 2; + ph->temporal_reference = base_pic->display_order - priv->last_i_frame; + ph->picture_coding_type = base_pic->type == PICTURE_TYPE_B ? 
3 : 2; } - if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) { + if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) { pce->f_code[0][0] = priv->f_code_horizontal; pce->f_code[0][1] = priv->f_code_vertical; } else { pce->f_code[0][0] = 15; pce->f_code[0][1] = 15; } - if (pic->type == PICTURE_TYPE_B) { + if (base_pic->type == PICTURE_TYPE_B) { pce->f_code[1][0] = priv->f_code_horizontal; pce->f_code[1][1] = priv->f_code_vertical; } else { @@ -451,19 +453,19 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, vpic->reconstructed_picture = pic->recon_surface; vpic->coded_buf = pic->output_buffer; - switch (pic->type) { + switch (base_pic->type) { case PICTURE_TYPE_IDR: case PICTURE_TYPE_I: vpic->picture_type = VAEncPictureTypeIntra; break; case PICTURE_TYPE_P: vpic->picture_type = VAEncPictureTypePredictive; - vpic->forward_reference_picture = pic->refs[0][0]->recon_surface; + vpic->forward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface; break; case PICTURE_TYPE_B: vpic->picture_type = VAEncPictureTypeBidirectional; - vpic->forward_reference_picture = pic->refs[0][0]->recon_surface; - vpic->backward_reference_picture = pic->refs[1][0]->recon_surface; + vpic->forward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface; + vpic->backward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[1][0])->recon_surface; break; default: av_assert0(0 && "invalid picture type"); @@ -479,17 +481,18 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx, } static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx, - VAAPIEncodePicture *pic, - VAAPIEncodeSlice *slice) + VAAPIEncodePicture *pic, + VAAPIEncodeSlice *slice) { - VAAPIEncodeMPEG2Context *priv = avctx->priv_data; - VAEncSliceParameterBufferMPEG2 *vslice = slice->codec_slice_params; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeMPEG2Context *priv = avctx->priv_data; + VAEncSliceParameterBufferMPEG2 *vslice = slice->codec_slice_params; int qp; vslice->macroblock_address = slice->block_start; vslice->num_macroblocks = slice->block_size; - switch (pic->type) { + switch (base_pic->type) { case PICTURE_TYPE_IDR: case PICTURE_TYPE_I: qp = priv->quant_i; @@ -505,8 +508,8 @@ static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx, } vslice->quantiser_scale_code = qp; - vslice->is_intra_slice = (pic->type == PICTURE_TYPE_IDR || - pic->type == PICTURE_TYPE_I); + vslice->is_intra_slice = (base_pic->type == PICTURE_TYPE_IDR || + base_pic->type == PICTURE_TYPE_I); return 0; } @@ -695,7 +698,7 @@ const FFCodec ff_mpeg2_vaapi_encoder = { .p.id = AV_CODEC_ID_MPEG2VIDEO, .priv_data_size = sizeof(VAAPIEncodeMPEG2Context), .init = &vaapi_encode_mpeg2_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &vaapi_encode_mpeg2_close, .p.priv_class = &vaapi_encode_mpeg2_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c index 4e284f86e2..b7844c0f45 100644 --- a/libavcodec/vaapi_encode_vp8.c +++ b/libavcodec/vaapi_encode_vp8.c @@ -52,6 +52,7 @@ typedef struct VAAPIEncodeVP8Context { static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAEncSequenceParameterBufferVP8 *vseq = 
ctx->codec_sequence_params; @@ -66,7 +67,7 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx) if (!(ctx->va_rc_mode & VA_RC_CQP)) { vseq->bits_per_second = ctx->va_bit_rate; - vseq->intra_period = ctx->gop_size; + vseq->intra_period = base_ctx->gop_size; } return 0; @@ -75,6 +76,7 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx) static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *pic) { + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; VAAPIEncodeVP8Context *priv = avctx->priv_data; VAEncPictureParameterBufferVP8 *vpic = pic->codec_picture_params; int i; @@ -83,10 +85,10 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, vpic->coded_buf = pic->output_buffer; - switch (pic->type) { + switch (base_pic->type) { case PICTURE_TYPE_IDR: case PICTURE_TYPE_I: - av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0); + av_assert0(base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0); vpic->ref_flags.bits.force_kf = 1; vpic->ref_last_frame = vpic->ref_gf_frame = @@ -94,20 +96,20 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx, VA_INVALID_SURFACE; break; case PICTURE_TYPE_P: - av_assert0(!pic->nb_refs[1]); + av_assert0(!base_pic->nb_refs[1]); vpic->ref_flags.bits.no_ref_last = 0; vpic->ref_flags.bits.no_ref_gf = 1; vpic->ref_flags.bits.no_ref_arf = 1; vpic->ref_last_frame = vpic->ref_gf_frame = vpic->ref_arf_frame = - pic->refs[0][0]->recon_surface; + ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface; break; default: av_assert0(0 && "invalid picture type"); } - vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR); + vpic->pic_flags.bits.frame_type = (base_pic->type != PICTURE_TYPE_IDR); vpic->pic_flags.bits.show_frame = 1; vpic->pic_flags.bits.refresh_last = 1; @@ -145,7 +147,7 @@ static int vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx, memset(&quant, 0, sizeof(quant)); - if (pic->type == PICTURE_TYPE_P) + if (pic->base.type == PICTURE_TYPE_P) q = priv->q_index_p; else q = priv->q_index_i; @@ -250,7 +252,7 @@ const FFCodec ff_vp8_vaapi_encoder = { .p.id = AV_CODEC_ID_VP8, .priv_data_size = sizeof(VAAPIEncodeVP8Context), .init = &vaapi_encode_vp8_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &ff_vaapi_encode_close, .p.priv_class = &vaapi_encode_vp8_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE | diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c index 88f951652c..93245425e7 100644 --- a/libavcodec/vaapi_encode_vp9.c +++ b/libavcodec/vaapi_encode_vp9.c @@ -53,6 +53,7 @@ typedef struct VAAPIEncodeVP9Context { static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAEncSequenceParameterBufferVP9 *vseq = ctx->codec_sequence_params; VAEncPictureParameterBufferVP9 *vpic = ctx->codec_picture_params; @@ -64,7 +65,7 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx) if (!(ctx->va_rc_mode & VA_RC_CQP)) { vseq->bits_per_second = ctx->va_bit_rate; - vseq->intra_period = ctx->gop_size; + vseq->intra_period = base_ctx->gop_size; } vpic->frame_width_src = avctx->width; @@ -78,9 +79,10 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx) static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *pic) { 
- VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeVP9Context *priv = avctx->priv_data; - VAAPIEncodeVP9Picture *hpic = pic->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + VAAPIEncodeVP9Picture *hpic = base_pic->priv_data; VAEncPictureParameterBufferVP9 *vpic = pic->codec_picture_params; int i; int num_tile_columns; @@ -94,20 +96,20 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, num_tile_columns = (vpic->frame_width_src + VP9_MAX_TILE_WIDTH - 1) / VP9_MAX_TILE_WIDTH; vpic->log2_tile_columns = num_tile_columns == 1 ? 0 : av_log2(num_tile_columns - 1) + 1; - switch (pic->type) { + switch (base_pic->type) { case PICTURE_TYPE_IDR: - av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0); + av_assert0(base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0); vpic->ref_flags.bits.force_kf = 1; vpic->refresh_frame_flags = 0xff; hpic->slot = 0; break; case PICTURE_TYPE_P: - av_assert0(!pic->nb_refs[1]); + av_assert0(!base_pic->nb_refs[1]); { - VAAPIEncodeVP9Picture *href = pic->refs[0][0]->priv_data; + VAAPIEncodeVP9Picture *href = base_pic->refs[0][0]->priv_data; av_assert0(href->slot == 0 || href->slot == 1); - if (ctx->max_b_depth > 0) { + if (base_ctx->max_b_depth > 0) { hpic->slot = !href->slot; vpic->refresh_frame_flags = 1 << hpic->slot | 0xfc; } else { @@ -120,20 +122,20 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, } break; case PICTURE_TYPE_B: - av_assert0(pic->nb_refs[0] && pic->nb_refs[1]); + av_assert0(base_pic->nb_refs[0] && base_pic->nb_refs[1]); { - VAAPIEncodeVP9Picture *href0 = pic->refs[0][0]->priv_data, - *href1 = pic->refs[1][0]->priv_data; - av_assert0(href0->slot < pic->b_depth + 1 && - href1->slot < pic->b_depth + 1); + VAAPIEncodeVP9Picture *href0 = base_pic->refs[0][0]->priv_data, + *href1 = base_pic->refs[1][0]->priv_data; + av_assert0(href0->slot < base_pic->b_depth + 1 && + href1->slot < base_pic->b_depth + 1); - if (pic->b_depth == ctx->max_b_depth) { + if (base_pic->b_depth == base_ctx->max_b_depth) { // Unreferenced frame. 
vpic->refresh_frame_flags = 0x00; hpic->slot = 8; } else { - vpic->refresh_frame_flags = 0xfe << pic->b_depth & 0xff; - hpic->slot = 1 + pic->b_depth; + vpic->refresh_frame_flags = 0xfe << base_pic->b_depth & 0xff; + hpic->slot = 1 + base_pic->b_depth; } vpic->ref_flags.bits.ref_frame_ctrl_l0 = 1; vpic->ref_flags.bits.ref_frame_ctrl_l1 = 2; @@ -148,31 +150,31 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, } if (vpic->refresh_frame_flags == 0x00) { av_log(avctx, AV_LOG_DEBUG, "Pic %"PRId64" not stored.\n", - pic->display_order); + base_pic->display_order); } else { av_log(avctx, AV_LOG_DEBUG, "Pic %"PRId64" stored in slot %d.\n", - pic->display_order, hpic->slot); + base_pic->display_order, hpic->slot); } for (i = 0; i < FF_ARRAY_ELEMS(vpic->reference_frames); i++) vpic->reference_frames[i] = VA_INVALID_SURFACE; for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) { - for (int j = 0; j < pic->nb_refs[i]; j++) { - VAAPIEncodePicture *ref_pic = pic->refs[i][j]; + for (int j = 0; j < base_pic->nb_refs[i]; j++) { + HWBaseEncodePicture *ref_pic = base_pic->refs[i][j]; int slot; slot = ((VAAPIEncodeVP9Picture*)ref_pic->priv_data)->slot; av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE); - vpic->reference_frames[slot] = ref_pic->recon_surface; + vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface; } } - vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR); - vpic->pic_flags.bits.show_frame = pic->display_order <= pic->encode_order; + vpic->pic_flags.bits.frame_type = (base_pic->type != PICTURE_TYPE_IDR); + vpic->pic_flags.bits.show_frame = base_pic->display_order <= base_pic->encode_order; - if (pic->type == PICTURE_TYPE_IDR) + if (base_pic->type == PICTURE_TYPE_IDR) vpic->luma_ac_qindex = priv->q_idx_idr; - else if (pic->type == PICTURE_TYPE_P) + else if (base_pic->type == PICTURE_TYPE_P) vpic->luma_ac_qindex = priv->q_idx_p; else vpic->luma_ac_qindex = priv->q_idx_b; @@ -307,7 +309,7 @@ const FFCodec ff_vp9_vaapi_encoder = { .p.id = AV_CODEC_ID_VP9, .priv_data_size = sizeof(VAAPIEncodeVP9Context), .init = &vaapi_encode_vp9_init, - FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet), + FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet), .close = &ff_vaapi_encode_close, .p.priv_class = &vaapi_encode_vp9_class, .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
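The mechanical change running through all of the hunks above is the same: picture-level state is read through HWBaseEncodePicture, and the casts (HWBaseEncodePicture *)pic and ((VAAPIEncodePicture *)ref)->recon_surface move between the two layers. That only works if the base picture is embedded as the first member of the VAAPI picture; the following is a minimal sketch of the assumed layout (field lists abridged, exact types and array bounds illustrative, not the literal structs from the patch):

typedef struct HWBaseEncodePicture {
    int64_t      display_order;
    int64_t      encode_order;
    int          type;          // PICTURE_TYPE_IDR / I / P / B
    int          b_depth;
    int          is_reference;
    AVFrame     *input_image;
    void        *priv_data;     // per-codec picture state, e.g. VAAPIEncodeH264Picture
    struct HWBaseEncodePicture *prev;
    struct HWBaseEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_DPB_SIZE];
    int          nb_refs[MAX_REFERENCE_LIST_NUM];
    struct HWBaseEncodePicture *dpb[MAX_DPB_SIZE];
    int          nb_dpb_pics;
} HWBaseEncodePicture;

typedef struct VAAPIEncodePicture {
    HWBaseEncodePicture base;   // must remain the first member for the casts above to be valid
    VASurfaceID  recon_surface;
    VABufferID   output_buffer;
    void        *codec_picture_params;
} VAAPIEncodePicture;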
From patchwork Thu Apr 18 08:59:00 2024 X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 48141
From: tong1.wu-at-intel.com@ffmpeg.org To: ffmpeg-devel@ffmpeg.org Date: Thu, 18 Apr 2024 16:59:00 +0800 Message-ID: <20240418085910.547-6-tong1.wu@intel.com> In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com> References: <20240418085910.547-1-tong1.wu@intel.com> Subject: [FFmpeg-devel] [PATCH v8 06/15] avcodec/vaapi_encode: extract the init function to base layer From: Tong Wu Related parameters are also moved to base layer. Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 33 ++++++++++++++++ libavcodec/hw_base_encode.h | 11 ++++++ libavcodec/vaapi_encode.c | 68 ++++++++++----------------------- libavcodec/vaapi_encode.h | 6 --- libavcodec/vaapi_encode_av1.c | 2 +- libavcodec/vaapi_encode_h264.c | 2 +- libavcodec/vaapi_encode_h265.c | 2 +- libavcodec/vaapi_encode_mjpeg.c | 6 ++- 8 files changed, 72 insertions(+), 58 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 1d9a255f69..14f3ecfc94 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -598,3 +598,36 @@ end: return 0; } + +int ff_hw_base_encode_init(AVCodecContext *avctx) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + + ctx->frame = av_frame_alloc(); + if (!ctx->frame) + return AVERROR(ENOMEM); + + if (!avctx->hw_frames_ctx) { + av_log(avctx, AV_LOG_ERROR, "A hardware frames reference is " + "required to associate the encoding device.\n"); + return AVERROR(EINVAL); + } + + ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx); + if (!ctx->input_frames_ref) + return AVERROR(ENOMEM); + + ctx->input_frames = (AVHWFramesContext *)ctx->input_frames_ref->data; + + ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref); + if (!ctx->device_ref) + return AVERROR(ENOMEM); + + ctx->device = (AVHWDeviceContext *)ctx->device_ref->data; + + ctx->tail_pkt = av_packet_alloc(); + if (!ctx->tail_pkt) + return AVERROR(ENOMEM); + + return 0; +} diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index b5b676b9a8..f7e385e840 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -19,6 +19,7 @@ #ifndef AVCODEC_HW_BASE_ENCODE_H #define AVCODEC_HW_BASE_ENCODE_H +#include "libavutil/hwcontext.h" #include "libavutil/fifo.h" #define MAX_DPB_SIZE 16 @@ -117,6 +118,14 @@ typedef struct HWBaseEncodeContext { // Hardware-specific hooks. const struct HWEncodePictureOperation *op; + // The hardware device context. + AVBufferRef *device_ref; + AVHWDeviceContext *device; + + // The hardware frame context containing the input frames. + AVBufferRef *input_frames_ref; + AVHWFramesContext *input_frames; + // Current encoding window, in display (input) order. HWBaseEncodePicture *pic_start, *pic_end; // The next picture to use as the previous reference picture in @@ -183,6 +192,8 @@ typedef struct HWBaseEncodeContext { int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); +int ff_hw_base_encode_init(AVCodecContext *avctx); + #define HW_BASE_ENCODE_COMMON_OPTIONS \ { "async_depth", "Maximum processing parallelism. 
" \ "Increase this to improve single channel performance.", \ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 18966596e1..c7488ad150 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -996,9 +996,10 @@ static const VAEntrypoint vaapi_encode_entrypoints_low_power[] = { static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAProfile *va_profiles = NULL; - VAEntrypoint *va_entrypoints = NULL; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAProfile *va_profiles = NULL; + VAEntrypoint *va_entrypoints = NULL; VAStatus vas; const VAEntrypoint *usable_entrypoints; const VAAPIEncodeProfile *profile; @@ -1021,10 +1022,10 @@ static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx) usable_entrypoints = vaapi_encode_entrypoints_normal; } - desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); if (!desc) { av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n", - ctx->input_frames->sw_format); + base_ctx->input_frames->sw_format); return AVERROR(EINVAL); } depth = desc->comp[0].depth; @@ -2131,20 +2132,21 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj) static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; AVVAAPIHWConfig *hwconfig = NULL; AVHWFramesConstraints *constraints = NULL; enum AVPixelFormat recon_format; int err, i; - hwconfig = av_hwdevice_hwconfig_alloc(ctx->device_ref); + hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref); if (!hwconfig) { err = AVERROR(ENOMEM); goto fail; } hwconfig->config_id = ctx->va_config; - constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref, + constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref, hwconfig); if (!constraints) { err = AVERROR(ENOMEM); @@ -2157,9 +2159,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) recon_format = AV_PIX_FMT_NONE; if (constraints->valid_sw_formats) { for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) { - if (ctx->input_frames->sw_format == + if (base_ctx->input_frames->sw_format == constraints->valid_sw_formats[i]) { - recon_format = ctx->input_frames->sw_format; + recon_format = base_ctx->input_frames->sw_format; break; } } @@ -2170,7 +2172,7 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) } } else { // No idea what to use; copy input format. 
- recon_format = ctx->input_frames->sw_format; + recon_format = base_ctx->input_frames->sw_format; } av_log(avctx, AV_LOG_DEBUG, "Using %s as format of " "reconstructed frames.\n", av_get_pix_fmt_name(recon_format)); @@ -2191,7 +2193,7 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) av_freep(&hwconfig); av_hwframe_constraints_free(&constraints); - ctx->recon_frames_ref = av_hwframe_ctx_alloc(ctx->device_ref); + ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); if (!ctx->recon_frames_ref) { err = AVERROR(ENOMEM); goto fail; @@ -2235,44 +2237,16 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) VAStatus vas; int err; + err = ff_hw_base_encode_init(avctx); + if (err < 0) + goto fail; + ctx->va_config = VA_INVALID_ID; ctx->va_context = VA_INVALID_ID; base_ctx->op = &vaapi_op; - /* If you add something that can fail above this av_frame_alloc(), - * modify ff_vaapi_encode_close() accordingly. */ - base_ctx->frame = av_frame_alloc(); - if (!base_ctx->frame) { - return AVERROR(ENOMEM); - } - - if (!avctx->hw_frames_ctx) { - av_log(avctx, AV_LOG_ERROR, "A hardware frames reference is " - "required to associate the encoding device.\n"); - return AVERROR(EINVAL); - } - - ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx); - if (!ctx->input_frames_ref) { - err = AVERROR(ENOMEM); - goto fail; - } - ctx->input_frames = (AVHWFramesContext*)ctx->input_frames_ref->data; - - ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref); - if (!ctx->device_ref) { - err = AVERROR(ENOMEM); - goto fail; - } - ctx->device = (AVHWDeviceContext*)ctx->device_ref->data; - ctx->hwctx = ctx->device->hwctx; - - base_ctx->tail_pkt = av_packet_alloc(); - if (!base_ctx->tail_pkt) { - err = AVERROR(ENOMEM); - goto fail; - } + ctx->hwctx = base_ctx->device->hwctx; err = vaapi_encode_profile_entrypoint(avctx); if (err < 0) @@ -2475,8 +2449,8 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) av_fifo_freep2(&base_ctx->encode_fifo); av_buffer_unref(&ctx->recon_frames_ref); - av_buffer_unref(&ctx->input_frames_ref); - av_buffer_unref(&ctx->device_ref); + av_buffer_unref(&base_ctx->input_frames_ref); + av_buffer_unref(&base_ctx->device_ref); return 0; } diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 13ccad8e47..8e466e2074 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -219,14 +219,8 @@ typedef struct VAAPIEncodeContext { VAConfigID va_config; VAContextID va_context; - AVBufferRef *device_ref; - AVHWDeviceContext *device; AVVAAPIDeviceContext *hwctx; - // The hardware frame context containing the input frames. - AVBufferRef *input_frames_ref; - AVHWFramesContext *input_frames; - // The hardware frame context containing the reconstructed frames. 
AVBufferRef *recon_frames_ref; AVHWFramesContext *recon_frames; diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index 393b479f99..c37ff7591a 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -370,7 +370,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) memset(sh_obu, 0, sizeof(*sh_obu)); sh_obu->header.obu_type = AV1_OBU_SEQUENCE_HEADER; - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); sh->seq_profile = avctx->profile; diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index 412e392ac4..106df30563 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -308,7 +308,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx) memset(sps, 0, sizeof(*sps)); memset(pps, 0, sizeof(*pps)); - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); if (desc->nb_components == 1 || desc->log2_chroma_w != 1 || desc->log2_chroma_h != 1) { av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format " diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index bea35d7ca8..199d61c3a2 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -278,7 +278,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) memset(pps, 0, sizeof(*pps)); - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); if (desc->nb_components == 1) { chroma_format = 0; diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index 8ca1be192a..17fd8ba8e9 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -222,6 +222,7 @@ static int vaapi_encode_mjpeg_write_extra_buffer(AVCodecContext *avctx, static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, VAAPIEncodePicture *pic) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeMJPEGContext *priv = avctx->priv_data; HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; JPEGRawFrameHeader *fh = &priv->frame_header; @@ -235,7 +236,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx, av_assert0(base_pic->type == PICTURE_TYPE_IDR); - desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); if (desc->flags & AV_PIX_FMT_FLAG_RGB) components = components_rgb; @@ -437,10 +438,11 @@ static int vaapi_encode_mjpeg_init_slice_params(AVCodecContext *avctx, static av_cold int vaapi_encode_mjpeg_get_encoder_caps(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; const AVPixFmtDescriptor *desc; - desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format); + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); ctx->surface_width = FFALIGN(avctx->width, 8 << desc->log2_chroma_w);
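Taken together, the hunks in this patch establish the usage pattern that every backend first runs the shared helper and only then touches its own state. A rough sketch of how a second backend's init could sit on top of the new helper (the function name and body here are hypothetical; only ff_hw_base_encode_init() and the base-context fields come from the patch):

static av_cold int hypothetical_hw_encode_init(AVCodecContext *avctx)
{
    HWBaseEncodeContext *base_ctx = avctx->priv_data;
    int err;

    // Shared setup: takes references on avctx->hw_frames_ctx and its device,
    // and allocates base_ctx->frame and base_ctx->tail_pkt.
    err = ff_hw_base_encode_init(avctx);
    if (err < 0)
        return err;

    // Backend-specific setup can now rely on the base context, for example
    // opening a native encoding session on base_ctx->device->hwctx and
    // validating base_ctx->input_frames->sw_format.

    return 0;
}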
From patchwork Thu Apr 18 08:59:01 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48143
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:59:01 +0800
Message-ID: <20240418085910.547-7-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 07/15] avcodec/vaapi_encode: extract gop configuration and two options to base layer

From: Tong Wu

idr_interval and desired_b_depth are moved to HW_BASE_ENCODE_COMMON_OPTIONS.
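As a rough sketch of what a backend sees after this change (not part of the patch; the SketchEncodeContext type and the OFFSET/FLAGS helpers are only assumed here as the usual per-encoder option-table boilerplate), the shared options and the new GOP helper are consumed like this:

#include <stddef.h>
#include "libavutil/opt.h"
#include "hw_base_encode.h"
#include "vaapi_encode.h"

typedef struct SketchEncodeContext {
    VAAPIEncodeContext common;   /* common.base is the HWBaseEncodeContext */
    int qp;                      /* codec-specific options follow */
} SketchEncodeContext;

#define OFFSET(x) offsetof(SketchEncodeContext, x)
#define FLAGS     (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)

static const AVOption sketch_options[] = {
    HW_BASE_ENCODE_COMMON_OPTIONS,   /* idr_interval and b_depth now come from here */
    /* VAAPI- and codec-specific options follow as before */
    { NULL },
};

static int sketch_init_gop_structure(AVCodecContext *avctx)
{
    VAAPIEncodeContext *ctx = avctx->priv_data;
    uint32_t ref_l0 = 2, ref_l1 = 1;  /* in reality queried from the driver */
    int prediction_pre_only = 0;

    /* gop_size, b_per_p, max_b_depth, closed_gop and gop_per_idr are all
     * derived inside the base layer from these capability inputs. */
    return ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1,
                                         ctx->codec->flags, prediction_pre_only);
}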
Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 54 +++++++++++++++++++++++++++++++++++++ libavcodec/hw_base_encode.h | 19 +++++++++++++ libavcodec/vaapi_encode.c | 52 +++-------------------------------- libavcodec/vaapi_encode.h | 16 ----------- 4 files changed, 77 insertions(+), 64 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 14f3ecfc94..6e450f9dca 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -599,6 +599,60 @@ end: return 0; } +int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, + int flags, int prediction_pre_only) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + + if (flags & FLAG_INTRA_ONLY || avctx->gop_size <= 1) { + av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); + ctx->gop_size = 1; + } else if (ref_l0 < 1) { + av_log(avctx, AV_LOG_ERROR, "Driver does not support any " + "reference frames.\n"); + return AVERROR(EINVAL); + } else if (!(flags & FLAG_B_PICTURES) || ref_l1 < 1 || + avctx->max_b_frames < 1 || prediction_pre_only) { + if (ctx->p_to_gpb) + av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " + "(supported references: %d / %d).\n", + ref_l0, ref_l1); + else + av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames " + "(supported references: %d / %d).\n", ref_l0, ref_l1); + ctx->gop_size = avctx->gop_size; + ctx->p_per_i = INT_MAX; + ctx->b_per_p = 0; + } else { + if (ctx->p_to_gpb) + av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " + "(supported references: %d / %d).\n", + ref_l0, ref_l1); + else + av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames " + "(supported references: %d / %d).\n", ref_l0, ref_l1); + ctx->gop_size = avctx->gop_size; + ctx->p_per_i = INT_MAX; + ctx->b_per_p = avctx->max_b_frames; + if (flags & FLAG_B_PICTURE_REFERENCES) { + ctx->max_b_depth = FFMIN(ctx->desired_b_depth, + av_log2(ctx->b_per_p) + 1); + } else { + ctx->max_b_depth = 1; + } + } + + if (flags & FLAG_NON_IDR_KEY_PICTURES) { + ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); + ctx->gop_per_idr = ctx->idr_interval + 1; + } else { + ctx->closed_gop = 1; + ctx->gop_per_idr = 1; + } + + return 0; +} + int ff_hw_base_encode_init(AVCodecContext *avctx) { HWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index f7e385e840..1f6d7882bc 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -118,6 +118,14 @@ typedef struct HWBaseEncodeContext { // Hardware-specific hooks. const struct HWEncodePictureOperation *op; + // Global options. + + // Number of I frames between IDR frames. + int idr_interval; + + // Desired B frame reference depth. + int desired_b_depth; + // The hardware device context. 
AVBufferRef *device_ref; AVHWDeviceContext *device; @@ -192,9 +200,20 @@ typedef struct HWBaseEncodeContext { int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); +int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, + int flags, int prediction_pre_only); + int ff_hw_base_encode_init(AVCodecContext *avctx); #define HW_BASE_ENCODE_COMMON_OPTIONS \ + { "idr_interval", \ + "Distance (in I-frames) between key frames", \ + OFFSET(common.base.idr_interval), AV_OPT_TYPE_INT, \ + { .i64 = 0 }, 0, INT_MAX, FLAGS }, \ + { "b_depth", \ + "Maximum B-frame reference depth", \ + OFFSET(common.base.desired_b_depth), AV_OPT_TYPE_INT, \ + { .i64 = 1 }, 1, INT_MAX, FLAGS }, \ { "async_depth", "Maximum processing parallelism. " \ "Increase this to improve single channel performance.", \ OFFSET(common.base.async_depth), AV_OPT_TYPE_INT, \ diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index c7488ad150..993b2640a9 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -1638,7 +1638,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) VAStatus vas; VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames }; uint32_t ref_l0, ref_l1; - int prediction_pre_only; + int prediction_pre_only, err; vas = vaGetConfigAttributes(ctx->hwctx->display, ctx->va_profile, @@ -1702,53 +1702,9 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx) } #endif - if (ctx->codec->flags & FLAG_INTRA_ONLY || - avctx->gop_size <= 1) { - av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n"); - base_ctx->gop_size = 1; - } else if (ref_l0 < 1) { - av_log(avctx, AV_LOG_ERROR, "Driver does not support any " - "reference frames.\n"); - return AVERROR(EINVAL); - } else if (!(ctx->codec->flags & FLAG_B_PICTURES) || - ref_l1 < 1 || avctx->max_b_frames < 1 || - prediction_pre_only) { - if (base_ctx->p_to_gpb) - av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " - "(supported references: %d / %d).\n", - ref_l0, ref_l1); - else - av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames " - "(supported references: %d / %d).\n", ref_l0, ref_l1); - base_ctx->gop_size = avctx->gop_size; - base_ctx->p_per_i = INT_MAX; - base_ctx->b_per_p = 0; - } else { - if (base_ctx->p_to_gpb) - av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames " - "(supported references: %d / %d).\n", - ref_l0, ref_l1); - else - av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames " - "(supported references: %d / %d).\n", ref_l0, ref_l1); - base_ctx->gop_size = avctx->gop_size; - base_ctx->p_per_i = INT_MAX; - base_ctx->b_per_p = avctx->max_b_frames; - if (ctx->codec->flags & FLAG_B_PICTURE_REFERENCES) { - base_ctx->max_b_depth = FFMIN(ctx->desired_b_depth, - av_log2(base_ctx->b_per_p) + 1); - } else { - base_ctx->max_b_depth = 1; - } - } - - if (ctx->codec->flags & FLAG_NON_IDR_KEY_PICTURES) { - base_ctx->closed_gop = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP); - base_ctx->gop_per_idr = ctx->idr_interval + 1; - } else { - base_ctx->closed_gop = 1; - base_ctx->gop_per_idr = 1; - } + err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, prediction_pre_only); + if (err < 0) + return err; return 0; } diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 8e466e2074..0d3d8e2ed0 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -151,17 +151,9 @@ typedef struct VAAPIEncodeContext { // Codec-specific hooks. 
const struct VAAPIEncodeType *codec; - // Global options. - // Use low power encoding mode. int low_power; - // Number of I frames between IDR frames. - int idr_interval; - - // Desired B frame reference depth. - int desired_b_depth; - // Max Frame Size int max_frame_size; @@ -375,14 +367,6 @@ int ff_vaapi_encode_close(AVCodecContext *avctx); "may not support all encoding features)", \ OFFSET(common.low_power), AV_OPT_TYPE_BOOL, \ { .i64 = 0 }, 0, 1, FLAGS }, \ - { "idr_interval", \ - "Distance (in I-frames) between IDR frames", \ - OFFSET(common.idr_interval), AV_OPT_TYPE_INT, \ - { .i64 = 0 }, 0, INT_MAX, FLAGS }, \ - { "b_depth", \ - "Maximum B-frame reference depth", \ - OFFSET(common.desired_b_depth), AV_OPT_TYPE_INT, \ - { .i64 = 1 }, 1, INT_MAX, FLAGS }, \ { "max_frame_size", \ "Maximum frame size (in bytes)",\ OFFSET(common.max_frame_size), AV_OPT_TYPE_INT, \
From patchwork Thu Apr 18 08:59:02 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48145
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:59:02 +0800
Message-ID: <20240418085910.547-8-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 08/15] avcodec/vaapi_encode: move two reconstructed frames parameters to base layer

From: Tong Wu

Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.h | 4 ++++ libavcodec/vaapi_encode.c | 22 +++++++++++----------- libavcodec/vaapi_encode.h | 4 ---- 3 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index 1f6d7882bc..3eea3ef998 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -134,6 +134,10 @@ typedef struct
HWBaseEncodeContext { AVBufferRef *input_frames_ref; AVHWFramesContext *input_frames; + // The hardware frame context containing the reconstructed frames. + AVBufferRef *recon_frames_ref; + AVHWFramesContext *recon_frames; + // Current encoding window, in display (input) order. HWBaseEncodePicture *pic_start, *pic_end; // The next picture to use as the previous reference picture in diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 993b2640a9..8640a888b7 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -314,7 +314,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic->input_surface); - err = av_hwframe_get_buffer(ctx->recon_frames_ref, base_pic->recon_image, 0); + err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0); if (err < 0) { err = AVERROR(ENOMEM); goto fail; @@ -2149,19 +2149,19 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) av_freep(&hwconfig); av_hwframe_constraints_free(&constraints); - ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); - if (!ctx->recon_frames_ref) { + base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); + if (!base_ctx->recon_frames_ref) { err = AVERROR(ENOMEM); goto fail; } - ctx->recon_frames = (AVHWFramesContext*)ctx->recon_frames_ref->data; + base_ctx->recon_frames = (AVHWFramesContext*)base_ctx->recon_frames_ref->data; - ctx->recon_frames->format = AV_PIX_FMT_VAAPI; - ctx->recon_frames->sw_format = recon_format; - ctx->recon_frames->width = ctx->surface_width; - ctx->recon_frames->height = ctx->surface_height; + base_ctx->recon_frames->format = AV_PIX_FMT_VAAPI; + base_ctx->recon_frames->sw_format = recon_format; + base_ctx->recon_frames->width = ctx->surface_width; + base_ctx->recon_frames->height = ctx->surface_height; - err = av_hwframe_ctx_init(ctx->recon_frames_ref); + err = av_hwframe_ctx_init(base_ctx->recon_frames_ref); if (err < 0) { av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed " "frame context: %d.\n", err); @@ -2269,7 +2269,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) if (err < 0) goto fail; - recon_hwctx = ctx->recon_frames->hwctx; + recon_hwctx = base_ctx->recon_frames->hwctx; vas = vaCreateContext(ctx->hwctx->display, ctx->va_config, ctx->surface_width, ctx->surface_height, VA_PROGRESSIVE, @@ -2404,7 +2404,7 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) av_freep(&ctx->codec_picture_params); av_fifo_freep2(&base_ctx->encode_fifo); - av_buffer_unref(&ctx->recon_frames_ref); + av_buffer_unref(&base_ctx->recon_frames_ref); av_buffer_unref(&base_ctx->input_frames_ref); av_buffer_unref(&base_ctx->device_ref); diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 0d3d8e2ed0..76fb645d71 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -213,10 +213,6 @@ typedef struct VAAPIEncodeContext { AVVAAPIDeviceContext *hwctx; - // The hardware frame context containing the reconstructed frames. - AVBufferRef *recon_frames_ref; - AVHWFramesContext *recon_frames; - // Pool of (reusable) bitstream output buffers. 
struct FFRefStructPool *output_buffer_pool;
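With both hardware frames contexts now held in the base structure, the per-picture allocation path in the vaapi_encode_issue() hunk above reduces to something like the following sketch (error handling trimmed; recon_image is the picture field used by that hunk):

#include "avcodec.h"
#include "libavutil/hwcontext.h"
#include "hw_base_encode.h"

/* Minimal sketch: pull a reconstructed surface for one picture out of the
 * base-layer frames context that the backend initialised earlier. */
static int sketch_alloc_recon_surface(AVCodecContext *avctx,
                                      HWBaseEncodePicture *base_pic)
{
    HWBaseEncodeContext *base_ctx = avctx->priv_data;

    return av_hwframe_get_buffer(base_ctx->recon_frames_ref,
                                 base_pic->recon_image, 0);
}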
From patchwork Thu Apr 18 08:59:03 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48142
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:59:03 +0800
Message-ID: <20240418085910.547-9-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 09/15] avcodec/vaapi_encode: extract a close function for base layer

From: Tong Wu

Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 16 ++++++++++++++++ libavcodec/hw_base_encode.h | 2 ++ libavcodec/vaapi_encode.c | 8 +------- 3 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index 6e450f9dca..fa954d984d 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -685,3 +685,19 @@ int
ff_hw_base_encode_init(AVCodecContext *avctx) return 0; } + +int ff_hw_base_encode_close(AVCodecContext *avctx) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + + av_fifo_freep2(&ctx->encode_fifo); + + av_frame_free(&ctx->frame); + av_packet_free(&ctx->tail_pkt); + + av_buffer_unref(&ctx->device_ref); + av_buffer_unref(&ctx->input_frames_ref); + av_buffer_unref(&ctx->recon_frames_ref); + + return 0; +}
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index 3eea3ef998..d822175fcf 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -209,6 +209,8 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32 int ff_hw_base_encode_init(AVCodecContext *avctx); +int ff_hw_base_encode_close(AVCodecContext *avctx); + #define HW_BASE_ENCODE_COMMON_OPTIONS \ { "idr_interval", \ "Distance (in I-frames) between key frames", \
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 8640a888b7..41d0d0f2bc 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -2397,16 +2397,10 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx) ctx->va_config = VA_INVALID_ID; } - av_frame_free(&base_ctx->frame); - av_packet_free(&base_ctx->tail_pkt); - av_freep(&ctx->codec_sequence_params); av_freep(&ctx->codec_picture_params); - av_fifo_freep2(&base_ctx->encode_fifo); - av_buffer_unref(&base_ctx->recon_frames_ref); - av_buffer_unref(&base_ctx->input_frames_ref); - av_buffer_unref(&base_ctx->device_ref); + ff_hw_base_encode_close(avctx); return 0; }
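A sketch of the resulting init/close pairing on the backend side (shapes taken from the hunks above, not new API): the backend frees only what it owns, then delegates the shared state to the base layer.

#include "avcodec.h"
#include "hw_base_encode.h"
#include "vaapi_encode.h"

static int sketch_encode_close(AVCodecContext *avctx)
{
    VAAPIEncodeContext *ctx = avctx->priv_data;

    /* Backend-owned allocations are released first ... */
    av_freep(&ctx->codec_sequence_params);
    av_freep(&ctx->codec_picture_params);

    /* ... then the base layer drops frame, tail_pkt, encode_fifo and the
     * device/input/recon references it now owns. */
    return ff_hw_base_encode_close(avctx);
}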
From patchwork Thu Apr 18 08:59:04 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48144
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:59:04 +0800
Message-ID: <20240418085910.547-10-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 10/15] avcodec/vaapi_encode: extract set_output_property to base layer

From: Tong Wu

Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c |
40 +++++++++++++++++++++++++++++++++ libavcodec/hw_base_encode.h | 3 +++ libavcodec/vaapi_encode.c | 44 ++----------------------------------- 3 files changed, 45 insertions(+), 42 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index fa954d984d..a4223d90f0 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -491,6 +491,46 @@ fail: return err; } +int ff_hw_base_encode_set_output_property(AVCodecContext *avctx, + HWBaseEncodePicture *pic, + AVPacket *pkt, int flag_no_delay) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + + if (pic->type == PICTURE_TYPE_IDR) + pkt->flags |= AV_PKT_FLAG_KEY; + + pkt->pts = pic->pts; + pkt->duration = pic->duration; + + // for no-delay encoders this is handled in generic codec + if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY && + avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { + pkt->opaque = pic->opaque; + pkt->opaque_ref = pic->opaque_ref; + pic->opaque_ref = NULL; + } + + if (flag_no_delay) { + pkt->dts = pkt->pts; + return 0; + } + + if (ctx->output_delay == 0) { + pkt->dts = pkt->pts; + } else if (pic->encode_order < ctx->decode_delay) { + if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff) + pkt->dts = INT64_MIN; + else + pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff; + } else { + pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) % + (3 * ctx->output_delay + ctx->async_depth)]; + } + + return 0; +} + int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt) { HWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index d822175fcf..d717f955d8 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -202,6 +202,9 @@ typedef struct HWBaseEncodeContext { AVPacket *tail_pkt; } HWBaseEncodeContext; +int ff_hw_base_encode_set_output_property(AVCodecContext *avctx, HWBaseEncodePicture *pic, + AVPacket *pkt, int flag_no_delay); + int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index 41d0d0f2bc..ec4cde3c37 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -660,47 +660,6 @@ fail_at_end: return err; } -static int vaapi_encode_set_output_property(AVCodecContext *avctx, - HWBaseEncodePicture *pic, - AVPacket *pkt) -{ - HWBaseEncodeContext *base_ctx = avctx->priv_data; - VAAPIEncodeContext *ctx = avctx->priv_data; - - if (pic->type == PICTURE_TYPE_IDR) - pkt->flags |= AV_PKT_FLAG_KEY; - - pkt->pts = pic->pts; - pkt->duration = pic->duration; - - // for no-delay encoders this is handled in generic codec - if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY && - avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) { - pkt->opaque = pic->opaque; - pkt->opaque_ref = pic->opaque_ref; - pic->opaque_ref = NULL; - } - - if (ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY) { - pkt->dts = pkt->pts; - return 0; - } - - if (base_ctx->output_delay == 0) { - pkt->dts = pkt->pts; - } else if (pic->encode_order < base_ctx->decode_delay) { - if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff) - pkt->dts = INT64_MIN; - else - pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff; - } else { - pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) % - (3 * base_ctx->output_delay + base_ctx->async_depth)]; - } - - return 0; 
-} - static int vaapi_encode_get_coded_buffer_size(AVCodecContext *avctx, VABufferID buf_id) { VAAPIEncodeContext *ctx = avctx->priv_data; @@ -852,7 +811,8 @@ static int vaapi_encode_output(AVCodecContext *avctx, av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n", base_pic->display_order, base_pic->encode_order); - vaapi_encode_set_output_property(avctx, (HWBaseEncodePicture*)pic, pkt_ptr); + ff_hw_base_encode_set_output_property(avctx, (HWBaseEncodePicture*)base_pic, pkt_ptr, + ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY); end: ff_refstruct_unref(&pic->output_buffer_ref);
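For a backend the call now boils down to a one-liner; a condensed sketch of the shape used in the hunk above, where the last argument tells the base layer whether the codec emits timestamps without reordering delay (so dts can simply mirror pts instead of being read back from the ts_ring):

#include "avcodec.h"
#include "hw_base_encode.h"
#include "vaapi_encode.h"

static int sketch_fill_output_packet(AVCodecContext *avctx,
                                     VAAPIEncodePicture *pic, AVPacket *pkt)
{
    VAAPIEncodeContext *ctx = avctx->priv_data;

    /* Key-frame flag, pts/duration, opaque passthrough and the dts
     * bookkeeping are all handled inside the base layer now. */
    return ff_hw_base_encode_set_output_property(avctx,
                                                 (HWBaseEncodePicture *)pic, pkt,
                                                 ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY);
}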
From patchwork Thu Apr 18 08:59:05 2024
X-Patchwork-Submitter: "Wu, Tong1"
X-Patchwork-Id: 48133
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:59:05 +0800
Message-ID: <20240418085910.547-11-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 11/15] avcodec/vaapi_encode: extract a get_recon_format function to base layer

From: Tong Wu

Surface size and block size parameters are also moved to base layer.
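A sketch of how a backend is expected to use the new fields and helper after this patch: it fills in the base-layer surface geometry from its own alignment rules, then lets the base layer pick a usable sw_format for the reconstructed frames. The 64-pixel alignment below is the VP9 superblock rule from the hunks that follow, and hwconfig stands for whatever device-specific constraint descriptor the backend already builds (a VAAPI config here, something else for D3D12 later).

#include "avcodec.h"
#include "libavutil/common.h"
#include "hw_base_encode.h"

static int sketch_prepare_recon(AVCodecContext *avctx, const void *hwconfig)
{
    HWBaseEncodeContext *base_ctx = avctx->priv_data;
    enum AVPixelFormat recon_format;
    int err;

    /* Backend-specific alignment rule, e.g. VP9's 64x64 superblocks. */
    base_ctx->surface_width  = FFALIGN(avctx->width,  64);
    base_ctx->surface_height = FFALIGN(avctx->height, 64);

    /* The base layer checks the size against the device constraints and
     * picks a sw_format for the reconstructed frames. */
    err = ff_hw_base_get_recon_format(avctx, hwconfig, &recon_format);
    if (err < 0)
        return err;

    /* recon_format then seeds the reconstructed frames context. */
    return 0;
}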
Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 58 +++++++++++++++++++++++ libavcodec/hw_base_encode.h | 12 +++++ libavcodec/vaapi_encode.c | 81 ++++++++------------------------- libavcodec/vaapi_encode.h | 10 ---- libavcodec/vaapi_encode_av1.c | 10 ++-- libavcodec/vaapi_encode_h264.c | 11 +++-- libavcodec/vaapi_encode_h265.c | 25 +++++----- libavcodec/vaapi_encode_mjpeg.c | 5 +- libavcodec/vaapi_encode_vp9.c | 6 +-- 9 files changed, 118 insertions(+), 100 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index a4223d90f0..af85bb99aa 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -693,6 +693,64 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32 return 0; } +int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt) +{ + HWBaseEncodeContext *ctx = avctx->priv_data; + AVHWFramesConstraints *constraints = NULL; + enum AVPixelFormat recon_format; + int err, i; + + constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref, + hwconfig); + if (!constraints) { + err = AVERROR(ENOMEM); + goto fail; + } + + // Probably we can use the input surface format as the surface format + // of the reconstructed frames. If not, we just pick the first (only?) + // format in the valid list and hope that it all works. + recon_format = AV_PIX_FMT_NONE; + if (constraints->valid_sw_formats) { + for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) { + if (ctx->input_frames->sw_format == + constraints->valid_sw_formats[i]) { + recon_format = ctx->input_frames->sw_format; + break; + } + } + if (recon_format == AV_PIX_FMT_NONE) { + // No match. Just use the first in the supported list and + // hope for the best. + recon_format = constraints->valid_sw_formats[0]; + } + } else { + // No idea what to use; copy input format. + recon_format = ctx->input_frames->sw_format; + } + av_log(avctx, AV_LOG_DEBUG, "Using %s as format of " + "reconstructed frames.\n", av_get_pix_fmt_name(recon_format)); + + if (ctx->surface_width < constraints->min_width || + ctx->surface_height < constraints->min_height || + ctx->surface_width > constraints->max_width || + ctx->surface_height > constraints->max_height) { + av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at " + "size %dx%d (constraints: width %d-%d height %d-%d).\n", + ctx->surface_width, ctx->surface_height, + constraints->min_width, constraints->max_width, + constraints->min_height, constraints->max_height); + err = AVERROR(EINVAL); + goto fail; + } + + *fmt = recon_format; + err = 0; +fail: + av_hwframe_constraints_free(&constraints); + return err; +} + int ff_hw_base_encode_init(AVCodecContext *avctx) { HWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index d717f955d8..7686cf9501 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -126,6 +126,16 @@ typedef struct HWBaseEncodeContext { // Desired B frame reference depth. int desired_b_depth; + // The required size of surfaces. This is probably the input + // size (AVCodecContext.width|height) aligned up to whatever + // block size is required by the codec. + int surface_width; + int surface_height; + + // The block size for slice calculations. + int slice_block_width; + int slice_block_height; + // The hardware device context. 
AVBufferRef *device_ref; AVHWDeviceContext *device; @@ -210,6 +220,8 @@ int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt); int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1, int flags, int prediction_pre_only); +int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt); + int ff_hw_base_encode_init(AVCodecContext *avctx); int ff_hw_base_encode_close(AVCodecContext *avctx); diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index ec4cde3c37..ee4cf42baf 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -1777,6 +1777,7 @@ static av_cold int vaapi_encode_init_tile_slice_structure(AVCodecContext *avctx, static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx) { + HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; VAConfigAttrib attr[3] = { { VAConfigAttribEncMaxSlices }, { VAConfigAttribEncSliceStructure }, @@ -1796,12 +1797,12 @@ static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx) return 0; } - av_assert0(ctx->slice_block_height > 0 && ctx->slice_block_width > 0); + av_assert0(base_ctx->slice_block_height > 0 && base_ctx->slice_block_width > 0); - ctx->slice_block_rows = (avctx->height + ctx->slice_block_height - 1) / - ctx->slice_block_height; - ctx->slice_block_cols = (avctx->width + ctx->slice_block_width - 1) / - ctx->slice_block_width; + ctx->slice_block_rows = (avctx->height + base_ctx->slice_block_height - 1) / + base_ctx->slice_block_height; + ctx->slice_block_cols = (avctx->width + base_ctx->slice_block_width - 1) / + base_ctx->slice_block_width; if (avctx->slices <= 1 && !ctx->tile_rows && !ctx->tile_cols) { ctx->nb_slices = 1; @@ -2023,7 +2024,8 @@ static void vaapi_encode_free_output_buffer(FFRefStructOpaque opaque, static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj) { AVCodecContext *avctx = opaque.nc; - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; VABufferID *buffer_id = obj; VAStatus vas; @@ -2033,7 +2035,7 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj) // bound on that. vas = vaCreateBuffer(ctx->hwctx->display, ctx->va_context, VAEncCodedBufferType, - 3 * ctx->surface_width * ctx->surface_height + + 3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16), 1, 0, buffer_id); if (vas != VA_STATUS_SUCCESS) { av_log(avctx, AV_LOG_ERROR, "Failed to create bitstream " @@ -2051,9 +2053,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) HWBaseEncodeContext *base_ctx = avctx->priv_data; VAAPIEncodeContext *ctx = avctx->priv_data; AVVAAPIHWConfig *hwconfig = NULL; - AVHWFramesConstraints *constraints = NULL; enum AVPixelFormat recon_format; - int err, i; + int err; hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref); if (!hwconfig) { @@ -2062,52 +2063,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) } hwconfig->config_id = ctx->va_config; - constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref, - hwconfig); - if (!constraints) { - err = AVERROR(ENOMEM); - goto fail; - } - - // Probably we can use the input surface format as the surface format - // of the reconstructed frames. If not, we just pick the first (only?) - // format in the valid list and hope that it all works. 
- recon_format = AV_PIX_FMT_NONE; - if (constraints->valid_sw_formats) { - for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) { - if (base_ctx->input_frames->sw_format == - constraints->valid_sw_formats[i]) { - recon_format = base_ctx->input_frames->sw_format; - break; - } - } - if (recon_format == AV_PIX_FMT_NONE) { - // No match. Just use the first in the supported list and - // hope for the best. - recon_format = constraints->valid_sw_formats[0]; - } - } else { - // No idea what to use; copy input format. - recon_format = base_ctx->input_frames->sw_format; - } - av_log(avctx, AV_LOG_DEBUG, "Using %s as format of " - "reconstructed frames.\n", av_get_pix_fmt_name(recon_format)); - - if (ctx->surface_width < constraints->min_width || - ctx->surface_height < constraints->min_height || - ctx->surface_width > constraints->max_width || - ctx->surface_height > constraints->max_height) { - av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at " - "size %dx%d (constraints: width %d-%d height %d-%d).\n", - ctx->surface_width, ctx->surface_height, - constraints->min_width, constraints->max_width, - constraints->min_height, constraints->max_height); - err = AVERROR(EINVAL); + err = ff_hw_base_get_recon_format(avctx, (const void*)hwconfig, &recon_format); + if (err < 0) goto fail; - } - - av_freep(&hwconfig); - av_hwframe_constraints_free(&constraints); base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); if (!base_ctx->recon_frames_ref) { @@ -2118,8 +2076,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) base_ctx->recon_frames->format = AV_PIX_FMT_VAAPI; base_ctx->recon_frames->sw_format = recon_format; - base_ctx->recon_frames->width = ctx->surface_width; - base_ctx->recon_frames->height = ctx->surface_height; + base_ctx->recon_frames->width = base_ctx->surface_width; + base_ctx->recon_frames->height = base_ctx->surface_height; err = av_hwframe_ctx_init(base_ctx->recon_frames_ref); if (err < 0) { @@ -2131,7 +2089,6 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx) err = 0; fail: av_freep(&hwconfig); - av_hwframe_constraints_free(&constraints); return err; } @@ -2174,11 +2131,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) goto fail; } else { // Assume 16x16 blocks. - ctx->surface_width = FFALIGN(avctx->width, 16); - ctx->surface_height = FFALIGN(avctx->height, 16); + base_ctx->surface_width = FFALIGN(avctx->width, 16); + base_ctx->surface_height = FFALIGN(avctx->height, 16); if (ctx->codec->flags & FLAG_SLICE_CONTROL) { - ctx->slice_block_width = 16; - ctx->slice_block_height = 16; + base_ctx->slice_block_width = 16; + base_ctx->slice_block_height = 16; } } @@ -2231,7 +2188,7 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx) recon_hwctx = base_ctx->recon_frames->hwctx; vas = vaCreateContext(ctx->hwctx->display, ctx->va_config, - ctx->surface_width, ctx->surface_height, + base_ctx->surface_width, base_ctx->surface_height, VA_PROGRESSIVE, recon_hwctx->surface_ids, recon_hwctx->nb_surfaces, diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h index 76fb645d71..8c7568e9ae 100644 --- a/libavcodec/vaapi_encode.h +++ b/libavcodec/vaapi_encode.h @@ -171,16 +171,6 @@ typedef struct VAAPIEncodeContext { // Desired packed headers. unsigned int desired_packed_headers; - // The required size of surfaces. This is probably the input - // size (AVCodecContext.width|height) aligned up to whatever - // block size is required by the codec. 
- int surface_width; - int surface_height; - - // The block size for slice calculations. - int slice_block_width; - int slice_block_height; - // Everything above this point must be set before calling // ff_vaapi_encode_init(). diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c index c37ff7591a..1e48c8d4ad 100644 --- a/libavcodec/vaapi_encode_av1.c +++ b/libavcodec/vaapi_encode_av1.c @@ -109,12 +109,12 @@ static void vaapi_encode_av1_trace_write_log(void *ctx, static av_cold int vaapi_encode_av1_get_encoder_caps(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodeAV1Context *priv = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeAV1Context *priv = avctx->priv_data; // Surfaces must be aligned to superblock boundaries. - ctx->surface_width = FFALIGN(avctx->width, priv->use_128x128_superblock ? 128 : 64); - ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64); + base_ctx->surface_width = FFALIGN(avctx->width, priv->use_128x128_superblock ? 128 : 64); + base_ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64); return 0; } @@ -422,7 +422,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx) framerate = 0; level = ff_av1_guess_level(avctx->bit_rate, priv->tier, - ctx->surface_width, ctx->surface_height, + base_ctx->surface_width, base_ctx->surface_height, priv->tile_rows * priv->tile_cols, priv->tile_cols, framerate); if (level) { diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c index 106df30563..8d932149cd 100644 --- a/libavcodec/vaapi_encode_h264.c +++ b/libavcodec/vaapi_encode_h264.c @@ -1205,8 +1205,9 @@ static const VAAPIEncodeType vaapi_encode_type_h264 = { static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodeH264Context *priv = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeContext *ctx = avctx->priv_data; + VAAPIEncodeH264Context *priv = avctx->priv_data; ctx->codec = &vaapi_encode_type_h264; @@ -1254,10 +1255,10 @@ static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx) VA_ENC_PACKED_HEADER_SLICE | // Slice headers. VA_ENC_PACKED_HEADER_MISC; // SEI. 
- ctx->surface_width = FFALIGN(avctx->width, 16); - ctx->surface_height = FFALIGN(avctx->height, 16); + base_ctx->surface_width = FFALIGN(avctx->width, 16); + base_ctx->surface_height = FFALIGN(avctx->height, 16); - ctx->slice_block_height = ctx->slice_block_width = 16; + base_ctx->slice_block_height = base_ctx->slice_block_width = 16; if (priv->qp > 0) ctx->explicit_qp = priv->qp; diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c index 199d61c3a2..b42659f400 100644 --- a/libavcodec/vaapi_encode_h265.c +++ b/libavcodec/vaapi_encode_h265.c @@ -352,7 +352,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) const H265LevelDescriptor *level; level = ff_h265_guess_level(ptl, avctx->bit_rate, - ctx->surface_width, ctx->surface_height, + base_ctx->surface_width, base_ctx->surface_height, ctx->nb_slices, ctx->tile_rows, ctx->tile_cols, (base_ctx->b_per_p > 0) + 1); if (level) { @@ -410,18 +410,18 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx) sps->chroma_format_idc = chroma_format; sps->separate_colour_plane_flag = 0; - sps->pic_width_in_luma_samples = ctx->surface_width; - sps->pic_height_in_luma_samples = ctx->surface_height; + sps->pic_width_in_luma_samples = base_ctx->surface_width; + sps->pic_height_in_luma_samples = base_ctx->surface_height; - if (avctx->width != ctx->surface_width || - avctx->height != ctx->surface_height) { + if (avctx->width != base_ctx->surface_width || + avctx->height != base_ctx->surface_height) { sps->conformance_window_flag = 1; sps->conf_win_left_offset = 0; sps->conf_win_right_offset = - (ctx->surface_width - avctx->width) >> desc->log2_chroma_w; + (base_ctx->surface_width - avctx->width) >> desc->log2_chroma_w; sps->conf_win_top_offset = 0; sps->conf_win_bottom_offset = - (ctx->surface_height - avctx->height) >> desc->log2_chroma_h; + (base_ctx->surface_height - avctx->height) >> desc->log2_chroma_h; } else { sps->conformance_window_flag = 0; } @@ -1197,11 +1197,12 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx, static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; - VAAPIEncodeH265Context *priv = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + VAAPIEncodeH265Context *priv = avctx->priv_data; #if VA_CHECK_VERSION(1, 13, 0) { + VAAPIEncodeContext *ctx = avctx->priv_data; VAConfigAttribValEncHEVCBlockSizes block_size; VAConfigAttrib attr; VAStatus vas; @@ -1249,10 +1250,10 @@ static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx) "min CB size %dx%d.\n", priv->ctu_size, priv->ctu_size, priv->min_cb_size, priv->min_cb_size); - ctx->surface_width = FFALIGN(avctx->width, priv->min_cb_size); - ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size); + base_ctx->surface_width = FFALIGN(avctx->width, priv->min_cb_size); + base_ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size); - ctx->slice_block_width = ctx->slice_block_height = priv->ctu_size; + base_ctx->slice_block_width = base_ctx->slice_block_height = priv->ctu_size; return 0; } diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c index 17fd8ba8e9..ddfc09a3a1 100644 --- a/libavcodec/vaapi_encode_mjpeg.c +++ b/libavcodec/vaapi_encode_mjpeg.c @@ -439,14 +439,13 @@ static int vaapi_encode_mjpeg_init_slice_params(AVCodecContext *avctx, static av_cold int vaapi_encode_mjpeg_get_encoder_caps(AVCodecContext *avctx) { HWBaseEncodeContext *base_ctx = 
avctx->priv_data; - VAAPIEncodeContext *ctx = avctx->priv_data; const AVPixFmtDescriptor *desc; desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); av_assert0(desc); - ctx->surface_width = FFALIGN(avctx->width, 8 << desc->log2_chroma_w); - ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h); + base_ctx->surface_width = FFALIGN(avctx->width, 8 << desc->log2_chroma_w); + base_ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h); return 0; } diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c index 93245425e7..db72add98a 100644 --- a/libavcodec/vaapi_encode_vp9.c +++ b/libavcodec/vaapi_encode_vp9.c @@ -190,11 +190,11 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx, static av_cold int vaapi_encode_vp9_get_encoder_caps(AVCodecContext *avctx) { - VAAPIEncodeContext *ctx = avctx->priv_data; + HWBaseEncodeContext *base_ctx = avctx->priv_data; // Surfaces must be aligned to 64x64 superblock boundaries. - ctx->surface_width = FFALIGN(avctx->width, 64); - ctx->surface_height = FFALIGN(avctx->height, 64); + base_ctx->surface_width = FFALIGN(avctx->width, 64); + base_ctx->surface_height = FFALIGN(avctx->height, 64); return 0; }
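The hunks above show the pattern that repeats across the per-codec files: every get_encoder_caps() callback now publishes its surface and slice-block alignment through HWBaseEncodeContext rather than VAAPIEncodeContext, so a non-VAAPI backend can consume the same fields. A minimal sketch of such a callback, assuming a hypothetical backend and a purely illustrative 64-pixel alignment:

    static av_cold int example_encode_get_encoder_caps(AVCodecContext *avctx)
    {
        HWBaseEncodeContext *base_ctx = avctx->priv_data;

        /* Alignment data now lives in the shared base context, so the
         * base layer can size the reconstructed-frame pool and slices
         * the same way for any backend. */
        base_ctx->surface_width      = FFALIGN(avctx->width,  64);
        base_ctx->surface_height     = FFALIGN(avctx->height, 64);
        base_ctx->slice_block_width  = 64;
        base_ctx->slice_block_height = 64;

        return 0;
    }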
From patchwork Thu Apr 18 08:59:06 2024 X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 48146 From: tong1.wu-at-intel.com@ffmpeg.org To: ffmpeg-devel@ffmpeg.org Date: Thu, 18 Apr 2024 16:59:06 +0800 Message-ID: <20240418085910.547-12-tong1.wu@intel.com> In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com> References: <20240418085910.547-1-tong1.wu@intel.com> Subject: [FFmpeg-devel] [PATCH v8 12/15] avcodec/vaapi_encode: extract a free function to base layer Reply-To: FFmpeg development discussions and patches Cc: Tong Wu From: Tong Wu Signed-off-by: Tong Wu --- libavcodec/hw_base_encode.c | 11 +++++++++++ libavcodec/hw_base_encode.h | 2 ++ libavcodec/vaapi_encode.c | 6 +----- 3 files changed, 14 insertions(+), 5 deletions(-) diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c index af85bb99aa..812668f3f2 100644 --- a/libavcodec/hw_base_encode.c +++ b/libavcodec/hw_base_encode.c @@ -751,6 +751,17
@@ fail: return err; } +int ff_hw_base_encode_free(AVCodecContext *avctx, HWBaseEncodePicture *pic) +{ + av_frame_free(&pic->input_image); + av_frame_free(&pic->recon_image); + + av_buffer_unref(&pic->opaque_ref); + av_freep(&pic->priv_data); + + return 0; +} + int ff_hw_base_encode_init(AVCodecContext *avctx) { HWBaseEncodeContext *ctx = avctx->priv_data; diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h index 7686cf9501..d566980efc 100644 --- a/libavcodec/hw_base_encode.h +++ b/libavcodec/hw_base_encode.h @@ -222,6 +222,8 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32 int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt); +int ff_hw_base_encode_free(AVCodecContext *avctx, HWBaseEncodePicture *pic); + int ff_hw_base_encode_init(AVCodecContext *avctx); int ff_hw_base_encode_close(AVCodecContext *avctx); diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c index ee4cf42baf..08792c07c5 100644 --- a/libavcodec/vaapi_encode.c +++ b/libavcodec/vaapi_encode.c @@ -878,17 +878,13 @@ static int vaapi_encode_free(AVCodecContext *avctx, av_freep(&pic->slices[i].codec_slice_params); } - av_frame_free(&base_pic->input_image); - av_frame_free(&base_pic->recon_image); - - av_buffer_unref(&base_pic->opaque_ref); + ff_hw_base_encode_free(avctx, base_pic); av_freep(&pic->param_buffers); av_freep(&pic->slices); // Output buffer should already be destroyed. av_assert0(pic->output_buffer == VA_INVALID_ID); - av_freep(&base_pic->priv_data); av_freep(&pic->codec_picture_params); av_freep(&pic->roi);
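The VAAPI free callback above now leaves the base-layer picture state to the new helper; the D3D12 encoder later in this series calls it the same way. A compressed sketch of the calling pattern for a backend free callback (ExamplePicture and its field are hypothetical):

    static int example_encode_free(AVCodecContext *avctx,
                                   HWBaseEncodePicture *base_pic)
    {
        ExamplePicture *pic = (ExamplePicture *)base_pic;

        /* Input/recon frames, opaque_ref and priv_data are released by
         * the shared helper ... */
        ff_hw_base_encode_free(avctx, base_pic);

        /* ... while backend-specific allocations stay local. */
        av_freep(&pic->codec_picture_params);

        return 0;
    }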
From patchwork Thu Apr 18 08:59:07 2024 X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 48147 From: tong1.wu-at-intel.com@ffmpeg.org To: ffmpeg-devel@ffmpeg.org Date: Thu, 18 Apr 2024 16:59:07 +0800 Message-ID: <20240418085910.547-13-tong1.wu@intel.com> In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com> References: <20240418085910.547-1-tong1.wu@intel.com> Subject: [FFmpeg-devel] [PATCH v8 13/15] avutil/hwcontext_d3d12va: add Flags for resource creation Reply-To: FFmpeg development discussions and patches Cc: Tong Wu
From: Tong Wu Flags field is added to support different resource creation. Signed-off-by: Tong Wu --- doc/APIchanges | 3 +++ libavutil/hwcontext_d3d12va.c | 2 +- libavutil/hwcontext_d3d12va.h | 8 ++++++++ libavutil/version.h | 2 +- 4 files changed, 13 insertions(+), 2 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 63e7a47126..ebf645dda8 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -2,6 +2,9 @@ The last version increases of all libraries were on 2024-03-07 API changes, most recent first: +2024-01-xx - xxxxxxxxxx - lavu 59.16.100 - hwcontext_d3d12va.h + Add AVD3D12VAFramesContext.flags + 2024-04-11 - xxxxxxxxxx - lavc 61.5.102 - avcodec.h AVCodecContext.decoded_side_data may now be set by libavcodec after calling avcodec_open2(). diff --git a/libavutil/hwcontext_d3d12va.c b/libavutil/hwcontext_d3d12va.c index cfc016315d..6507cf69c1 100644 --- a/libavutil/hwcontext_d3d12va.c +++ b/libavutil/hwcontext_d3d12va.c @@ -247,7 +247,7 @@ static AVBufferRef *d3d12va_pool_alloc(void *opaque, size_t size) .Format = hwctx->format, .SampleDesc = {.Count = 1, .Quality = 0 }, .Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN, - .Flags = D3D12_RESOURCE_FLAG_NONE, + .Flags = hwctx->flags, }; frame = av_mallocz(sizeof(AVD3D12VAFrame)); diff --git a/libavutil/hwcontext_d3d12va.h b/libavutil/hwcontext_d3d12va.h index ff06e6f2ef..608dbac97f 100644 --- a/libavutil/hwcontext_d3d12va.h +++ b/libavutil/hwcontext_d3d12va.h @@ -129,6 +129,14 @@ typedef struct AVD3D12VAFramesContext { * If unset, will be automatically set. */ DXGI_FORMAT format; + + /** + * This field is used to specify options for working with resources. + * If unset, this will be D3D12_RESOURCE_FLAG_NONE. + * + * @see: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_resource_flags.
+ */ + D3D12_RESOURCE_FLAGS flags; } AVD3D12VAFramesContext; #endif /* AVUTIL_HWCONTEXT_D3D12VA_H */ diff --git a/libavutil/version.h b/libavutil/version.h index 1f2bddc022..ea289c406f 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 59 -#define LIBAVUTIL_VERSION_MINOR 15 +#define LIBAVUTIL_VERSION_MINOR 16 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \
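For callers of the hwcontext API, the new field is filled in on the frames context before av_hwframe_ctx_init(). A minimal sketch, assuming NV12 content and using D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS purely as an illustrative value (nothing in this patch mandates a particular flag):

    #include "libavutil/hwcontext.h"
    #include "libavutil/hwcontext_d3d12va.h"

    static AVBufferRef *alloc_d3d12_frames(AVBufferRef *device_ref, int w, int h)
    {
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
        AVHWFramesContext *fc;
        AVD3D12VAFramesContext *hwctx;

        if (!frames_ref)
            return NULL;

        fc            = (AVHWFramesContext *)frames_ref->data;
        fc->format    = AV_PIX_FMT_D3D12;
        fc->sw_format = AV_PIX_FMT_NV12;
        fc->width     = w;
        fc->height    = h;

        hwctx = fc->hwctx;
        /* New with this patch: extra creation flags for the pool resources. */
        hwctx->flags = D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS;

        if (av_hwframe_ctx_init(frames_ref) < 0)
            av_buffer_unref(&frames_ref);

        return frames_ref;
    }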
From patchwork Thu Apr 18 08:59:08 2024 X-Patchwork-Submitter: "Wu, Tong1" X-Patchwork-Id: 48135 From: tong1.wu-at-intel.com@ffmpeg.org To: ffmpeg-devel@ffmpeg.org Date: Thu, 18 Apr 2024 16:59:08 +0800 Message-ID: <20240418085910.547-14-tong1.wu@intel.com> In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com> References: <20240418085910.547-1-tong1.wu@intel.com> Subject: [FFmpeg-devel] [PATCH v8 14/15] avcodec: add D3D12VA hardware HEVC encoder Reply-To: FFmpeg development discussions and patches Cc: Tong Wu From: Tong Wu This implementation is based on the D3D12 Video Encoding Spec: https://microsoft.github.io/DirectX-Specs/d3d/D3D12VideoEncoding.html Sample command line for transcoding: ffmpeg.exe -hwaccel d3d12va -hwaccel_output_format d3d12 -i input.mp4 -c:v hevc_d3d12va output.mp4 Signed-off-by: Tong Wu --- configure | 6 + libavcodec/Makefile | 4 +- libavcodec/allcodecs.c | 1 + libavcodec/d3d12va_encode.c | 1558 ++++++++++++++++++++++++++++++
libavcodec/d3d12va_encode.h | 339 +++++++ libavcodec/d3d12va_encode_hevc.c | 1007 +++++++++++++++++++ 6 files changed, 2914 insertions(+), 1 deletion(-) create mode 100644 libavcodec/d3d12va_encode.c create mode 100644 libavcodec/d3d12va_encode.h create mode 100644 libavcodec/d3d12va_encode_hevc.c diff --git a/configure b/configure index 18eeca89f2..f11b256c80 100755 --- a/configure +++ b/configure @@ -2547,6 +2547,7 @@ CONFIG_EXTRA=" cbs_mpeg2 cbs_vp8 cbs_vp9 + d3d12va_encode deflate_wrapper dirac_parse dnn @@ -3274,6 +3275,7 @@ wmv3_vaapi_hwaccel_select="vc1_vaapi_hwaccel" wmv3_vdpau_hwaccel_select="vc1_vdpau_hwaccel" # hardware-accelerated codecs +d3d12va_encode_deps="d3d12va ID3D12VideoEncoder d3d12_encoder_feature" mediafoundation_deps="mftransform_h MFCreateAlignedMemoryBuffer" omx_deps="libdl pthreads" omx_rpi_select="omx" @@ -3340,6 +3342,7 @@ h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m" hevc_amf_encoder_deps="amf" hevc_cuvid_decoder_deps="cuvid" hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf" +hevc_d3d12va_encoder_select="cbs_h265 d3d12va_encode" hevc_mediacodec_decoder_deps="mediacodec" hevc_mediacodec_decoder_select="hevc_mp4toannexb_bsf hevc_parser" hevc_mediacodec_encoder_deps="mediacodec" @@ -6697,6 +6700,9 @@ check_type "windows.h d3d11.h" "ID3D11VideoDecoder" check_type "windows.h d3d11.h" "ID3D11VideoContext" check_type "windows.h d3d12.h" "ID3D12Device" check_type "windows.h d3d12video.h" "ID3D12VideoDecoder" +check_type "windows.h d3d12video.h" "ID3D12VideoEncoder" +test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_VIDEO feature = D3D12_FEATURE_VIDEO_ENCODER_CODEC" && \ +test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req" && enable d3d12_encoder_feature check_type "windows.h" "DPI_AWARENESS_CONTEXT" -D_WIN32_WINNT=0x0A00 check_type "d3d9.h dxva2api.h" DXVA2_ConfigPictureDecode -D_WIN32_WINNT=0x0602 check_func_headers mfapi.h MFCreateAlignedMemoryBuffer -lmfplat diff --git a/libavcodec/Makefile b/libavcodec/Makefile index a2174dcb2f..ed1fa62f92 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -84,6 +84,7 @@ OBJS-$(CONFIG_CBS_JPEG) += cbs_jpeg.o OBJS-$(CONFIG_CBS_MPEG2) += cbs_mpeg2.o OBJS-$(CONFIG_CBS_VP8) += cbs_vp8.o vp8data.o OBJS-$(CONFIG_CBS_VP9) += cbs_vp9.o +OBJS-$(CONFIG_D3D12VA_ENCODE) += d3d12va_encode.o hw_base_encode.o OBJS-$(CONFIG_DEFLATE_WRAPPER) += zlib_wrapper.o OBJS-$(CONFIG_DOVI_RPU) += dovi_rpu.o OBJS-$(CONFIG_ERROR_RESILIENCE) += error_resilience.o @@ -435,6 +436,7 @@ OBJS-$(CONFIG_HEVC_DECODER) += hevcdec.o hevc_mvs.o \ h274.o aom_film_grain.o OBJS-$(CONFIG_HEVC_AMF_ENCODER) += amfenc_hevc.o OBJS-$(CONFIG_HEVC_CUVID_DECODER) += cuviddec.o +OBJS-$(CONFIG_HEVC_D3D12VA_ENCODER) += d3d12va_encode_hevc.o OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o OBJS-$(CONFIG_HEVC_MEDIACODEC_ENCODER) += mediacodecenc.o OBJS-$(CONFIG_HEVC_MF_ENCODER) += mfenc.o mf_utils.o @@ -1264,7 +1266,7 @@ SKIPHEADERS += %_tablegen.h \ SKIPHEADERS-$(CONFIG_AMF) += amfenc.h SKIPHEADERS-$(CONFIG_D3D11VA) += d3d11va.h dxva2_internal.h -SKIPHEADERS-$(CONFIG_D3D12VA) += d3d12va_decode.h +SKIPHEADERS-$(CONFIG_D3D12VA) += d3d12va_decode.h d3d12va_encode.h SKIPHEADERS-$(CONFIG_DXVA2) += dxva2.h dxva2_internal.h SKIPHEADERS-$(CONFIG_JNI) += ffjni.h SKIPHEADERS-$(CONFIG_LCMS2) += fflcms2.h diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c index f4705651fb..de08eb722b 100644 --- a/libavcodec/allcodecs.c +++ b/libavcodec/allcodecs.c @@ -857,6 +857,7 @@ extern const FFCodec ff_h264_vaapi_encoder; extern 
const FFCodec ff_h264_videotoolbox_encoder; extern const FFCodec ff_hevc_amf_encoder; extern const FFCodec ff_hevc_cuvid_decoder; +extern const FFCodec ff_hevc_d3d12va_encoder; extern const FFCodec ff_hevc_mediacodec_decoder; extern const FFCodec ff_hevc_mediacodec_encoder; extern const FFCodec ff_hevc_mf_encoder; diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c new file mode 100644 index 0000000000..1ef9c7a16f --- /dev/null +++ b/libavcodec/d3d12va_encode.c @@ -0,0 +1,1558 @@ +/* + * Direct3D 12 HW acceleration video encoder + * + * Copyright (c) 2024 Intel Corporation + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/avassert.h" +#include "libavutil/common.h" +#include "libavutil/internal.h" +#include "libavutil/log.h" +#include "libavutil/mem.h" +#include "libavutil/pixdesc.h" +#include "libavutil/hwcontext_d3d12va_internal.h" +#include "libavutil/hwcontext_d3d12va.h" + +#include "avcodec.h" +#include "d3d12va_encode.h" +#include "encode.h" + +const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[] = { + HW_CONFIG_ENCODER_FRAMES(D3D12, D3D12VA), + NULL, +}; + +static int d3d12va_fence_completion(AVD3D12VASyncContext *psync_ctx) +{ + uint64_t completion = ID3D12Fence_GetCompletedValue(psync_ctx->fence); + if (completion < psync_ctx->fence_value) { + if (FAILED(ID3D12Fence_SetEventOnCompletion(psync_ctx->fence, psync_ctx->fence_value, psync_ctx->event))) + return AVERROR(EINVAL); + + WaitForSingleObjectEx(psync_ctx->event, INFINITE, FALSE); + } + + return 0; +} + +static int d3d12va_sync_with_gpu(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + + DX_CHECK(ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value)); + return d3d12va_fence_completion(&ctx->sync_ctx); + +fail: + return AVERROR(EINVAL); +} + +typedef struct CommandAllocator { + ID3D12CommandAllocator *command_allocator; + uint64_t fence_value; +} CommandAllocator; + +static int d3d12va_get_valid_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator **ppAllocator) +{ + HRESULT hr; + D3D12VAEncodeContext *ctx = avctx->priv_data; + CommandAllocator allocator; + + if (av_fifo_peek(ctx->allocator_queue, &allocator, 1, 0) >= 0) { + uint64_t completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence); + if (completion >= allocator.fence_value) { + *ppAllocator = allocator.command_allocator; + av_fifo_read(ctx->allocator_queue, &allocator, 1); + return 0; + } + } + + hr = ID3D12Device_CreateCommandAllocator(ctx->hwctx->device, D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE, + &IID_ID3D12CommandAllocator, (void **)ppAllocator); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create a new command allocator!\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static int 
d3d12va_discard_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator *pAllocator, uint64_t fence_value) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + + CommandAllocator allocator = { + .command_allocator = pAllocator, + .fence_value = fence_value, + }; + + av_fifo_write(ctx->allocator_queue, &allocator, 1); + + return 0; +} + +static int d3d12va_encode_wait(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + uint64_t completion; + + av_assert0(base_pic->encode_issued); + + if (base_pic->encode_complete) { + // Already waited for this picture. + return 0; + } + + completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence); + if (completion < pic->fence_value) { + if (FAILED(ID3D12Fence_SetEventOnCompletion(ctx->sync_ctx.fence, pic->fence_value, + ctx->sync_ctx.event))) + return AVERROR(EINVAL); + + WaitForSingleObjectEx(ctx->sync_ctx.event, INFINITE, FALSE); + } + + av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" " + "(input surface %p).\n", base_pic->display_order, + base_pic->encode_order, pic->input_surface->texture); + + av_frame_free(&base_pic->input_image); + + base_pic->encode_complete = 1; + return 0; +} + +static int d3d12va_encode_create_metadata_buffers(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + int width = sizeof(D3D12_VIDEO_ENCODER_OUTPUT_METADATA) + sizeof(D3D12_VIDEO_ENCODER_FRAME_SUBREGION_METADATA); + D3D12_HEAP_PROPERTIES encoded_meta_props = { .Type = D3D12_HEAP_TYPE_DEFAULT }, resolved_meta_props; + D3D12_HEAP_TYPE resolved_heap_type = D3D12_HEAP_TYPE_READBACK; + HRESULT hr; + + D3D12_RESOURCE_DESC meta_desc = { + .Dimension = D3D12_RESOURCE_DIMENSION_BUFFER, + .Alignment = 0, + .Width = ctx->req.MaxEncoderOutputMetadataBufferSize, + .Height = 1, + .DepthOrArraySize = 1, + .MipLevels = 1, + .Format = DXGI_FORMAT_UNKNOWN, + .SampleDesc = { .Count = 1, .Quality = 0 }, + .Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR, + .Flags = D3D12_RESOURCE_FLAG_NONE, + }; + + hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &encoded_meta_props, D3D12_HEAP_FLAG_NONE, + &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL, + &IID_ID3D12Resource, (void **)&pic->encoded_metadata); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create metadata buffer.\n"); + return AVERROR_UNKNOWN; + } + + ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &resolved_meta_props, 0, resolved_heap_type); + + meta_desc.Width = width; + + hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &resolved_meta_props, D3D12_HEAP_FLAG_NONE, + &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL, + &IID_ID3D12Resource, (void **)&pic->resolved_metadata); + + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create output metadata buffer.\n"); + return AVERROR_UNKNOWN; + } + + return 0; +} + +static int d3d12va_encode_issue(AVCodecContext *avctx, + const HWBaseEncodePicture *base_pic) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx; + D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic; + int err, i, j; + HRESULT hr; + char data[MAX_PARAM_BUFFER_SIZE]; + void *ptr; + size_t bit_len; + ID3D12CommandAllocator *command_allocator = NULL; + ID3D12VideoEncodeCommandList2 *cmd_list = ctx->command_list; + D3D12_RESOURCE_BARRIER barriers[32] = { 
0 }; + D3D12_VIDEO_ENCODE_REFERENCE_FRAMES d3d12_refs = { 0 }; + + D3D12_VIDEO_ENCODER_ENCODEFRAME_INPUT_ARGUMENTS input_args = { + .SequenceControlDesc = { + .Flags = D3D12_VIDEO_ENCODER_SEQUENCE_CONTROL_FLAG_NONE, + .IntraRefreshConfig = { 0 }, + .RateControl = ctx->rc, + .PictureTargetResolution = ctx->resolution, + .SelectedLayoutMode = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME, + .FrameSubregionsLayoutData = { 0 }, + .CodecGopSequence = ctx->gop, + }, + .pInputFrame = pic->input_surface->texture, + .InputFrameSubresource = 0, + }; + + D3D12_VIDEO_ENCODER_ENCODEFRAME_OUTPUT_ARGUMENTS output_args = { 0 }; + + D3D12_VIDEO_ENCODER_RESOLVE_METADATA_INPUT_ARGUMENTS input_metadata = { + .EncoderCodec = ctx->codec->d3d12_codec, + .EncoderProfile = ctx->profile->d3d12_profile, + .EncoderInputFormat = frames_hwctx->format, + .EncodedPictureEffectiveResolution = ctx->resolution, + }; + + D3D12_VIDEO_ENCODER_RESOLVE_METADATA_OUTPUT_ARGUMENTS output_metadata = { 0 }; + + memset(data, 0, sizeof(data)); + + av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" " + "as type %s.\n", base_pic->display_order, base_pic->encode_order, + ff_hw_base_encode_get_pictype_name(base_pic->type)); + if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) { + av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n"); + } else { + av_log(avctx, AV_LOG_DEBUG, "L0 refers to"); + for (i = 0; i < base_pic->nb_refs[0]; i++) { + av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, + base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order); + } + av_log(avctx, AV_LOG_DEBUG, ".\n"); + + if (base_pic->nb_refs[1]) { + av_log(avctx, AV_LOG_DEBUG, "L1 refers to"); + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64, + base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order); + } + av_log(avctx, AV_LOG_DEBUG, ".\n"); + } + } + + av_assert0(!base_pic->encode_issued); + for (i = 0; i < base_pic->nb_refs[0]; i++) { + av_assert0(base_pic->refs[0][i]); + av_assert0(base_pic->refs[0][i]->encode_issued); + } + for (i = 0; i < base_pic->nb_refs[1]; i++) { + av_assert0(base_pic->refs[1][i]); + av_assert0(base_pic->refs[1][i]->encode_issued); + } + + av_log(avctx, AV_LOG_DEBUG, "Input surface is %p.\n", pic->input_surface->texture); + + err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0); + if (err < 0) { + err = AVERROR(ENOMEM); + goto fail; + } + + pic->recon_surface = (AVD3D12VAFrame *)base_pic->recon_image->data[0]; + av_log(avctx, AV_LOG_DEBUG, "Recon surface is %p.\n", + pic->recon_surface->texture); + + pic->output_buffer_ref = av_buffer_pool_get(ctx->output_buffer_pool); + if (!pic->output_buffer_ref) { + err = AVERROR(ENOMEM); + goto fail; + } + pic->output_buffer = (ID3D12Resource *)pic->output_buffer_ref->data; + av_log(avctx, AV_LOG_DEBUG, "Output buffer is %p.\n", + pic->output_buffer); + + err = d3d12va_encode_create_metadata_buffers(avctx, pic); + if (err < 0) + goto fail; + + if (ctx->codec->init_picture_params) { + err = ctx->codec->init_picture_params(avctx, pic); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to initialise picture " + "parameters: %d.\n", err); + goto fail; + } + } + + if (base_pic->type == PICTURE_TYPE_IDR) { + if (ctx->codec->write_sequence_header) { + bit_len = 8 * sizeof(data); + err = ctx->codec->write_sequence_header(avctx, data, &bit_len); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to write per-sequence " + "header: %d.\n", err); + goto 
fail; + } + } + + pic->header_size = (int)bit_len / 8; + pic->header_size = pic->header_size % ctx->req.CompressedBitstreamBufferAccessAlignment ? + FFALIGN(pic->header_size, ctx->req.CompressedBitstreamBufferAccessAlignment) : + pic->header_size; + + hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&ptr); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + memcpy(ptr, data, pic->header_size); + ID3D12Resource_Unmap(pic->output_buffer, 0, NULL); + } + + d3d12_refs.NumTexture2Ds = base_pic->nb_refs[0] + base_pic->nb_refs[1]; + if (d3d12_refs.NumTexture2Ds) { + d3d12_refs.ppTexture2Ds = av_calloc(d3d12_refs.NumTexture2Ds, + sizeof(*d3d12_refs.ppTexture2Ds)); + if (!d3d12_refs.ppTexture2Ds) { + err = AVERROR(ENOMEM); + goto fail; + } + + i = 0; + for (j = 0; j < base_pic->nb_refs[0]; j++) + d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[0][j])->recon_surface->texture; + for (j = 0; j < base_pic->nb_refs[1]; j++) + d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[1][j])->recon_surface->texture; + } + + input_args.PictureControlDesc.IntraRefreshFrameIndex = 0; + if (base_pic->is_reference) + input_args.PictureControlDesc.Flags |= D3D12_VIDEO_ENCODER_PICTURE_CONTROL_FLAG_USED_AS_REFERENCE_PICTURE; + + input_args.PictureControlDesc.PictureControlCodecData = pic->pic_ctl; + input_args.PictureControlDesc.ReferenceFrames = d3d12_refs; + input_args.CurrentFrameBitstreamMetadataSize = pic->header_size; + + output_args.Bitstream.pBuffer = pic->output_buffer; + output_args.Bitstream.FrameStartOffset = pic->header_size; + output_args.ReconstructedPicture.pReconstructedPicture = pic->recon_surface->texture; + output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0; + output_args.EncoderOutputMetadata.pBuffer = pic->encoded_metadata; + output_args.EncoderOutputMetadata.Offset = 0; + + input_metadata.HWLayoutMetadata.pBuffer = pic->encoded_metadata; + input_metadata.HWLayoutMetadata.Offset = 0; + + output_metadata.ResolvedLayoutMetadata.pBuffer = pic->resolved_metadata; + output_metadata.ResolvedLayoutMetadata.Offset = 0; + + err = d3d12va_get_valid_command_allocator(avctx, &command_allocator); + if (err < 0) + goto fail; + + hr = ID3D12CommandAllocator_Reset(command_allocator); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12VideoEncodeCommandList2_Reset(cmd_list, command_allocator); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + +#define TRANSITION_BARRIER(res, before, after) \ + (D3D12_RESOURCE_BARRIER) { \ + .Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION, \ + .Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE, \ + .Transition = { \ + .pResource = res, \ + .Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, \ + .StateBefore = before, \ + .StateAfter = after, \ + }, \ + } + + barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ); + barriers[1] = TRANSITION_BARRIER(pic->output_buffer, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata, + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE); + + 
ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers); + + if (d3d12_refs.NumTexture2Ds) { + D3D12_RESOURCE_BARRIER refs_barriers[3]; + + for (i = 0; i < d3d12_refs.NumTexture2Ds; i++) + refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i], + D3D12_RESOURCE_STATE_COMMON, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds, + refs_barriers); + } + + ID3D12VideoEncodeCommandList2_EncodeFrame(cmd_list, ctx->encoder, ctx->encoder_heap, + &input_args, &output_args); + + barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 1, &barriers[3]); + + ID3D12VideoEncodeCommandList2_ResolveEncoderOutputMetadata(cmd_list, &input_metadata, &output_metadata); + + if (d3d12_refs.NumTexture2Ds) { + D3D12_RESOURCE_BARRIER refs_barriers[3]; + + for (i = 0; i < d3d12_refs.NumTexture2Ds; i++) + refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i], + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ, + D3D12_RESOURCE_STATE_COMMON); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds, + refs_barriers); + } + + barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ, + D3D12_RESOURCE_STATE_COMMON); + barriers[1] = TRANSITION_BARRIER(pic->output_buffer, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_COMMON); + barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_COMMON); + barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ, + D3D12_RESOURCE_STATE_COMMON); + barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata, + D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE, + D3D12_RESOURCE_STATE_COMMON); + + ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers); + + hr = ID3D12VideoEncodeCommandList2_Close(cmd_list); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12CommandQueue_Wait(ctx->command_queue, pic->input_surface->sync_ctx.fence, + pic->input_surface->sync_ctx.fence_value); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list); + + hr = ID3D12CommandQueue_Signal(ctx->command_queue, pic->input_surface->sync_ctx.fence, + ++pic->input_surface->sync_ctx.fence_value); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto fail; + } + + err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value); + if (err < 0) + goto fail; + + pic->fence_value = ctx->sync_ctx.fence_value; + + if (d3d12_refs.ppTexture2Ds) + av_freep(&d3d12_refs.ppTexture2Ds); + + return 0; + +fail: + if (command_allocator) + d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value); + + if (d3d12_refs.ppTexture2Ds) + av_freep(&d3d12_refs.ppTexture2Ds); + + if (ctx->codec->free_picture_params) + ctx->codec->free_picture_params(pic); + + av_buffer_unref(&pic->output_buffer_ref); + pic->output_buffer = NULL; + D3D12_OBJECT_RELEASE(pic->encoded_metadata); + D3D12_OBJECT_RELEASE(pic->resolved_metadata); + return err; +} + 
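/*
 * Reviewer summary (descriptive note, not part of the submitted diff):
 * d3d12va_encode_issue() above always follows the same sequence --
 * transition the input, output, reconstructed and metadata resources from
 * COMMON into the video-encode read/write states, record EncodeFrame(),
 * flip the encoder-output metadata buffer to a read state, record
 * ResolveEncoderOutputMetadata(), then return every resource to COMMON
 * before closing and executing the command list. The queue waits on the
 * input frame's fence before execution and signals both that fence and the
 * context fence afterwards; pic->fence_value stores the context fence value
 * so d3d12va_encode_wait() can block on it later.
 */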
+static int d3d12va_encode_discard(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + d3d12va_encode_wait(avctx, pic); + + if (pic->output_buffer_ref) { + av_log(avctx, AV_LOG_DEBUG, "Discard output for pic " + "%"PRId64"/%"PRId64".\n", + base_pic->display_order, base_pic->encode_order); + + av_buffer_unref(&pic->output_buffer_ref); + pic->output_buffer = NULL; + } + + D3D12_OBJECT_RELEASE(pic->encoded_metadata); + D3D12_OBJECT_RELEASE(pic->resolved_metadata); + + return 0; +} + +static int d3d12va_encode_free_rc_params(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + + switch (ctx->rc.Mode) + { + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP: + av_freep(&ctx->rc.ConfigParams.pConfiguration_CQP); + break; + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR: + av_freep(&ctx->rc.ConfigParams.pConfiguration_CBR); + break; + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR: + av_freep(&ctx->rc.ConfigParams.pConfiguration_VBR); + break; + case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR: + av_freep(&ctx->rc.ConfigParams.pConfiguration_QVBR); + break; + default: + break; + } + + return 0; +} + +static HWBaseEncodePicture *d3d12va_encode_alloc(AVCodecContext *avctx, + const AVFrame *frame) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodePicture *pic; + + pic = av_mallocz(sizeof(*pic)); + if (!pic) + return NULL; + + if (ctx->codec->picture_priv_data_size > 0) { + pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size); + if (!pic->base.priv_data) { + av_freep(&pic); + return NULL; + } + } + + pic->input_surface = (AVD3D12VAFrame *)frame->data[0]; + + return (HWBaseEncodePicture *)pic; +} + +static int d3d12va_encode_free(AVCodecContext *avctx, + HWBaseEncodePicture *base_pic) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic; + + if (base_pic->encode_issued) + d3d12va_encode_discard(avctx, pic); + + if (ctx->codec->free_picture_params) + ctx->codec->free_picture_params(pic); + + ff_hw_base_encode_free(avctx, base_pic); + + av_free(pic); + + return 0; +} + +static int d3d12va_encode_get_buffer_size(AVCodecContext *avctx, + D3D12VAEncodePicture *pic, size_t *size) +{ + D3D12_VIDEO_ENCODER_OUTPUT_METADATA *meta = NULL; + uint8_t *data; + HRESULT hr; + int err; + + hr = ID3D12Resource_Map(pic->resolved_metadata, 0, NULL, (void **)&data); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + return err; + } + + meta = (D3D12_VIDEO_ENCODER_OUTPUT_METADATA *)data; + + if (meta->EncodeErrorFlags != D3D12_VIDEO_ENCODER_ENCODE_ERROR_FLAG_NO_ERROR) { + av_log(avctx, AV_LOG_ERROR, "Encode failed %"PRIu64"\n", meta->EncodeErrorFlags); + err = AVERROR(EINVAL); + return err; + } + + if (meta->EncodedBitstreamWrittenBytesCount == 0) { + av_log(avctx, AV_LOG_ERROR, "No bytes were written to encoded bitstream\n"); + err = AVERROR(EINVAL); + return err; + } + + *size = meta->EncodedBitstreamWrittenBytesCount; + + ID3D12Resource_Unmap(pic->resolved_metadata, 0, NULL); + + return 0; +} + +static int d3d12va_encode_get_coded_data(AVCodecContext *avctx, + D3D12VAEncodePicture *pic, AVPacket *pkt) +{ + int err; + uint8_t *ptr, *mapped_data; + size_t total_size = 0; + HRESULT hr; + + err = d3d12va_encode_get_buffer_size(avctx, pic, &total_size); + if (err < 0) + goto end; + + total_size += pic->header_size; + av_log(avctx, AV_LOG_DEBUG, "Output buffer size %"PRId64"\n", total_size); + + hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void 
**)&mapped_data); + if (FAILED(hr)) { + err = AVERROR_UNKNOWN; + goto end; + } + + err = ff_get_encode_buffer(avctx, pkt, total_size, 0); + if (err < 0) + goto end; + ptr = pkt->data; + + memcpy(ptr, mapped_data, total_size); + + ID3D12Resource_Unmap(pic->output_buffer, 0, NULL); + +end: + av_buffer_unref(&pic->output_buffer_ref); + pic->output_buffer = NULL; + return err; +} + +static int d3d12va_encode_output(AVCodecContext *avctx, + const HWBaseEncodePicture *base_pic, AVPacket *pkt) +{ + D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic; + AVPacket *pkt_ptr = pkt; + int err; + + err = d3d12va_encode_wait(avctx, pic); + if (err < 0) + return err; + + err = d3d12va_encode_get_coded_data(avctx, pic, pkt); + if (err < 0) + return err; + + av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n", + base_pic->display_order, base_pic->encode_order); + + ff_hw_base_encode_set_output_property(avctx, (HWBaseEncodePicture *)base_pic, pkt_ptr, 0); + + return 0; +} + +static int d3d12va_encode_set_profile(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + const D3D12VAEncodeProfile *profile; + const AVPixFmtDescriptor *desc; + int i, depth; + + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); + if (!desc) { + av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n", + base_ctx->input_frames->sw_format); + return AVERROR(EINVAL); + } + + depth = desc->comp[0].depth; + for (i = 1; i < desc->nb_components; i++) { + if (desc->comp[i].depth != depth) { + av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%s).\n", + desc->name); + return AVERROR(EINVAL); + } + } + av_log(avctx, AV_LOG_VERBOSE, "Input surface format is %s.\n", + desc->name); + + av_assert0(ctx->codec->profiles); + for (i = 0; (ctx->codec->profiles[i].av_profile != + AV_PROFILE_UNKNOWN); i++) { + profile = &ctx->codec->profiles[i]; + if (depth != profile->depth || + desc->nb_components != profile->nb_components) + continue; + if (desc->nb_components > 1 && + (desc->log2_chroma_w != profile->log2_chroma_w || + desc->log2_chroma_h != profile->log2_chroma_h)) + continue; + if (avctx->profile != profile->av_profile && + avctx->profile != AV_PROFILE_UNKNOWN) + continue; + + ctx->profile = profile; + break; + } + if (!ctx->profile) { + av_log(avctx, AV_LOG_ERROR, "No usable encoding profile found.\n"); + return AVERROR(ENOSYS); + } + + avctx->profile = profile->av_profile; + return 0; +} + +static const D3D12VAEncodeRCMode d3d12va_encode_rc_modes[] = { + // Bitrate Quality + // | Maxrate | HRD/VBV + { 0 }, // | | | | + { RC_MODE_CQP, "CQP", 0, 0, 1, 0, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP }, + { RC_MODE_CBR, "CBR", 1, 0, 0, 1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR }, + { RC_MODE_VBR, "VBR", 1, 1, 0, 1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR }, + { RC_MODE_QVBR, "QVBR", 1, 1, 1, 1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR }, +}; + +static int check_rate_control_support(AVCodecContext *avctx, const D3D12VAEncodeRCMode *rc_mode) +{ + HRESULT hr; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12_FEATURE_DATA_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_rc_mode = { + .Codec = ctx->codec->d3d12_codec, + }; + + if (!rc_mode->d3d12_mode) + return 0; + + d3d12_rc_mode.IsSupported = 0; + d3d12_rc_mode.RateControlMode = rc_mode->d3d12_mode; + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, + D3D12_FEATURE_VIDEO_ENCODER_RATE_CONTROL_MODE, + &d3d12_rc_mode, sizeof(d3d12_rc_mode)); + if (FAILED(hr)) 
{ + av_log(avctx, AV_LOG_ERROR, "Failed to check rate control support.\n"); + return 0; + } + + return d3d12_rc_mode.IsSupported; +} + +static int d3d12va_encode_init_rate_control(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + int64_t rc_target_bitrate; + int64_t rc_peak_bitrate; + int rc_quality; + int64_t hrd_buffer_size; + int64_t hrd_initial_buffer_fullness; + int fr_num, fr_den; + const D3D12VAEncodeRCMode *rc_mode; + + // Rate control mode selection: + // * If the user has set a mode explicitly with the rc_mode option, + // use it and fail if it is not available. + // * If an explicit QP option has been set, use CQP. + // * If the codec is CQ-only, use CQP. + // * If the QSCALE avcodec option is set, use CQP. + // * If bitrate and quality are both set, try QVBR. + // * If quality is set, try CQP. + // * If bitrate and maxrate are set and have the same value, try CBR. + // * If a bitrate is set, try VBR, then CBR. + // * If no bitrate is set, try CQP. + +#define TRY_RC_MODE(mode, fail) do { \ + rc_mode = &d3d12va_encode_rc_modes[mode]; \ + if (!(rc_mode->d3d12_mode && check_rate_control_support(avctx, rc_mode))) { \ + if (fail) { \ + av_log(avctx, AV_LOG_ERROR, "Driver does not support %s " \ + "RC mode.\n", rc_mode->name); \ + return AVERROR(EINVAL); \ + } \ + av_log(avctx, AV_LOG_DEBUG, "Driver does not support %s " \ + "RC mode.\n", rc_mode->name); \ + rc_mode = NULL; \ + } else { \ + goto rc_mode_found; \ + } \ + } while (0) + + if (ctx->explicit_rc_mode) + TRY_RC_MODE(ctx->explicit_rc_mode, 1); + + if (ctx->explicit_qp) + TRY_RC_MODE(RC_MODE_CQP, 1); + + if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY) + TRY_RC_MODE(RC_MODE_CQP, 1); + + if (avctx->flags & AV_CODEC_FLAG_QSCALE) + TRY_RC_MODE(RC_MODE_CQP, 1); + + if (avctx->bit_rate > 0 && avctx->global_quality > 0) + TRY_RC_MODE(RC_MODE_QVBR, 0); + + if (avctx->global_quality > 0) { + TRY_RC_MODE(RC_MODE_CQP, 0); + } + + if (avctx->bit_rate > 0 && avctx->rc_max_rate == avctx->bit_rate) + TRY_RC_MODE(RC_MODE_CBR, 0); + + if (avctx->bit_rate > 0) { + TRY_RC_MODE(RC_MODE_VBR, 0); + TRY_RC_MODE(RC_MODE_CBR, 0); + } else { + TRY_RC_MODE(RC_MODE_CQP, 0); + } + + av_log(avctx, AV_LOG_ERROR, "Driver does not support any " + "RC mode compatible with selected options.\n"); + return AVERROR(EINVAL); + +rc_mode_found: + if (rc_mode->bitrate) { + if (avctx->bit_rate <= 0) { + av_log(avctx, AV_LOG_ERROR, "Bitrate must be set for %s " + "RC mode.\n", rc_mode->name); + return AVERROR(EINVAL); + } + + if (rc_mode->maxrate) { + if (avctx->rc_max_rate > 0) { + if (avctx->rc_max_rate < avctx->bit_rate) { + av_log(avctx, AV_LOG_ERROR, "Invalid bitrate settings: " + "bitrate (%"PRId64") must not be greater than " + "maxrate (%"PRId64").\n", avctx->bit_rate, + avctx->rc_max_rate); + return AVERROR(EINVAL); + } + rc_target_bitrate = avctx->bit_rate; + rc_peak_bitrate = avctx->rc_max_rate; + } else { + // We only have a target bitrate, but this mode requires + // that a maximum rate be supplied as well. Since the + // user does not want this to be a constraint, arbitrarily + // pick a maximum rate of double the target rate. 
+ rc_target_bitrate = avctx->bit_rate; + rc_peak_bitrate = 2 * avctx->bit_rate; + } + } else { + if (avctx->rc_max_rate > avctx->bit_rate) { + av_log(avctx, AV_LOG_WARNING, "Max bitrate is ignored " + "in %s RC mode.\n", rc_mode->name); + } + rc_target_bitrate = avctx->bit_rate; + rc_peak_bitrate = 0; + } + } else { + rc_target_bitrate = 0; + rc_peak_bitrate = 0; + } + + if (rc_mode->quality) { + if (ctx->explicit_qp) { + rc_quality = ctx->explicit_qp; + } else if (avctx->global_quality > 0) { + if (avctx->flags & AV_CODEC_FLAG_QSCALE) + rc_quality = avctx->global_quality / FF_QP2LAMBDA; + else + rc_quality = avctx->global_quality; + } else { + rc_quality = ctx->codec->default_quality; + av_log(avctx, AV_LOG_WARNING, "No quality level set; " + "using default (%d).\n", rc_quality); + } + } else { + rc_quality = 0; + } + + if (rc_mode->hrd) { + if (avctx->rc_buffer_size) + hrd_buffer_size = avctx->rc_buffer_size; + else if (avctx->rc_max_rate > 0) + hrd_buffer_size = avctx->rc_max_rate; + else + hrd_buffer_size = avctx->bit_rate; + if (avctx->rc_initial_buffer_occupancy) { + if (avctx->rc_initial_buffer_occupancy > hrd_buffer_size) { + av_log(avctx, AV_LOG_ERROR, "Invalid RC buffer settings: " + "must have initial buffer size (%d) <= " + "buffer size (%"PRId64").\n", + avctx->rc_initial_buffer_occupancy, hrd_buffer_size); + return AVERROR(EINVAL); + } + hrd_initial_buffer_fullness = avctx->rc_initial_buffer_occupancy; + } else { + hrd_initial_buffer_fullness = hrd_buffer_size * 3 / 4; + } + } else { + if (avctx->rc_buffer_size || avctx->rc_initial_buffer_occupancy) { + av_log(avctx, AV_LOG_WARNING, "Buffering settings are ignored " + "in %s RC mode.\n", rc_mode->name); + } + + hrd_buffer_size = 0; + hrd_initial_buffer_fullness = 0; + } + + if (rc_target_bitrate > UINT32_MAX || + hrd_buffer_size > UINT32_MAX || + hrd_initial_buffer_fullness > UINT32_MAX) { + av_log(avctx, AV_LOG_ERROR, "RC parameters of 2^32 or " + "greater are not supported by D3D12.\n"); + return AVERROR(EINVAL); + } + + ctx->rc_quality = rc_quality; + + av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name); + + if (rc_mode->quality) + av_log(avctx, AV_LOG_VERBOSE, "RC quality: %d.\n", rc_quality); + + if (rc_mode->hrd) { + av_log(avctx, AV_LOG_VERBOSE, "RC buffer: %"PRId64" bits, " + "initial fullness %"PRId64" bits.\n", + hrd_buffer_size, hrd_initial_buffer_fullness); + } + + if (avctx->framerate.num > 0 && avctx->framerate.den > 0) + av_reduce(&fr_num, &fr_den, + avctx->framerate.num, avctx->framerate.den, 65535); + else + av_reduce(&fr_num, &fr_den, + avctx->time_base.den, avctx->time_base.num, 65535); + + av_log(avctx, AV_LOG_VERBOSE, "RC framerate: %d/%d (%.2f fps).\n", + fr_num, fr_den, (double)fr_num / fr_den); + + ctx->rc.Flags = D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_NONE; + ctx->rc.TargetFrameRate.Numerator = fr_num; + ctx->rc.TargetFrameRate.Denominator = fr_den; + ctx->rc.Mode = rc_mode->d3d12_mode; + + switch (rc_mode->mode) { + case RC_MODE_CQP: + // cqp ConfigParams will be updated in ctx->codec->configure. 
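// For HEVC this is d3d12va_encode_hevc_configure() in d3d12va_encode_hevc.c,
// which allocates a D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP holding the fixed
// IDR/P/B QP values and points ConfigParams.pConfiguration_CQP at it.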
+ break; + + case RC_MODE_CBR: + D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR *cbr_ctl; + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR); + cbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!cbr_ctl) + return AVERROR(ENOMEM); + + cbr_ctl->TargetBitRate = rc_target_bitrate; + cbr_ctl->VBVCapacity = hrd_buffer_size; + cbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES; + + if (avctx->qmin > 0 || avctx->qmax > 0) { + cbr_ctl->MinQP = avctx->qmin; + cbr_ctl->MaxQP = avctx->qmax; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE; + } + + ctx->rc.ConfigParams.pConfiguration_CBR = cbr_ctl; + break; + + case RC_MODE_VBR: + D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR *vbr_ctl; + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR); + vbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!vbr_ctl) + return AVERROR(ENOMEM); + + vbr_ctl->TargetAvgBitRate = rc_target_bitrate; + vbr_ctl->PeakBitRate = rc_peak_bitrate; + vbr_ctl->VBVCapacity = hrd_buffer_size; + vbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES; + + if (avctx->qmin > 0 || avctx->qmax > 0) { + vbr_ctl->MinQP = avctx->qmin; + vbr_ctl->MaxQP = avctx->qmax; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE; + } + + ctx->rc.ConfigParams.pConfiguration_VBR = vbr_ctl; + break; + + case RC_MODE_QVBR: + D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR *qvbr_ctl; + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR); + qvbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!qvbr_ctl) + return AVERROR(ENOMEM); + + qvbr_ctl->TargetAvgBitRate = rc_target_bitrate; + qvbr_ctl->PeakBitRate = rc_peak_bitrate; + qvbr_ctl->ConstantQualityTarget = rc_quality; + + if (avctx->qmin > 0 || avctx->qmax > 0) { + qvbr_ctl->MinQP = avctx->qmin; + qvbr_ctl->MaxQP = avctx->qmax; + ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE; + } + + ctx->rc.ConfigParams.pConfiguration_QVBR = qvbr_ctl; + break; + + default: + break; + } + return 0; +} + +static int d3d12va_encode_init_gop_structure(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + uint32_t ref_l0, ref_l1; + int err; + HRESULT hr; + D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT support; + union { + D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_H264 h264; + D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_HEVC hevc; + } codec_support; + + support.NodeIndex = 0; + support.Codec = ctx->codec->d3d12_codec; + support.Profile = ctx->profile->d3d12_profile; + + switch (ctx->codec->d3d12_codec) { + case D3D12_VIDEO_ENCODER_CODEC_H264: + support.PictureSupport.DataSize = sizeof(codec_support.h264); + support.PictureSupport.pH264Support = &codec_support.h264; + break; + + case D3D12_VIDEO_ENCODER_CODEC_HEVC: + support.PictureSupport.DataSize = sizeof(codec_support.hevc); + support.PictureSupport.pHEVCSupport = &codec_support.hevc; + break; + + default: + av_assert0(0); + } + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT, + &support, sizeof(support)); + if (FAILED(hr)) + return AVERROR(EINVAL); + + if (support.IsSupported) { + switch (ctx->codec->d3d12_codec) { + case D3D12_VIDEO_ENCODER_CODEC_H264: + ref_l0 = 
FFMIN(support.PictureSupport.pH264Support->MaxL0ReferencesForP, + support.PictureSupport.pH264Support->MaxL1ReferencesForB); + ref_l1 = support.PictureSupport.pH264Support->MaxL1ReferencesForB; + break; + + case D3D12_VIDEO_ENCODER_CODEC_HEVC: + ref_l0 = FFMIN(support.PictureSupport.pHEVCSupport->MaxL0ReferencesForP, + support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB); + ref_l1 = support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB; + break; + + default: + av_assert0(0); + } + } else { + ref_l0 = ref_l1 = 0; + } + + if (ref_l0 > 0 && ref_l1 > 0 && ctx->bi_not_empty) { + base_ctx->p_to_gpb = 1; + av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, " + "replacing them with B-frames.\n"); + } + + err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, 0); + if (err < 0) + return err; + + return 0; +} + +static int d3d12va_create_encoder(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx; + HRESULT hr; + + D3D12_VIDEO_ENCODER_DESC desc = { + .NodeMask = 0, + .Flags = D3D12_VIDEO_ENCODER_FLAG_NONE, + .EncodeCodec = ctx->codec->d3d12_codec, + .EncodeProfile = ctx->profile->d3d12_profile, + .InputFormat = frames_hwctx->format, + .CodecConfiguration = ctx->codec_conf, + .MaxMotionEstimationPrecision = D3D12_VIDEO_ENCODER_MOTION_ESTIMATION_PRECISION_MODE_MAXIMUM, + }; + + hr = ID3D12VideoDevice3_CreateVideoEncoder(ctx->video_device3, &desc, &IID_ID3D12VideoEncoder, + (void **)&ctx->encoder); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create encoder.\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static int d3d12va_create_encoder_heap(AVCodecContext* avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + HRESULT hr; + + D3D12_VIDEO_ENCODER_HEAP_DESC desc = { + .NodeMask = 0, + .Flags = D3D12_VIDEO_ENCODER_FLAG_NONE, + .EncodeCodec = ctx->codec->d3d12_codec, + .EncodeProfile = ctx->profile->d3d12_profile, + .EncodeLevel = ctx->level, + .ResolutionsListCount = 1, + .pResolutionList = &ctx->resolution, + }; + + hr = ID3D12VideoDevice3_CreateVideoEncoderHeap(ctx->video_device3, &desc, + &IID_ID3D12VideoEncoderHeap, (void **)&ctx->encoder_heap); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create encoder heap.\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static void d3d12va_encode_free_buffer(void *opaque, uint8_t *data) +{ + ID3D12Resource *pResource; + + pResource = (ID3D12Resource *)data; + D3D12_OBJECT_RELEASE(pResource); +} + +static AVBufferRef *d3d12va_encode_alloc_output_buffer(void *opaque, size_t size) +{ + AVCodecContext *avctx = opaque; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + ID3D12Resource *pResource = NULL; + HRESULT hr; + AVBufferRef *ref; + D3D12_HEAP_PROPERTIES heap_props; + D3D12_HEAP_TYPE heap_type = D3D12_HEAP_TYPE_READBACK; + + D3D12_RESOURCE_DESC desc = { + .Dimension = D3D12_RESOURCE_DIMENSION_BUFFER, + .Alignment = 0, + .Width = FFALIGN(3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16), + D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT), + .Height = 1, + .DepthOrArraySize = 1, + .MipLevels = 1, + .Format = DXGI_FORMAT_UNKNOWN, + .SampleDesc = { .Count = 1, .Quality = 0 }, + .Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR, + .Flags = D3D12_RESOURCE_FLAG_NONE, + }; + + ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &heap_props, 0, heap_type); + + hr = 
ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &heap_props, D3D12_HEAP_FLAG_NONE, + &desc, D3D12_RESOURCE_STATE_COMMON, NULL, &IID_ID3D12Resource, + (void **)&pResource); + + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create d3d12 buffer.\n"); + return NULL; + } + + ref = av_buffer_create((uint8_t *)(uintptr_t)pResource, + sizeof(pResource), + &d3d12va_encode_free_buffer, + avctx, AV_BUFFER_FLAG_READONLY); + if (!ref) { + D3D12_OBJECT_RELEASE(pResource); + return NULL; + } + + return ref; +} + +static int d3d12va_encode_prepare_output_buffers(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + AVD3D12VAFramesContext *frames_ctx = base_ctx->input_frames->hwctx; + HRESULT hr; + + ctx->req.NodeIndex = 0; + ctx->req.Codec = ctx->codec->d3d12_codec; + ctx->req.Profile = ctx->profile->d3d12_profile; + ctx->req.InputFormat = frames_ctx->format; + ctx->req.PictureTargetResolution = ctx->resolution; + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, + D3D12_FEATURE_VIDEO_ENCODER_RESOURCE_REQUIREMENTS, + &ctx->req, sizeof(ctx->req)); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to check encoder resource requirements support.\n"); + return AVERROR(EINVAL); + } + + if (!ctx->req.IsSupported) { + av_log(avctx, AV_LOG_ERROR, "Encoder resource requirements unsupported.\n"); + return AVERROR(EINVAL); + } + + ctx->output_buffer_pool = av_buffer_pool_init2(sizeof(ID3D12Resource *), avctx, + &d3d12va_encode_alloc_output_buffer, NULL); + if (!ctx->output_buffer_pool) + return AVERROR(ENOMEM); + + return 0; +} + +static int d3d12va_encode_create_command_objects(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + ID3D12CommandAllocator *command_allocator = NULL; + int err; + HRESULT hr; + + D3D12_COMMAND_QUEUE_DESC queue_desc = { + .Type = D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE, + .Priority = 0, + .Flags = D3D12_COMMAND_QUEUE_FLAG_NONE, + .NodeMask = 0, + }; + + ctx->allocator_queue = av_fifo_alloc2(D3D12VA_VIDEO_ENC_ASYNC_DEPTH, + sizeof(CommandAllocator), AV_FIFO_FLAG_AUTO_GROW); + if (!ctx->allocator_queue) + return AVERROR(ENOMEM); + + hr = ID3D12Device_CreateFence(ctx->hwctx->device, 0, D3D12_FENCE_FLAG_NONE, + &IID_ID3D12Fence, (void **)&ctx->sync_ctx.fence); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create fence(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + ctx->sync_ctx.event = CreateEvent(NULL, FALSE, FALSE, NULL); + if (!ctx->sync_ctx.event) + goto fail; + + err = d3d12va_get_valid_command_allocator(avctx, &command_allocator); + if (err < 0) + goto fail; + + hr = ID3D12Device_CreateCommandQueue(ctx->hwctx->device, &queue_desc, + &IID_ID3D12CommandQueue, (void **)&ctx->command_queue); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create command queue(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12Device_CreateCommandList(ctx->hwctx->device, 0, queue_desc.Type, + command_allocator, NULL, &IID_ID3D12CommandList, + (void **)&ctx->command_list); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to create command list(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12VideoEncodeCommandList2_Close(ctx->command_list); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to close the command list(%lx)\n", (long)hr); + err = AVERROR_UNKNOWN; + goto fail; + } + + ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, 
(ID3D12CommandList **)&ctx->command_list); + + err = d3d12va_sync_with_gpu(avctx); + if (err < 0) + goto fail; + + err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value); + if (err < 0) + goto fail; + + return 0; + +fail: + D3D12_OBJECT_RELEASE(command_allocator); + return err; +} + +static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + AVD3D12VAFramesContext *hwctx; + enum AVPixelFormat recon_format; + int err; + + err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format); + if (err < 0) + return err; + + base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref); + if (!base_ctx->recon_frames_ref) + return AVERROR(ENOMEM); + + base_ctx->recon_frames = (AVHWFramesContext *)base_ctx->recon_frames_ref->data; + hwctx = (AVD3D12VAFramesContext *)base_ctx->recon_frames->hwctx; + + base_ctx->recon_frames->format = AV_PIX_FMT_D3D12; + base_ctx->recon_frames->sw_format = recon_format; + base_ctx->recon_frames->width = base_ctx->surface_width; + base_ctx->recon_frames->height = base_ctx->surface_height; + + hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY | + D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE; + + err = av_hwframe_ctx_init(base_ctx->recon_frames_ref); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed " + "frame context: %d.\n", err); + return err; + } + + return 0; +} + +static const HWEncodePictureOperation d3d12va_type = { + .alloc = &d3d12va_encode_alloc, + + .issue = &d3d12va_encode_issue, + + .output = &d3d12va_encode_output, + + .free = &d3d12va_encode_free, +}; + +int ff_d3d12va_encode_init(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12_FEATURE_DATA_VIDEO_FEATURE_AREA_SUPPORT support = { 0 }; + int err; + HRESULT hr; + + err = ff_hw_base_encode_init(avctx); + if (err < 0) + goto fail; + + base_ctx->op = &d3d12va_type; + + ctx->hwctx = base_ctx->device->hwctx; + + ctx->resolution.Width = base_ctx->input_frames->width; + ctx->resolution.Height = base_ctx->input_frames->height; + + hr = ID3D12Device_QueryInterface(ctx->hwctx->device, &IID_ID3D12Device3, (void **)&ctx->device3); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "ID3D12Device3 interface is not supported.\n"); + err = AVERROR_UNKNOWN; + goto fail; + } + + hr = ID3D12Device3_QueryInterface(ctx->device3, &IID_ID3D12VideoDevice3, (void **)&ctx->video_device3); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "ID3D12VideoDevice3 interface is not supported.\n"); + err = AVERROR_UNKNOWN; + goto fail; + } + + if (FAILED(ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_FEATURE_AREA_SUPPORT, + &support, sizeof(support))) && !support.VideoEncodeSupport) { + av_log(avctx, AV_LOG_ERROR, "D3D12 video device has no video encoder support.\n"); + err = AVERROR(EINVAL); + goto fail; + } + + err = d3d12va_encode_set_profile(avctx); + if (err < 0) + goto fail; + + if (ctx->codec->get_encoder_caps) { + err = ctx->codec->get_encoder_caps(avctx); + if (err < 0) + goto fail; + } + + err = d3d12va_encode_init_rate_control(avctx); + if (err < 0) + goto fail; + + err = d3d12va_encode_init_gop_structure(avctx); + if (err < 0) + goto fail; + + if (!(ctx->codec->flags & FLAG_SLICE_CONTROL) && avctx->slices > 0) { + av_log(avctx, AV_LOG_WARNING, "Multiple slices were requested " + "but this codec does not support controlling slices.\n"); + } + + err = 
d3d12va_encode_create_command_objects(avctx); + if (err < 0) + goto fail; + + err = d3d12va_encode_create_recon_frames(avctx); + if (err < 0) + goto fail; + + err = d3d12va_encode_prepare_output_buffers(avctx); + if (err < 0) + goto fail; + + if (ctx->codec->configure) { + err = ctx->codec->configure(avctx); + if (err < 0) + goto fail; + } + + if (ctx->codec->init_sequence_params) { + err = ctx->codec->init_sequence_params(avctx); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Codec sequence initialisation " + "failed: %d.\n", err); + goto fail; + } + } + + if (ctx->codec->set_level) { + err = ctx->codec->set_level(avctx); + if (err < 0) + goto fail; + } + + base_ctx->output_delay = base_ctx->b_per_p; + base_ctx->decode_delay = base_ctx->max_b_depth; + + err = d3d12va_create_encoder(avctx); + if (err < 0) + goto fail; + + err = d3d12va_create_encoder_heap(avctx); + if (err < 0) + goto fail; + + base_ctx->async_encode = 1; + base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth, + sizeof(D3D12VAEncodePicture *), 0); + if (!base_ctx->encode_fifo) + return AVERROR(ENOMEM); + + return 0; + +fail: + return err; +} + +int ff_d3d12va_encode_close(AVCodecContext *avctx) +{ + int num_allocator = 0; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + HWBaseEncodePicture *pic, *next; + CommandAllocator allocator; + + if (!base_ctx->frame) + return 0; + + for (pic = base_ctx->pic_start; pic; pic = next) { + next = pic->next; + d3d12va_encode_free(avctx, pic); + } + + d3d12va_encode_free_rc_params(avctx); + + av_buffer_pool_uninit(&ctx->output_buffer_pool); + + D3D12_OBJECT_RELEASE(ctx->command_list); + D3D12_OBJECT_RELEASE(ctx->command_queue); + + if (ctx->allocator_queue) { + while (av_fifo_read(ctx->allocator_queue, &allocator, 1) >= 0) { + num_allocator++; + D3D12_OBJECT_RELEASE(allocator.command_allocator); + } + + av_log(avctx, AV_LOG_VERBOSE, "Total number of command allocators reused: %d\n", num_allocator); + } + + av_fifo_freep2(&ctx->allocator_queue); + + D3D12_OBJECT_RELEASE(ctx->sync_ctx.fence); + if (ctx->sync_ctx.event) + CloseHandle(ctx->sync_ctx.event); + + D3D12_OBJECT_RELEASE(ctx->encoder_heap); + D3D12_OBJECT_RELEASE(ctx->encoder); + D3D12_OBJECT_RELEASE(ctx->video_device3); + D3D12_OBJECT_RELEASE(ctx->device3); + + ff_hw_base_encode_close(avctx); + + return 0; +} diff --git a/libavcodec/d3d12va_encode.h b/libavcodec/d3d12va_encode.h new file mode 100644 index 0000000000..03cc5ef221 --- /dev/null +++ b/libavcodec/d3d12va_encode.h @@ -0,0 +1,339 @@ +/* + * Direct3D 12 HW acceleration video encoder + * + * Copyright (c) 2024 Intel Corporation + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef AVCODEC_D3D12VA_ENCODE_H +#define AVCODEC_D3D12VA_ENCODE_H + +#include "libavutil/fifo.h" +#include "libavutil/hwcontext.h" +#include "libavutil/hwcontext_d3d12va_internal.h" +#include "libavutil/hwcontext_d3d12va.h" +#include "avcodec.h" +#include "internal.h" +#include "hwconfig.h" +#include "hw_base_encode.h" + +struct D3D12VAEncodeType; + +extern const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[]; + +#define MAX_PARAM_BUFFER_SIZE 4096 +#define D3D12VA_VIDEO_ENC_ASYNC_DEPTH 8 + +typedef struct D3D12VAEncodePicture { + HWBaseEncodePicture base; + + int header_size; + + AVD3D12VAFrame *input_surface; + AVD3D12VAFrame *recon_surface; + + AVBufferRef *output_buffer_ref; + ID3D12Resource *output_buffer; + + ID3D12Resource *encoded_metadata; + ID3D12Resource *resolved_metadata; + + D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA pic_ctl; + + int fence_value; +} D3D12VAEncodePicture; + +typedef struct D3D12VAEncodeProfile { + /** + * lavc profile value (AV_PROFILE_*). + */ + int av_profile; + + /** + * Supported bit depth. + */ + int depth; + + /** + * Number of components. + */ + int nb_components; + + /** + * Chroma subsampling in width dimension. + */ + int log2_chroma_w; + + /** + * Chroma subsampling in height dimension. + */ + int log2_chroma_h; + + /** + * D3D12 profile value. + */ + D3D12_VIDEO_ENCODER_PROFILE_DESC d3d12_profile; +} D3D12VAEncodeProfile; + +enum { + RC_MODE_AUTO, + RC_MODE_CQP, + RC_MODE_CBR, + RC_MODE_VBR, + RC_MODE_QVBR, + RC_MODE_MAX = RC_MODE_QVBR, +}; + + +typedef struct D3D12VAEncodeRCMode { + /** + * Mode from above enum (RC_MODE_*). + */ + int mode; + + /** + * Name. + * + */ + const char *name; + + /** + * Uses bitrate parameters. + * + */ + int bitrate; + + /** + * Supports maxrate distinct from bitrate. + * + */ + int maxrate; + + /** + * Uses quality value. + * + */ + int quality; + + /** + * Supports HRD/VBV parameters. + * + */ + int hrd; + + /** + * Supported by D3D12 HW. + */ + int supported; + + /** + * D3D12 mode value. + */ + D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_mode; +} D3D12VAEncodeRCMode; + +typedef struct D3D12VAEncodeContext { + HWBaseEncodeContext base; + + /** + * Codec-specific hooks. + */ + const struct D3D12VAEncodeType *codec; + + /** + * Explicitly set RC mode (otherwise attempt to pick from + * available modes). + */ + int explicit_rc_mode; + + /** + * Explicitly-set QP, for use with the "qp" options. + * (Forces CQP mode when set, overriding everything else.) + */ + int explicit_qp; + + /** + * RC quality level - meaning depends on codec and RC mode. + * In CQP mode this sets the fixed quantiser value. + */ + int rc_quality; + + /** + * Chosen encoding profile details. + */ + const D3D12VAEncodeProfile *profile; + + AVD3D12VADeviceContext *hwctx; + + /** + * ID3D12Device3 interface. + */ + ID3D12Device3 *device3; + + /** + * ID3D12VideoDevice3 interface. + */ + ID3D12VideoDevice3 *video_device3; + + /** + * Pool of (reusable) bitstream output buffers. + */ + AVBufferPool *output_buffer_pool; + + /** + * D3D12 video encoder. + */ + AVBufferRef *encoder_ref; + + ID3D12VideoEncoder *encoder; + + /** + * D3D12 video encoder heap. + */ + ID3D12VideoEncoderHeap *encoder_heap; + + /** + * A cached queue for reusing the D3D12 command allocators. 
+ * + * @see https://learn.microsoft.com/en-us/windows/win32/direct3d12/recording-command-lists-and-bundles#id3d12commandallocator + */ + AVFifo *allocator_queue; + + /** + * D3D12 command queue. + */ + ID3D12CommandQueue *command_queue; + + /** + * D3D12 video encode command list. + */ + ID3D12VideoEncodeCommandList2 *command_list; + + /** + * The sync context used to sync command queue. + */ + AVD3D12VASyncContext sync_ctx; + + /** + * The bi_not_empty feature. + */ + int bi_not_empty; + + /** + * D3D12_FEATURE structures. + */ + D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req; + + D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOLUTION_SUPPORT_LIMITS res_limits; + + /** + * D3D12_VIDEO_ENCODER structures. + */ + D3D12_VIDEO_ENCODER_PICTURE_RESOLUTION_DESC resolution; + + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION codec_conf; + + D3D12_VIDEO_ENCODER_RATE_CONTROL rc; + + D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE gop; + + D3D12_VIDEO_ENCODER_LEVEL_SETTING level; +} D3D12VAEncodeContext; + +typedef struct D3D12VAEncodeType { + /** + * List of supported profiles. + */ + const D3D12VAEncodeProfile *profiles; + + /** + * D3D12 codec name. + */ + D3D12_VIDEO_ENCODER_CODEC d3d12_codec; + + /** + * Codec feature flags. + */ + int flags; + + /** + * Default quality for this codec - used as quantiser or RC quality + * factor depending on RC mode. + */ + int default_quality; + + /** + * Query codec configuration and determine encode parameters like + * block sizes for surface alignment and slices. If not set, assume + * that all blocks are 16x16 and that surfaces should be aligned to match + * this. + */ + int (*get_encoder_caps)(AVCodecContext *avctx); + + /** + * Perform any extra codec-specific configuration. + */ + int (*configure)(AVCodecContext *avctx); + + /** + * Set codec-specific level setting. + */ + int (*set_level)(AVCodecContext *avctx); + + /** + * The size of any private data structure associated with each + * picture (can be zero if not required). + */ + size_t picture_priv_data_size; + + /** + * Fill the corresponding parameters. + */ + int (*init_sequence_params)(AVCodecContext *avctx); + + int (*init_picture_params)(AVCodecContext *avctx, + D3D12VAEncodePicture *pic); + + void (*free_picture_params)(D3D12VAEncodePicture *pic); + + /** + * Write the packed header data to the provided buffer. 
+ */ + int (*write_sequence_header)(AVCodecContext *avctx, + char *data, size_t *data_len); +} D3D12VAEncodeType; + +int ff_d3d12va_encode_init(AVCodecContext *avctx); +int ff_d3d12va_encode_close(AVCodecContext *avctx); + +#define D3D12VA_ENCODE_RC_MODE(name, desc) \ + { #name, desc, 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_ ## name }, \ + 0, 0, FLAGS, .unit = "rc_mode" } +#define D3D12VA_ENCODE_RC_OPTIONS \ + { "rc_mode",\ + "Set rate control mode", \ + OFFSET(common.explicit_rc_mode), AV_OPT_TYPE_INT, \ + { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit = "rc_mode" }, \ + { "auto", "Choose mode automatically based on other parameters", \ + 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \ + D3D12VA_ENCODE_RC_MODE(CQP, "Constant-quality"), \ + D3D12VA_ENCODE_RC_MODE(CBR, "Constant-bitrate"), \ + D3D12VA_ENCODE_RC_MODE(VBR, "Variable-bitrate"), \ + D3D12VA_ENCODE_RC_MODE(QVBR, "Quality-defined variable-bitrate") + +#endif /* AVCODEC_D3D12VA_ENCODE_H */ diff --git a/libavcodec/d3d12va_encode_hevc.c b/libavcodec/d3d12va_encode_hevc.c new file mode 100644 index 0000000000..eaea76ba0d --- /dev/null +++ b/libavcodec/d3d12va_encode_hevc.c @@ -0,0 +1,1007 @@ +/* + * Direct3D 12 HW acceleration video encoder + * + * Copyright (c) 2024 Intel Corporation + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ +#include "libavutil/opt.h" +#include "libavutil/common.h" +#include "libavutil/mem.h" +#include "libavutil/pixdesc.h" +#include "libavutil/hwcontext_d3d12va_internal.h" + +#include "avcodec.h" +#include "cbs.h" +#include "cbs_h265.h" +#include "h2645data.h" +#include "h265_profile_level.h" +#include "codec_internal.h" +#include "d3d12va_encode.h" + +typedef struct D3D12VAEncodeHEVCPicture { + int pic_order_cnt; + int64_t last_idr_frame; +} D3D12VAEncodeHEVCPicture; + +typedef struct D3D12VAEncodeHEVCContext { + D3D12VAEncodeContext common; + + // User options. + int qp; + int profile; + int tier; + int level; + + // Writer structures. 
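// The raw VPS/SPS/PPS below are filled by init_sequence_params() and then
// serialised through cbs into the packed sequence header emitted by
// write_sequence_header().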
+ H265RawVPS raw_vps; + H265RawSPS raw_sps; + H265RawPPS raw_pps; + + CodedBitstreamContext *cbc; + CodedBitstreamFragment current_access_unit; +} D3D12VAEncodeHEVCContext; + +typedef struct D3D12VAEncodeHEVCLevel { + int level; + D3D12_VIDEO_ENCODER_LEVELS_HEVC d3d12_level; +} D3D12VAEncodeHEVCLevel; + +static const D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_config_support_sets[] = +{ + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 3, + 3, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 0, + 0, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 2, + 2, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 2, + 2, + }, + { + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4, + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32, + 4, + 4, + }, +}; + +static const D3D12VAEncodeHEVCLevel hevc_levels[] = { + { 30, D3D12_VIDEO_ENCODER_LEVELS_HEVC_1 }, + { 60, D3D12_VIDEO_ENCODER_LEVELS_HEVC_2 }, + { 63, D3D12_VIDEO_ENCODER_LEVELS_HEVC_21 }, + { 90, D3D12_VIDEO_ENCODER_LEVELS_HEVC_3 }, + { 93, D3D12_VIDEO_ENCODER_LEVELS_HEVC_31 }, + { 120, D3D12_VIDEO_ENCODER_LEVELS_HEVC_4 }, + { 123, D3D12_VIDEO_ENCODER_LEVELS_HEVC_41 }, + { 150, D3D12_VIDEO_ENCODER_LEVELS_HEVC_5 }, + { 153, D3D12_VIDEO_ENCODER_LEVELS_HEVC_51 }, + { 156, D3D12_VIDEO_ENCODER_LEVELS_HEVC_52 }, + { 180, D3D12_VIDEO_ENCODER_LEVELS_HEVC_6 }, + { 183, D3D12_VIDEO_ENCODER_LEVELS_HEVC_61 }, + { 186, D3D12_VIDEO_ENCODER_LEVELS_HEVC_62 }, +}; + +static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN; +static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main10 = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN10; + +#define D3D_PROFILE_DESC(name) \ + { sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), { .pHEVCProfile = (D3D12_VIDEO_ENCODER_PROFILE_HEVC *)&profile_ ## name } } +static const D3D12VAEncodeProfile d3d12va_encode_hevc_profiles[] = { + { AV_PROFILE_HEVC_MAIN, 8, 3, 1, 1, D3D_PROFILE_DESC(main) }, + { AV_PROFILE_HEVC_MAIN_10, 10, 3, 1, 1, D3D_PROFILE_DESC(main10) }, + { AV_PROFILE_UNKNOWN }, +}; + +static uint8_t d3d12va_encode_hevc_map_cusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE cusize) +{ + switch (cusize) { + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8: return 8; + case 
D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_16x16: return 16; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32: return 32; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64: return 64; + default: av_assert0(0); + } + return 0; +} + +static uint8_t d3d12va_encode_hevc_map_tusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE tusize) +{ + switch (tusize) { + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4: return 4; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_8x8: return 8; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_16x16: return 16; + case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32: return 32; + default: av_assert0(0); + } + return 0; +} + +static int d3d12va_encode_hevc_write_access_unit(AVCodecContext *avctx, + char *data, size_t *data_len, + CodedBitstreamFragment *au) +{ + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + int err; + + err = ff_cbs_write_fragment_data(priv->cbc, au); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to write packed header.\n"); + return err; + } + + if (*data_len < 8 * au->data_size - au->data_bit_padding) { + av_log(avctx, AV_LOG_ERROR, "Access unit too large: " + "%zu < %zu.\n", *data_len, + 8 * au->data_size - au->data_bit_padding); + return AVERROR(ENOSPC); + } + + memcpy(data, au->data, au->data_size); + *data_len = 8 * au->data_size - au->data_bit_padding; + + return 0; +} + +static int d3d12va_encode_hevc_add_nal(AVCodecContext *avctx, + CodedBitstreamFragment *au, + void *nal_unit) +{ + H265RawNALUnitHeader *header = nal_unit; + int err; + + err = ff_cbs_insert_unit_content(au, -1, + header->nal_unit_type, nal_unit, NULL); + if (err < 0) { + av_log(avctx, AV_LOG_ERROR, "Failed to add NAL unit: " + "type = %d.\n", header->nal_unit_type); + return err; + } + + return 0; +} + +static int d3d12va_encode_hevc_write_sequence_header(AVCodecContext *avctx, + char *data, size_t *data_len) +{ + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + CodedBitstreamFragment *au = &priv->current_access_unit; + int err; + + err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_vps); + if (err < 0) + goto fail; + + err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_sps); + if (err < 0) + goto fail; + + err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_pps); + if (err < 0) + goto fail; + + err = d3d12va_encode_hevc_write_access_unit(avctx, data, data_len, au); +fail: + ff_cbs_fragment_reset(au); + return err; + +} + +static int d3d12va_encode_hevc_init_sequence_params(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + AVD3D12VAFramesContext *hwctx = base_ctx->input_frames->hwctx; + H265RawVPS *vps = &priv->raw_vps; + H265RawSPS *sps = &priv->raw_sps; + H265RawPPS *pps = &priv->raw_pps; + H265RawProfileTierLevel *ptl = &vps->profile_tier_level; + H265RawVUI *vui = &sps->vui; + D3D12_VIDEO_ENCODER_PROFILE_HEVC profile = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN; + D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC level = { 0 }; + const AVPixFmtDescriptor *desc; + uint8_t min_cu_size, max_cu_size, min_tu_size, max_tu_size; + int chroma_format, bit_depth; + HRESULT hr; + int i; + + D3D12_FEATURE_DATA_VIDEO_ENCODER_SUPPORT support = { + .NodeIndex = 0, + .Codec = D3D12_VIDEO_ENCODER_CODEC_HEVC, + .InputFormat = hwctx->format, + .RateControl = ctx->rc, + .IntraRefresh = D3D12_VIDEO_ENCODER_INTRA_REFRESH_MODE_NONE, + 
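// Full-frame subregion layout is used for the support query, since explicit
// slice control is not exposed here (FLAG_SLICE_CONTROL is not set for HEVC).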
.SubregionFrameEncoding = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME, + .ResolutionsListCount = 1, + .pResolutionList = &ctx->resolution, + .CodecGopSequence = ctx->gop, + .MaxReferenceFramesInDPB = MAX_DPB_SIZE - 1, + .CodecConfiguration = ctx->codec_conf, + .SuggestedProfile.DataSize = sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), + .SuggestedProfile.pHEVCProfile = &profile, + .SuggestedLevel.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC), + .SuggestedLevel.pHEVCLevelSetting = &level, + .pResolutionDependentSupport = &ctx->res_limits, + }; + + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_SUPPORT, + &support, sizeof(support)); + + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "Failed to check encoder support(%lx).\n", (long)hr); + return AVERROR(EINVAL); + } + + if (!(support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_GENERAL_SUPPORT_OK)) { + av_log(avctx, AV_LOG_ERROR, "Driver does not support some request features. %#x\n", + support.ValidationFlags); + return AVERROR(EINVAL); + } + + if (support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_RECONSTRUCTED_FRAMES_REQUIRE_TEXTURE_ARRAYS) { + av_log(avctx, AV_LOG_ERROR, "D3D12 video encode on this device requires texture array support, " + "but it's not implemented.\n"); + return AVERROR_PATCHWELCOME; + } + + memset(vps, 0, sizeof(*vps)); + memset(sps, 0, sizeof(*sps)); + memset(pps, 0, sizeof(*pps)); + + desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format); + av_assert0(desc); + if (desc->nb_components == 1) { + chroma_format = 0; + } else { + if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1) { + chroma_format = 1; + } else if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 0) { + chroma_format = 2; + } else if (desc->log2_chroma_w == 0 && desc->log2_chroma_h == 0) { + chroma_format = 3; + } else { + av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format " + "%s is not supported.\n", desc->name); + return AVERROR(EINVAL); + } + } + bit_depth = desc->comp[0].depth; + + min_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MinLumaCodingUnitSize); + max_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MaxLumaCodingUnitSize); + min_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MinLumaTransformUnitSize); + max_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MaxLumaTransformUnitSize); + + // VPS + + vps->nal_unit_header = (H265RawNALUnitHeader) { + .nal_unit_type = HEVC_NAL_VPS, + .nuh_layer_id = 0, + .nuh_temporal_id_plus1 = 1, + }; + + vps->vps_video_parameter_set_id = 0; + + vps->vps_base_layer_internal_flag = 1; + vps->vps_base_layer_available_flag = 1; + vps->vps_max_layers_minus1 = 0; + vps->vps_max_sub_layers_minus1 = 0; + vps->vps_temporal_id_nesting_flag = 1; + + ptl->general_profile_space = 0; + ptl->general_profile_idc = avctx->profile; + ptl->general_tier_flag = priv->tier; + + ptl->general_profile_compatibility_flag[ptl->general_profile_idc] = 1; + + ptl->general_progressive_source_flag = 1; + ptl->general_interlaced_source_flag = 0; + ptl->general_non_packed_constraint_flag = 1; + ptl->general_frame_only_constraint_flag = 1; + + ptl->general_max_14bit_constraint_flag = bit_depth <= 14; + ptl->general_max_12bit_constraint_flag = bit_depth <= 12; + ptl->general_max_10bit_constraint_flag = bit_depth <= 10; + ptl->general_max_8bit_constraint_flag = bit_depth == 8; + + ptl->general_max_422chroma_constraint_flag = chroma_format <= 2; + 
ptl->general_max_420chroma_constraint_flag = chroma_format <= 1; + ptl->general_max_monochrome_constraint_flag = chroma_format == 0; + + ptl->general_intra_constraint_flag = base_ctx->gop_size == 1; + ptl->general_one_picture_only_constraint_flag = 0; + + ptl->general_lower_bit_rate_constraint_flag = 1; + + if (avctx->level != FF_LEVEL_UNKNOWN) { + ptl->general_level_idc = avctx->level; + } else { + const H265LevelDescriptor *level; + + level = ff_h265_guess_level(ptl, avctx->bit_rate, + base_ctx->surface_width, base_ctx->surface_height, + 1, 1, 1, (base_ctx->b_per_p > 0) + 1); + if (level) { + av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name); + ptl->general_level_idc = level->level_idc; + } else { + av_log(avctx, AV_LOG_VERBOSE, "Stream will not conform to " + "any normal level; using level 8.5.\n"); + ptl->general_level_idc = 255; + // The tier flag must be set in level 8.5. + ptl->general_tier_flag = 1; + } + avctx->level = ptl->general_level_idc; + } + + vps->vps_sub_layer_ordering_info_present_flag = 0; + vps->vps_max_dec_pic_buffering_minus1[0] = base_ctx->max_b_depth + 1; + vps->vps_max_num_reorder_pics[0] = base_ctx->max_b_depth; + vps->vps_max_latency_increase_plus1[0] = 0; + + vps->vps_max_layer_id = 0; + vps->vps_num_layer_sets_minus1 = 0; + vps->layer_id_included_flag[0][0] = 1; + + vps->vps_timing_info_present_flag = 1; + if (avctx->framerate.num > 0 && avctx->framerate.den > 0) { + vps->vps_num_units_in_tick = avctx->framerate.den; + vps->vps_time_scale = avctx->framerate.num; + vps->vps_poc_proportional_to_timing_flag = 1; + vps->vps_num_ticks_poc_diff_one_minus1 = 0; + } else { + vps->vps_num_units_in_tick = avctx->time_base.num; + vps->vps_time_scale = avctx->time_base.den; + vps->vps_poc_proportional_to_timing_flag = 0; + } + vps->vps_num_hrd_parameters = 0; + + // SPS + + sps->nal_unit_header = (H265RawNALUnitHeader) { + .nal_unit_type = HEVC_NAL_SPS, + .nuh_layer_id = 0, + .nuh_temporal_id_plus1 = 1, + }; + + sps->sps_video_parameter_set_id = vps->vps_video_parameter_set_id; + + sps->sps_max_sub_layers_minus1 = vps->vps_max_sub_layers_minus1; + sps->sps_temporal_id_nesting_flag = vps->vps_temporal_id_nesting_flag; + + sps->profile_tier_level = vps->profile_tier_level; + + sps->sps_seq_parameter_set_id = 0; + + sps->chroma_format_idc = chroma_format; + sps->separate_colour_plane_flag = 0; + + av_assert0(ctx->res_limits.SubregionBlockPixelsSize % min_cu_size == 0); + + sps->pic_width_in_luma_samples = FFALIGN(base_ctx->surface_width, + ctx->res_limits.SubregionBlockPixelsSize); + sps->pic_height_in_luma_samples = FFALIGN(base_ctx->surface_height, + ctx->res_limits.SubregionBlockPixelsSize); + + if (avctx->width != sps->pic_width_in_luma_samples || + avctx->height != sps->pic_height_in_luma_samples) { + sps->conformance_window_flag = 1; + sps->conf_win_left_offset = 0; + sps->conf_win_right_offset = + (sps->pic_width_in_luma_samples - avctx->width) >> desc->log2_chroma_w; + sps->conf_win_top_offset = 0; + sps->conf_win_bottom_offset = + (sps->pic_height_in_luma_samples - avctx->height) >> desc->log2_chroma_h; + } else { + sps->conformance_window_flag = 0; + } + + sps->bit_depth_luma_minus8 = bit_depth - 8; + sps->bit_depth_chroma_minus8 = bit_depth - 8; + + sps->log2_max_pic_order_cnt_lsb_minus4 = ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4; + + sps->sps_sub_layer_ordering_info_present_flag = + vps->vps_sub_layer_ordering_info_present_flag; + for (i = 0; i <= sps->sps_max_sub_layers_minus1; i++) { + 
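// Mirror the VPS sub-layer ordering info (DPB size, reorder depth, latency)
// into the SPS, which carries the same constraints here.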
sps->sps_max_dec_pic_buffering_minus1[i] = + vps->vps_max_dec_pic_buffering_minus1[i]; + sps->sps_max_num_reorder_pics[i] = + vps->vps_max_num_reorder_pics[i]; + sps->sps_max_latency_increase_plus1[i] = + vps->vps_max_latency_increase_plus1[i]; + } + + sps->log2_min_luma_coding_block_size_minus3 = (uint8_t)(av_log2(min_cu_size) - 3); + sps->log2_diff_max_min_luma_coding_block_size = (uint8_t)(av_log2(max_cu_size) - av_log2(min_cu_size)); + sps->log2_min_luma_transform_block_size_minus2 = (uint8_t)(av_log2(min_tu_size) - 2); + sps->log2_diff_max_min_luma_transform_block_size = (uint8_t)(av_log2(max_tu_size) - av_log2(min_tu_size)); + + sps->max_transform_hierarchy_depth_inter = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_inter; + sps->max_transform_hierarchy_depth_intra = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_intra; + + sps->amp_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION); + sps->sample_adaptive_offset_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER); + sps->sps_temporal_mvp_enabled_flag = 0; + sps->pcm_enabled_flag = 0; + + sps->vui_parameters_present_flag = 1; + + if (avctx->sample_aspect_ratio.num != 0 && + avctx->sample_aspect_ratio.den != 0) { + int num, den, i; + av_reduce(&num, &den, avctx->sample_aspect_ratio.num, + avctx->sample_aspect_ratio.den, 65535); + for (i = 0; i < FF_ARRAY_ELEMS(ff_h2645_pixel_aspect); i++) { + if (num == ff_h2645_pixel_aspect[i].num && + den == ff_h2645_pixel_aspect[i].den) { + vui->aspect_ratio_idc = i; + break; + } + } + if (i >= FF_ARRAY_ELEMS(ff_h2645_pixel_aspect)) { + vui->aspect_ratio_idc = 255; + vui->sar_width = num; + vui->sar_height = den; + } + vui->aspect_ratio_info_present_flag = 1; + } + + // Unspecified video format, from table E-2. 
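// (5 is the "unspecified" value; colour_description_present_flag and
// video_signal_type_present_flag are only raised below when real colour
// metadata is available.)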
+ vui->video_format = 5; + vui->video_full_range_flag = + avctx->color_range == AVCOL_RANGE_JPEG; + vui->colour_primaries = avctx->color_primaries; + vui->transfer_characteristics = avctx->color_trc; + vui->matrix_coefficients = avctx->colorspace; + if (avctx->color_primaries != AVCOL_PRI_UNSPECIFIED || + avctx->color_trc != AVCOL_TRC_UNSPECIFIED || + avctx->colorspace != AVCOL_SPC_UNSPECIFIED) + vui->colour_description_present_flag = 1; + if (avctx->color_range != AVCOL_RANGE_UNSPECIFIED || + vui->colour_description_present_flag) + vui->video_signal_type_present_flag = 1; + + if (avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED) { + vui->chroma_loc_info_present_flag = 1; + vui->chroma_sample_loc_type_top_field = + vui->chroma_sample_loc_type_bottom_field = + avctx->chroma_sample_location - 1; + } + + vui->vui_timing_info_present_flag = 1; + vui->vui_num_units_in_tick = vps->vps_num_units_in_tick; + vui->vui_time_scale = vps->vps_time_scale; + vui->vui_poc_proportional_to_timing_flag = vps->vps_poc_proportional_to_timing_flag; + vui->vui_num_ticks_poc_diff_one_minus1 = vps->vps_num_ticks_poc_diff_one_minus1; + vui->vui_hrd_parameters_present_flag = 0; + + vui->bitstream_restriction_flag = 1; + vui->motion_vectors_over_pic_boundaries_flag = 1; + vui->restricted_ref_pic_lists_flag = 1; + vui->max_bytes_per_pic_denom = 0; + vui->max_bits_per_min_cu_denom = 0; + vui->log2_max_mv_length_horizontal = 15; + vui->log2_max_mv_length_vertical = 15; + + // PPS + + pps->nal_unit_header = (H265RawNALUnitHeader) { + .nal_unit_type = HEVC_NAL_PPS, + .nuh_layer_id = 0, + .nuh_temporal_id_plus1 = 1, + }; + + pps->pps_pic_parameter_set_id = 0; + pps->pps_seq_parameter_set_id = sps->sps_seq_parameter_set_id; + + pps->cabac_init_present_flag = 1; + + pps->num_ref_idx_l0_default_active_minus1 = 0; + pps->num_ref_idx_l1_default_active_minus1 = 0; + + pps->init_qp_minus26 = 0; + + pps->transform_skip_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING); + + // cu_qp_delta always required to be 1 in https://github.com/microsoft/DirectX-Specs/blob/master/d3d/D3D12VideoEncoding.md + pps->cu_qp_delta_enabled_flag = 1; + + pps->diff_cu_qp_delta_depth = 0; + + pps->pps_slice_chroma_qp_offsets_present_flag = 1; + + pps->tiles_enabled_flag = 0; // no tiling in D3D12 + + pps->pps_loop_filter_across_slices_enabled_flag = !(ctx->codec_conf.pHEVCConfig->ConfigurationFlags & + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES); + pps->deblocking_filter_control_present_flag = 1; + + return 0; +} + +static int d3d12va_encode_hevc_get_encoder_caps(AVCodecContext *avctx) +{ + int i; + HRESULT hr; + uint8_t min_cu_size, max_cu_size; + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC *config; + D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_caps; + + D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT codec_caps = { + .NodeIndex = 0, + .Codec = D3D12_VIDEO_ENCODER_CODEC_HEVC, + .Profile = ctx->profile->d3d12_profile, + .CodecSupportLimits.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC), + }; + + for (i = 0; i < FF_ARRAY_ELEMS(hevc_config_support_sets); i++) { + hevc_caps = hevc_config_support_sets[i]; + codec_caps.CodecSupportLimits.pHEVCSupport = &hevc_caps; + hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, 
D3D12_FEATURE_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT, + &codec_caps, sizeof(codec_caps)); + if (SUCCEEDED(hr) && codec_caps.IsSupported) + break; + } + + if (i == FF_ARRAY_ELEMS(hevc_config_support_sets)) { + av_log(avctx, AV_LOG_ERROR, "Unsupported codec configuration\n"); + return AVERROR(EINVAL); + } + + ctx->codec_conf.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC); + ctx->codec_conf.pHEVCConfig = av_mallocz(ctx->codec_conf.DataSize); + if (!ctx->codec_conf.pHEVCConfig) + return AVERROR(ENOMEM); + + config = ctx->codec_conf.pHEVCConfig; + + config->ConfigurationFlags = D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_NONE; + config->MinLumaCodingUnitSize = hevc_caps.MinLumaCodingUnitSize; + config->MaxLumaCodingUnitSize = hevc_caps.MaxLumaCodingUnitSize; + config->MinLumaTransformUnitSize = hevc_caps.MinLumaTransformUnitSize; + config->MaxLumaTransformUnitSize = hevc_caps.MaxLumaTransformUnitSize; + config->max_transform_hierarchy_depth_inter = hevc_caps.max_transform_hierarchy_depth_inter; + config->max_transform_hierarchy_depth_intra = hevc_caps.max_transform_hierarchy_depth_intra; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_SUPPORT || + hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_REQUIRED) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_SAO_FILTER_SUPPORT) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_DISABLING_LOOP_FILTER_ACROSS_SLICES_SUPPORT) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_TRANSFORM_SKIP_SUPPORT) + config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING; + + if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_P_FRAMES_IMPLEMENTED_AS_LOW_DELAY_B_FRAMES) + ctx->bi_not_empty = 1; + + // block sizes + min_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MinLumaCodingUnitSize); + max_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MaxLumaCodingUnitSize); + + av_log(avctx, AV_LOG_VERBOSE, "Using CTU size %dx%d, " + "min CB size %dx%d.\n", max_cu_size, max_cu_size, + min_cu_size, min_cu_size); + + base_ctx->surface_width = FFALIGN(avctx->width, min_cu_size); + base_ctx->surface_height = FFALIGN(avctx->height, min_cu_size); + + return 0; +} + +static int d3d12va_encode_hevc_configure(AVCodecContext *avctx) +{ + HWBaseEncodeContext *base_ctx = avctx->priv_data; + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + int fixed_qp_idr, fixed_qp_p, fixed_qp_b; + int err; + + err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx); + if (err < 0) + return err; + + // Rate control + if (ctx->rc.Mode == D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP) { + D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP *cqp_ctl; + fixed_qp_p = av_clip(ctx->rc_quality, 1, 51); + if (avctx->i_quant_factor > 0.0) + fixed_qp_idr = av_clip((avctx->i_quant_factor * fixed_qp_p + + avctx->i_quant_offset) + 0.5, 1, 51); + else + fixed_qp_idr = fixed_qp_p; + if 
(avctx->b_quant_factor > 0.0) + fixed_qp_b = av_clip((avctx->b_quant_factor * fixed_qp_p + + avctx->b_quant_offset) + 0.5, 1, 51); + else + fixed_qp_b = fixed_qp_p; + + av_log(avctx, AV_LOG_DEBUG, "Using fixed QP = " + "%d / %d / %d for IDR- / P- / B-frames.\n", + fixed_qp_idr, fixed_qp_p, fixed_qp_b); + + ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP); + cqp_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize); + if (!cqp_ctl) + return AVERROR(ENOMEM); + + cqp_ctl->ConstantQP_FullIntracodedFrame = fixed_qp_idr; + cqp_ctl->ConstantQP_InterPredictedFrame_PrevRefOnly = fixed_qp_p; + cqp_ctl->ConstantQP_InterPredictedFrame_BiDirectionalRef = fixed_qp_b; + + ctx->rc.ConfigParams.pConfiguration_CQP = cqp_ctl; + } + + // GOP + ctx->gop.DataSize = sizeof(D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE_HEVC); + ctx->gop.pHEVCGroupOfPictures = av_mallocz(ctx->gop.DataSize); + if (!ctx->gop.pHEVCGroupOfPictures) + return AVERROR(ENOMEM); + + ctx->gop.pHEVCGroupOfPictures->GOPLength = base_ctx->gop_size; + ctx->gop.pHEVCGroupOfPictures->PPicturePeriod = base_ctx->b_per_p + 1; + // Power of 2 + if ((base_ctx->gop_size & (base_ctx->gop_size - 1)) == 0) + ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 = + FFMAX(av_log2(base_ctx->gop_size) - 4, 0); + else + ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 = + FFMAX(av_log2(base_ctx->gop_size) - 3, 0); + + return 0; +} + +static int d3d12va_encode_hevc_set_level(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + int i; + + ctx->level.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC); + ctx->level.pHEVCLevelSetting = av_mallocz(ctx->level.DataSize); + if (!ctx->level.pHEVCLevelSetting) + return AVERROR(ENOMEM); + + for (i = 0; i < FF_ARRAY_ELEMS(hevc_levels); i++) { + if (avctx->level == hevc_levels[i].level) { + ctx->level.pHEVCLevelSetting->Level = hevc_levels[i].d3d12_level; + break; + } + } + + if (i == FF_ARRAY_ELEMS(hevc_levels)) { + av_log(avctx, AV_LOG_ERROR, "Invalid level %d.\n", avctx->level); + return AVERROR(EINVAL); + } + + ctx->level.pHEVCLevelSetting->Tier = priv->raw_vps.profile_tier_level.general_tier_flag == 0 ? + D3D12_VIDEO_ENCODER_TIER_HEVC_MAIN : + D3D12_VIDEO_ENCODER_TIER_HEVC_HIGH; + + return 0; +} + +static void d3d12va_encode_hevc_free_picture_params(D3D12VAEncodePicture *pic) +{ + if (!pic->pic_ctl.pHEVCPicData) + return; + + av_freep(&pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames); + av_freep(&pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames); + av_freep(&pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors); + av_freep(&pic->pic_ctl.pHEVCPicData); +} + +static int d3d12va_encode_hevc_init_picture_params(AVCodecContext *avctx, + D3D12VAEncodePicture *pic) +{ + HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic; + D3D12VAEncodeHEVCPicture *hpic = base_pic->priv_data; + HWBaseEncodePicture *prev = base_pic->prev; + D3D12VAEncodeHEVCPicture *hprev = prev ?
prev->priv_data : NULL; + D3D12_VIDEO_ENCODER_REFERENCE_PICTURE_DESCRIPTOR_HEVC *pd = NULL; + UINT *ref_list0 = NULL, *ref_list1 = NULL; + int i, idx = 0; + + pic->pic_ctl.DataSize = sizeof(D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA_HEVC); + pic->pic_ctl.pHEVCPicData = av_mallocz(pic->pic_ctl.DataSize); + if (!pic->pic_ctl.pHEVCPicData) + return AVERROR(ENOMEM); + + if (base_pic->type == PICTURE_TYPE_IDR) { + av_assert0(base_pic->display_order == base_pic->encode_order); + hpic->last_idr_frame = base_pic->display_order; + } else { + av_assert0(prev); + hpic->last_idr_frame = hprev->last_idr_frame; + } + hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame; + + switch(base_pic->type) { + case PICTURE_TYPE_IDR: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_IDR_FRAME; + break; + case PICTURE_TYPE_I: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_I_FRAME; + break; + case PICTURE_TYPE_P: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_P_FRAME; + break; + case PICTURE_TYPE_B: + pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_B_FRAME; + break; + default: + av_assert0(0 && "invalid picture type"); + } + + pic->pic_ctl.pHEVCPicData->slice_pic_parameter_set_id = 0; + pic->pic_ctl.pHEVCPicData->PictureOrderCountNumber = hpic->pic_order_cnt; + + if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) { + pd = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*pd)); + if (!pd) + return AVERROR(ENOMEM); + + ref_list0 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list0)); + if (!ref_list0) + return AVERROR(ENOMEM); + + pic->pic_ctl.pHEVCPicData->List0ReferenceFramesCount = base_pic->nb_refs[0]; + for (i = 0; i < base_pic->nb_refs[0]; i++) { + HWBaseEncodePicture *ref = base_pic->refs[0][i]; + D3D12VAEncodeHEVCPicture *href; + + av_assert0(ref && ref->encode_order < base_pic->encode_order); + href = ref->priv_data; + + ref_list0[i] = idx; + pd[idx].ReconstructedPictureResourceIndex = idx; + pd[idx].IsRefUsedByCurrentPic = TRUE; + pd[idx].PictureOrderCountNumber = href->pic_order_cnt; + idx++; + } + } + + if (base_pic->type == PICTURE_TYPE_B) { + ref_list1 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list1)); + if (!ref_list1) + return AVERROR(ENOMEM); + + pic->pic_ctl.pHEVCPicData->List1ReferenceFramesCount = base_pic->nb_refs[1]; + for (i = 0; i < base_pic->nb_refs[1]; i++) { + HWBaseEncodePicture *ref = base_pic->refs[1][i]; + D3D12VAEncodeHEVCPicture *href; + + av_assert0(ref && ref->encode_order < base_pic->encode_order); + href = ref->priv_data; + + ref_list1[i] = idx; + pd[idx].ReconstructedPictureResourceIndex = idx; + pd[idx].IsRefUsedByCurrentPic = TRUE; + pd[idx].PictureOrderCountNumber = href->pic_order_cnt; + idx++; + } + } + + pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames = ref_list0; + pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames = ref_list1; + pic->pic_ctl.pHEVCPicData->ReferenceFramesReconPictureDescriptorsCount = idx; + pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors = pd; + + return 0; +} + +static const D3D12VAEncodeType d3d12va_encode_type_hevc = { + .profiles = d3d12va_encode_hevc_profiles, + + .d3d12_codec = D3D12_VIDEO_ENCODER_CODEC_HEVC, + + .flags = FLAG_B_PICTURES | + FLAG_B_PICTURE_REFERENCES | + FLAG_NON_IDR_KEY_PICTURES, + + .default_quality = 25, + + .get_encoder_caps = &d3d12va_encode_hevc_get_encoder_caps, + + .configure = &d3d12va_encode_hevc_configure, + + .set_level = 
&d3d12va_encode_hevc_set_level, + + .picture_priv_data_size = sizeof(D3D12VAEncodeHEVCPicture), + + .init_sequence_params = &d3d12va_encode_hevc_init_sequence_params, + + .init_picture_params = &d3d12va_encode_hevc_init_picture_params, + + .free_picture_params = &d3d12va_encode_hevc_free_picture_params, + + .write_sequence_header = &d3d12va_encode_hevc_write_sequence_header, +}; + +static int d3d12va_encode_hevc_init(AVCodecContext *avctx) +{ + D3D12VAEncodeContext *ctx = avctx->priv_data; + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + + ctx->codec = &d3d12va_encode_type_hevc; + + if (avctx->profile == AV_PROFILE_UNKNOWN) + avctx->profile = priv->profile; + if (avctx->level == FF_LEVEL_UNKNOWN) + avctx->level = priv->level; + + if (avctx->level != FF_LEVEL_UNKNOWN && avctx->level & ~0xff) { + av_log(avctx, AV_LOG_ERROR, "Invalid level %d: must fit " + "in 8-bit unsigned integer.\n", avctx->level); + return AVERROR(EINVAL); + } + + if (priv->qp > 0) + ctx->explicit_qp = priv->qp; + + return ff_d3d12va_encode_init(avctx); +} + +static int d3d12va_encode_hevc_close(AVCodecContext *avctx) +{ + D3D12VAEncodeHEVCContext *priv = avctx->priv_data; + + ff_cbs_fragment_free(&priv->current_access_unit); + ff_cbs_close(&priv->cbc); + + av_freep(&priv->common.codec_conf.pHEVCConfig); + av_freep(&priv->common.gop.pHEVCGroupOfPictures); + av_freep(&priv->common.level.pHEVCLevelSetting); + + return ff_d3d12va_encode_close(avctx); +} + +#define OFFSET(x) offsetof(D3D12VAEncodeHEVCContext, x) +#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) +static const AVOption d3d12va_encode_hevc_options[] = { + HW_BASE_ENCODE_COMMON_OPTIONS, + D3D12VA_ENCODE_RC_OPTIONS, + + { "qp", "Constant QP (for P-frames; scaled by qfactor/qoffset for I/B)", + OFFSET(qp), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 52, FLAGS }, + + { "profile", "Set profile (general_profile_idc)", + OFFSET(profile), AV_OPT_TYPE_INT, + { .i64 = AV_PROFILE_UNKNOWN }, AV_PROFILE_UNKNOWN, 0xff, FLAGS, "profile" }, + +#define PROFILE(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \ + { .i64 = value }, 0, 0, FLAGS, "profile" + { PROFILE("main", AV_PROFILE_HEVC_MAIN) }, + { PROFILE("main10", AV_PROFILE_HEVC_MAIN_10) }, +#undef PROFILE + + { "tier", "Set tier (general_tier_flag)", + OFFSET(tier), AV_OPT_TYPE_INT, + { .i64 = 0 }, 0, 1, FLAGS, "tier" }, + { "main", NULL, 0, AV_OPT_TYPE_CONST, + { .i64 = 0 }, 0, 0, FLAGS, "tier" }, + { "high", NULL, 0, AV_OPT_TYPE_CONST, + { .i64 = 1 }, 0, 0, FLAGS, "tier" }, + + { "level", "Set level (general_level_idc)", + OFFSET(level), AV_OPT_TYPE_INT, + { .i64 = FF_LEVEL_UNKNOWN }, FF_LEVEL_UNKNOWN, 0xff, FLAGS, "level" }, + +#define LEVEL(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \ + { .i64 = value }, 0, 0, FLAGS, "level" + { LEVEL("1", 30) }, + { LEVEL("2", 60) }, + { LEVEL("2.1", 63) }, + { LEVEL("3", 90) }, + { LEVEL("3.1", 93) }, + { LEVEL("4", 120) }, + { LEVEL("4.1", 123) }, + { LEVEL("5", 150) }, + { LEVEL("5.1", 153) }, + { LEVEL("5.2", 156) }, + { LEVEL("6", 180) }, + { LEVEL("6.1", 183) }, + { LEVEL("6.2", 186) }, +#undef LEVEL + + { NULL }, +}; + +static const FFCodecDefault d3d12va_encode_hevc_defaults[] = { + { "b", "0" }, + { "bf", "2" }, + { "g", "120" }, + { "i_qfactor", "1" }, + { "i_qoffset", "0" }, + { "b_qfactor", "1" }, + { "b_qoffset", "0" }, + { "qmin", "-1" }, + { "qmax", "-1" }, + { NULL }, +}; + +static const AVClass d3d12va_encode_hevc_class = { + .class_name = "hevc_d3d12va", + .item_name = av_default_item_name, + .option = d3d12va_encode_hevc_options, + .version = 
+    .version    = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_hevc_d3d12va_encoder = {
+    .p.name         = "hevc_d3d12va",
+    CODEC_LONG_NAME("D3D12VA HEVC encoder"),
+    .p.type         = AVMEDIA_TYPE_VIDEO,
+    .p.id           = AV_CODEC_ID_HEVC,
+    .priv_data_size = sizeof(D3D12VAEncodeHEVCContext),
+    .init           = &d3d12va_encode_hevc_init,
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
+    .close          = &d3d12va_encode_hevc_close,
+    .p.priv_class   = &d3d12va_encode_hevc_class,
+    .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
+                      AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
+    .caps_internal  = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
+                      FF_CODEC_CAP_INIT_CLEANUP,
+    .defaults       = d3d12va_encode_hevc_defaults,
+    .p.pix_fmts     = (const enum AVPixelFormat[]) {
+        AV_PIX_FMT_D3D12,
+        AV_PIX_FMT_NONE,
+    },
+    .hw_configs     = ff_d3d12va_encode_hw_configs,
+    .p.wrapper_name = "d3d12va",
+};
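For reference, a possible command line for the new encoder. This is an untested sketch: it assumes ffmpeg was built on Windows with D3D12VA encode support, that input.mp4 can be decoded by a D3D12VA-capable hardware decoder, and that the file names are placeholders. The qp, profile and tier values map to the AVOptions declared above:

    ffmpeg -hwaccel d3d12va -hwaccel_output_format d3d12 -i input.mp4 \
           -c:v hevc_d3d12va -qp 25 -profile:v main -tier main out.mp4

Software frames should also be usable if they are uploaded to a D3D12 device first (for example with -init_hw_device d3d12va and the hwupload filter) instead of being decoded on the GPU.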
From patchwork Thu Apr 18 08:59:09 2024
From: tong1.wu-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Cc: Tong Wu
Date: Thu, 18 Apr 2024 16:59:09 +0800
Message-ID: <20240418085910.547-15-tong1.wu@intel.com>
In-Reply-To: <20240418085910.547-1-tong1.wu@intel.com>
References: <20240418085910.547-1-tong1.wu@intel.com>
Subject: [FFmpeg-devel] [PATCH v8 15/15] Changelog: add D3D12VA HEVC encoder
 changelog

From: Tong Wu

Signed-off-by: Tong Wu
---
 Changelog | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Changelog b/Changelog
index 8db14f02b4..573e356a96 100644
--- a/Changelog
+++ b/Changelog
@@ -7,7 +7,7 @@ version <next>:
 - ffmpeg CLI filtergraph chaining
 - LC3/LC3plus demuxer and muxer
 - pad_vaapi, drawbox_vaapi filters
-
+- D3D12VA HEVC encoder
 
 version 7.0:
 - DXV DXT1 encoder
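Once the series is applied and ffmpeg is rebuilt with D3D12VA support, the new Changelog entry can be cross-checked against the build with the usual introspection commands (a hypothetical check; the exact output depends on the configuration):

    ffmpeg -hide_banner -encoders | findstr d3d12va
    ffmpeg -hide_banner -h encoder=hevc_d3d12va

The second command lists the qp, profile, tier and level options added in this series.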