From patchwork Tue Feb 1 13:06:25 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andreas Rheinhardt
X-Patchwork-Id: 34002
From: Andreas Rheinhardt
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 1 Feb 2022 14:06:25 +0100
Message-ID: <20220201130706.1420875-27-andreas.rheinhardt@outlook.com>
X-Mailer: git-send-email 2.32.0
Subject: [FFmpeg-devel] [PATCH v2 28/69] avcodec/mpegvideoenc: Add proper
 MPVMainEncContext
List-Id: FFmpeg development discussions and patches
Reply-To: FFmpeg development discussions and patches
Cc: Andreas Rheinhardt
Errors-To: ffmpeg-devel-bounces@ffmpeg.org
Sender: "ffmpeg-devel"

This is in preparation for moving fields only used by the main
encoder thread from MPVContext to MPVMainEncContext.

Signed-off-by: Andreas Rheinhardt
---
 libavcodec/dnxhdenc.c      | 302 +++++++++++++++++++------------------
 libavcodec/flvenc.c        |   3 +-
 libavcodec/h261enc.c       |  15 +-
 libavcodec/ituh263enc.c    |  13 +-
 libavcodec/mjpegenc.c      |  24 +--
 libavcodec/mpeg12enc.c     |  23 +--
 libavcodec/mpeg4videoenc.c |  41 ++---
 libavcodec/mpegvideo_enc.c | 135 +++++++++--------
 libavcodec/mpegvideoenc.h  |   7 +-
 libavcodec/msmpeg4enc.c    |  12 +-
 libavcodec/ratecontrol.c   |  59 +++++---
 libavcodec/ratecontrol.h   |   2 -
 libavcodec/rv10enc.c       |   3 +-
 libavcodec/rv20enc.c       |   3 +-
 libavcodec/snow.c          |  11 +-
 libavcodec/snowenc.c       | 170 +++++++++++----------
 libavcodec/speedhqenc.c    |   8 +-
 libavcodec/svq1enc.c       | 159 +++++++++----------
 libavcodec/wmv2enc.c       |  12 +-
 19 files changed, 542 insertions(+), 460 deletions(-)

diff --git a/libavcodec/dnxhdenc.c b/libavcodec/dnxhdenc.c
index 74989276b9..9ad95ac20a 100644
--- a/libavcodec/dnxhdenc.c
+++ b/libavcodec/dnxhdenc.c
@@ -267,30 +267,31 @@ static av_cold int dnxhd_init_qmat(DNXHDEncContext *ctx, int lbias, int cbias)
     int qscale, i;
     const uint8_t *luma_weight_table   = ctx->cid_table->luma_weight;
     const uint8_t *chroma_weight_table = ctx->cid_table->chroma_weight;
+    MPVEncContext *const s = &ctx->m.common;
 
-    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l,   ctx->m.avctx->qmax + 1) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c,   ctx->m.avctx->qmax + 1) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l16, ctx->m.avctx->qmax + 1) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c16, ctx->m.avctx->qmax + 1))
+    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l,   s->avctx->qmax + 1) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c,   s->avctx->qmax + 1) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l16, s->avctx->qmax + 1) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c16, s->avctx->qmax + 1))
         return AVERROR(ENOMEM);
 
     if (ctx->bit_depth == 8) {
         for (i = 1; i < 64; i++) {
-            int j = ctx->m.idsp.idct_permutation[ff_zigzag_direct[i]];
+            int j = s->idsp.idct_permutation[ff_zigzag_direct[i]];
             weight_matrix[j] = ctx->cid_table->luma_weight[i];
         }
-        ff_convert_matrix(&ctx->m, ctx->qmatrix_l, ctx->qmatrix_l16,
+        ff_convert_matrix(s, ctx->qmatrix_l, ctx->qmatrix_l16,
                           weight_matrix, ctx->intra_quant_bias, 1,
-                          ctx->m.avctx->qmax, 1);
+                          s->avctx->qmax, 1);
         for (i = 1; i < 64; i++) {
-            int j = ctx->m.idsp.idct_permutation[ff_zigzag_direct[i]];
+            int j = s->idsp.idct_permutation[ff_zigzag_direct[i]];
             weight_matrix[j] = ctx->cid_table->chroma_weight[i];
         }
-        ff_convert_matrix(&ctx->m, ctx->qmatrix_c, ctx->qmatrix_c16,
+        ff_convert_matrix(s, ctx->qmatrix_c, ctx->qmatrix_c16,
                           weight_matrix, ctx->intra_quant_bias, 1,
-                          ctx->m.avctx->qmax, 1);
+                          s->avctx->qmax, 1);
 
-        for (qscale = 1; qscale <= ctx->m.avctx->qmax; qscale++) {
+        for (qscale = 1; qscale <= s->avctx->qmax; qscale++) {
             for (i = 0; i < 64; i++) {
                 ctx->qmatrix_l[qscale][i] <<= 2;
                 ctx->qmatrix_c[qscale][i] <<= 2;
@@ -302,7 +303,7 @@ static av_cold int dnxhd_init_qmat(DNXHDEncContext *ctx, int lbias, int cbias)
         }
     } else { // 10-bit
-        for (qscale = 1; qscale <= ctx->m.avctx->qmax; qscale++) {
+        for (qscale = 1; qscale <= s->avctx->qmax; qscale++) {
             for (i = 1; i < 64; i++) {
                 int j = ff_zigzag_direct[i];
@@ -325,22 +326,23 @@ static av_cold int dnxhd_init_qmat(DNXHDEncContext *ctx, int lbias, int cbias)
         }
     }
 
-    ctx->m.q_chroma_intra_matrix16 = ctx->qmatrix_c16;
-    ctx->m.q_chroma_intra_matrix   = ctx->qmatrix_c;
-    ctx->m.q_intra_matrix16        = ctx->qmatrix_l16;
-    ctx->m.q_intra_matrix          = ctx->qmatrix_l;
+    s->q_chroma_intra_matrix16 = ctx->qmatrix_c16;
+    s->q_chroma_intra_matrix   = ctx->qmatrix_c;
+    s->q_intra_matrix16        = ctx->qmatrix_l16;
+    s->q_intra_matrix          = ctx->qmatrix_l;
 
     return 0;
 }
 
 static av_cold int dnxhd_init_rc(DNXHDEncContext *ctx)
 {
-    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_rc, (ctx->m.avctx->qmax + 1) * ctx->m.mb_num))
+    MPVEncContext *const s = &ctx->m.common;
+    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_rc, (s->avctx->qmax + 1) * s->mb_num))
         return AVERROR(ENOMEM);
 
-    if (ctx->m.avctx->mb_decision != FF_MB_DECISION_RD) {
-        if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp,     ctx->m.mb_num) ||
-            !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp_tmp, ctx->m.mb_num))
+    if (s->avctx->mb_decision != FF_MB_DECISION_RD) {
+        if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp,     s->mb_num) ||
+            !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp_tmp, s->mb_num))
             return AVERROR(ENOMEM);
     }
     ctx->frame_bits = (ctx->coding_unit_size -
@@ -353,6 +355,7 @@ static av_cold int dnxhd_init_rc(DNXHDEncContext *ctx)
 static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
+    MPVEncContext *const s = &ctx->m.common;
     int i, ret;
 
     switch (avctx->pix_fmt) {
@@ -412,31 +415,31 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
     ctx->cid_table = ff_dnxhd_get_cid_table(ctx->cid);
     av_assert0(ctx->cid_table);
 
-    ctx->m.avctx    = avctx;
-    ctx->m.mb_intra = 1;
-    ctx->m.h263_aic = 1;
+    s->avctx    = avctx;
+    s->mb_intra = 1;
+    s->h263_aic = 1;
 
     avctx->bits_per_raw_sample = ctx->bit_depth;
 
     ff_blockdsp_init(&ctx->bdsp, avctx);
-    ff_fdctdsp_init(&ctx->m.fdsp, avctx);
-    ff_mpv_idct_init(&ctx->m);
-    ff_mpegvideoencdsp_init(&ctx->m.mpvencdsp, avctx);
-    ff_pixblockdsp_init(&ctx->m.pdsp, avctx);
-    ff_dct_encode_init(&ctx->m);
+    ff_fdctdsp_init(&s->fdsp, avctx);
+    ff_mpv_idct_init(s);
+    ff_mpegvideoencdsp_init(&s->mpvencdsp, avctx);
+    ff_pixblockdsp_init(&s->pdsp, avctx);
+    ff_dct_encode_init(s);
 
     if (ctx->profile != FF_PROFILE_DNXHD)
-        ff_videodsp_init(&ctx->m.vdsp, ctx->bit_depth);
+        ff_videodsp_init(&s->vdsp, ctx->bit_depth);
 
-    if (!ctx->m.dct_quantize)
-        ctx->m.dct_quantize = ff_dct_quantize_c;
+    if (!s->dct_quantize)
+        s->dct_quantize = ff_dct_quantize_c;
 
     if (ctx->is_444 || ctx->profile == FF_PROFILE_DNXHR_HQX) {
-        ctx->m.dct_quantize     = dnxhd_10bit_dct_quantize_444;
+        s->dct_quantize         = dnxhd_10bit_dct_quantize_444;
         ctx->get_pixels_8x4_sym = dnxhd_10bit_get_pixels_8x4_sym;
         ctx->block_width_l2     = 4;
     } else if (ctx->bit_depth == 10) {
-        ctx->m.dct_quantize     = dnxhd_10bit_dct_quantize;
+        s->dct_quantize         = dnxhd_10bit_dct_quantize;
         ctx->get_pixels_8x4_sym = dnxhd_10bit_get_pixels_8x4_sym;
         ctx->block_width_l2     = 4;
     } else {
@@ -447,12 +450,12 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
     if (ARCH_X86)
         ff_dnxhdenc_init_x86(ctx);
 
-    ctx->m.mb_height = (avctx->height + 15) / 16;
-    ctx->m.mb_width  = (avctx->width  + 15) / 16;
+    s->mb_height = (avctx->height + 15) / 16;
+    s->mb_width  = (avctx->width  + 15) / 16;
 
     if (avctx->flags & AV_CODEC_FLAG_INTERLACED_DCT) {
         ctx->interlaced = 1;
-        ctx->m.mb_height /= 2;
+        s->mb_height /= 2;
     }
 
     if (ctx->interlaced && ctx->profile != FF_PROFILE_DNXHD) {
@@ -461,7 +464,7 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
         return AVERROR(EINVAL);
     }
 
-    ctx->m.mb_num = ctx->m.mb_height * ctx->m.mb_width;
+    s->mb_num = s->mb_height * s->mb_width;
 
     if (ctx->cid_table->frame_size == DNXHD_VARIABLE) {
         ctx->frame_size = ff_dnxhd_get_hr_frame_size(ctx->cid,
@@ -473,8 +476,8 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
         ctx->coding_unit_size = ctx->cid_table->coding_unit_size;
     }
 
-    if (ctx->m.mb_height > 68)
-        ctx->data_offset = 0x170 + (ctx->m.mb_height << 2);
+    if (s->mb_height > 68)
+        ctx->data_offset = 0x170 + (s->mb_height << 2);
     else
         ctx->data_offset = 0x280;
@@ -492,10 +495,10 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
     if ((ret = dnxhd_init_rc(ctx)) < 0)
         return ret;
 
-    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->slice_size, ctx->m.mb_height) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->slice_offs, ctx->m.mb_height) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_bits,    ctx->m.mb_num)    ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_qscale,  ctx->m.mb_num))
+    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->slice_size, s->mb_height) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->slice_offs, s->mb_height) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_bits,    s->mb_num)    ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_qscale,  s->mb_num))
         return AVERROR(ENOMEM);
 
     if (avctx->active_thread_type == FF_THREAD_SLICE) {
@@ -525,6 +528,7 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
 static int dnxhd_write_header(AVCodecContext *avctx, uint8_t *buf)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
+    MPVEncContext *const s = &ctx->m.common;
 
     memset(buf, 0, ctx->data_offset);
@@ -550,8 +554,8 @@ static int dnxhd_write_header(AVCodecContext *avctx, uint8_t *buf)
     buf[0x5f] = 0x01; // UDL
 
     buf[0x167] = 0x02; // reserved
-    AV_WB16(buf + 0x16a, ctx->m.mb_height * 4 + 4); // MSIPS
-    AV_WB16(buf + 0x16c, ctx->m.mb_height); // Ns
+    AV_WB16(buf + 0x16a, s->mb_height * 4 + 4); // MSIPS
+    AV_WB16(buf + 0x16c, s->mb_height); // Ns
     buf[0x16f] = 0x10; // reserved
 
     ctx->msip = buf + 0x170;
@@ -567,7 +571,7 @@ static av_always_inline void dnxhd_encode_dc(DNXHDEncContext *ctx, int diff)
     } else {
         nbits = av_log2_16bit(2 * diff);
     }
-    put_bits(&ctx->m.pb, ctx->cid_table->dc_bits[nbits] + nbits,
+    put_bits(&ctx->m.common.pb, ctx->cid_table->dc_bits[nbits] + nbits,
              (ctx->cid_table->dc_codes[nbits] << nbits) +
              av_mod_uintp2(diff, nbits));
 }
@@ -576,26 +580,27 @@ static av_always_inline
 void dnxhd_encode_block(DNXHDEncContext *ctx, int16_t *block,
                         int last_index, int n)
 {
+    MPVEncContext *const s = &ctx->m.common;
     int last_non_zero = 0;
     int slevel, i, j;
 
-    dnxhd_encode_dc(ctx, block[0] - ctx->m.last_dc[n]);
-    ctx->m.last_dc[n] = block[0];
+    dnxhd_encode_dc(ctx, block[0] - s->last_dc[n]);
+    s->last_dc[n] = block[0];
 
     for (i = 1; i <= last_index; i++) {
-        j = ctx->m.intra_scantable.permutated[i];
+        j = s->intra_scantable.permutated[i];
         slevel = block[j];
         if (slevel) {
             int run_level = i - last_non_zero - 1;
             int rlevel = slevel * (1 << 1) | !!run_level;
-            put_bits(&ctx->m.pb, ctx->vlc_bits[rlevel], ctx->vlc_codes[rlevel]);
+            put_bits(&s->pb, ctx->vlc_bits[rlevel], ctx->vlc_codes[rlevel]);
             if (run_level)
-                put_bits(&ctx->m.pb, ctx->run_bits[run_level],
+                put_bits(&s->pb, ctx->run_bits[run_level],
                          ctx->run_codes[run_level]);
             last_non_zero = i;
         }
     }
-    put_bits(&ctx->m.pb, ctx->vlc_bits[0], ctx->vlc_codes[0]); // EOB
+    put_bits(&s->pb, ctx->vlc_bits[0], ctx->vlc_codes[0]); // EOB
 }
 
 static av_always_inline
@@ -615,7 +620,7 @@ void dnxhd_unquantize_c(DNXHDEncContext *ctx, int16_t *block, int n,
     }
 
     for (i = 1; i <= last_index; i++) {
-        int j = ctx->m.intra_scantable.permutated[i];
+        int j = ctx->m.common.intra_scantable.permutated[i];
         level = block[j];
         if (level) {
             if (level < 0) {
@@ -663,7 +668,7 @@ int dnxhd_calc_ac_bits(DNXHDEncContext *ctx, int16_t *block, int last_index)
     int bits = 0;
     int i, j, level;
     for (i = 1; i <= last_index; i++) {
-        j = ctx->m.intra_scantable.permutated[i];
+        j = ctx->m.common.intra_scantable.permutated[i];
         level = block[j];
         if (level) {
             int run_level = i - last_non_zero - 1;
@@ -678,40 +683,41 @@ int dnxhd_calc_ac_bits(DNXHDEncContext *ctx, int16_t *block, int last_index)
 static av_always_inline
 void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
 {
+    MPVEncContext *const s = &ctx->m.common;
     const int bs = ctx->block_width_l2;
     const int bw = 1 << bs;
     int dct_y_offset  = ctx->dct_y_offset;
     int dct_uv_offset = ctx->dct_uv_offset;
-    int linesize   = ctx->m.linesize;
-    int uvlinesize = ctx->m.uvlinesize;
+    int linesize   = s->linesize;
+    int uvlinesize = s->uvlinesize;
     const uint8_t *ptr_y = ctx->thread[0]->src[0] +
-                           ((mb_y << 4) * ctx->m.linesize) + (mb_x << bs + 1);
+                           ((mb_y << 4) * s->linesize) + (mb_x << bs + 1);
     const uint8_t *ptr_u = ctx->thread[0]->src[1] +
-                           ((mb_y << 4) * ctx->m.uvlinesize) + (mb_x << bs + ctx->is_444);
+                           ((mb_y << 4) * s->uvlinesize) + (mb_x << bs + ctx->is_444);
     const uint8_t *ptr_v = ctx->thread[0]->src[2] +
-                           ((mb_y << 4) * ctx->m.uvlinesize) + (mb_x << bs + ctx->is_444);
-    PixblockDSPContext *pdsp = &ctx->m.pdsp;
-    VideoDSPContext *vdsp = &ctx->m.vdsp;
-
-    if (ctx->bit_depth != 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > ctx->m.avctx->width ||
-                                                           (mb_y << 4) + 16 > ctx->m.avctx->height)) {
-        int y_w = ctx->m.avctx->width  - (mb_x << 4);
-        int y_h = ctx->m.avctx->height - (mb_y << 4);
+                           ((mb_y << 4) * s->uvlinesize) + (mb_x << bs + ctx->is_444);
+    PixblockDSPContext *pdsp = &s->pdsp;
+    VideoDSPContext *vdsp = &s->vdsp;
+
+    if (ctx->bit_depth != 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > s->avctx->width ||
+                                                           (mb_y << 4) + 16 > s->avctx->height)) {
+        int y_w = s->avctx->width  - (mb_x << 4);
+        int y_h = s->avctx->height - (mb_y << 4);
        int uv_w = (y_w + 1) / 2;
         int uv_h = y_h;
         linesize   = 16;
         uvlinesize = 8;
 
         vdsp->emulated_edge_mc(&ctx->edge_buf_y[0], ptr_y,
-                               linesize, ctx->m.linesize,
+                               linesize, s->linesize,
                                linesize, 16,
                                0, 0, y_w, y_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[0][0], ptr_u,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, s->uvlinesize,
                                uvlinesize, 16,
                                0, 0, uv_w, uv_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[1][0], ptr_v,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, s->uvlinesize,
                                uvlinesize, 16,
                                0, 0, uv_w, uv_h);
@@ -720,25 +726,25 @@ void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
         ptr_y = &ctx->edge_buf_y[0];
         ptr_u = &ctx->edge_buf_uv[0][0];
         ptr_v = &ctx->edge_buf_uv[1][0];
-    } else if (ctx->bit_depth == 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > ctx->m.avctx->width ||
-                                                                  (mb_y << 4) + 16 > ctx->m.avctx->height)) {
-        int y_w = ctx->m.avctx->width  - (mb_x << 4);
-        int y_h = ctx->m.avctx->height - (mb_y << 4);
+    } else if (ctx->bit_depth == 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > s->avctx->width ||
+                                                                  (mb_y << 4) + 16 > s->avctx->height)) {
+        int y_w = s->avctx->width  - (mb_x << 4);
+        int y_h = s->avctx->height - (mb_y << 4);
         int uv_w = ctx->is_444 ? y_w : (y_w + 1) / 2;
         int uv_h = y_h;
         linesize   = 32;
         uvlinesize = 16 + 16 * ctx->is_444;
 
         vdsp->emulated_edge_mc(&ctx->edge_buf_y[0], ptr_y,
-                               linesize, ctx->m.linesize,
+                               linesize, s->linesize,
                                linesize / 2, 16,
                                0, 0, y_w, y_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[0][0], ptr_u,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, s->uvlinesize,
                                uvlinesize / 2, 16,
                                0, 0, uv_w, uv_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[1][0], ptr_v,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, s->uvlinesize,
                                uvlinesize / 2, 16,
                                0, 0, uv_w, uv_h);
@@ -755,7 +761,7 @@ void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
     pdsp->get_pixels(ctx->blocks[2], ptr_u, uvlinesize);
     pdsp->get_pixels(ctx->blocks[3], ptr_v, uvlinesize);
 
-    if (mb_y + 1 == ctx->m.mb_height && ctx->m.avctx->height == 1080) {
+    if (mb_y + 1 == s->mb_height && s->avctx->height == 1080) {
         if (ctx->interlaced) {
             ctx->get_pixels_8x4_sym(ctx->blocks[4],
                                     ptr_y + dct_y_offset,
@@ -821,17 +827,18 @@ static int dnxhd_calc_bits_thread(AVCodecContext *avctx, void *arg,
                                   int jobnr, int threadnr)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
+    MPVEncContext *const s = &ctx->m.common;
     int mb_y = jobnr, mb_x;
     int qscale = ctx->qscale;
     LOCAL_ALIGNED_16(int16_t, block, [64]);
     ctx = ctx->thread[threadnr];
 
-    ctx->m.last_dc[0] =
-    ctx->m.last_dc[1] =
-    ctx->m.last_dc[2] = 1 << (ctx->bit_depth + 2);
+    s->last_dc[0] =
+    s->last_dc[1] =
+    s->last_dc[2] = 1 << (ctx->bit_depth + 2);
 
-    for (mb_x = 0; mb_x < ctx->m.mb_width; mb_x++) {
-        unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+    for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
+        unsigned mb = mb_y * s->mb_width + mb_x;
         int ssd     = 0;
         int ac_bits = 0;
         int dc_bits = 0;
@@ -845,12 +852,12 @@ static int dnxhd_calc_bits_thread(AVCodecContext *avctx, void *arg,
             int n = dnxhd_switch_matrix(ctx, i);
 
             memcpy(block, src_block, 64 * sizeof(*block));
-            last_index = ctx->m.dct_quantize(&ctx->m, block,
+            last_index = s->dct_quantize(s, block,
                                              ctx->is_444 ? 4 * (n > 0): 4 & (2*i),
                                              qscale, &overflow);
             ac_bits += dnxhd_calc_ac_bits(ctx, block, last_index);
 
-            diff = block[0] - ctx->m.last_dc[n];
+            diff = block[0] - s->last_dc[n];
             if (diff < 0)
                 nbits = av_log2_16bit(-2 * diff);
             else
@@ -859,16 +866,16 @@ static int dnxhd_calc_bits_thread(AVCodecContext *avctx, void *arg,
             av_assert1(nbits < ctx->bit_depth + 4);
             dc_bits += ctx->cid_table->dc_bits[nbits] + nbits;
 
-            ctx->m.last_dc[n] = block[0];
+            s->last_dc[n] = block[0];
 
             if (avctx->mb_decision == FF_MB_DECISION_RD || !RC_VARIANCE) {
                 dnxhd_unquantize_c(ctx, block, i, qscale, last_index);
-                ctx->m.idsp.idct(block);
+                s->idsp.idct(block);
                 ssd += dnxhd_ssd_block(block, src_block);
             }
         }
-        ctx->mb_rc[(qscale * ctx->m.mb_num) + mb].ssd  = ssd;
-        ctx->mb_rc[(qscale * ctx->m.mb_num) + mb].bits = ac_bits + dc_bits + 12 +
+        ctx->mb_rc[(qscale * s->mb_num) + mb].ssd  = ssd;
+        ctx->mb_rc[(qscale * s->mb_num) + mb].bits = ac_bits + dc_bits + 12 +
                                                      (1 + ctx->is_444) * 8 * ctx->vlc_bits[0];
     }
     return 0;
@@ -878,50 +885,52 @@ static int dnxhd_encode_thread(AVCodecContext *avctx, void *arg,
                                int jobnr, int threadnr)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
+    MPVEncContext *const s = &ctx->m.common;
     int mb_y = jobnr, mb_x;
     ctx = ctx->thread[threadnr];
-    init_put_bits(&ctx->m.pb, (uint8_t *)arg + ctx->data_offset + ctx->slice_offs[jobnr],
+    init_put_bits(&s->pb, (uint8_t *)arg + ctx->data_offset + ctx->slice_offs[jobnr],
                   ctx->slice_size[jobnr]);
 
-    ctx->m.last_dc[0] =
-    ctx->m.last_dc[1] =
-    ctx->m.last_dc[2] = 1 << (ctx->bit_depth + 2);
-    for (mb_x = 0; mb_x < ctx->m.mb_width; mb_x++) {
-        unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+    s->last_dc[0] =
+    s->last_dc[1] =
+    s->last_dc[2] = 1 << (ctx->bit_depth + 2);
+    for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
+        unsigned mb = mb_y * s->mb_width + mb_x;
         int qscale = ctx->mb_qscale[mb];
         int i;
 
-        put_bits(&ctx->m.pb, 11, qscale);
-        put_bits(&ctx->m.pb, 1, avctx->pix_fmt == AV_PIX_FMT_YUV444P10);
+        put_bits(&s->pb, 11, qscale);
+        put_bits(&s->pb, 1, avctx->pix_fmt == AV_PIX_FMT_YUV444P10);
 
         dnxhd_get_blocks(ctx, mb_x, mb_y);
 
         for (i = 0; i < 8 + 4 * ctx->is_444; i++) {
             int16_t *block = ctx->blocks[i];
             int overflow, n = dnxhd_switch_matrix(ctx, i);
-            int last_index = ctx->m.dct_quantize(&ctx->m, block,
+            int last_index = s->dct_quantize(s, block,
                                                  ctx->is_444 ? (((i >> 1) % 3) < 1 ? 0 : 4): 4 & (2*i),
                                                  qscale, &overflow);
 
             dnxhd_encode_block(ctx, block, last_index, n);
         }
     }
-    if (put_bits_count(&ctx->m.pb) & 31)
-        put_bits(&ctx->m.pb, 32 - (put_bits_count(&ctx->m.pb) & 31), 0);
-    flush_put_bits(&ctx->m.pb);
+    if (put_bits_count(&s->pb) & 31)
+        put_bits(&s->pb, 32 - (put_bits_count(&s->pb) & 31), 0);
+    flush_put_bits(&s->pb);
     return 0;
 }
 
 static void dnxhd_setup_threads_slices(DNXHDEncContext *ctx)
 {
+    MPVEncContext *const s = &ctx->m.common;
     int mb_y, mb_x;
     int offset = 0;
-    for (mb_y = 0; mb_y < ctx->m.mb_height; mb_y++) {
+    for (mb_y = 0; mb_y < s->mb_height; mb_y++) {
         int thread_size;
         ctx->slice_offs[mb_y] = offset;
         ctx->slice_size[mb_y] = 0;
 
-        for (mb_x = 0; mb_x < ctx->m.mb_width; mb_x++) {
-            unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+        for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
+            unsigned mb = mb_y * s->mb_width + mb_x;
             ctx->slice_size[mb_y] += ctx->mb_bits[mb];
         }
         ctx->slice_size[mb_y]   = (ctx->slice_size[mb_y] + 31) & ~31;
@@ -935,28 +944,29 @@ static int dnxhd_mb_var_thread(AVCodecContext *avctx, void *arg,
                                int jobnr, int threadnr)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
+    MPVEncContext *const s = &ctx->m.common;
     int mb_y = jobnr, mb_x, x, y;
-    int partial_last_row = (mb_y == ctx->m.mb_height - 1) &&
+    int partial_last_row = (mb_y == s->mb_height - 1) &&
                            ((avctx->height >> ctx->interlaced) & 0xF);
 
     ctx = ctx->thread[threadnr];
     if (ctx->bit_depth == 8) {
-        uint8_t *pix = ctx->thread[0]->src[0] + ((mb_y << 4) * ctx->m.linesize);
-        for (mb_x = 0; mb_x < ctx->m.mb_width; ++mb_x, pix += 16) {
-            unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+        uint8_t *pix = ctx->thread[0]->src[0] + ((mb_y << 4) * s->linesize);
+        for (mb_x = 0; mb_x < s->mb_width; ++mb_x, pix += 16) {
+            unsigned mb = mb_y * s->mb_width + mb_x;
             int sum;
             int varc;
 
             if (!partial_last_row && mb_x * 16 <= avctx->width - 16 && (avctx->width % 16) == 0) {
-                sum  = ctx->m.mpvencdsp.pix_sum(pix, ctx->m.linesize);
-                varc = ctx->m.mpvencdsp.pix_norm1(pix, ctx->m.linesize);
+                sum  = s->mpvencdsp.pix_sum(pix, s->linesize);
+                varc = s->mpvencdsp.pix_norm1(pix, s->linesize);
             } else {
                 int bw = FFMIN(avctx->width - 16 * mb_x, 16);
                 int bh = FFMIN((avctx->height >> ctx->interlaced) - 16 * mb_y, 16);
                 sum = varc = 0;
                 for (y = 0; y < bh; y++) {
                     for (x = 0; x < bw; x++) {
-                        uint8_t val = pix[x + y * ctx->m.linesize];
+                        uint8_t val = pix[x + y * s->linesize];
                         sum  += val;
                         varc += val * val;
                     }
@@ -968,11 +978,11 @@ static int dnxhd_mb_var_thread(AVCodecContext *avctx, void *arg,
             ctx->mb_cmp[mb].mb    = mb;
         }
     } else { // 10-bit
-        const int linesize = ctx->m.linesize >> 1;
-        for (mb_x = 0; mb_x < ctx->m.mb_width; ++mb_x) {
+        const int linesize = s->linesize >> 1;
+        for (mb_x = 0; mb_x < s->mb_width; ++mb_x) {
             uint16_t *pix = (uint16_t *)ctx->thread[0]->src[0] +
                             ((mb_y << 4) * linesize) + (mb_x << 4);
-            unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+            unsigned mb = mb_y * s->mb_width + mb_x;
             int sum = 0;
             int sqsum = 0;
             int bw = FFMIN(avctx->width - 16 * mb_x, 16);
@@ -1001,6 +1011,7 @@ static int dnxhd_mb_var_thread(AVCodecContext *avctx, void *arg,
 static int dnxhd_encode_rdo(AVCodecContext *avctx, DNXHDEncContext *ctx)
 {
+    MPVEncContext *const s = &ctx->m.common;
     int lambda, up_step, down_step;
     int last_lower = INT_MAX, last_higher = 0;
     int x, y, q;
@@ -1008,7 +1019,7 @@ static int dnxhd_encode_rdo(AVCodecContext *avctx, DNXHDEncContext *ctx)
     for (q = 1; q < avctx->qmax; q++) {
         ctx->qscale = q;
         avctx->execute2(avctx, dnxhd_calc_bits_thread,
-                        NULL, NULL, ctx->m.mb_height);
+                        NULL, NULL, s->mb_height);
     }
     up_step = down_step = 2 << LAMBDA_FRAC_BITS;
     lambda  = ctx->lambda;
@@ -1020,14 +1031,14 @@ static int dnxhd_encode_rdo(AVCodecContext *avctx, DNXHDEncContext *ctx)
                 lambda++;
             end = 1; // need to set final qscales/bits
         }
-        for (y = 0; y < ctx->m.mb_height; y++) {
-            for (x = 0; x < ctx->m.mb_width; x++) {
+        for (y = 0; y < s->mb_height; y++) {
+            for (x = 0; x < s->mb_width; x++) {
                 unsigned min = UINT_MAX;
                 int qscale = 1;
-                int mb     = y * ctx->m.mb_width + x;
+                int mb     = y * s->mb_width + x;
                 int rc = 0;
                 for (q = 1; q < avctx->qmax; q++) {
-                    int i = (q*ctx->m.mb_num) + mb;
+                    int i = (q*s->mb_num) + mb;
                     unsigned score = ctx->mb_rc[i].bits * lambda +
                                      ((unsigned) ctx->mb_rc[i].ssd << LAMBDA_FRAC_BITS);
                     if (score < min) {
@@ -1078,6 +1089,7 @@
 static int dnxhd_find_qscale(DNXHDEncContext *ctx)
 {
+    MPVEncContext *const s = &ctx->m.common;
     int bits            = 0;
     int up_step         = 1;
     int down_step       = 1;
@@ -1091,11 +1103,11 @@ static int dnxhd_find_qscale(DNXHDEncContext *ctx)
         bits = 0;
         ctx->qscale = qscale;
         // XXX avoid recalculating bits
-        ctx->m.avctx->execute2(ctx->m.avctx, dnxhd_calc_bits_thread,
-                               NULL, NULL, ctx->m.mb_height);
-        for (y = 0; y < ctx->m.mb_height; y++) {
-            for (x = 0; x < ctx->m.mb_width; x++)
-                bits += ctx->mb_rc[(qscale*ctx->m.mb_num) + (y*ctx->m.mb_width+x)].bits;
+        s->avctx->execute2(s->avctx, dnxhd_calc_bits_thread,
+                           NULL, NULL, s->mb_height);
+        for (y = 0; y < s->mb_height; y++) {
+            for (x = 0; x < s->mb_width; x++)
+                bits += ctx->mb_rc[(qscale*s->mb_num) + (y*s->mb_width+x)].bits;
             bits = (bits+31)&~31; // padding
             if (bits > ctx->frame_bits)
                 break;
@@ -1124,7 +1136,7 @@ static int dnxhd_find_qscale(DNXHDEncContext *ctx)
             else
                 qscale += up_step++;
             down_step = 1;
-            if (qscale >= ctx->m.avctx->qmax)
+            if (qscale >= s->avctx->qmax)
                 return AVERROR(EINVAL);
         }
     }
@@ -1190,25 +1202,26 @@ static void radix_sort(RCCMPEntry *data, RCCMPEntry *tmp, int size)
 
 static int dnxhd_encode_fast(AVCodecContext *avctx, DNXHDEncContext *ctx)
 {
+    MPVEncContext *const s = &ctx->m.common;
     int max_bits = 0;
     int ret, x, y;
     if ((ret = dnxhd_find_qscale(ctx)) < 0)
         return ret;
-    for (y = 0; y < ctx->m.mb_height; y++) {
-        for (x = 0; x < ctx->m.mb_width; x++) {
-            int mb = y * ctx->m.mb_width + x;
-            int rc = (ctx->qscale * ctx->m.mb_num ) + mb;
+    for (y = 0; y < s->mb_height; y++) {
+        for (x = 0; x < s->mb_width; x++) {
+            int mb = y * s->mb_width + x;
+            int rc = (ctx->qscale * s->mb_num ) + mb;
             int delta_bits;
             ctx->mb_qscale[mb] = ctx->qscale;
             ctx->mb_bits[mb] = ctx->mb_rc[rc].bits;
             max_bits += ctx->mb_rc[rc].bits;
             if (!RC_VARIANCE) {
                 delta_bits = ctx->mb_rc[rc].bits -
-                             ctx->mb_rc[rc + ctx->m.mb_num].bits;
+                             ctx->mb_rc[rc + s->mb_num].bits;
                 ctx->mb_cmp[mb].mb = mb;
                 ctx->mb_cmp[mb].value =
                     delta_bits ? ((ctx->mb_rc[rc].ssd -
-                                   ctx->mb_rc[rc + ctx->m.mb_num].ssd) * 100) /
+                                   ctx->mb_rc[rc + s->mb_num].ssd) * 100) /
                                  delta_bits
                                : INT_MIN; // avoid increasing qscale
             }
@@ -1218,15 +1231,15 @@ static int dnxhd_encode_fast(AVCodecContext *avctx, DNXHDEncContext *ctx)
     if (!ret) {
         if (RC_VARIANCE)
             avctx->execute2(avctx, dnxhd_mb_var_thread,
-                            NULL, NULL, ctx->m.mb_height);
-        radix_sort(ctx->mb_cmp, ctx->mb_cmp_tmp, ctx->m.mb_num);
-        for (x = 0; x < ctx->m.mb_num && max_bits > ctx->frame_bits; x++) {
+                            NULL, NULL, s->mb_height);
+        radix_sort(ctx->mb_cmp, ctx->mb_cmp_tmp, s->mb_num);
+        for (x = 0; x < s->mb_num && max_bits > ctx->frame_bits; x++) {
             int mb = ctx->mb_cmp[x].mb;
-            int rc = (ctx->qscale * ctx->m.mb_num ) + mb;
+            int rc = (ctx->qscale * s->mb_num ) + mb;
             max_bits -= ctx->mb_rc[rc].bits -
-                        ctx->mb_rc[rc + ctx->m.mb_num].bits;
+                        ctx->mb_rc[rc + s->mb_num].bits;
             ctx->mb_qscale[mb] = ctx->qscale + 1;
-            ctx->mb_bits[mb]   = ctx->mb_rc[rc + ctx->m.mb_num].bits;
+            ctx->mb_bits[mb]   = ctx->mb_rc[rc + s->mb_num].bits;
         }
     }
     return 0;
@@ -1234,13 +1247,13 @@ static int dnxhd_encode_fast(AVCodecContext *avctx, DNXHDEncContext *ctx)
 static void dnxhd_load_picture(DNXHDEncContext *ctx, const AVFrame *frame)
 {
-    int i;
+    MPVEncContext *const s = &ctx->m.common;
 
-    for (i = 0; i < ctx->m.avctx->thread_count; i++) {
-        ctx->thread[i]->m.linesize    = frame->linesize[0] << ctx->interlaced;
-        ctx->thread[i]->m.uvlinesize  = frame->linesize[1] << ctx->interlaced;
-        ctx->thread[i]->dct_y_offset  = ctx->m.linesize  *8;
-        ctx->thread[i]->dct_uv_offset = ctx->m.uvlinesize*8;
+    for (int i = 0; i < s->avctx->thread_count; i++) {
+        ctx->thread[i]->m.common.linesize   = frame->linesize[0] << ctx->interlaced;
+        ctx->thread[i]->m.common.uvlinesize = frame->linesize[1] << ctx->interlaced;
+        ctx->thread[i]->dct_y_offset  = s->linesize  *8;
+        ctx->thread[i]->dct_uv_offset = s->uvlinesize*8;
     }
 
     ctx->cur_field = frame->interlaced_frame && !frame->top_field_first;
@@ -1250,6 +1263,7 @@ static int dnxhd_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
                                 const AVFrame *frame, int *got_packet)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
+    MPVEncContext *const s = &ctx->m.common;
     int first_field = 1;
     int offset, i, ret;
     uint8_t *buf;
@@ -1282,13 +1296,13 @@ encode_coding_unit:
     dnxhd_setup_threads_slices(ctx);
 
     offset = 0;
-    for (i = 0; i < ctx->m.mb_height; i++) {
+    for (i = 0; i < s->mb_height; i++) {
         AV_WB32(ctx->msip + i * 4, offset);
         offset += ctx->slice_size[i];
         av_assert1(!(ctx->slice_size[i] & 3));
     }
 
-    avctx->execute2(avctx, dnxhd_encode_thread, buf, NULL, ctx->m.mb_height);
+    avctx->execute2(avctx, dnxhd_encode_thread, buf, NULL, s->mb_height);
 
     av_assert1(ctx->data_offset + offset + 4 <= ctx->coding_unit_size);
     memset(buf + ctx->data_offset + offset, 0,
diff --git a/libavcodec/flvenc.c b/libavcodec/flvenc.c
index 41e04e1abe..ba6bce029e 100644
--- a/libavcodec/flvenc.c
+++ b/libavcodec/flvenc.c
@@ -23,8 +23,9 @@
 #include "mpegvideodata.h"
 #include "mpegvideoenc.h"
 
-void ff_flv_encode_picture_header(MPVMainEncContext *s, int picture_number)
+void ff_flv_encode_picture_header(MPVMainEncContext *m, int picture_number)
 {
+    MPVEncContext *const s = &m->common;
     int format;
 
     align_put_bits(&s->pb);
diff --git a/libavcodec/h261enc.c b/libavcodec/h261enc.c
index 5c9664ea57..9c8d9d1b08 100644
--- a/libavcodec/h261enc.c
+++ b/libavcodec/h261enc.c
@@ -60,9 +60,10 @@ int ff_h261_get_picture_format(int width, int height)
         return AVERROR(EINVAL);
 }
 
-void ff_h261_encode_picture_header(MPVMainEncContext *s, int picture_number)
+void ff_h261_encode_picture_header(MPVMainEncContext *m, int picture_number)
 {
-    H261EncContext *const h = (H261EncContext *)s;
+    H261EncContext *const h = (H261EncContext *)m;
+    MPVEncContext *const s = &m->common;
     int format, temp_ref;
 
     align_put_bits(&s->pb);
@@ -175,9 +176,8 @@ static inline int get_cbp(MPVEncContext *s, int16_t block[6][64])
  * @param block the 8x8 block
  * @param n block index (0-3 are luma, 4-5 are chroma)
  */
-static void h261_encode_block(H261EncContext *h, int16_t *block, int n)
+static void h261_encode_block(MPVEncContext *s, int16_t *block, int n)
 {
-    MPVEncContext *const s = &h->s;
     int level, run, i, j, last_index, last_non_zero, sign, slevel, code;
     RLTable *rl;
 
@@ -326,7 +326,7 @@ void ff_h261_encode_mb(MPVEncContext *s, int16_t block[6][64],
     }
 
     for (i = 0; i < 6; i++) /* encode each block */
-        h261_encode_block(h, block[i], i);
+        h261_encode_block(s, block[i], i);
 
     if (!IS_16X16(com->mtype)) {
         s->last_mv[0][0][0] = 0;
@@ -381,9 +381,10 @@ static av_cold void h261_encode_init_static(void)
     init_uni_h261_rl_tab(&ff_h261_rl_tcoeff, uni_h261_rl_len);
 }
 
-av_cold void ff_h261_encode_init(MPVMainEncContext *s)
+av_cold void ff_h261_encode_init(MPVMainEncContext *m)
 {
-    H261EncContext *const h = (H261EncContext*)s;
+    H261EncContext *const h = (H261EncContext*)m;
+    MPVEncContext *const s = &m->common;
     static AVOnce init_static_once = AV_ONCE_INIT;
 
     s->private_ctx = &h->common;
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index e0b42dd442..97173bfdc6 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -102,8 +102,9 @@ av_const int ff_h263_aspect_to_info(AVRational aspect){
     return FF_ASPECT_EXTENDED;
 }
 
-void ff_h263_encode_picture_header(MPVMainEncContext *s, int picture_number)
+void
ff_h263_encode_picture_header(MPVMainEncContext *m, int picture_number) { + MPVEncContext *const s = &m->common; int format, coded_frame_rate, coded_frame_rate_base, i, temp_ref; int best_clock_code=1; int best_divisor=60; @@ -266,12 +267,13 @@ void ff_h263_encode_gob_header(MPVEncContext *s, int mb_line) /** * modify qscale so that encoding is actually possible in H.263 (limit difference to -2..2) */ -void ff_clean_h263_qscales(MPVMainEncContext *s) +void ff_clean_h263_qscales(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int i; int8_t * const qscale_table = s->current_picture.qscale_table; - ff_init_qscale_tab(s); + ff_init_qscale_tab(m); for(i=1; i<s->mb_num; i++){ if(qscale_table[ s->mb_index2xy[i] ] - qscale_table[ s->mb_index2xy[i-1] ] >2) @@ -812,8 +814,9 @@ static av_cold void h263_encode_init_static(void) init_mv_penalty_and_fcode(); } -av_cold void ff_h263_encode_init(MPVMainEncContext *s) +av_cold void ff_h263_encode_init(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; static AVOnce init_static_once = AV_ONCE_INIT; s->me.mv_penalty= mv_penalty; // FIXME exact table for MSMPEG4 & H.263+ @@ -878,7 +881,7 @@ void ff_h263_encode_mba(MPVEncContext *s) put_bits(&s->pb, ff_mba_length[i], mb_pos); } -#define OFFSET(x) offsetof(MPVMainEncContext, x) +#define OFFSET(x) offsetof(MPVMainEncContext, common.x) #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM static const AVOption h263_options[] = { { "obmc", "use overlapped block motion compensation.", OFFSET(obmc), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, diff --git a/libavcodec/mjpegenc.c b/libavcodec/mjpegenc.c index 596accfc50..fd87186fbc 100644 --- a/libavcodec/mjpegenc.c +++ b/libavcodec/mjpegenc.c @@ -75,8 +75,9 @@ static av_cold void init_uni_ac_vlc(const uint8_t huff_size_ac[256], } } -static void mjpeg_encode_picture_header(MPVMainEncContext *s) +static void mjpeg_encode_picture_header(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common;
ff_mjpeg_encode_picture_header(s->avctx, &s->pb, s->mjpeg_ctx, &s->intra_scantable, 0, s->intra_matrix, s->chroma_intra_matrix, @@ -87,13 +88,13 @@ static void mjpeg_encode_picture_header(MPVMainEncContext *s) s->thread_context[i]->esc_pos = 0; } -void ff_mjpeg_amv_encode_picture_header(MPVMainEncContext *s) +void ff_mjpeg_amv_encode_picture_header(MPVMainEncContext *main) { - MJPEGEncContext *const m = (MJPEGEncContext*)s; - av_assert2(s->mjpeg_ctx == &m->mjpeg); + MJPEGEncContext *const m = (MJPEGEncContext*)main; + av_assert2(main->common.mjpeg_ctx == &m->mjpeg); /* s->huffman == HUFFMAN_TABLE_OPTIMAL can only be true for MJPEG. */ if (!CONFIG_MJPEG_ENCODER || m->mjpeg.huffman != HUFFMAN_TABLE_OPTIMAL) - mjpeg_encode_picture_header(s); + mjpeg_encode_picture_header(main); } #if CONFIG_MJPEG_ENCODER @@ -235,7 +236,9 @@ int ff_mjpeg_encode_stuffing(MPVEncContext *s) s->intra_chroma_ac_vlc_length = s->intra_chroma_ac_vlc_last_length = m->uni_chroma_ac_vlc_len; - mjpeg_encode_picture_header(s); + /* HUFFMAN_TABLE_OPTIMAL is incompatible with slice threading, + * therefore the following cast is allowed. 
*/ + mjpeg_encode_picture_header((MPVMainEncContext*)s); mjpeg_encode_picture_frame(s); } #endif @@ -260,7 +263,7 @@ fail: return ret; } -static int alloc_huffman(MPVMainEncContext *s) +static int alloc_huffman(MPVEncContext *s) { MJpegContext *m = s->mjpeg_ctx; size_t num_mbs, num_blocks, num_codes; @@ -288,9 +291,10 @@ static int alloc_huffman(MPVMainEncContext *s) return 0; } -av_cold int ff_mjpeg_encode_init(MPVMainEncContext *s) +av_cold int ff_mjpeg_encode_init(MPVMainEncContext *main) { - MJpegContext *const m = &((MJPEGEncContext*)s)->mjpeg; + MJpegContext *const m = &((MJPEGEncContext*)main)->mjpeg; + MPVEncContext *const s = &main->common; int ret, use_slices; s->mjpeg_ctx = m; @@ -589,7 +593,7 @@ void ff_mjpeg_encode_mb(MPVEncContext *s, int16_t block[12][64]) static int amv_encode_picture(AVCodecContext *avctx, AVPacket *pkt, const AVFrame *pic_arg, int *got_packet) { - MPVMainEncContext *const s = avctx->priv_data; + MPVEncContext *const s = avctx->priv_data; AVFrame *pic; int i, ret; int chroma_h_shift, chroma_v_shift; diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c index 34183cd1d5..4b878b2f90 100644 --- a/libavcodec/mpeg12enc.c +++ b/libavcodec/mpeg12enc.c @@ -122,7 +122,8 @@ av_cold void ff_mpeg1_init_uni_ac_vlc(const RLTable *rl, uint8_t *uni_ac_vlc_len #if CONFIG_MPEG1VIDEO_ENCODER || CONFIG_MPEG2VIDEO_ENCODER static int find_frame_rate_index(MPEG12EncContext *mpeg12) { - MPVMainEncContext *const s = &mpeg12->mpeg; + MPVMainEncContext *const m = &mpeg12->mpeg; + MPVEncContext *const s = &m->common; int i; AVRational bestq = (AVRational) {0, 0}; AVRational ext; @@ -163,7 +164,8 @@ static int find_frame_rate_index(MPEG12EncContext *mpeg12) static av_cold int encode_init(AVCodecContext *avctx) { MPEG12EncContext *const mpeg12 = avctx->priv_data; - MPVMainEncContext *const s = &mpeg12->mpeg; + MPVMainEncContext *const m = &mpeg12->mpeg; + MPVEncContext *const s = &m->common; int ret; int max_size = avctx->codec_id == 
AV_CODEC_ID_MPEG2VIDEO ? 16383 : 4095; @@ -263,9 +265,10 @@ static void put_header(MPVEncContext *s, int header) } /* put sequence header if needed */ -static void mpeg1_encode_sequence_header(MPVMainEncContext *s) +static void mpeg1_encode_sequence_header(MPVMainEncContext *m) { - MPEG12EncContext *const mpeg12 = (MPEG12EncContext*)s; + MPEG12EncContext *const mpeg12 = (MPEG12EncContext*)m; + MPVEncContext *const s = &m->common; unsigned int vbv_buffer_size, fps, v; int constraint_parameter_flag; AVRational framerate = ff_mpeg12_frame_rate_tab[mpeg12->frame_rate_index]; @@ -451,11 +454,12 @@ void ff_mpeg1_encode_slice_header(MPVEncContext *s) put_bits(&s->pb, 1, 0); } -void ff_mpeg1_encode_picture_header(MPVMainEncContext *s, int picture_number) +void ff_mpeg1_encode_picture_header(MPVMainEncContext *m, int picture_number) { - MPEG12EncContext *const mpeg12 = (MPEG12EncContext*)s; + MPEG12EncContext *const mpeg12 = (MPEG12EncContext*)m; + MPVEncContext *const s = &m->common; AVFrameSideData *side_data; - mpeg1_encode_sequence_header(s); + mpeg1_encode_sequence_header(m); /* MPEG-1 picture header */ put_header(s, PICTURE_START_CODE); @@ -1131,11 +1135,12 @@ static av_cold void mpeg12_encode_init_static(void) fcode_tab[mv + MAX_MV] = f_code; } -av_cold void ff_mpeg1_encode_init(MPVMainEncContext *s) +av_cold void ff_mpeg1_encode_init(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; static AVOnce init_static_once = AV_ONCE_INIT; - ff_mpeg12_common_init(s); + ff_mpeg12_common_init(&m->common); s->me.mv_penalty = mv_penalty; s->fcode_tab = fcode_tab; diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c index 03d53ab56b..c7b0b3ec02 100644 --- a/libavcodec/mpeg4videoenc.c +++ b/libavcodec/mpeg4videoenc.c @@ -216,12 +216,13 @@ static inline int decide_ac_pred(MPVEncContext *s, int16_t block[6][64], /** * modify mb_type & qscale so that encoding is actually possible in MPEG-4 */ -void ff_clean_mpeg4_qscales(MPVMainEncContext *s) +void 
ff_clean_mpeg4_qscales(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int i; int8_t *const qscale_table = s->current_picture.qscale_table; - ff_clean_h263_qscales(s); + ff_clean_h263_qscales(m); if (s->pict_type == AV_PICTURE_TYPE_B) { int odd = 0; @@ -876,8 +877,9 @@ void ff_mpeg4_stuffing(PutBitContext *pbc) } /* must be called before writing the header */ -void ff_set_mpeg4_time(MPVMainEncContext *s) +void ff_set_mpeg4_time(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; if (s->pict_type == AV_PICTURE_TYPE_B) { ff_mpeg4_init_direct_mv(s); } else { @@ -886,8 +888,9 @@ void ff_set_mpeg4_time(MPVMainEncContext *s) } } -static void mpeg4_encode_gop_header(MPVMainEncContext *s) +static void mpeg4_encode_gop_header(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int64_t hours, minutes, seconds; int64_t time; @@ -916,8 +919,9 @@ static void mpeg4_encode_gop_header(MPVMainEncContext *s) ff_mpeg4_stuffing(&s->pb); } -static void mpeg4_encode_visual_object_header(MPVMainEncContext *s) +static void mpeg4_encode_visual_object_header(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int profile_and_level_indication; int vo_ver_id; @@ -960,10 +964,11 @@ static void mpeg4_encode_visual_object_header(MPVMainEncContext *s) ff_mpeg4_stuffing(&s->pb); } -static void mpeg4_encode_vol_header(MPVMainEncContext *s, +static void mpeg4_encode_vol_header(MPVMainEncContext *m, int vo_number, int vol_number) { + MPVEncContext *const s = &m->common; int vo_ver_id, vo_type, aspect_ratio_info; if (s->max_b_frames || s->quarter_sample) { @@ -1061,20 +1066,21 @@ static void mpeg4_encode_vol_header(MPVMainEncContext *s, } /* write MPEG-4 VOP header */ -int ff_mpeg4_encode_picture_header(MPVMainEncContext *s, int picture_number) +int ff_mpeg4_encode_picture_header(MPVMainEncContext *m, int picture_number) { + MPVEncContext *const s = &m->common; uint64_t time_incr; int64_t time_div, time_mod; if (s->pict_type == AV_PICTURE_TYPE_I) { if 
(!(s->avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER)) { if (s->strict_std_compliance < FF_COMPLIANCE_VERY_STRICT) // HACK, the reference sw is buggy - mpeg4_encode_visual_object_header(s); + mpeg4_encode_visual_object_header(m); if (s->strict_std_compliance < FF_COMPLIANCE_VERY_STRICT || picture_number == 0) // HACK, the reference sw is buggy - mpeg4_encode_vol_header(s, 0, 0); + mpeg4_encode_vol_header(m, 0, 0); } if (!(s->workaround_bugs & FF_BUG_MS)) - mpeg4_encode_gop_header(s); + mpeg4_encode_gop_header(m); } s->partitioned_frame = s->data_partitioning && s->pict_type != AV_PICTURE_TYPE_B; @@ -1284,7 +1290,8 @@ static av_cold void mpeg4_encode_init_static(void) static av_cold int encode_init(AVCodecContext *avctx) { static AVOnce init_static_once = AV_ONCE_INIT; - MPVMainEncContext *const s = avctx->priv_data; + MPVMainEncContext *const m = avctx->priv_data; + MPVEncContext *const s = &m->common; int ret; if (avctx->width >= (1<<13) || avctx->height >= (1<<13)) { @@ -1315,8 +1322,8 @@ static av_cold int encode_init(AVCodecContext *avctx) init_put_bits(&s->pb, s->avctx->extradata, 1024); if (!(s->workaround_bugs & FF_BUG_MS)) - mpeg4_encode_visual_object_header(s); - mpeg4_encode_vol_header(s, 0, 0); + mpeg4_encode_visual_object_header(m); + mpeg4_encode_vol_header(m, 0, 0); // ff_mpeg4_stuffing(&s->pb); ? 
flush_put_bits(&s->pb); @@ -1376,13 +1383,13 @@ void ff_mpeg4_encode_video_packet_header(MPVEncContext *s) put_bits(&s->pb, 1, 0); /* no HEC */ } -#define OFFSET(x) offsetof(MPVMainEncContext, x) +#define OFFSET(x) offsetof(MPVMainEncContext, common.x) #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM static const AVOption options[] = { - { "data_partitioning", "Use data partitioning.", OFFSET(data_partitioning), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, - { "alternate_scan", "Enable alternate scantable.", OFFSET(alternate_scan), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, + { "data_partitioning", "Use data partitioning.", FF_MPV_OFFSET(data_partitioning), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, + { "alternate_scan", "Enable alternate scantable.", FF_MPV_OFFSET(alternate_scan), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }, { "mpeg_quant", "Use MPEG quantizers instead of H.263", - OFFSET(mpeg_quant), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, 1, VE }, + FF_MPV_OFFSET(mpeg_quant), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, 1, VE }, FF_MPV_COMMON_BFRAME_OPTS FF_MPV_COMMON_OPTS #if FF_API_MPEGVIDEO_OPTS diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c index 5bf135bccf..5f607d9e11 100644 --- a/libavcodec/mpegvideo_enc.c +++ b/libavcodec/mpegvideo_enc.c @@ -237,8 +237,9 @@ void ff_write_quant_matrix(PutBitContext *pb, uint16_t *matrix) /** * init s->current_picture.qscale_table from s->lambda_table */ -void ff_init_qscale_tab(MPVMainEncContext *s) +void ff_init_qscale_tab(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int8_t * const qscale_table = s->current_picture.qscale_table; int i; @@ -278,11 +279,13 @@ static void mpv_encode_init_static(void) * Set the given MPVMainEncContext to defaults for encoding. * the changed fields will not depend upon the prior state of the MPVMainEncContext. 
*/ -static void mpv_encode_defaults(MPVMainEncContext *s) +static void mpv_encode_defaults(MPVMainEncContext *m) { + MPVMainContext *const com = &m->common; + MPVEncContext *const s = com; static AVOnce init_static_once = AV_ONCE_INIT; - ff_mpv_common_defaults(s); + ff_mpv_common_defaults(com); ff_thread_once(&init_static_once, mpv_encode_init_static); @@ -314,11 +317,12 @@ av_cold int ff_dct_encode_init(MPVEncContext *s) /* init video encoder */ av_cold int ff_mpv_encode_init(AVCodecContext *avctx) { - MPVMainEncContext *const s = avctx->priv_data; + MPVMainEncContext *const m = avctx->priv_data; + MPVMainContext *const s = &m->common; AVCPBProperties *cpb_props; int i, ret; - mpv_encode_defaults(s); + mpv_encode_defaults(m); switch (avctx->pix_fmt) { case AV_PIX_FMT_YUVJ444P: @@ -667,7 +671,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx) case AV_CODEC_ID_AMV: s->out_format = FMT_MJPEG; s->intra_only = 1; /* force intra only for jpeg */ - if ((ret = ff_mjpeg_encode_init(s)) < 0) + if ((ret = ff_mjpeg_encode_init(m)) < 0) return ret; avctx->delay = 0; s->low_delay = 1; @@ -678,7 +682,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx) s->intra_only = 1; /* force intra only for SHQ */ if (!CONFIG_SPEEDHQ_ENCODER) return AVERROR_ENCODER_NOT_FOUND; - if ((ret = ff_speedhq_encode_init(s)) < 0) + if ((ret = ff_speedhq_encode_init(m)) < 0) return ret; avctx->delay = 0; s->low_delay = 1; @@ -851,14 +855,14 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx) ff_set_cmp(&s->mecc, s->mecc.frame_skip_cmp, s->frame_skip_cmp); if (CONFIG_H261_ENCODER && s->out_format == FMT_H261) { - ff_h261_encode_init(s); + ff_h261_encode_init(m); } else if ((CONFIG_MPEG1VIDEO_ENCODER || CONFIG_MPEG2VIDEO_ENCODER) && s->out_format == FMT_MPEG1) { - ff_mpeg1_encode_init(s); + ff_mpeg1_encode_init(m); } else if (CONFIG_H263_ENCODER && s->out_format == FMT_H263) { - ff_h263_encode_init(s); + ff_h263_encode_init(m); if (CONFIG_MSMPEG4_ENCODER && s->msmpeg4_version) - 
ff_msmpeg4_encode_init(s); + ff_msmpeg4_encode_init(m); } /* init q matrix */ @@ -897,7 +901,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx) 31, 0); } - if ((ret = ff_rate_control_init(s)) < 0) + if ((ret = ff_rate_control_init(m)) < 0) return ret; if (s->b_frame_strategy == 2) { @@ -929,12 +933,13 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx) av_cold int ff_mpv_encode_end(AVCodecContext *avctx) { - MPVMainEncContext *const s = avctx->priv_data; + MPVMainEncContext *const m = avctx->priv_data; + MPVMainContext *const s = &m->common; int i; - ff_rate_control_uninit(s); + ff_rate_control_uninit(m); - ff_mpv_common_end(s); + ff_mpv_common_end(&m->common); for (i = 0; i < FF_ARRAY_ELEMS(s->tmp_frames); i++) av_frame_free(&s->tmp_frames[i]); @@ -972,7 +977,7 @@ static int get_sae(uint8_t *src, int ref, int stride) return acc; } -static int get_intra_count(MPVMainEncContext *s, uint8_t *src, +static int get_intra_count(MPVEncContext *s, uint8_t *src, uint8_t *ref, int stride) { int x, y, w, h; @@ -995,7 +1000,7 @@ static int get_intra_count(MPVMainEncContext *s, uint8_t *src, return acc; } -static int alloc_picture(MPVMainEncContext *s, Picture *pic, int shared) +static int alloc_picture(MPVEncContext *s, Picture *pic, int shared) { return ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, shared, 1, s->chroma_x_shift, s->chroma_y_shift, s->out_format, @@ -1003,8 +1008,10 @@ static int alloc_picture(MPVMainEncContext *s, Picture *pic, int shared) &s->linesize, &s->uvlinesize); } -static int load_input_picture(MPVMainEncContext *s, const AVFrame *pic_arg) +static int load_input_picture(MPVMainEncContext *m, const AVFrame *pic_arg) { + MPVMainContext *const com = &m->common; + MPVEncContext *const s = com; Picture *pic = NULL; int64_t pts; int i, display_picture_number = 0, ret; @@ -1154,8 +1161,9 @@ static int load_input_picture(MPVMainEncContext *s, const AVFrame *pic_arg) return 0; } -static int skip_check(MPVMainEncContext *s, Picture *p, Picture 
*ref) +static int skip_check(MPVMainEncContext *m, Picture *p, Picture *ref) { + MPVEncContext *const s = &m->common; int x, y, plane; int score = 0; int64_t score64 = 0; @@ -1216,8 +1224,9 @@ static int encode_frame(AVCodecContext *c, AVFrame *frame, AVPacket *pkt) return size; } -static int estimate_best_b_count(MPVMainEncContext *s) +static int estimate_best_b_count(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; AVPacket *pkt; const int scale = s->brd_scale; int width = s->width >> scale; @@ -1362,8 +1371,9 @@ fail: return best_b_count; } -static int select_input_picture(MPVMainEncContext *s) +static int select_input_picture(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int i, ret; for (i = 1; i < MAX_PICTURE_COUNT; i++) @@ -1375,11 +1385,11 @@ static int select_input_picture(MPVMainEncContext *s) if (s->frame_skip_threshold || s->frame_skip_factor) { if (s->picture_in_gop_number < s->gop_size && s->next_picture_ptr && - skip_check(s, s->input_picture[0], s->next_picture_ptr)) { + skip_check(m, s->input_picture[0], s->next_picture_ptr)) { // FIXME check that the gop check above is +-1 correct av_frame_unref(s->input_picture[0]->f); - ff_vbv_update(s, 0); + ff_vbv_update(m, 0); goto no_output_pic; } @@ -1439,7 +1449,7 @@ static int select_input_picture(MPVMainEncContext *s) s->input_picture[i]->b_frame_score = 0; } } else if (s->b_frame_strategy == 2) { - b_frames = estimate_best_b_count(s); + b_frames = estimate_best_b_count(m); if (b_frames < 0) return b_frames; } @@ -1539,8 +1549,9 @@ no_output_pic: return 0; } -static void frame_end(MPVMainEncContext *s) +static void frame_end(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; if (s->unrestricted_mv && s->current_picture.reference && !s->intra_only) { @@ -1576,8 +1587,9 @@ static void frame_end(MPVMainEncContext *s) s->last_non_b_pict_type = s->pict_type; } -static void update_noise_reduction(MPVMainEncContext *s) +static void 
update_noise_reduction(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int intra, i; for (intra = 0; intra < 2; intra++) { @@ -1597,8 +1609,9 @@ static void update_noise_reduction(MPVMainEncContext *s) } } -static int frame_start(MPVMainEncContext *s) +static int frame_start(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; int ret; /* mark & release old frames */ @@ -1662,7 +1675,7 @@ static int frame_start(MPVMainEncContext *s) if (s->dct_error_sum) { av_assert2(s->noise_reduction && s->encoding); - update_noise_reduction(s); + update_noise_reduction(m); } return 0; @@ -1671,7 +1684,8 @@ static int frame_start(MPVMainEncContext *s) int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt, const AVFrame *pic_arg, int *got_packet) { - MPVMainEncContext *const s = avctx->priv_data; + MPVMainEncContext *const m = avctx->priv_data; + MPVEncContext *const s = &m->common; int i, stuffing_count, ret; int context_count = s->slice_context_count; @@ -1679,10 +1693,10 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt, s->picture_in_gop_number++; - if (load_input_picture(s, pic_arg) < 0) + if (load_input_picture(m, pic_arg) < 0) return -1; - if (select_input_picture(s) < 0) { + if (select_input_picture(m) < 0) { return -1; } @@ -1713,11 +1727,11 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt, s->pict_type = s->new_picture.f->pict_type; //emms_c(); - ret = frame_start(s); + ret = frame_start(m); if (ret < 0) return ret; vbv_retry: - ret = encode_picture(s, s->picture_number); + ret = encode_picture(m, s->picture_number); if (growing_buffer) { av_assert0(s->pb.buf == avctx->internal->byte_buffer); pkt->data = s->pb.buf; @@ -1726,7 +1740,7 @@ vbv_retry: if (ret < 0) return -1; - frame_end(s); + frame_end(m); if ((CONFIG_MJPEG_ENCODER || CONFIG_AMV_ENCODER) && s->out_format == FMT_MJPEG) ff_mjpeg_encode_picture_trailer(&s->pb, s->header_bits); @@ -1774,7 +1788,7 @@ vbv_retry: } if (avctx->flags & 
AV_CODEC_FLAG_PASS1) - ff_write_pass1_stats(s); + ff_write_pass1_stats(m); for (i = 0; i < 4; i++) { s->current_picture_ptr->encoding_error[i] = s->current_picture.encoding_error[i]; @@ -1792,7 +1806,7 @@ vbv_retry: flush_put_bits(&s->pb); s->frame_bits = put_bits_count(&s->pb); - stuffing_count = ff_vbv_update(s, s->frame_bits); + stuffing_count = ff_vbv_update(m, s->frame_bits); s->stuffing_bits = 8*stuffing_count; if (stuffing_count) { if (put_bytes_left(&s->pb, 0) < stuffing_count + 50) { @@ -3431,14 +3445,15 @@ static void merge_context_after_encode(MPVEncContext *dst, MPVEncContext *src) flush_put_bits(&dst->pb); } -static int estimate_qp(MPVMainEncContext *s, int dry_run) +static int estimate_qp(MPVMainEncContext *m, int dry_run) { + MPVEncContext *const s = &m->common; if (s->next_lambda){ s->current_picture_ptr->f->quality = s->current_picture.f->quality = s->next_lambda; if(!dry_run) s->next_lambda= 0; } else if (!s->fixed_qscale) { - int quality = ff_rate_estimate_qscale(s, dry_run); + int quality = ff_rate_estimate_qscale(m, dry_run); s->current_picture_ptr->f->quality = s->current_picture.f->quality = quality; if (s->current_picture.f->quality < 0) @@ -3449,16 +3464,16 @@ static int estimate_qp(MPVMainEncContext *s, int dry_run) switch(s->codec_id){ case AV_CODEC_ID_MPEG4: if (CONFIG_MPEG4_ENCODER) - ff_clean_mpeg4_qscales(s); + ff_clean_mpeg4_qscales(m); break; case AV_CODEC_ID_H263: case AV_CODEC_ID_H263P: case AV_CODEC_ID_FLV1: if (CONFIG_H263_ENCODER) - ff_clean_h263_qscales(s); + ff_clean_h263_qscales(m); break; default: - ff_init_qscale_tab(s); + ff_init_qscale_tab(m); } s->lambda= s->lambda_table[0]; @@ -3470,8 +3485,9 @@ static int estimate_qp(MPVMainEncContext *s, int dry_run) } /* must be called before writing the header */ -static void set_frame_distances(MPVMainEncContext *s) +static void set_frame_distances(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; av_assert1(s->current_picture_ptr->f->pts != AV_NOPTS_VALUE); s->time = 
s->current_picture_ptr->f->pts * s->avctx->time_base.num; @@ -3485,8 +3501,9 @@ static void set_frame_distances(MPVMainEncContext *s) } } -static int encode_picture(MPVMainEncContext *s, int picture_number) +static int encode_picture(MPVMainEncContext *m, int picture_number) { + MPVEncContext *const s = &m->common; int i, ret; int bits; int context_count = s->slice_context_count; @@ -3500,9 +3517,9 @@ static int encode_picture(MPVMainEncContext *s, int picture_number) /* we need to initialize some time vars before we can encode B-frames */ // RAL: Condition added for MPEG1VIDEO if (s->out_format == FMT_MPEG1 || (s->h263_pred && !s->msmpeg4_version)) - set_frame_distances(s); + set_frame_distances(m); if(CONFIG_MPEG4_ENCODER && s->codec_id == AV_CODEC_ID_MPEG4) - ff_set_mpeg4_time(s); + ff_set_mpeg4_time(m); s->me.scene_change_score=0; @@ -3517,9 +3534,9 @@ static int encode_picture(MPVMainEncContext *s, int picture_number) } if (s->avctx->flags & AV_CODEC_FLAG_PASS2) { - if (estimate_qp(s,1) < 0) + if (estimate_qp(m, 1) < 0) return -1; - ff_get_2pass_fcode(s); + ff_get_2pass_fcode(m); } else if (!(s->avctx->flags & AV_CODEC_FLAG_QSCALE)) { if(s->pict_type==AV_PICTURE_TYPE_B) s->lambda= s->last_lambda_for[s->pict_type]; @@ -3637,7 +3654,7 @@ static int encode_picture(MPVMainEncContext *s, int picture_number) } } - if (estimate_qp(s, 0) < 0) + if (estimate_qp(m, 0) < 0) return -1; if (s->qscale < 3 && s->max_qcoeff <= 128 && @@ -3711,41 +3728,41 @@ static int encode_picture(MPVMainEncContext *s, int picture_number) switch(s->out_format) { #if CONFIG_MJPEG_ENCODER || CONFIG_AMV_ENCODER case FMT_MJPEG: - ff_mjpeg_amv_encode_picture_header(s); + ff_mjpeg_amv_encode_picture_header(m); break; #endif case FMT_SPEEDHQ: if (CONFIG_SPEEDHQ_ENCODER) - ff_speedhq_encode_picture_header(s); + ff_speedhq_encode_picture_header(m); break; case FMT_H261: if (CONFIG_H261_ENCODER) - ff_h261_encode_picture_header(s, picture_number); + ff_h261_encode_picture_header(m, picture_number); 
break; case FMT_H263: if (CONFIG_WMV2_ENCODER && s->codec_id == AV_CODEC_ID_WMV2) - ff_wmv2_encode_picture_header(s, picture_number); + ff_wmv2_encode_picture_header(m, picture_number); else if (CONFIG_MSMPEG4_ENCODER && s->msmpeg4_version) - ff_msmpeg4_encode_picture_header(s, picture_number); + ff_msmpeg4_encode_picture_header(m, picture_number); else if (CONFIG_MPEG4_ENCODER && s->h263_pred) { - ret = ff_mpeg4_encode_picture_header(s, picture_number); + ret = ff_mpeg4_encode_picture_header(m, picture_number); if (ret < 0) return ret; } else if (CONFIG_RV10_ENCODER && s->codec_id == AV_CODEC_ID_RV10) { - ret = ff_rv10_encode_picture_header(s, picture_number); + ret = ff_rv10_encode_picture_header(m, picture_number); if (ret < 0) return ret; } else if (CONFIG_RV20_ENCODER && s->codec_id == AV_CODEC_ID_RV20) - ff_rv20_encode_picture_header(s, picture_number); + ff_rv20_encode_picture_header(m, picture_number); else if (CONFIG_FLV_ENCODER && s->codec_id == AV_CODEC_ID_FLV1) - ff_flv_encode_picture_header(s, picture_number); + ff_flv_encode_picture_header(m, picture_number); else if (CONFIG_H263_ENCODER) - ff_h263_encode_picture_header(s, picture_number); + ff_h263_encode_picture_header(m, picture_number); break; case FMT_MPEG1: if (CONFIG_MPEG1VIDEO_ENCODER || CONFIG_MPEG2VIDEO_ENCODER) - ff_mpeg1_encode_picture_header(s, picture_number); + ff_mpeg1_encode_picture_header(m, picture_number); break; default: av_assert0(0); diff --git a/libavcodec/mpegvideoenc.h b/libavcodec/mpegvideoenc.h index 0b63487291..37375132d7 100644 --- a/libavcodec/mpegvideoenc.h +++ b/libavcodec/mpegvideoenc.h @@ -33,7 +33,9 @@ #include "mpegvideo.h" typedef MPVContext MPVEncContext; -typedef MPVContext MPVMainEncContext; +typedef struct MPVMainEncContext { + MPVMainContext common; +} MPVMainEncContext; #define UNI_AC_ENC_INDEX(run,level) ((run)*128 + (level)) @@ -62,7 +64,8 @@ typedef MPVContext MPVMainEncContext; { "chroma", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_CHROMA }, INT_MIN, 
INT_MAX, FF_MPV_OPT_FLAGS, "cmp_func" }, \ { "msad", "Sum of absolute differences, median predicted", 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_MEDIAN_SAD }, INT_MIN, INT_MAX, FF_MPV_OPT_FLAGS, "cmp_func" } -#define FF_MPV_OFFSET(x) offsetof(MPVMainEncContext, x) +#define FF_MPV_MAIN_OFFSET(x) offsetof(MPVMainEncContext, x) +#define FF_MPV_OFFSET(x) FF_MPV_MAIN_OFFSET(common.x) #define FF_MPV_OPT_FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM) #define FF_MPV_COMMON_OPTS \ FF_MPV_OPT_CMP_FUNC, \ diff --git a/libavcodec/msmpeg4enc.c b/libavcodec/msmpeg4enc.c index 1a2e24f703..7c1fa0a1e3 100644 --- a/libavcodec/msmpeg4enc.c +++ b/libavcodec/msmpeg4enc.c @@ -134,11 +134,12 @@ static av_cold void msmpeg4_encode_init_static(void) } } -av_cold void ff_msmpeg4_encode_init(MPVMainEncContext *s) +av_cold void ff_msmpeg4_encode_init(MPVMainEncContext *m) { static AVOnce init_static_once = AV_ONCE_INIT; + MPVEncContext *const s = &m->common; - ff_msmpeg4_common_init(s); + ff_msmpeg4_common_init(&m->common); if (s->msmpeg4_version >= 4) { s->min_qcoeff = -255; s->max_qcoeff = 255; @@ -150,7 +151,7 @@ av_cold void ff_msmpeg4_encode_init(MPVMainEncContext *s) static void find_best_tables(MSMPEG4EncContext *ms) { - MPVEncContext *const s = &ms->s; + MPVEncContext *const s = &ms->s.common; int i; int best = 0, best_size = INT_MAX; int chroma_best = 0, best_chroma_size = INT_MAX; @@ -214,9 +215,10 @@ static void find_best_tables(MSMPEG4EncContext *ms) } /* write MSMPEG4 compatible frame header */ -void ff_msmpeg4_encode_picture_header(MPVMainEncContext *s, int picture_number) +void ff_msmpeg4_encode_picture_header(MPVMainEncContext *m, int picture_number) { - MSMPEG4EncContext *const ms = (MSMPEG4EncContext*)s; + MSMPEG4EncContext *const ms = (MSMPEG4EncContext*)m; + MPVEncContext *const s = &m->common; find_best_tables(ms); diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c index c5088d6790..4c326741df 100644 --- a/libavcodec/ratecontrol.c +++ 
b/libavcodec/ratecontrol.c @@ -35,8 +35,9 @@ #include "mpegvideoenc.h" #include "libavutil/eval.h" -void ff_write_pass1_stats(MPVMainEncContext *s) +void ff_write_pass1_stats(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; snprintf(s->avctx->stats_out, 256, "in:%d out:%d type:%d q:%d itex:%d ptex:%d mv:%d misc:%d " "fcode:%d bcode:%d mc-var:%"PRId64" var:%"PRId64" icount:%d skipcount:%d hbits:%d;\n", @@ -77,8 +78,9 @@ static inline double bits2qp(RateControlEntry *rce, double bits) return rce->qscale * (double)(rce->i_tex_bits + rce->p_tex_bits + 1) / bits; } -static double get_diff_limited_q(MPVMainEncContext *s, RateControlEntry *rce, double q) +static double get_diff_limited_q(MPVMainEncContext *m, RateControlEntry *rce, double q) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; AVCodecContext *a = s->avctx; const int pict_type = rce->new_pict_type; @@ -116,7 +118,7 @@ static double get_diff_limited_q(MPVMainEncContext *s, RateControlEntry *rce, do /** * Get the qmin & qmax for pict_type. */ -static void get_qminmax(int *qmin_ret, int *qmax_ret, MPVMainEncContext *s, int pict_type) +static void get_qminmax(int *qmin_ret, int *qmax_ret, MPVEncContext *s, int pict_type) { int qmin = s->lmin; int qmax = s->lmax; @@ -144,9 +146,10 @@ static void get_qminmax(int *qmin_ret, int *qmax_ret, MPVMainEncContext *s, int *qmax_ret = qmax; } -static double modify_qscale(MPVMainEncContext *s, RateControlEntry *rce, +static double modify_qscale(MPVMainEncContext *m, RateControlEntry *rce, double q, int frame_num) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; const double buffer_size = s->avctx->rc_buffer_size; const double fps = get_fps(s->avctx); @@ -235,9 +238,10 @@ static double modify_qscale(MPVMainEncContext *s, RateControlEntry *rce, /** * Modify the bitrate curve from pass1 for one frame. 
*/ -static double get_qscale(MPVMainEncContext *s, RateControlEntry *rce, +static double get_qscale(MPVMainEncContext *m, RateControlEntry *rce, double rate_factor, int frame_num) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; AVCodecContext *a = s->avctx; const int pict_type = rce->new_pict_type; @@ -308,8 +312,9 @@ static double get_qscale(MPVMainEncContext *s, RateControlEntry *rce, return q; } -static int init_pass2(MPVMainEncContext *s) +static int init_pass2(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; AVCodecContext *a = s->avctx; int i, toobig; @@ -368,7 +373,7 @@ static int init_pass2(MPVMainEncContext *s) for (i = 0; i < rcc->num_entries; i++) { RateControlEntry *rce = &rcc->entry[i]; - qscale[i] = get_qscale(s, &rcc->entry[i], rate_factor, i); + qscale[i] = get_qscale(m, &rcc->entry[i], rate_factor, i); rcc->last_qscale_for[rce->pict_type] = qscale[i]; } av_assert0(filter_size % 2 == 1); @@ -377,13 +382,13 @@ static int init_pass2(MPVMainEncContext *s) for (i = FFMAX(0, rcc->num_entries - 300); i < rcc->num_entries; i++) { RateControlEntry *rce = &rcc->entry[i]; - qscale[i] = get_diff_limited_q(s, rce, qscale[i]); + qscale[i] = get_diff_limited_q(m, rce, qscale[i]); } for (i = rcc->num_entries - 1; i >= 0; i--) { RateControlEntry *rce = &rcc->entry[i]; - qscale[i] = get_diff_limited_q(s, rce, qscale[i]); + qscale[i] = get_diff_limited_q(m, rce, qscale[i]); } /* smooth curve */ @@ -413,10 +418,10 @@ static int init_pass2(MPVMainEncContext *s) RateControlEntry *rce = &rcc->entry[i]; double bits; - rce->new_qscale = modify_qscale(s, rce, blurred_qscale[i], i); + rce->new_qscale = modify_qscale(m, rce, blurred_qscale[i], i); bits = qp2bits(rce, rce->new_qscale) + rce->mv_bits + rce->misc_bits; - bits += 8 * ff_vbv_update(s, bits); + bits += 8 * ff_vbv_update(m, bits); rce->expected_bits = expected_bits; expected_bits += bits; @@ -469,8 +474,9 @@ static int 
init_pass2(MPVMainEncContext *s) return 0; } -av_cold int ff_rate_control_init(MPVMainEncContext *s) +av_cold int ff_rate_control_init(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; int i, res; static const char * const const_names[] = { @@ -603,8 +609,8 @@ av_cold int ff_rate_control_init(MPVMainEncContext *s) p = next; } - if (init_pass2(s) < 0) { - ff_rate_control_uninit(s); + if (init_pass2(m) < 0) { + ff_rate_control_uninit(m); return -1; } } @@ -658,7 +664,7 @@ av_cold int ff_rate_control_init(MPVMainEncContext *s) rcc->mv_bits_sum[rce.pict_type] += rce.mv_bits; rcc->frame_count[rce.pict_type]++; - get_qscale(s, &rce, rcc->pass1_wanted_bits / rcc->pass1_rc_eq_output_sum, i); + get_qscale(m, &rce, rcc->pass1_wanted_bits / rcc->pass1_rc_eq_output_sum, i); // FIXME misbehaves a little for variable fps rcc->pass1_wanted_bits += s->bit_rate / get_fps(s->avctx); @@ -669,8 +675,9 @@ av_cold int ff_rate_control_init(MPVMainEncContext *s) return 0; } -av_cold void ff_rate_control_uninit(MPVMainEncContext *s) +av_cold void ff_rate_control_uninit(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; emms_c(); @@ -678,8 +685,9 @@ av_cold void ff_rate_control_uninit(MPVMainEncContext *s) av_freep(&rcc->entry); } -int ff_vbv_update(MPVMainEncContext *s, int frame_size) +int ff_vbv_update(MPVMainEncContext *m, int frame_size) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; const double fps = get_fps(s->avctx); const int buffer_size = s->avctx->rc_buffer_size; @@ -737,8 +745,9 @@ static void update_predictor(Predictor *p, double q, double var, double size) p->coeff += new_coeff; } -static void adaptive_quantization(MPVMainEncContext *s, double q) +static void adaptive_quantization(MPVMainEncContext *m, double q) { + MPVEncContext *const s = &m->common; int i; const float lumi_masking = s->avctx->lumi_masking / (128.0 * 128.0); const 
float dark_masking = s->avctx->dark_masking / (128.0 * 128.0); @@ -854,8 +863,9 @@ static void adaptive_quantization(MPVMainEncContext *s, double q) } } -void ff_get_2pass_fcode(MPVMainEncContext *s) +void ff_get_2pass_fcode(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; RateControlContext *rcc = &s->rc_context; RateControlEntry *rce = &rcc->entry[s->picture_number]; @@ -865,8 +875,9 @@ void ff_get_2pass_fcode(MPVMainEncContext *s) // FIXME rd or at least approx for dquant -float ff_rate_estimate_qscale(MPVMainEncContext *s, int dry_run) +float ff_rate_estimate_qscale(MPVMainEncContext *m, int dry_run) { + MPVEncContext *const s = &m->common; float q; int qmin, qmax; float br_compensation; @@ -971,12 +982,12 @@ float ff_rate_estimate_qscale(MPVMainEncContext *s, int dry_run) rate_factor = rcc->pass1_wanted_bits / rcc->pass1_rc_eq_output_sum * br_compensation; - q = get_qscale(s, rce, rate_factor, picture_number); + q = get_qscale(m, rce, rate_factor, picture_number); if (q < 0) return -1; av_assert0(q > 0.0); - q = get_diff_limited_q(s, rce, q); + q = get_diff_limited_q(m, rce, q); av_assert0(q > 0.0); // FIXME type dependent blur like in 2-pass @@ -990,7 +1001,7 @@ float ff_rate_estimate_qscale(MPVMainEncContext *s, int dry_run) } av_assert0(q > 0.0); - q = modify_qscale(s, rce, q, picture_number); + q = modify_qscale(m, rce, q, picture_number); rcc->pass1_wanted_bits += s->bit_rate / fps; @@ -1015,7 +1026,7 @@ float ff_rate_estimate_qscale(MPVMainEncContext *s, int dry_run) q = qmax; if (s->adaptive_quant) - adaptive_quantization(s, q); + adaptive_quantization(m, q); else q = (int)(q + 0.5); diff --git a/libavcodec/ratecontrol.h b/libavcodec/ratecontrol.h index 23f41dcca2..6b29a0a94d 100644 --- a/libavcodec/ratecontrol.h +++ b/libavcodec/ratecontrol.h @@ -86,7 +86,6 @@ typedef struct RateControlContext{ AVExpr * rc_eq_eval; }RateControlContext; -#define MPVMainEncContext MPVContext struct MPVMainEncContext; /* rate control */ @@ -96,6 +95,5 @@ 
void ff_write_pass1_stats(struct MPVMainEncContext *m); void ff_rate_control_uninit(struct MPVMainEncContext *m); int ff_vbv_update(struct MPVMainEncContext *m, int frame_size); void ff_get_2pass_fcode(struct MPVMainEncContext *m); -#undef MPVMainEncContext #endif /* AVCODEC_RATECONTROL_H */ diff --git a/libavcodec/rv10enc.c b/libavcodec/rv10enc.c index 6c9305d7fe..ddbea6298c 100644 --- a/libavcodec/rv10enc.c +++ b/libavcodec/rv10enc.c @@ -29,8 +29,9 @@ #include "put_bits.h" #include "rv10enc.h" -int ff_rv10_encode_picture_header(MPVMainEncContext *s, int picture_number) +int ff_rv10_encode_picture_header(MPVMainEncContext *m, int picture_number) { + MPVEncContext *const s = &m->common; int full_frame= 0; align_put_bits(&s->pb); diff --git a/libavcodec/rv20enc.c b/libavcodec/rv20enc.c index a4b4bad277..06f6751549 100644 --- a/libavcodec/rv20enc.c +++ b/libavcodec/rv20enc.c @@ -32,8 +32,9 @@ #include "put_bits.h" #include "rv10enc.h" -void ff_rv20_encode_picture_header(MPVMainEncContext *s, int picture_number) +void ff_rv20_encode_picture_header(MPVMainEncContext *m, int picture_number) { + MPVEncContext *const s = &m->common; put_bits(&s->pb, 2, s->pict_type); //I 0 vs. 1 ? 
put_bits(&s->pb, 1, 0); /* unknown bit */ put_bits(&s->pb, 5, s->qscale); diff --git a/libavcodec/snow.c b/libavcodec/snow.c index 0a500695ce..de9741a888 100644 --- a/libavcodec/snow.c +++ b/libavcodec/snow.c @@ -689,6 +689,7 @@ int ff_snow_frame_start(SnowContext *s){ av_cold void ff_snow_common_end(SnowContext *s) { int plane_index, level, orientation, i; + MPVEncContext *const m = &s->m.common; av_freep(&s->spatial_dwt_buffer); av_freep(&s->temp_dwt_buffer); @@ -696,11 +697,11 @@ av_cold void ff_snow_common_end(SnowContext *s) av_freep(&s->temp_idwt_buffer); av_freep(&s->run_buffer); - s->m.me.temp= NULL; - av_freep(&s->m.me.scratchpad); - av_freep(&s->m.me.map); - av_freep(&s->m.me.score_map); - av_freep(&s->m.sc.obmc_scratchpad); + m->me.temp = NULL; + av_freep(&m->me.scratchpad); + av_freep(&m->me.map); + av_freep(&m->me.score_map); + av_freep(&m->sc.obmc_scratchpad); av_freep(&s->block); av_freep(&s->scratchbuf); diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c index e169ae601d..028b6b91aa 100644 --- a/libavcodec/snowenc.c +++ b/libavcodec/snowenc.c @@ -39,6 +39,7 @@ static av_cold int encode_init(AVCodecContext *avctx) { SnowContext *s = avctx->priv_data; + MPVEncContext *const mpv = &s->m.common; int plane_index, ret; int i; @@ -72,18 +73,18 @@ static av_cold int encode_init(AVCodecContext *avctx) s->version=0; - s->m.avctx = avctx; - s->m.bit_rate= avctx->bit_rate; - s->m.lmin = avctx->mb_lmin; - s->m.lmax = avctx->mb_lmax; - s->m.mb_num = (avctx->width * avctx->height + 255) / 256; // For ratecontrol - - s->m.me.temp = - s->m.me.scratchpad = av_calloc(avctx->width + 64, 2*16*2*sizeof(uint8_t)); - s->m.me.map = av_mallocz(ME_MAP_SIZE*sizeof(uint32_t)); - s->m.me.score_map = av_mallocz(ME_MAP_SIZE*sizeof(uint32_t)); - s->m.sc.obmc_scratchpad= av_mallocz(MB_SIZE*MB_SIZE*12*sizeof(uint32_t)); - if (!s->m.me.scratchpad || !s->m.me.map || !s->m.me.score_map || !s->m.sc.obmc_scratchpad) + mpv->avctx = avctx; + mpv->bit_rate= avctx->bit_rate; + mpv->lmin 
= avctx->mb_lmin; + mpv->lmax = avctx->mb_lmax; + mpv->mb_num = (avctx->width * avctx->height + 255) / 256; // For ratecontrol + + mpv->me.temp = + mpv->me.scratchpad = av_calloc(avctx->width + 64, 2*16*2*sizeof(uint8_t)); + mpv->me.map = av_mallocz(ME_MAP_SIZE*sizeof(uint32_t)); + mpv->me.score_map = av_mallocz(ME_MAP_SIZE*sizeof(uint32_t)); + mpv->sc.obmc_scratchpad= av_mallocz(MB_SIZE*MB_SIZE*12*sizeof(uint32_t)); + if (!mpv->me.scratchpad || !mpv->me.map || !mpv->me.score_map || !mpv->sc.obmc_scratchpad) return AVERROR(ENOMEM); ff_h263_encode_init(&s->m); //mv_penalty @@ -216,6 +217,7 @@ static inline int get_penalty_factor(int lambda, int lambda2, int type){ #define FLAG_QPEL 1 //must be 1 static int encode_q_branch(SnowContext *s, int level, int x, int y){ + MPVEncContext *const mpv = &s->m.common; uint8_t p_buffer[1024]; uint8_t i_buffer[1024]; uint8_t p_state[sizeof(s->block_state)]; @@ -252,7 +254,7 @@ static int encode_q_branch(SnowContext *s, int level, int x, int y){ int16_t last_mv[3][2]; int qpel= !!(s->avctx->flags & AV_CODEC_FLAG_QPEL); //unused const int shift= 1+qpel; - MotionEstContext *c= &s->m.me; + MotionEstContext *c= &mpv->me; int ref_context= av_log2(2*left->ref) + av_log2(2*top->ref); int mx_context= av_log2(2*FFABS(left->mx - top->mx)); int my_context= av_log2(2*FFABS(left->my - top->my)); @@ -281,9 +283,9 @@ static int encode_q_branch(SnowContext *s, int level, int x, int y){ last_mv[2][0]= bottom->mx; last_mv[2][1]= bottom->my; - s->m.mb_stride=2; - s->m.mb_x= - s->m.mb_y= 0; + mpv->mb_stride=2; + mpv->mb_x= + mpv->mb_y= 0; c->skip= 0; av_assert1(c-> stride == stride); @@ -292,7 +294,7 @@ static int encode_q_branch(SnowContext *s, int level, int x, int y){ c->penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_cmp); c->sub_penalty_factor= get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_sub_cmp); c->mb_penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->mb_cmp); - c->current_mv_penalty= 
c->mv_penalty[s->m.f_code=1] + MAX_DMV; + c->current_mv_penalty= c->mv_penalty[mpv->f_code=1] + MAX_DMV; c->xmin = - x*block_w - 16+3; c->ymin = - y*block_w - 16+3; @@ -323,16 +325,16 @@ static int encode_q_branch(SnowContext *s, int level, int x, int y){ for(ref=0; refref_frames; ref++){ init_ref(c, current_data, s->last_picture[ref]->data, NULL, block_w*x, block_w*y, 0); - ref_score= ff_epzs_motion_search(&s->m, &ref_mx, &ref_my, P, 0, /*ref_index*/ 0, last_mv, - (1<<16)>>shift, level-LOG2_MB_SIZE+4, block_w); + ref_score = ff_epzs_motion_search(mpv, &ref_mx, &ref_my, P, 0, /*ref_index*/ 0, last_mv, + (1<<16)>>shift, level-LOG2_MB_SIZE+4, block_w); av_assert2(ref_mx >= c->xmin); av_assert2(ref_mx <= c->xmax); av_assert2(ref_my >= c->ymin); av_assert2(ref_my <= c->ymax); - ref_score= c->sub_motion_search(&s->m, &ref_mx, &ref_my, ref_score, 0, 0, level-LOG2_MB_SIZE+4, block_w); - ref_score= ff_get_mb_score(&s->m, ref_mx, ref_my, 0, 0, level-LOG2_MB_SIZE+4, block_w, 0); + ref_score = c->sub_motion_search(mpv, &ref_mx, &ref_my, ref_score, 0, 0, level-LOG2_MB_SIZE+4, block_w); + ref_score = ff_get_mb_score(mpv, ref_mx, ref_my, 0, 0, level-LOG2_MB_SIZE+4, block_w, 0); ref_score+= 2*av_log2(2*ref)*c->penalty_factor; if(s->ref_mvs[ref]){ s->ref_mvs[ref][index][0]= ref_mx; @@ -408,7 +410,7 @@ static int encode_q_branch(SnowContext *s, int level, int x, int y){ if (vard <= 64 || vard < varc) c->scene_change_score+= ff_sqrt(vard) - ff_sqrt(varc); else - c->scene_change_score+= s->m.qscale; + c->scene_change_score+= mpv->qscale; } if(level!=s->block_max_depth){ @@ -509,7 +511,7 @@ static int get_dc(SnowContext *s, int mb_x, int mb_y, int plane_index){ const int obmc_stride= plane_index ? 
(2*block_size)>>s->chroma_h_shift : 2*block_size; const int ref_stride= s->current_picture->linesize[plane_index]; uint8_t *src= s-> input_picture->data[plane_index]; - IDWTELEM *dst= (IDWTELEM*)s->m.sc.obmc_scratchpad + plane_index*block_size*block_size*4; //FIXME change to unsigned + IDWTELEM *dst= (IDWTELEM*)s->m.common.sc.obmc_scratchpad + plane_index*block_size*block_size*4; //FIXME change to unsigned const int b_stride = s->b_width << s->block_max_depth; const int w= p->width; const int h= p->height; @@ -596,6 +598,7 @@ static inline int get_block_bits(SnowContext *s, int x, int y, int w){ } static int get_block_rd(SnowContext *s, int mb_x, int mb_y, int plane_index, uint8_t (*obmc_edged)[MB_SIZE * 2]){ + MPVEncContext *const mpv = &s->m.common; Plane *p= &s->plane[plane_index]; const int block_size = MB_SIZE >> s->block_max_depth; const int block_w = plane_index ? block_size>>s->chroma_h_shift : block_size; @@ -604,7 +607,7 @@ static int get_block_rd(SnowContext *s, int mb_x, int mb_y, int plane_index, uin const int ref_stride= s->current_picture->linesize[plane_index]; uint8_t *dst= s->current_picture->data[plane_index]; uint8_t *src= s-> input_picture->data[plane_index]; - IDWTELEM *pred= (IDWTELEM*)s->m.sc.obmc_scratchpad + plane_index*block_size*block_size*4; + IDWTELEM *pred= (IDWTELEM*)mpv->sc.obmc_scratchpad + plane_index*block_size*block_size*4; uint8_t *cur = s->scratchbuf; uint8_t *tmp = s->emu_edge_buffer; const int b_stride = s->b_width << s->block_max_depth; @@ -667,19 +670,19 @@ static int get_block_rd(SnowContext *s, int mb_x, int mb_y, int plane_index, uin * to improve the score of the whole frame, thus iterative motion * estimation does not always converge. 
*/ if(s->avctx->me_cmp == FF_CMP_W97) - distortion = ff_w97_32_c(&s->m, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, 32); + distortion = ff_w97_32_c(mpv, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, 32); else if(s->avctx->me_cmp == FF_CMP_W53) - distortion = ff_w53_32_c(&s->m, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, 32); + distortion = ff_w53_32_c(mpv, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, 32); else{ distortion = 0; for(i=0; i<4; i++){ int off = sx+16*(i&1) + (sy+16*(i>>1))*ref_stride; - distortion += s->mecc.me_cmp[0](&s->m, src + off, dst + off, ref_stride, 16); + distortion += s->mecc.me_cmp[0](mpv, src + off, dst + off, ref_stride, 16); } } }else{ av_assert2(block_w==8); - distortion = s->mecc.me_cmp[0](&s->m, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, block_w*2); + distortion = s->mecc.me_cmp[0](mpv, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, block_w*2); } if(plane_index==0){ @@ -697,6 +700,7 @@ static int get_block_rd(SnowContext *s, int mb_x, int mb_y, int plane_index, uin } static int get_4block_rd(SnowContext *s, int mb_x, int mb_y, int plane_index){ + MPVEncContext *const mpv = &s->m.common; int i, y2; Plane *p= &s->plane[plane_index]; const int block_size = MB_SIZE >> s->block_max_depth; @@ -743,7 +747,7 @@ static int get_4block_rd(SnowContext *s, int mb_x, int mb_y, int plane_index){ } av_assert1(block_w== 8 || block_w==16); - distortion += s->mecc.me_cmp[block_w==8](&s->m, src + x + y*ref_stride, dst + x + y*ref_stride, ref_stride, block_h); + distortion += s->mecc.me_cmp[block_w==8](mpv, src + x + y*ref_stride, dst + x + y*ref_stride, ref_stride, block_h); } if(plane_index==0){ @@ -1477,6 +1481,7 @@ static int qscale2qlog(int qscale){ static int ratecontrol_1pass(SnowContext *s, AVFrame *pict) { + MPVEncContext *const mpv = &s->m.common; /* Estimate the frame's complexity as a sum of weighted dwt coefficients. 
* FIXME we know exact mv bits at this point, * but ratecontrol isn't set up to include them. */ @@ -1511,11 +1516,11 @@ static int ratecontrol_1pass(SnowContext *s, AVFrame *pict) coef_sum = (uint64_t)coef_sum * coef_sum >> 16; if(pict->pict_type == AV_PICTURE_TYPE_I){ - s->m.current_picture.mb_var_sum= coef_sum; - s->m.current_picture.mc_mb_var_sum= 0; + mpv->current_picture.mb_var_sum= coef_sum; + mpv->current_picture.mc_mb_var_sum= 0; }else{ - s->m.current_picture.mc_mb_var_sum= coef_sum; - s->m.current_picture.mb_var_sum= 0; + mpv->current_picture.mc_mb_var_sum= coef_sum; + mpv->current_picture.mb_var_sum= 0; } pict->quality= ff_rate_estimate_qscale(&s->m, 1); @@ -1557,6 +1562,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, const AVFrame *pict, int *got_packet) { SnowContext *s = avctx->priv_data; + MPVEncContext *const mpv = &s->m.common; RangeCoder * const c= &s->c; AVFrame *pic; const int width= s->avctx->width; @@ -1589,9 +1595,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, pic->pict_type = pict->pict_type; pic->quality = pict->quality; - s->m.picture_number= avctx->frame_number; + mpv->picture_number= avctx->frame_number; if(avctx->flags&AV_CODEC_FLAG_PASS2){ - s->m.pict_type = pic->pict_type = s->m.rc_context.entry[avctx->frame_number].new_pict_type; + mpv->pict_type = pic->pict_type = mpv->rc_context.entry[avctx->frame_number].new_pict_type; s->keyframe = pic->pict_type == AV_PICTURE_TYPE_I; if(!(avctx->flags&AV_CODEC_FLAG_QSCALE)) { pic->quality = ff_rate_estimate_qscale(&s->m, 0); @@ -1600,7 +1606,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, } }else{ s->keyframe= avctx->gop_size==0 || avctx->frame_number % avctx->gop_size == 0; - s->m.pict_type = pic->pict_type = s->keyframe ? AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P; + mpv->pict_type = pic->pict_type = s->keyframe ? 
AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P; } if(s->pass1_rc && avctx->frame_number == 0) @@ -1634,9 +1640,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, ff_snow_frame_start(s); - s->m.current_picture_ptr= &s->m.current_picture; - s->m.current_picture.f = s->current_picture; - s->m.current_picture.f->pts = pict->pts; + mpv->current_picture_ptr= &mpv->current_picture; + mpv->current_picture.f = s->current_picture; + mpv->current_picture.f->pts = pict->pts; if(pic->pict_type == AV_PICTURE_TYPE_P){ int block_width = (width +15)>>4; int block_height= (height+15)>>4; @@ -1645,37 +1651,37 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, av_assert0(s->current_picture->data[0]); av_assert0(s->last_picture[0]->data[0]); - s->m.avctx= s->avctx; - s->m. last_picture.f = s->last_picture[0]; - s->m. new_picture.f = s->input_picture; - s->m. last_picture_ptr= &s->m. last_picture; - s->m.linesize = stride; - s->m.uvlinesize= s->current_picture->linesize[1]; - s->m.width = width; - s->m.height= height; - s->m.mb_width = block_width; - s->m.mb_height= block_height; - s->m.mb_stride= s->m.mb_width+1; - s->m.b8_stride= 2*s->m.mb_width+1; - s->m.f_code=1; - s->m.pict_type = pic->pict_type; - s->m.motion_est= s->motion_est; - s->m.me.scene_change_score=0; - s->m.me.dia_size = avctx->dia_size; - s->m.quarter_sample= (s->avctx->flags & AV_CODEC_FLAG_QPEL)!=0; - s->m.out_format= FMT_H263; - s->m.unrestricted_mv= 1; - - s->m.lambda = s->lambda; - s->m.qscale= (s->m.lambda*139 + FF_LAMBDA_SCALE*64) >> (FF_LAMBDA_SHIFT + 7); - s->lambda2= s->m.lambda2= (s->m.lambda*s->m.lambda + FF_LAMBDA_SCALE/2) >> FF_LAMBDA_SHIFT; - - s->m.mecc= s->mecc; //move - s->m.qdsp= s->qdsp; //move - s->m.hdsp = s->hdsp; - ff_init_me(&s->m); - s->hdsp = s->m.hdsp; - s->mecc= s->m.mecc; + mpv->avctx= s->avctx; + mpv-> last_picture.f = s->last_picture[0]; + mpv-> new_picture.f = s->input_picture; + mpv-> last_picture_ptr= &mpv-> last_picture; + mpv->linesize = stride; + 
mpv->uvlinesize= s->current_picture->linesize[1]; + mpv->width = width; + mpv->height= height; + mpv->mb_width = block_width; + mpv->mb_height= block_height; + mpv->mb_stride= mpv->mb_width+1; + mpv->b8_stride= 2*mpv->mb_width+1; + mpv->f_code=1; + mpv->pict_type = pic->pict_type; + mpv->motion_est= s->motion_est; + mpv->me.scene_change_score=0; + mpv->me.dia_size = avctx->dia_size; + mpv->quarter_sample= (s->avctx->flags & AV_CODEC_FLAG_QPEL)!=0; + mpv->out_format= FMT_H263; + mpv->unrestricted_mv= 1; + + mpv->lambda = s->lambda; + mpv->qscale= (mpv->lambda*139 + FF_LAMBDA_SCALE*64) >> (FF_LAMBDA_SHIFT + 7); + s->lambda2= mpv->lambda2= (mpv->lambda*mpv->lambda + FF_LAMBDA_SCALE/2) >> FF_LAMBDA_SHIFT; + + mpv->mecc= s->mecc; //move + mpv->qdsp= s->qdsp; //move + mpv->hdsp = s->hdsp; + ff_init_me(mpv); + s->hdsp = mpv->hdsp; + s->mecc= mpv->mecc; } if(s->pass1_rc){ @@ -1696,7 +1702,7 @@ redo_frame: return AVERROR(EINVAL); } - s->m.pict_type = pic->pict_type; + mpv->pict_type = pic->pict_type; s->qbias = pic->pict_type == AV_PICTURE_TYPE_P ? 
2 : 0; ff_snow_common_init_after_header(avctx); @@ -1708,9 +1714,9 @@ redo_frame: } encode_header(s); - s->m.misc_bits = 8*(s->c.bytestream - s->c.bytestream_start); + mpv->misc_bits = 8*(s->c.bytestream - s->c.bytestream_start); encode_blocks(s, 1); - s->m.mv_bits = 8*(s->c.bytestream - s->c.bytestream_start) - s->m.misc_bits; + mpv->mv_bits = 8*(s->c.bytestream - s->c.bytestream_start) - mpv->misc_bits; for(plane_index=0; plane_index < s->nb_planes; plane_index++){ Plane *p= &s->plane[plane_index]; @@ -1732,7 +1738,7 @@ redo_frame: if( plane_index==0 && pic->pict_type == AV_PICTURE_TYPE_P && !(avctx->flags&AV_CODEC_FLAG_PASS2) - && s->m.me.scene_change_score > s->scenechange_threshold){ + && mpv->me.scene_change_score > s->scenechange_threshold){ ff_init_range_encoder(c, pkt->data, pkt->size); ff_build_rac_states(c, (1LL<<32)/20, 256-8); pic->pict_type= AV_PICTURE_TYPE_I; @@ -1841,18 +1847,18 @@ redo_frame: s->current_picture->coded_picture_number = avctx->frame_number; s->current_picture->pict_type = pic->pict_type; s->current_picture->quality = pic->quality; - s->m.frame_bits = 8*(s->c.bytestream - s->c.bytestream_start); - s->m.p_tex_bits = s->m.frame_bits - s->m.misc_bits - s->m.mv_bits; - s->m.current_picture.f->display_picture_number = - s->m.current_picture.f->coded_picture_number = avctx->frame_number; - s->m.current_picture.f->quality = pic->quality; - s->m.total_bits += 8*(s->c.bytestream - s->c.bytestream_start); + mpv->frame_bits = 8*(s->c.bytestream - s->c.bytestream_start); + mpv->p_tex_bits = mpv->frame_bits - mpv->misc_bits - mpv->mv_bits; + mpv->current_picture.f->display_picture_number = + mpv->current_picture.f->coded_picture_number = avctx->frame_number; + mpv->current_picture.f->quality = pic->quality; + mpv->total_bits += 8*(s->c.bytestream - s->c.bytestream_start); if(s->pass1_rc) if (ff_rate_estimate_qscale(&s->m, 0) < 0) return -1; if(avctx->flags&AV_CODEC_FLAG_PASS1) ff_write_pass1_stats(&s->m); - s->m.last_pict_type = s->m.pict_type; + 
mpv->last_pict_type = mpv->pict_type; emms_c(); @@ -1901,7 +1907,7 @@ static const AVOption options[] = { "defined in the section 'Expression Evaluation', the following functions are available: " "bits2qp(bits), qp2bits(qp). Also the following constants are available: iTex pTex tex mv " "fCode iCount mcVar var isI isP isB avgQP qComp avgIITex avgPITex avgPPTex avgBPTex avgTex.", - OFFSET(m.rc_eq), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, VE }, + OFFSET(m.common.rc_eq), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, VE }, { NULL }, }; diff --git a/libavcodec/speedhqenc.c b/libavcodec/speedhqenc.c index be80065a32..6f89927d68 100644 --- a/libavcodec/speedhqenc.c +++ b/libavcodec/speedhqenc.c @@ -89,8 +89,9 @@ static av_cold void speedhq_init_static_data(void) ff_mpeg1_init_uni_ac_vlc(&ff_rl_speedhq, uni_speedhq_ac_vlc_len); } -av_cold int ff_speedhq_encode_init(MPVMainEncContext *s) +av_cold int ff_speedhq_encode_init(MPVMainEncContext *m) { + MPVEncContext *const s = &m->common; static AVOnce init_static_once = AV_ONCE_INIT; if (s->width > 65500 || s->height > 65500) { @@ -125,9 +126,10 @@ av_cold int ff_speedhq_encode_init(MPVMainEncContext *s) return 0; } -void ff_speedhq_encode_picture_header(MPVMainEncContext *s) +void ff_speedhq_encode_picture_header(MPVMainEncContext *m) { - SpeedHQEncContext *ctx = (SpeedHQEncContext*)s; + SpeedHQEncContext *ctx = (SpeedHQEncContext*)m; + MPVEncContext *const s = &m->common; put_bits_le(&s->pb, 8, 100 - s->qscale * 2); /* FIXME why doubled */ put_bits_le(&s->pb, 24, 4); /* no second field */ diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c index 7af82bb3ac..5e26c1073a 100644 --- a/libavcodec/svq1enc.c +++ b/libavcodec/svq1enc.c @@ -253,6 +253,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane, unsigned char *decoded_plane, int width, int height, int src_stride, int stride) { + MPVEncContext *const mpv = &s->m.common; int x, y; int i; int block_width, block_height; @@ -271,64 +272,64 @@ static int 
svq1_encode_plane(SVQ1EncContext *s, int plane, block_height = (height + 15) / 16; if (s->pict_type == AV_PICTURE_TYPE_P) { - s->m.avctx = s->avctx; - s->m.current_picture_ptr = &s->m.current_picture; - s->m.last_picture_ptr = &s->m.last_picture; - s->m.last_picture.f->data[0] = ref_plane; - s->m.linesize = - s->m.last_picture.f->linesize[0] = - s->m.new_picture.f->linesize[0] = - s->m.current_picture.f->linesize[0] = stride; - s->m.width = width; - s->m.height = height; - s->m.mb_width = block_width; - s->m.mb_height = block_height; - s->m.mb_stride = s->m.mb_width + 1; - s->m.b8_stride = 2 * s->m.mb_width + 1; - s->m.f_code = 1; - s->m.pict_type = s->pict_type; - s->m.motion_est = s->motion_est; - s->m.me.scene_change_score = 0; - // s->m.out_format = FMT_H263; - // s->m.unrestricted_mv = 1; - s->m.lambda = s->quality; - s->m.qscale = s->m.lambda * 139 + + mpv->avctx = s->avctx; + mpv->current_picture_ptr = &mpv->current_picture; + mpv->last_picture_ptr = &mpv->last_picture; + mpv->last_picture.f->data[0] = ref_plane; + mpv->linesize = + mpv->last_picture.f->linesize[0] = + mpv->new_picture.f->linesize[0] = + mpv->current_picture.f->linesize[0] = stride; + mpv->width = width; + mpv->height = height; + mpv->mb_width = block_width; + mpv->mb_height = block_height; + mpv->mb_stride = mpv->mb_width + 1; + mpv->b8_stride = 2 * mpv->mb_width + 1; + mpv->f_code = 1; + mpv->pict_type = s->pict_type; + mpv->motion_est = s->motion_est; + mpv->me.scene_change_score = 0; + // mpv->out_format = FMT_H263; + // mpv->unrestricted_mv = 1; + mpv->lambda = s->quality; + mpv->qscale = mpv->lambda * 139 + FF_LAMBDA_SCALE * 64 >> FF_LAMBDA_SHIFT + 7; - s->m.lambda2 = s->m.lambda * s->m.lambda + + mpv->lambda2 = mpv->lambda * mpv->lambda + FF_LAMBDA_SCALE / 2 >> FF_LAMBDA_SHIFT; if (!s->motion_val8[plane]) { - s->motion_val8[plane] = av_mallocz((s->m.b8_stride * + s->motion_val8[plane] = av_mallocz((mpv->b8_stride * block_height * 2 + 2) * 2 * sizeof(int16_t)); - s->motion_val16[plane] 
-            s->motion_val16[plane] = av_mallocz((s->m.mb_stride *
+            s->motion_val16[plane] = av_mallocz((mpv->mb_stride *
                                                  (block_height + 2) + 1) *
                                                 2 * sizeof(int16_t));
             if (!s->motion_val8[plane] || !s->motion_val16[plane])
                 return AVERROR(ENOMEM);
         }
 
-        s->m.mb_type = s->mb_type;
+        mpv->mb_type = s->mb_type;
 
         // dummies, to avoid segfaults
-        s->m.current_picture.mb_mean   = (uint8_t *)s->dummy;
-        s->m.current_picture.mb_var    = (uint16_t *)s->dummy;
-        s->m.current_picture.mc_mb_var = (uint16_t *)s->dummy;
-        s->m.current_picture.mb_type   = s->dummy;
-
-        s->m.current_picture.motion_val[0] = s->motion_val8[plane] + 2;
-        s->m.p_mv_table                    = s->motion_val16[plane] +
-                                             s->m.mb_stride + 1;
-        s->m.mecc                          = s->mecc; // move
-        ff_init_me(&s->m);
-
-        s->m.me.dia_size      = s->avctx->dia_size;
-        s->m.first_slice_line = 1;
+        mpv->current_picture.mb_mean   = (uint8_t *)s->dummy;
+        mpv->current_picture.mb_var    = (uint16_t *)s->dummy;
+        mpv->current_picture.mc_mb_var = (uint16_t *)s->dummy;
+        mpv->current_picture.mb_type   = s->dummy;
+
+        mpv->current_picture.motion_val[0] = s->motion_val8[plane] + 2;
+        mpv->p_mv_table                    = s->motion_val16[plane] +
+                                             mpv->mb_stride + 1;
+        mpv->mecc                          = s->mecc; // move
+        ff_init_me(mpv);
+
+        mpv->me.dia_size      = s->avctx->dia_size;
+        mpv->first_slice_line = 1;
         for (y = 0; y < block_height; y++) {
-            s->m.new_picture.f->data[0] = src - y * 16 * stride; // ugly
-            s->m.mb_y                   = y;
+            mpv->new_picture.f->data[0] = src - y * 16 * stride; // ugly
+            mpv->mb_y                   = y;
 
             for (i = 0; i < 16 && i + 16 * y < height; i++) {
                 memcpy(&src[i * stride], &src_plane[(i + 16 * y) * src_stride],
@@ -341,20 +342,20 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                        16 * block_width);
 
             for (x = 0; x < block_width; x++) {
-                s->m.mb_x = x;
-                init_block_index(&s->m);
+                mpv->mb_x = x;
+                init_block_index(mpv);
 
-                ff_estimate_p_frame_motion(&s->m, x, y);
+                ff_estimate_p_frame_motion(mpv, x, y);
             }
-            s->m.first_slice_line = 0;
+            mpv->first_slice_line = 0;
         }
 
-        ff_fix_long_p_mvs(&s->m, CANDIDATE_MB_TYPE_INTRA);
-        ff_fix_long_mvs(&s->m, NULL, 0, s->m.p_mv_table, s->m.f_code,
+        ff_fix_long_p_mvs(mpv, CANDIDATE_MB_TYPE_INTRA);
+        ff_fix_long_mvs(mpv, NULL, 0, mpv->p_mv_table, mpv->f_code,
                         CANDIDATE_MB_TYPE_INTER, 0);
     }
 
-    s->m.first_slice_line = 1;
+    mpv->first_slice_line = 1;
     for (y = 0; y < block_height; y++) {
         for (i = 0; i < 16 && i + 16 * y < height; i++) {
             memcpy(&src[i * stride], &src_plane[(i + 16 * y) * src_stride],
@@ -365,7 +366,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
         for (; i < 16 && i + 16 * y < 16 * block_height; i++)
             memcpy(&src[i * stride], &src[(i - 1) * stride],
                    16 * block_width);
-        s->m.mb_y = y;
+        mpv->mb_y = y;
         for (x = 0; x < block_width; x++) {
             uint8_t reorder_buffer[2][6][7 * 32];
             int count[2][6];
@@ -380,11 +381,11 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                 return -1;
             }
 
-            s->m.mb_x = x;
-            init_block_index(&s->m);
+            mpv->mb_x = x;
+            init_block_index(mpv);
 
             if (s->pict_type == AV_PICTURE_TYPE_I ||
-                (s->m.mb_type[x + y * s->m.mb_stride] &
+                (mpv->mb_type[x + y * mpv->mb_stride] &
                  CANDIDATE_MB_TYPE_INTRA)) {
                 for (i = 0; i < 6; i++)
                     init_put_bits(&s->reorder_pb[i], reorder_buffer[0][i],
@@ -410,8 +411,8 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                 int mx, my, pred_x, pred_y, dxy;
                 int16_t *motion_ptr;
 
-                motion_ptr = ff_h263_pred_motion(&s->m, 0, 0, &pred_x, &pred_y);
-                if (s->m.mb_type[x + y * s->m.mb_stride] &
+                motion_ptr = ff_h263_pred_motion(mpv, 0, 0, &pred_x, &pred_y);
+                if (mpv->mb_type[x + y * mpv->mb_stride] &
                     CANDIDATE_MB_TYPE_INTER) {
                     for (i = 0; i < 6; i++)
                         init_put_bits(&s->reorder_pb[i], reorder_buffer[1][i],
@@ -419,16 +420,16 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
 
                     put_bits(&s->reorder_pb[5], vlc[1], vlc[0]);
 
-                    s->m.pb = s->reorder_pb[5];
+                    mpv->pb = s->reorder_pb[5];
                     mx      = motion_ptr[0];
                     my      = motion_ptr[1];
                     av_assert1(mx >= -32 && mx <= 31);
                     av_assert1(my >= -32 && my <= 31);
                     av_assert1(pred_x >= -32 && pred_x <= 31);
                     av_assert1(pred_y >= -32 && pred_y <= 31);
-                    ff_h263_encode_motion(&s->m.pb, mx - pred_x, 1);
-                    ff_h263_encode_motion(&s->m.pb, my - pred_y, 1);
-                    s->reorder_pb[5] = s->m.pb;
+                    ff_h263_encode_motion(&mpv->pb, mx - pred_x, 1);
+                    ff_h263_encode_motion(&mpv->pb, my - pred_y, 1);
+                    s->reorder_pb[5] = mpv->pb;
                     score[1] += lambda * put_bits_count(&s->reorder_pb[5]);
 
                     dxy = (mx & 1) + 2 * (my & 1);
@@ -463,10 +464,10 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                     motion_ptr[1]                      =
                     motion_ptr[2]                      =
                     motion_ptr[3]                      =
-                    motion_ptr[0 + 2 * s->m.b8_stride] =
-                    motion_ptr[1 + 2 * s->m.b8_stride] =
-                    motion_ptr[2 + 2 * s->m.b8_stride] =
-                    motion_ptr[3 + 2 * s->m.b8_stride] = 0;
+                    motion_ptr[0 + 2 * mpv->b8_stride] =
+                    motion_ptr[1 + 2 * mpv->b8_stride] =
+                    motion_ptr[2 + 2 * mpv->b8_stride] =
+                    motion_ptr[3 + 2 * mpv->b8_stride] = 0;
                 }
             }
 
@@ -479,7 +480,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
             if (best == 0)
                 s->hdsp.put_pixels_tab[0][0](decoded, temp, stride, 16);
         }
-        s->m.first_slice_line = 0;
+        mpv->first_slice_line = 0;
     }
     return 0;
 }
@@ -487,6 +488,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
 static av_cold int svq1_encode_end(AVCodecContext *avctx)
 {
     SVQ1EncContext *const s = avctx->priv_data;
+    MPVEncContext *const mpv = &s->m.common;
     int i;
 
     if (avctx->frame_number)
@@ -494,12 +496,12 @@ static av_cold int svq1_encode_end(AVCodecContext *avctx)
                s->rd_total / (double)(avctx->width * avctx->height *
                                       avctx->frame_number));
 
-    s->m.mb_type = NULL;
-    ff_mpv_common_end(&s->m);
+    mpv->mb_type = NULL;
+    ff_mpv_common_end(&s->m.common);
 
-    av_freep(&s->m.me.scratchpad);
-    av_freep(&s->m.me.map);
-    av_freep(&s->m.me.score_map);
+    av_freep(&mpv->me.scratchpad);
+    av_freep(&mpv->me.map);
+    av_freep(&mpv->me.score_map);
     av_freep(&s->mb_type);
     av_freep(&s->dummy);
     av_freep(&s->scratchbuf);
@@ -518,6 +520,7 @@ static av_cold int svq1_encode_end(AVCodecContext *avctx)
 static av_cold int svq1_encode_init(AVCodecContext *avctx)
 {
     SVQ1EncContext *const s = avctx->priv_data;
+    MPVEncContext *const mpv = &s->m.common;
     int ret;
 
     if (avctx->width >= 4096 || avctx->height >= 4096) {
@@ -527,7 +530,7 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
 
     ff_hpeldsp_init(&s->hdsp, avctx->flags);
     ff_me_cmp_init(&s->mecc, avctx);
-    ff_mpegvideoencdsp_init(&s->m.mpvencdsp, avctx);
+    ff_mpegvideoencdsp_init(&mpv->mpvencdsp, avctx);
 
     s->current_picture = av_frame_alloc();
     s->last_picture    = av_frame_alloc();
@@ -545,26 +548,26 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
     s->c_block_height = (s->frame_height / 4 + 15) / 16;
 
     s->avctx   = avctx;
-    s->m.avctx = avctx;
+    mpv->avctx = avctx;
 
-    if ((ret = ff_mpv_common_init(&s->m)) < 0) {
+    if ((ret = ff_mpv_common_init(&s->m.common)) < 0) {
         return ret;
     }
 
-    s->m.picture_structure = PICT_FRAME;
-    s->m.me.temp           =
-    s->m.me.scratchpad     = av_mallocz((avctx->width + 64) *
+    mpv->picture_structure = PICT_FRAME;
+    mpv->me.temp           =
+    mpv->me.scratchpad     = av_mallocz((avctx->width + 64) *
                                         2 * 16 * 2 * sizeof(uint8_t));
-    s->m.me.map            = av_mallocz(ME_MAP_SIZE * sizeof(uint32_t));
-    s->m.me.score_map      = av_mallocz(ME_MAP_SIZE * sizeof(uint32_t));
+    mpv->me.map            = av_mallocz(ME_MAP_SIZE * sizeof(uint32_t));
+    mpv->me.score_map      = av_mallocz(ME_MAP_SIZE * sizeof(uint32_t));
     s->mb_type             = av_mallocz((s->y_block_width + 1) *
                                         s->y_block_height * sizeof(int16_t));
     s->dummy               = av_mallocz((s->y_block_width + 1) *
                                         s->y_block_height * sizeof(int32_t));
     s->ssd_int8_vs_int16   = ssd_int8_vs_int16_c;
 
-    if (!s->m.me.temp || !s->m.me.scratchpad || !s->m.me.map ||
-        !s->m.me.score_map || !s->mb_type || !s->dummy) {
+    if (!mpv->me.temp || !mpv->me.scratchpad || !mpv->me.map ||
+        !mpv->me.score_map || !s->mb_type || !s->dummy) {
         return AVERROR(ENOMEM);
     }
 
diff --git a/libavcodec/wmv2enc.c b/libavcodec/wmv2enc.c
index cd35fd19b1..11d566fe5a 100644
--- a/libavcodec/wmv2enc.c
+++ b/libavcodec/wmv2enc.c
@@ -43,7 +43,7 @@ typedef struct WMV2EncContext {
 
 static int encode_ext_header(WMV2EncContext *w)
 {
-    MPVMainEncContext *const s = &w->msmpeg4.s;
+    MPVEncContext *const s = &w->msmpeg4.s.common;
     PutBitContext pb;
     int code;
 
@@ -70,13 +70,14 @@ static int encode_ext_header(WMV2EncContext *w)
 static av_cold int wmv2_encode_init(AVCodecContext *avctx)
 {
     WMV2EncContext *const w = avctx->priv_data;
-    MPVMainEncContext *const s = &w->msmpeg4.s;
+    MPVMainEncContext *const m = &w->msmpeg4.s;
+    MPVEncContext *const s = &m->common;
 
     s->private_ctx = &w->common;
     if (ff_mpv_encode_init(avctx) < 0)
         return -1;
 
-    ff_wmv2_common_init(s);
+    ff_wmv2_common_init(&m->common);
 
     avctx->extradata_size = 4;
     avctx->extradata      = av_mallocz(avctx->extradata_size +
                                        AV_INPUT_BUFFER_PADDING_SIZE);
@@ -88,9 +89,10 @@ static av_cold int wmv2_encode_init(AVCodecContext *avctx)
     return 0;
 }
 
-int ff_wmv2_encode_picture_header(MPVMainEncContext *s, int picture_number)
+int ff_wmv2_encode_picture_header(MPVMainEncContext *m, int picture_number)
 {
-    WMV2EncContext *const w = (WMV2EncContext *) s;
+    WMV2EncContext *const w = (WMV2EncContext *) m;
+    MPVEncContext *const s = &m->common;
 
     put_bits(&s->pb, 1, s->pict_type - 1);
     if (s->pict_type == AV_PICTURE_TYPE_I)