From patchwork Fri Mar 25 18:52:49 2022
X-Patchwork-Submitter: Ben Avison
X-Patchwork-Id: 34985
From: Ben Avison <bavison@riscosopen.org>
To: ffmpeg-devel@ffmpeg.org
Date: Fri, 25 Mar 2022 18:52:49 +0000
Message-Id: <20220325185257.513933-3-bavison@riscosopen.org>
In-Reply-To: <20220325185257.513933-1-bavison@riscosopen.org>
References: <20220317185819.466470-1-bavison@riscosopen.org>
 <20220325185257.513933-1-bavison@riscosopen.org>
Subject: [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests
List-Id: FFmpeg development discussions and patches
Cc: Ben Avison

This test deliberately doesn't exercise the full range of inputs described in
the committee draft VC-1 standard. It says:

  input coefficients in frequency domain, D, satisfy -2048 <= D < 2047
  intermediate coefficients, E, satisfy -4096 <= E < 4095
  fully inverse-transformed coefficients, R, satisfy -512 <= R < 511

For one thing, the inequalities look odd. Did they mean them to go the other
way round? That would make more sense, because the equations generally both
add and subtract coefficients multiplied by constants, including powers of 2.
Requiring the most-negative values to be valid extends the number of bits
needed to represent the intermediate values, just for the sake of that one
case!

For another thing, the extreme values don't appear to occur in real streams -
both in my experience and as supported by the following comment in the
AArch32 decoder:

  tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
  This is done because sometimes files have input that causes tN + tM to
  overflow. To avoid this overflow, we compute tNhalf, then compute
  tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
  (tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is one
  instruction.

My AArch64 decoder goes further than this. It calculates tNhalf and tM, then
does an SRA (essentially a fused halve-and-add) to compute (tN + tM) >> 1
without ever having to hold tNhalf + tM in a 16-bit element. It only
encounters difficulties if either tNhalf or tM overflows in isolation.
I haven't had sight of the final standard, so it's possible that these issues
were dealt with during finalisation, which could explain the lack of extreme
inputs in real streams. Or a preponderance of decoders that only support
16-bit intermediate values in their inverse transforms may have caused
encoders to steer clear of such cases.

I have effectively followed this approach in the test, limiting the scale of
the coefficients sufficiently that both the existing AArch32 decoder and my
new AArch64 decoder pass.

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 tests/checkasm/vc1dsp.c | 258 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 258 insertions(+)

diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
index db916d08f9..0823ccad31 100644
--- a/tests/checkasm/vc1dsp.c
+++ b/tests/checkasm/vc1dsp.c
@@ -29,6 +29,200 @@
 #include "libavutil/intreadwrite.h"
 #include "libavutil/mem_internal.h"
 
+typedef struct matrix {
+    size_t width;
+    size_t height;
+    float d[];
+} matrix;
+
+static const matrix T8 = { 8, 8, {
+    12,  12,  12,  12,  12,  12,  12,  12,
+    16,  15,   9,   4,  -4,  -9, -15, -16,
+    16,   6,  -6, -16, -16,  -6,   6,  16,
+    15,  -4, -16,  -9,   9,  16,   4, -15,
+    12, -12, -12,  12,  12, -12, -12,  12,
+     9, -16,   4,  15, -15,  -4,  16,  -9,
+     6, -16,  16,  -6,  -6,  16, -16,   6,
+     4,  -9,  15, -16,  16, -15,   9,  -4
+} };
+
+static const matrix T4 = { 4, 4, {
+    17,  17,  17,  17,
+    22,  10, -10, -22,
+    17, -17, -17,  17,
+    10, -22,  22, -10
+} };
+
+static const matrix T8t = { 8, 8, {
+    12,  16,  16,  15,  12,   9,   6,   4,
+    12,  15,   6,  -4, -12, -16, -16,  -9,
+    12,   9,  -6, -16, -12,   4,  16,  15,
+    12,   4, -16,  -9,  12,  15,  -6, -16,
+    12,  -4, -16,   9,  12, -15,  -6,  16,
+    12,  -9,  -6,  16, -12,  -4,  16, -15,
+    12, -15,   6,   4, -12,  16, -16,   9,
+    12, -16,  16, -15,  12,  -9,   6,  -4
+} };
+
+static const matrix T4t = { 4, 4, {
+    17,  22,  17,  10,
+    17,  10, -17, -22,
+    17, -10, -17,  22,
+    17, -22,  17, -10
+} };
+
+static matrix *new_matrix(size_t width, size_t height)
+{
+    matrix *out = av_mallocz(sizeof (matrix) + height * width * sizeof (float));
+    if (out == NULL) {
+        fprintf(stderr, "Memory allocation failure\n");
+        exit(EXIT_FAILURE);
+    }
+    out->width = width;
+    out->height = height;
+    return out;
+}
+
+static matrix *multiply(const matrix *a, const matrix *b)
+{
+    matrix *out;
+    if (a->width != b->height) {
+        fprintf(stderr, "Incompatible multiplication\n");
+        exit(EXIT_FAILURE);
+    }
+    out = new_matrix(b->width, a->height);
+    for (int j = 0; j < out->height; ++j)
+        for (int i = 0; i < out->width; ++i) {
+            float sum = 0;
+            for (int k = 0; k < a->width; ++k)
+                sum += a->d[j * a->width + k] * b->d[k * b->width + i];
+            out->d[j * out->width + i] = sum;
+        }
+    return out;
+}
+
+static void normalise(matrix *a)
+{
+    for (int j = 0; j < a->height; ++j)
+        for (int i = 0; i < a->width; ++i) {
+            float *p = a->d + j * a->width + i;
+            *p *= 64;
+            if (a->height == 4)
+                *p /= (const unsigned[]) { 289, 292, 289, 292 } [j];
+            else
+                *p /= (const unsigned[]) { 288, 289, 292, 289, 288, 289, 292, 289 } [j];
+            if (a->width == 4)
+                *p /= (const unsigned[]) { 289, 292, 289, 292 } [i];
+            else
+                *p /= (const unsigned[]) { 288, 289, 292, 289, 288, 289, 292, 289 } [i];
+        }
+}
+
+static void divide_and_round_nearest(matrix *a, float by)
+{
+    for (int j = 0; j < a->height; ++j)
+        for (int i = 0; i < a->width; ++i) {
+            float *p = a->d + j * a->width + i;
+            *p = rintf(*p / by);
+        }
+}
+
+static void tweak(matrix *a)
+{
+    for (int j = 4; j < a->height; ++j)
+        for (int i = 0; i < a->width; ++i) {
+            float *p = a->d + j * a->width + i;
+            *p += 1;
+        }
+}
+
+/* The VC-1 spec places restrictions on the values permitted at three
+ * different stages:
+ * - D: the input coefficients in frequency domain
+ * - E: the intermediate coefficients, inverse-transformed only horizontally
+ * - R: the fully inverse-transformed coefficients
+ *
+ * To fully cater for the ranges specified requires various intermediate
+ * values to be held to 17-bit precision; yet these conditions do not appear
+ * to be utilised in real-world streams. At least some assembly
+ * implementations have chosen to restrict these values to 16-bit precision,
+ * to accelerate the decoding of real-world streams at the cost of strict
+ * adherence to the spec. To avoid our test marking these as failures,
+ * reduce our random inputs.
+ */
+#define ATTENUATION 4
+
+static matrix *generate_inverse_quantized_transform_coefficients(size_t width, size_t height)
+{
+    matrix *raw, *tmp, *D, *E, *R;
+    raw = new_matrix(width, height);
+    for (int i = 0; i < width * height; ++i)
+        raw->d[i] = (int) (rnd() % (1024/ATTENUATION)) - 512/ATTENUATION;
+    tmp = multiply(height == 8 ? &T8 : &T4, raw);
+    D = multiply(tmp, width == 8 ? &T8t : &T4t);
+    normalise(D);
+    divide_and_round_nearest(D, 1);
+    for (int i = 0; i < width * height; ++i) {
+        if (D->d[i] < -2048/ATTENUATION || D->d[i] > 2048/ATTENUATION-1) {
+            /* Rare, so simply try again */
+            av_free(raw);
+            av_free(tmp);
+            av_free(D);
+            return generate_inverse_quantized_transform_coefficients(width, height);
+        }
+    }
+    E = multiply(D, width == 8 ? &T8 : &T4);
+    divide_and_round_nearest(E, 8);
+    for (int i = 0; i < width * height; ++i)
+        if (E->d[i] < -4096/ATTENUATION || E->d[i] > 4096/ATTENUATION-1) {
+            /* Rare, so simply try again */
+            av_free(raw);
+            av_free(tmp);
+            av_free(D);
+            av_free(E);
+            return generate_inverse_quantized_transform_coefficients(width, height);
+        }
+    R = multiply(height == 8 ? &T8t : &T4t, E);
+    tweak(R);
+    divide_and_round_nearest(R, 128);
+    for (int i = 0; i < width * height; ++i)
+        if (R->d[i] < -512/ATTENUATION || R->d[i] > 512/ATTENUATION-1) {
+            /* Rare, so simply try again */
+            av_free(raw);
+            av_free(tmp);
+            av_free(D);
+            av_free(E);
+            av_free(R);
+            return generate_inverse_quantized_transform_coefficients(width, height);
+        }
+    av_free(raw);
+    av_free(tmp);
+    av_free(E);
+    av_free(R);
+    return D;
+}
+
+#define RANDOMIZE_BUFFER16(name, size)        \
+    do {                                      \
+        int i;                                \
+        for (i = 0; i < size; ++i) {          \
+            uint16_t r = rnd();               \
+            AV_WN16A(name##0 + i, r);         \
+            AV_WN16A(name##1 + i, r);         \
+        }                                     \
+    } while (0)
+
+#define RANDOMIZE_BUFFER8(name, size)         \
+    do {                                      \
+        int i;                                \
+        for (i = 0; i < size; ++i) {          \
+            uint8_t r = rnd();                \
+            name##0[i] = r;                   \
+            name##1[i] = r;                   \
+        }                                     \
+    } while (0)
+
 #define RANDOMIZE_BUFFER8_MID_WEIGHTED(name, size) \
     do { \
         uint8_t *p##0 = name##0, *p##1 = name##1; \
@@ -42,6 +236,28 @@
         } \
     } while (0)
 
+#define CHECK_INV_TRANS(func, width, height) \
+    do { \
+        if (check_func(h.func, "vc1dsp." #func)) { \
+            matrix *coeffs; \
+            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, ptrdiff_t, int16_t *); \
+            RANDOMIZE_BUFFER16(inv_trans_in, 10 * 8); \
+            RANDOMIZE_BUFFER8(inv_trans_out, 10 * 24); \
+            coeffs = generate_inverse_quantized_transform_coefficients(width, height); \
+            for (int j = 0; j < height; ++j) \
+                for (int i = 0; i < width; ++i) { \
+                    int idx = 8 + j * 8 + i; \
+                    inv_trans_in1[idx] = inv_trans_in0[idx] = coeffs->d[j * width + i]; \
+                } \
+            call_ref(inv_trans_out0 + 24 + 8, 24, inv_trans_in0 + 8); \
+            call_new(inv_trans_out1 + 24 + 8, 24, inv_trans_in1 + 8); \
+            if (memcmp(inv_trans_out0, inv_trans_out1, 10 * 24)) \
+                fail(); \
+            bench_new(inv_trans_out1 + 24 + 8, 24, inv_trans_in1 + 8); \
+            av_free(coeffs); \
+        } \
+    } while (0)
+
 #define CHECK_LOOP_FILTER(func) \
     do { \
         if (check_func(h.func, "vc1dsp." #func)) { \
@@ -72,6 +288,20 @@ void checkasm_check_vc1dsp(void)
 {
+    /* Inverse transform input coefficients are stored in a 16-bit buffer
+     * with row stride of 8 coefficients irrespective of transform size.
+     * vc1_inv_trans_8x8 differs from the others in two ways: coefficients
+     * are stored in column-major order, and the outputs are written back
+     * to the input buffer, so we oversize it slightly to catch overruns. */
+    LOCAL_ALIGNED_16(int16_t, inv_trans_in0, [10 * 8]);
+    LOCAL_ALIGNED_16(int16_t, inv_trans_in1, [10 * 8]);
+
+    /* For all but vc1_inv_trans_8x8, the inverse transform is narrowed and
+     * added with saturation to an array of unsigned 8-bit values. Oversize
+     * this by 8 samples left and right and one row above and below. */
+    LOCAL_ALIGNED_8(uint8_t, inv_trans_out0, [10 * 24]);
+    LOCAL_ALIGNED_8(uint8_t, inv_trans_out1, [10 * 24]);
+
     /* Deblocking filter buffers are big enough to hold a 16x16 block,
      * plus 4 rows/columns above/left to hold filter inputs (depending on
      * whether v or h neighbouring block edge) plus 4 rows/columns
@@ -83,6 +313,34 @@ void checkasm_check_vc1dsp(void)
 
     ff_vc1dsp_init(&h);
 
+    if (check_func(h.vc1_inv_trans_8x8, "vc1dsp.vc1_inv_trans_8x8")) {
+        matrix *coeffs;
+        declare_func_emms(AV_CPU_FLAG_MMX, void, int16_t *);
+        RANDOMIZE_BUFFER16(inv_trans_in, 10 * 8);
+        coeffs = generate_inverse_quantized_transform_coefficients(8, 8);
+        for (int j = 0; j < 8; ++j)
+            for (int i = 0; i < 8; ++i) {
+                int idx = 8 + i * 8 + j;
+                inv_trans_in1[idx] = inv_trans_in0[idx] = coeffs->d[j * 8 + i];
+            }
+        call_ref(inv_trans_in0 + 8);
+        call_new(inv_trans_in1 + 8);
+        if (memcmp(inv_trans_in0, inv_trans_in1, 10 * 8 * sizeof (int16_t)))
+            fail();
+        bench_new(inv_trans_in1 + 8);
+        av_free(coeffs);
+    }
+
+    CHECK_INV_TRANS(vc1_inv_trans_8x4, 8, 4);
+    CHECK_INV_TRANS(vc1_inv_trans_4x8, 4, 8);
+    CHECK_INV_TRANS(vc1_inv_trans_4x4, 4, 4);
+    CHECK_INV_TRANS(vc1_inv_trans_8x8_dc, 8, 8);
+    CHECK_INV_TRANS(vc1_inv_trans_8x4_dc, 8, 4);
+    CHECK_INV_TRANS(vc1_inv_trans_4x8_dc, 4, 8);
+    CHECK_INV_TRANS(vc1_inv_trans_4x4_dc, 4, 4);
+
+    report("inv_trans");
+
     CHECK_LOOP_FILTER(vc1_v_loop_filter4);
     CHECK_LOOP_FILTER(vc1_h_loop_filter4);
     CHECK_LOOP_FILTER(vc1_v_loop_filter8);