From patchwork Tue Jun 4 18:59:06 2024
X-Patchwork-Submitter: Rémi Denis-Courmont
X-Patchwork-Id: 49558
From: Rémi Denis-Courmont
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 4 Jun 2024 21:59:06 +0300
Message-ID: <20240604185908.22309-2-remi@remlab.net>
In-Reply-To: <20240604185908.22309-1-remi@remlab.net>
References: <20240604185908.22309-1-remi@remlab.net>
Subject: [FFmpeg-devel] [PATCHv2 2/4] lavc/vc1dsp: R-V V vc1_inv_trans_8x4

T-Head C908 (cycles):
vc1dsp.vc1_inv_trans_8x4_c:       626.2
vc1dsp.vc1_inv_trans_8x4_rvv_i32: 215.2
---
Changes since version 1:
- Properly set VXRM (callee-clobbered).
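A note on the VXRM change: the vector fixed-point rounding mode register is not preserved across procedure calls, so the function has to set it itself (csrwi vxrm, 0, i.e. round-to-nearest-up) rather than rely on the caller. With that mode, vssra performs the biased rounding shift that the assembly comments describe as "+ 4 >> 3" and "+ 64 >> 7". A minimal scalar sketch of that rounding, with an illustrative name that is not from FFmpeg:

/* What vssra computes per element when vxrm = 0 (RNU, round-to-nearest-up).
 * n is 3 for the "+ 4 >> 3" row shift and 7 for the "+ 64 >> 7" column shift. */
static inline int rnu_shift(int x, unsigned n)
{
    return (x + (1 << (n - 1))) >> n;
}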
---
 libavcodec/riscv/vc1dsp_init.c |  2 +
 libavcodec/riscv/vc1dsp_rvv.S  | 73 ++++++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/libavcodec/riscv/vc1dsp_init.c b/libavcodec/riscv/vc1dsp_init.c
index b8a1015ce5..e63870ad44 100644
--- a/libavcodec/riscv/vc1dsp_init.c
+++ b/libavcodec/riscv/vc1dsp_init.c
@@ -29,6 +29,7 @@ void ff_vc1_inv_trans_8x8_dc_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block
 void ff_vc1_inv_trans_8x8_rvv(int16_t block[64]);
 void ff_vc1_inv_trans_4x8_dc_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_8x4_dc_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_8x4_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_4x4_dc_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_put_pixels16x16_rvi(uint8_t *dst, const uint8_t *src, ptrdiff_t line_size, int rnd);
 void ff_put_pixels8x8_rvi(uint8_t *dst, const uint8_t *src, ptrdiff_t line_size, int rnd);
@@ -55,6 +56,7 @@ av_cold void ff_vc1dsp_init_riscv(VC1DSPContext *dsp)
     if (flags & AV_CPU_FLAG_RVV_I32) {
         if (ff_rv_vlen_least(128)) {
             dsp->vc1_inv_trans_8x8 = ff_vc1_inv_trans_8x8_rvv;
+            dsp->vc1_inv_trans_8x4 = ff_vc1_inv_trans_8x4_rvv;
             dsp->vc1_inv_trans_4x8_dc = ff_vc1_inv_trans_4x8_dc_rvv;
             dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_rvv;
             dsp->avg_vc1_mspel_pixels_tab[0][0] = ff_avg_pixels16x16_rvv;
diff --git a/libavcodec/riscv/vc1dsp_rvv.S b/libavcodec/riscv/vc1dsp_rvv.S
index e15783d113..d003185ade 100644
--- a/libavcodec/riscv/vc1dsp_rvv.S
+++ b/libavcodec/riscv/vc1dsp_rvv.S
@@ -173,6 +173,31 @@ func ff_vc1_inv_trans_8_rvv, zve32x
         jr      t0
 endfunc
 
+        .variant_cc ff_vc1_inv_trans_4_rvv
+func ff_vc1_inv_trans_4_rvv, zve32x
+        li      t3, 17
+        vmul.vx v8, v0, t3
+        li      t4, 22
+        vmul.vx v10, v2, t3
+        li      t2, 10
+        vmul.vx v14, v1, t4
+        vadd.vv v24, v8, v10   # t1
+        vsub.vv v25, v8, v10   # t2
+        vmul.vx v16, v3, t2
+        vmul.vx v18, v3, t4
+        vmul.vx v20, v1, t2
+        vadd.vv v26, v14, v16  # t3
+        vsub.vv v27, v18, v20  # t4
+        vadd.vv v0, v24, v26
+        vsub.vv v1, v25, v27
+        vadd.vv v2, v25, v27
+        vsub.vv v3, v24, v26
+        .irp n,0,1,2,3
+        vssra.vx v\n, v\n, t1  # + 4 >> 3 or + 64 >> 7
+        .endr
+        jr      t0
+endfunc
+
 func ff_vc1_inv_trans_8x8_rvv, zve32x
         csrwi   vxrm, 0
         vsetivli zero, 8, e16, m1, ta, ma
@@ -223,6 +248,54 @@ func ff_vc1_inv_trans_8x8_rvv, zve32x
         ret
 endfunc
 
+func ff_vc1_inv_trans_8x4_rvv, zve32x
+        csrwi   vxrm, 0
+        vsetivli zero, 4, e16, mf2, ta, ma
+        vlseg8e16.v v0, (a2)
+        jal     t0, ff_vc1_inv_trans_8_rvv
+        vsseg8e16.v v0, (a2)
+        addi    a3, a2, 1 * 8 * 2
+        vsetivli zero, 8, e16, m1, ta, ma
+        vle16.v v0, (a2)
+        addi    a4, a2, 2 * 8 * 2
+        vle16.v v1, (a3)
+        addi    a5, a2, 3 * 8 * 2
+        vle16.v v2, (a4)
+        vle16.v v3, (a5)
+        .irp n,0,1,2,3
+        # shift 4 vectors of 8 elems after transpose instead of 8 of 4
+        vssra.vi v\n, v\n, 3
+        .endr
+        li      t1, 7
+        jal     t0, ff_vc1_inv_trans_4_rvv
+        add     a3, a1, a0
+        vle8.v  v8, (a0)
+        add     a4, a1, a3
+        vle8.v  v9, (a3)
+        add     a5, a1, a4
+        vle8.v  v10, (a4)
+        vle8.v  v11, (a5)
+        vsetvli zero, zero, e8, mf2, ta, ma
+        vwaddu.wv v0, v0, v8
+        vwaddu.wv v1, v1, v9
+        vwaddu.wv v2, v2, v10
+        vwaddu.wv v3, v3, v11
+        vsetvli zero, zero, e16, m1, ta, ma
+        .irp n,0,1,2,3
+        vmax.vx v\n, v\n, zero
+        .endr
+        vsetvli zero, zero, e8, mf2, ta, ma
+        vnclipu.wi v8, v0, 0
+        vnclipu.wi v9, v1, 0
+        vse8.v  v8, (a0)
+        vnclipu.wi v10, v2, 0
+        vse8.v  v9, (a3)
+        vnclipu.wi v11, v3, 0
+        vse8.v  v10, (a4)
+        vse8.v  v11, (a5)
+        ret
+endfunc
+
 .macro mspel_op op pos n1 n2
         add     t1, \pos, a2
         v\op\()e8.v v\n1, (\pos)
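
For reference, the shared ff_vc1_inv_trans_4_rvv helper added above vectorises the VC-1 4-point inverse-transform butterfly (coefficients 17, 22 and 10), taking the rounding shift in t1 (7 here for the final column pass; the row-pass shift of 3 is applied separately after the transpose). A rough scalar sketch of one column, using illustrative names rather than the actual FFmpeg C code:

#include <stdint.h>

/* Illustrative model of the 4-point pass; mirrors the vector sequence above,
 * with t1..t4 corresponding to v24..v27. Not the FFmpeg reference code. */
static void inv_trans_4_sketch(int16_t d[4], const int16_t s[4], int shift)
{
    int t1 = 17 * (s[0] + s[2]);         /* v24 */
    int t2 = 17 * (s[0] - s[2]);         /* v25 */
    int t3 = 22 * s[1] + 10 * s[3];      /* v26 */
    int t4 = 22 * s[3] - 10 * s[1];      /* v27 */
    int r  = 1 << (shift - 1);           /* rounding bias added by vssra with vxrm = 0 */

    d[0] = (t1 + t3 + r) >> shift;
    d[1] = (t2 - t4 + r) >> shift;
    d[2] = (t2 + t4 + r) >> shift;
    d[3] = (t1 - t3 + r) >> shift;
}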