From patchwork Wed Jul  3 17:09:57 2024
X-Patchwork-Submitter: Rémi Denis-Courmont
X-Patchwork-Id: 50306
From: Rémi Denis-Courmont
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 3 Jul 2024 20:09:57 +0300
Message-ID: <20240703170957.17063-1-remi@remlab.net>
In-Reply-To: <20240702171336.26390-1-remi@remlab.net>
References: <20240702171336.26390-1-remi@remlab.net>
Subject: [FFmpeg-devel] [PATCH 5/5] lavc/h264dsp: R-V V 8-bit h264_idct8_add
List-Id: FFmpeg development discussions and patches
T-Head C908 (cycles):
h264_idct8_add_8bpp_c:       1072.0
h264_idct8_add_8bpp_rvv_i32:  318.5
---
 libavcodec/riscv/h264dsp_init.c |   2 +
 libavcodec/riscv/h264idct_rvv.S | 137 +++++++++++++++++++++++++++++++-
 2 files changed, 138 insertions(+), 1 deletion(-)

diff --git a/libavcodec/riscv/h264dsp_init.c b/libavcodec/riscv/h264dsp_init.c
index f78ca3ea05..bf9743eb6b 100644
--- a/libavcodec/riscv/h264dsp_init.c
+++ b/libavcodec/riscv/h264dsp_init.c
@@ -35,6 +35,7 @@ void ff_h264_h_loop_filter_luma_mbaff_8_rvv(uint8_t *pix, ptrdiff_t stride,
                                             int alpha, int beta, int8_t *tc0);

 void ff_h264_idct_add_8_rvv(uint8_t *dst, int16_t *block, int stride);
+void ff_h264_idct8_add_8_rvv(uint8_t *dst, int16_t *block, int stride);
 void ff_h264_idct_add16_8_rvv(uint8_t *dst, const int *blockoffset,
                               int16_t *block, int stride,
                               const uint8_t nnzc[5 * 8]);
@@ -65,6 +66,7 @@ av_cold void ff_h264dsp_init_riscv(H264DSPContext *dsp, const int bit_depth,
                 ff_h264_h_loop_filter_luma_mbaff_8_rvv;

             dsp->h264_idct_add = ff_h264_idct_add_8_rvv;
+            dsp->h264_idct8_add = ff_h264_idct8_add_8_rvv;
 # if __riscv_xlen == 64
             dsp->h264_idct_add16 = ff_h264_idct_add16_8_rvv;
             dsp->h264_idct_add16intra = ff_h264_idct_add16intra_8_rvv;
diff --git a/libavcodec/riscv/h264idct_rvv.S b/libavcodec/riscv/h264idct_rvv.S
index b36a7f7572..5dd55085bc 100644
--- a/libavcodec/riscv/h264idct_rvv.S
+++ b/libavcodec/riscv/h264idct_rvv.S
@@ -103,6 +103,140 @@ func ff_h264_idct_add_8_rvv, zve32x
         ret
 endfunc

+        .variant_cc ff_h264_idct8_rvv
+func ff_h264_idct8_rvv, zve32x
+        vsra.vi v9, v7, 1
+        vsra.vi v11, v3, 1
+        vsra.vi v12, v2, 1
+        vsra.vi v13, v5, 1
+        vsra.vi v14, v6, 1
+        vsra.vi v15, v1, 1
+        vadd.vv v9, v3, v9
+        vsub.vv v11, v1, v11
+        vsub.vv v13, v13, v1
+        vadd.vv v15, v3, v15
+        vsub.vv v9, v5, v9
+        vadd.vv v11, v11, v7
+        vadd.vv v13, v13, v7
+        vadd.vv v15, v15, v5
+        vadd.vv v8, v0, v4   # a0
+        vsub.vv v9, v9, v7   # a1
+        vsub.vv v10, v0, v4  # a2
+        vsub.vv v11, v11, v3 # a3
+        vsub.vv v12, v12, v6 # a4
+        vadd.vv v13, v13, v5 # a5
+        vadd.vv v14, v14, v2 # a6
+        vadd.vv v15, v15, v1 # a7
+# 8-15
+        vsra.vi v7, v9, 2
+        vsra.vi v5, v11, 2
+        vsra.vi v3, v13, 2
+        vsra.vi v1, v15, 2
+# 1,3,5,7,8-15 free: 0,2,4,6
+        vadd.vv v0, v8, v14  # b0
+        vadd.vv v6, v10, v12 # b2
+        vsub.vv v2, v10, v12 # b4
+        vsub.vv v4, v8, v14  # b6
+# 0-7,9,11,13,15 free: 8,10,12,14
+        vsub.vv v8, v15, v7  # b7
+        vsub.vv v14, v5, v13 # b5
+        vadd.vv v12, v1, v9  # b1
+        vadd.vv v10, v11, v3 # b3
+        vadd.vv v1, v6, v14
+        vsub.vv v6, v6, v14
+        vsub.vv v7, v0, v8
+        vadd.vv v0, v0, v8
+        vsub.vv v5, v2, v10
+        vadd.vv v2, v2, v10
+        vadd.vv v3, v4, v12
+        vsub.vv v4, v4, v12
+        jr t0
+endfunc
+
+func ff_h264_idct8_add_8_rvv, zve32x
+        csrwi vxrm, 0
+.Lidct8_add_8_rvv:
+        vsetivli zero, 8, e16, m1, ta, ma
+        addi t1, a1, 1 * 8 * 2
+        vle16.v v0, (a1)
+        addi t2, a1, 2 * 8 * 2
+        vle16.v v1, (t1)
+        addi t3, a1, 3 * 8 * 2
+        vle16.v v2, (t2)
+        addi t4, a1, 4 * 8 * 2
+        vle16.v v3, (t3)
+        addi t5, a1, 5 * 8 * 2
+        vle16.v v4, (t4)
+        addi t6, a1, 6 * 8 * 2
+        vle16.v v5, (t5)
+        addi a7, a1, 7 * 8 * 2
+        vle16.v v6, (t6)
+        vle16.v v7, (a7)
+        jal t0, ff_h264_idct8_rvv
+        vse16.v v0, (a1)
+        vse16.v v1, (t1)
+        vse16.v v2, (t2)
+        vse16.v v3, (t3)
+        vse16.v v4, (t4)
+        vse16.v v5, (t5)
+        vse16.v v6, (t6)
+        vse16.v v7, (a7)
+        vlseg8e16.v v0, (a1)
+        .rept 1024 / __riscv_xlen
+        sx zero, ((__riscv_xlen / 8) * \+)(a1)
+        .endr
+        jal t0, ff_h264_idct8_rvv
+        add t1, a0, a2
+        vle8.v v16, (a0)
+        add t2, t1, a2
+        vle8.v v17, (t1)
+        add t3, t2, a2
+        vle8.v v18, (t2)
+        add t4, t3, a2
+        vle8.v v19, (t3)
+        add t5, t4, a2
+        vle8.v v20, (t4)
+        add t6, t5, a2
+        vle8.v v21, (t5)
+        add a7, t6, a2
+        vle8.v v22, (t6)
+        vle8.v v23, (a7)
+        .irp n,0,1,2,3,4,5,6,7
+        vssra.vi v\n, v\n, 6
+        .endr
+        vsetvli zero, zero, e8, mf2, ta, ma
+        vwaddu.wv v0, v0, v16
+        vwaddu.wv v1, v1, v17
+        vwaddu.wv v2, v2, v18
+        vwaddu.wv v3, v3, v19
+        vwaddu.wv v4, v4, v20
+        vwaddu.wv v5, v5, v21
+        vwaddu.wv v6, v6, v22
+        vwaddu.wv v7, v7, v23
+        vsetvli zero, zero, e16, m1, ta, ma
+        .irp n,0,1,2,3,4,5,6,7
+        vmax.vx v\n, v\n, zero
+        .endr
+        vsetvli zero, zero, e8, mf2, ta, ma
+        vnclipu.wi v16, v0, 0
+        vnclipu.wi v17, v1, 0
+        vnclipu.wi v18, v2, 0
+        vnclipu.wi v19, v3, 0
+        vnclipu.wi v20, v4, 0
+        vnclipu.wi v21, v5, 0
+        vnclipu.wi v22, v6, 0
+        vnclipu.wi v23, v7, 0
+        vse8.v v16, (a0)
+        vse8.v v17, (t1)
+        vse8.v v18, (t2)
+        vse8.v v19, (t3)
+        vse8.v v20, (t4)
+        vse8.v v21, (t5)
+        vse8.v v22, (t6)
+        vse8.v v23, (a7)
+        ret
+endfunc
+
 const ff_h264_scan8
         .byte 014, 015, 024, 025, 016, 017, 026, 027
         .byte 034, 035, 044, 045, 036, 037, 046, 047
@@ -251,6 +385,7 @@ func ff_h264_idct_add16intra_\depth\()_rvv, zve32x
 endfunc

 func ff_h264_idct8_add4_\depth\()_rvv, zve32x
+        csrwi vxrm, 0
         addi sp, sp, -80
         lla t0, ff_h264_scan8
         sd s0, (sp)
@@ -300,7 +435,7 @@ func ff_h264_idct8_add4_\depth\()_rvv, zve32x
         call ff_h264_idct8_dc_add_\depth\()_c
         j 3f
 2:
-        call ff_h264_idct8_add_\depth\()_c
+        call .Lidct8_add_\depth\()_rvv
 3:
         srli s3, s3, 1
         addi s5, s5, 4 * 4