From patchwork Tue Jun 4 18:59:08 2024
X-Patchwork-Submitter: Rémi Denis-Courmont
X-Patchwork-Id: 49556
From: Rémi Denis-Courmont
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 4 Jun 2024 21:59:08 +0300
Message-ID: <20240604185908.22309-4-remi@remlab.net>
X-Mailer: git-send-email 2.45.1
In-Reply-To: <20240604185908.22309-1-remi@remlab.net>
References: <20240604185908.22309-1-remi@remlab.net>
Subject: [FFmpeg-devel] [PATCHv2 4/4] lavc/vc1dsp: R-V V vc1_inv_trans_4x4

T-Head C908 (cycles):
vc1dsp.vc1_inv_trans_4x4_c:       310.7
vc1dsp.vc1_inv_trans_4x4_rvv_i32: 120.0

We could use 1 `vlseg4e64.v` instead of 4 `vle16.v`, but that seems to be
about 7% slower.
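
For illustration only, a minimal sketch of what that single-load variant
could look like, assuming the same intermediate layout that the
vsseg4e16.v store in the new function produces (four contiguous 8-byte
rows at a2). An e64 element width needs ELEN >= 64 (Zve64x), while the
function below only declares zve32x; this is not the code that was
actually benchmarked:

        # hypothetical: one segment load instead of the four vle16.v row loads
        vsetivli      zero, 1, e64, m1, ta, ma
        vlseg4e64.v   v0, (a2)        # v0..v3 <- one 8-byte row (4 x int16) each
        # switch back to the element width the transform helper expects
        vsetivli      zero, 4, e16, mf2, ta, ma
        li            t1, 7
        jal           t0, ff_vc1_inv_trans_4_rvv
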
---
 libavcodec/riscv/vc1dsp_init.c |  2 ++
 libavcodec/riscv/vc1dsp_rvv.S  | 47 ++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/libavcodec/riscv/vc1dsp_init.c b/libavcodec/riscv/vc1dsp_init.c
index cf9d42f377..de9002f395 100644
--- a/libavcodec/riscv/vc1dsp_init.c
+++ b/libavcodec/riscv/vc1dsp_init.c
@@ -32,6 +32,7 @@ void ff_vc1_inv_trans_4x8_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_8x4_dc_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_8x4_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_4x4_dc_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_rvv(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_put_pixels16x16_rvi(uint8_t *dst, const uint8_t *src, ptrdiff_t line_size, int rnd);
 void ff_put_pixels8x8_rvi(uint8_t *dst, const uint8_t *src, ptrdiff_t line_size, int rnd);
 void ff_avg_pixels16x16_rvv(uint8_t *dst, const uint8_t *src, ptrdiff_t line_size, int rnd);
@@ -59,6 +60,7 @@ av_cold void ff_vc1dsp_init_riscv(VC1DSPContext *dsp)
         dsp->vc1_inv_trans_8x8 = ff_vc1_inv_trans_8x8_rvv;
         dsp->vc1_inv_trans_8x4 = ff_vc1_inv_trans_8x4_rvv;
         dsp->vc1_inv_trans_4x8 = ff_vc1_inv_trans_4x8_rvv;
+        dsp->vc1_inv_trans_4x4 = ff_vc1_inv_trans_4x4_rvv;
         dsp->vc1_inv_trans_4x8_dc = ff_vc1_inv_trans_4x8_dc_rvv;
         dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_rvv;
         dsp->avg_vc1_mspel_pixels_tab[0][0] = ff_avg_pixels16x16_rvv;
diff --git a/libavcodec/riscv/vc1dsp_rvv.S b/libavcodec/riscv/vc1dsp_rvv.S
index e22950d25d..03c942cba2 100644
--- a/libavcodec/riscv/vc1dsp_rvv.S
+++ b/libavcodec/riscv/vc1dsp_rvv.S
@@ -113,6 +113,8 @@ func ff_vc1_inv_trans_4x4_dc_rvv, zve32x
         ret
 endfunc
 
+#include "../../riscv-tests/debug.S"
+
 .variant_cc ff_vc1_inv_trans_8_rvv
 func ff_vc1_inv_trans_8_rvv, zve32x
         li            t4, 12
@@ -373,6 +375,51 @@ func ff_vc1_inv_trans_4x8_rvv, zve32x
         ret
 endfunc
 
+func ff_vc1_inv_trans_4x4_rvv, zve32x
+        li            a3, 8 * 2
+        csrwi         vxrm, 0
+        vsetivli      zero, 4, e16, mf2, ta, ma
+        vlsseg4e16.v  v0, (a2), a3
+        li            t1, 3
+        jal           t0, ff_vc1_inv_trans_4_rvv
+        vsseg4e16.v   v0, (a2)
+        addi          t1, a2, 1 * 4 * 2
+        vle16.v       v0, (a2)
+        addi          t2, a2, 2 * 4 * 2
+        vle16.v       v1, (t1)
+        addi          t3, a2, 3 * 4 * 2
+        vle16.v       v2, (t2)
+        vle16.v       v3, (t3)
+        li            t1, 7
+        jal           t0, ff_vc1_inv_trans_4_rvv
+        add           t1, a1, a0
+        vle8.v        v8, (a0)
+        add           t2, a1, t1
+        vle8.v        v9, (t1)
+        add           t3, a1, t2
+        vle8.v        v10, (t2)
+        vle8.v        v11, (t3)
+        vsetvli       zero, zero, e8, mf4, ta, ma
+        vwaddu.wv     v0, v0, v8
+        vwaddu.wv     v1, v1, v9
+        vwaddu.wv     v2, v2, v10
+        vwaddu.wv     v3, v3, v11
+        vsetvli       zero, zero, e16, mf2, ta, ma
+        .irp          n,0,1,2,3
+        vmax.vx       v\n, v\n, zero
+        .endr
+        vsetvli       zero, zero, e8, mf4, ta, ma
+        vnclipu.wi    v8, v0, 0
+        vnclipu.wi    v9, v1, 0
+        vse8.v        v8, (a0)
+        vnclipu.wi    v10, v2, 0
+        vse8.v        v9, (t1)
+        vnclipu.wi    v11, v3, 0
+        vse8.v        v10, (t2)
+        vse8.v        v11, (t3)
+        ret
+endfunc
+
 .macro mspel_op op pos n1 n2
         add           t1, \pos, a2
         v\op\()e8.v   v\n1, (\pos)
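
For reference, the scalar path being replaced (vc1dsp.vc1_inv_trans_4x4_c in
the benchmark above) runs a 4-point transform over the rows with a rounded
>> 3, then over the columns with a rounded >> 7, and finally adds the result
to the prediction with a clip to [0, 255]. Those two shift counts are the 3
and 7 loaded into t1 before each call to ff_vc1_inv_trans_4_rvv, the rounding
is presumably what the round-to-nearest-up mode set by csrwi vxrm, 0 provides
in the vector version, and the clip corresponds to the vmax.vx/vnclipu.wi
sequence. The sketch below is a from-memory paraphrase of the C
implementation with an illustrative function name, not a verbatim copy of
libavcodec/vc1dsp.c; clip_uint8() is a local stand-in for av_clip_uint8():

    #include <stddef.h>
    #include <stdint.h>

    /* local stand-in for FFmpeg's av_clip_uint8() */
    static uint8_t clip_uint8(int v)
    {
        return v < 0 ? 0 : v > 255 ? 255 : v;
    }

    /* 4x4 inverse transform, add to prediction, saturate to 8 bits;
     * block has a row stride of 8 int16_t (the 8x8 coefficient buffer) */
    static void vc1_inv_trans_4x4_ref(uint8_t *dest, ptrdiff_t stride,
                                      int16_t *block)
    {
        int16_t *src = block, *dst = block;

        for (int i = 0; i < 4; i++) {   /* row pass, rounded >> 3 */
            int t1 = 17 * (src[0] + src[2]) + 4;
            int t2 = 17 * (src[0] - src[2]) + 4;
            int t3 = 22 * src[1] + 10 * src[3];
            int t4 = 22 * src[3] - 10 * src[1];

            dst[0] = (t1 + t3) >> 3;
            dst[1] = (t2 - t4) >> 3;
            dst[2] = (t2 + t4) >> 3;
            dst[3] = (t1 - t3) >> 3;
            src += 8;
            dst += 8;
        }

        src = block;
        for (int i = 0; i < 4; i++) {   /* column pass, rounded >> 7, add, clip */
            int t1 = 17 * (src[0] + src[16]) + 64;
            int t2 = 17 * (src[0] - src[16]) + 64;
            int t3 = 22 * src[8]  + 10 * src[24];
            int t4 = 22 * src[24] - 10 * src[8];

            dest[0 * stride] = clip_uint8(dest[0 * stride] + ((t1 + t3) >> 7));
            dest[1 * stride] = clip_uint8(dest[1 * stride] + ((t2 - t4) >> 7));
            dest[2 * stride] = clip_uint8(dest[2 * stride] + ((t2 + t4) >> 7));
            dest[3 * stride] = clip_uint8(dest[3 * stride] + ((t1 - t3) >> 7));
            src++;
            dest++;
        }
    }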