From patchwork Thu Jul 18 19:35:43 2024
X-Patchwork-Submitter: Rémi Denis-Courmont
X-Patchwork-Id: 50634
From: Rémi Denis-Courmont
To: ffmpeg-devel@ffmpeg.org
Date: Thu, 18 Jul 2024 22:35:43 +0300
Message-ID: <20240718193546.18939-2-remi@remlab.net>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240718193546.18939-1-remi@remlab.net>
References: <20240718193546.18939-1-remi@remlab.net>
Subject: [FFmpeg-devel] [PATCH 2/5] lavc/h264dsp: move R-V V idct_dc_add

From: "J. Dekker"

No functional changes. This just moves the assembly so that it can be
referenced by other functions in h264idct_rvv.S with local jumps.

Edited-by: Rémi Denis-Courmont
---
 libavcodec/riscv/h264dsp_rvv.S  | 103 -------------------------------
 libavcodec/riscv/h264idct_rvv.S | 105 ++++++++++++++++++++++++++++++++
 2 files changed, 105 insertions(+), 103 deletions(-)

diff --git a/libavcodec/riscv/h264dsp_rvv.S b/libavcodec/riscv/h264dsp_rvv.S
index 5c70709cf2..ed6a16a9c4 100644
--- a/libavcodec/riscv/h264dsp_rvv.S
+++ b/libavcodec/riscv/h264dsp_rvv.S
@@ -1,7 +1,6 @@
 /*
  * SPDX-License-Identifier: BSD-2-Clause
  *
- * Copyright (c) 2024 J. Dekker
  * Copyright © 2024 Rémi Denis-Courmont.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -326,105 +325,3 @@ func ff_h264_h_loop_filter_luma_mbaff_8_rvv, zve32x
         vssseg6e8.v     v8, (a0), a1
         ret
 endfunc
-
-.macro idct_dc_add8 width
-func ff_h264_idct\width\()_dc_add_8_rvv, zve64x, zba
-.if \width == 8
-        vsetivli        zero, \width, e16, m1, ta, ma
-.else
-        vsetivli        zero, \width, e16, mf2, ta, ma
-.endif
-        lh              a3, 0(a1)
-        addi            a3, a3, 32
-        srai            a3, a3, 6
-        sh              zero, 0(a1)
-.if \width == 8
-        vlse64.v        v24, (a0), a2
-        vsetvli         t0, zero, e16, m8, ta, ma
-.else
-        vlse32.v        v24, (a0), a2
-        vsetvli         t0, zero, e16, m4, ta, ma
-.endif
-        vzext.vf2       v0, v24
-        vadd.vx         v0, v0, a3
-        vmax.vx         v0, v0, zero
-.if \width == 8
-        vsetvli         zero, zero, e8, m4, ta, ma
-.else
-        vsetvli         zero, zero, e8, m2, ta, ma
-.endif
-        vnclipu.wi      v24, v0, 0
-        vsetivli        zero, \width, e8, m1, ta, ma
-.if \width == 8
-        vsse64.v        v24, (a0), a2
-.else
-        vsse32.v        v24, (a0), a2
-.endif
-        ret
-endfunc
-.endm
-
-idct_dc_add8 4
-idct_dc_add8 8
-
-.macro idct_dc_add width
-func ff_h264_idct\width\()_dc_add_16_rvv, zve64x, zba
-        vsetivli        zero, \width, e16, m1, ta, ma
-        lw              a3, 0(a1)
-        addi            a3, a3, 32
-        srai            a3, a3, 6
-        sw              zero, 0(a1)
-        add             t4, a0, a2
-        sh1add          t5, a2, a0
-        sh1add          t6, a2, t4
-.if \width == 8
-        sh2add          t0, a2, a0
-        sh2add          t1, a2, t4
-        sh2add          t2, a2, t5
-        sh2add          t3, a2, t6
-.endif
-        vle16.v         v0, (a0)
-        vle16.v         v1, (t4)
-        vle16.v         v2, (t5)
-        vle16.v         v3, (t6)
-.if \width == 8
-        vle16.v         v4, (t0)
-        vle16.v         v5, (t1)
-        vle16.v         v6, (t2)
-        vle16.v         v7, (t3)
-        vsetvli         a6, zero, e16, m8, ta, ma
-.else
-        vsetvli         a6, zero, e16, m4, ta, ma
-.endif
-        vadd.vx         v0, v0, a3
-        vmax.vx         v0, v0, zero
-        vmin.vx         v0, v0, a5
-        vsetivli        zero, \width, e16, m1, ta, ma
-        vse16.v         v0, (a0)
-        vse16.v         v1, (t4)
-        vse16.v         v2, (t5)
-        vse16.v         v3, (t6)
-.if \width == 8
-        vse16.v         v4, (t0)
-        vse16.v         v5, (t1)
-        vse16.v         v6, (t2)
-        vse16.v         v7, (t3)
-.endif
-        ret
-endfunc
-.endm
-
-idct_dc_add 4
-idct_dc_add 8
-
-.irp depth,9,10,12,14
-func ff_h264_idct4_dc_add_\depth\()_rvv, zve64x
-        li              a5, (1 << \depth) - 1
-        j               ff_h264_idct4_dc_add_16_rvv
-endfunc
-
-func ff_h264_idct8_dc_add_\depth\()_rvv, zve64x
-        li              a5, (1 << \depth) - 1
-        j               ff_h264_idct8_dc_add_16_rvv
-endfunc
-.endr
diff --git a/libavcodec/riscv/h264idct_rvv.S b/libavcodec/riscv/h264idct_rvv.S
index 505f491308..37b27fc92a 100644
--- a/libavcodec/riscv/h264idct_rvv.S
+++ b/libavcodec/riscv/h264idct_rvv.S
@@ -1,4 +1,7 @@
 /*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2024 J. Dekker
  * Copyright © 2024 Rémi Denis-Courmont.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -412,6 +415,108 @@ func ff_h264_idct8_add_\depth\()_rvv, zve32x
 endfunc
 .endr
 
+.macro idct_dc_add8 width
+func ff_h264_idct\width\()_dc_add_8_rvv, zve64x, zba
+.if \width == 8
+        vsetivli        zero, \width, e16, m1, ta, ma
+.else
+        vsetivli        zero, \width, e16, mf2, ta, ma
+.endif
+        lh              a3, 0(a1)
+        addi            a3, a3, 32
+        srai            a3, a3, 6
+        sh              zero, 0(a1)
+.if \width == 8
+        vlse64.v        v24, (a0), a2
+        vsetvli         t0, zero, e16, m8, ta, ma
+.else
+        vlse32.v        v24, (a0), a2
+        vsetvli         t0, zero, e16, m4, ta, ma
+.endif
+        vzext.vf2       v0, v24
+        vadd.vx         v0, v0, a3
+        vmax.vx         v0, v0, zero
+.if \width == 8
+        vsetvli         zero, zero, e8, m4, ta, ma
+.else
+        vsetvli         zero, zero, e8, m2, ta, ma
+.endif
+        vnclipu.wi      v24, v0, 0
+        vsetivli        zero, \width, e8, m1, ta, ma
+.if \width == 8
+        vsse64.v        v24, (a0), a2
+.else
+        vsse32.v        v24, (a0), a2
+.endif
+        ret
+endfunc
+.endm
+
+idct_dc_add8 4
+idct_dc_add8 8
+
+.macro idct_dc_add width
+func ff_h264_idct\width\()_dc_add_16_rvv, zve64x, zba
+        vsetivli        zero, \width, e16, m1, ta, ma
+        lw              a3, 0(a1)
+        addi            a3, a3, 32
+        srai            a3, a3, 6
+        sw              zero, 0(a1)
+        add             t4, a0, a2
+        sh1add          t5, a2, a0
+        sh1add          t6, a2, t4
+.if \width == 8
+        sh2add          t0, a2, a0
+        sh2add          t1, a2, t4
+        sh2add          t2, a2, t5
+        sh2add          t3, a2, t6
+.endif
+        vle16.v         v0, (a0)
+        vle16.v         v1, (t4)
+        vle16.v         v2, (t5)
+        vle16.v         v3, (t6)
+.if \width == 8
+        vle16.v         v4, (t0)
+        vle16.v         v5, (t1)
+        vle16.v         v6, (t2)
+        vle16.v         v7, (t3)
+        vsetvli         a6, zero, e16, m8, ta, ma
+.else
+        vsetvli         a6, zero, e16, m4, ta, ma
+.endif
+        vadd.vx         v0, v0, a3
+        vmax.vx         v0, v0, zero
+        vmin.vx         v0, v0, a5
+        vsetivli        zero, \width, e16, m1, ta, ma
+        vse16.v         v0, (a0)
+        vse16.v         v1, (t4)
+        vse16.v         v2, (t5)
+        vse16.v         v3, (t6)
+.if \width == 8
+        vse16.v         v4, (t0)
+        vse16.v         v5, (t1)
+        vse16.v         v6, (t2)
+        vse16.v         v7, (t3)
+.endif
+        ret
+endfunc
+.endm
+
+idct_dc_add 4
+idct_dc_add 8
+
+.irp depth,9,10,12,14
+func ff_h264_idct4_dc_add_\depth\()_rvv, zve64x
+        li              a5, (1 << \depth) - 1
+        j               ff_h264_idct4_dc_add_16_rvv
+endfunc
+
+func ff_h264_idct8_dc_add_\depth\()_rvv, zve64x
+        li              a5, (1 << \depth) - 1
+        j               ff_h264_idct8_dc_add_16_rvv
+endfunc
+.endr
+
 const ff_h264_scan8
         .byte 014, 015, 024, 025, 016, 017, 026, 027
         .byte 034, 035, 044, 045, 036, 037, 046, 047
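
A minimal sketch of the local-jump pattern this move enables, assuming a
later patch in the series adds a caller inside h264idct_rvv.S; the caller
name and its DC-only shortcut are hypothetical, not part of this patch:

func ff_h264_idct4_add_dc_only_8_rvv, zve64x, zba
        # With the DC-add code now assembled in the same file, reusing it
        # is a plain unconditional tail-call (j expands to jal x0, offset).
        j       ff_h264_idct4_dc_add_8_rvv
endfunc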