From patchwork Wed Nov 8 20:30:41 2023
X-Patchwork-Submitter: Rémi Denis-Courmont
X-Patchwork-Id: 44579
From: Rémi Denis-Courmont <remi@remlab.net>
To: ffmpeg-devel@ffmpeg.org
Date: Wed, 8 Nov 2023 22:30:41 +0200
Message-ID: <20231108203041.51648-1-remi@remlab.net>
X-Mailer: git-send-email 2.42.0
Subject: [FFmpeg-devel] [PATCH] lavc/sbrdsp: R-V V autocorrelate

With 5 accumulator vectors and 6 inputs, this can only use LMUL=2. Also,
the number of vector loop iterations is small: just 5 on 128-bit vector
hardware.

The vector loop is somewhat unusual in that it processes data in
descending memory order, in order to save on vector slides: in descending
order, the elements to carry over to the next iteration can be extracted
directly from the bottom of the vectors. With ascending order (see the
Opus postfilter function), there is no way to get the top elements
directly. On the downside, this requires separate shift and subtract
instructions (the would-be SH3SUB instruction does not exist), with a
small pipeline stall on the vector load address.

The edge cases are done in scalar, as this saves on loads and remains
significantly faster than C.
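[Not part of the patch: a minimal scalar C sketch of the descending-order
carry idiom described above, for illustration only. The function name and
the chunk parameter are hypothetical; the real loop keeps everything in
RVV registers and accumulates lag-0, lag-1 and lag-2 complex products at
once.]

#include <stddef.h>

/* Sum of x[i] * x[i + 1] over i in [0, n - 2], computed in chunks that
 * walk the buffer in descending memory order. "carry" models the scalar
 * that vfslide1down.vf inserts into the top lane; reading base[0] at the
 * end of a chunk models vfmv.f.s extracting the bottom lane directly,
 * which is what descending order buys us. */
static float lag1_sum_descending(const float *x, size_t n, size_t chunk)
{
    float carry = x[n - 1];   /* preloaded with a scalar flw before the loop */
    float sum = 0.0f;
    size_t remaining = n - 1; /* number of lag-1 products left */

    while (remaining > 0) {
        size_t cnt = remaining < chunk ? remaining : chunk;
        const float *base = x + remaining - cnt; /* descend through memory */

        for (size_t i = 0; i < cnt; i++) {
            /* the neighbour of the top lane comes from the carry */
            float next = (i + 1 < cnt) ? base[i + 1] : carry; /* vfslide1down.vf */
            sum += base[i] * next;                            /* vfmacc.vv */
        }
        carry = base[0]; /* bottom lane carries over (vfmv.f.s) */
        remaining -= cnt;
    }
    return sum;
}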
autocorrelate_c: 669.2
autocorrelate_rvv_f32: 421.0
---
 libavcodec/riscv/sbrdsp_init.c | 12 +++--
 libavcodec/riscv/sbrdsp_rvv.S  | 89 ++++++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+), 4 deletions(-)

diff --git a/libavcodec/riscv/sbrdsp_init.c b/libavcodec/riscv/sbrdsp_init.c
index 71de681185..c1ed5b639c 100644
--- a/libavcodec/riscv/sbrdsp_init.c
+++ b/libavcodec/riscv/sbrdsp_init.c
@@ -26,6 +26,7 @@
 void ff_sbr_sum64x5_rvv(float *z);
 float ff_sbr_sum_square_rvv(float (*x)[2], int n);
 void ff_sbr_neg_odd_64_rvv(float *x);
+void ff_sbr_autocorrelate_rvv(const float x[40][2], float phi[3][2][2]);
 void ff_sbr_hf_g_filt_rvv(float (*Y)[2], const float (*X_high)[40][2],
                           const float *g_filt, int m_max, intptr_t ixh);
 
@@ -34,10 +35,13 @@ av_cold void ff_sbrdsp_init_riscv(SBRDSPContext *c)
 #if HAVE_RVV
     int flags = av_get_cpu_flags();
 
-    if ((flags & AV_CPU_FLAG_RVV_F32) && (flags & AV_CPU_FLAG_RVB_ADDR)) {
-        c->sum64x5 = ff_sbr_sum64x5_rvv;
-        c->sum_square = ff_sbr_sum_square_rvv;
-        c->hf_g_filt = ff_sbr_hf_g_filt_rvv;
+    if (flags & AV_CPU_FLAG_RVV_F32) {
+        if (flags & AV_CPU_FLAG_RVB_ADDR) {
+            c->sum64x5 = ff_sbr_sum64x5_rvv;
+            c->sum_square = ff_sbr_sum_square_rvv;
+            c->hf_g_filt = ff_sbr_hf_g_filt_rvv;
+        }
+        c->autocorrelate = ff_sbr_autocorrelate_rvv;
     }
 #if __riscv_xlen >= 64
     if ((flags & AV_CPU_FLAG_RVV_I64) && (flags & AV_CPU_FLAG_RVB_ADDR))
diff --git a/libavcodec/riscv/sbrdsp_rvv.S b/libavcodec/riscv/sbrdsp_rvv.S
index 932a5dd7d1..2f3a0969d7 100644
--- a/libavcodec/riscv/sbrdsp_rvv.S
+++ b/libavcodec/riscv/sbrdsp_rvv.S
@@ -85,6 +85,95 @@ func ff_sbr_neg_odd_64_rvv, zve64x
 endfunc
 #endif
 
+func ff_sbr_autocorrelate_rvv, zve32f
+        vsetvli t0, zero, e32, m4, ta, ma
+        vmv.v.x v0, zero
+        flw     fa0, (a0)
+        vmv.v.x v4, zero
+        flw     fa1, 4(a0)
+        vmv.v.x v8, zero
+        flw     fa2, 8(a0)
+        li      a2, 37
+        flw     fa3, 12(a0)
+        fmul.s  ft10, fa0, fa0
+        flw     fa4, 16(a0)
+        fmul.s  ft6, fa0, fa2
+        flw     fa5, 20(a0)
+        addi    a0, a0, 38 * 8
+        fmul.s  ft7, fa0, fa3
+        fmul.s  ft2, fa0, fa4
+        fmul.s  ft3, fa0, fa5
+        flw     fa0, (a0)
+        fmadd.s ft10, fa1, fa1, ft10
+        fmadd.s ft6, fa1, fa3, ft6
+        flw     fa3, 12(a0)
+        fnmsub.s ft7, fa1, fa2, ft7
+        flw     fa2, 8(a0)
+        fmadd.s ft2, fa1, fa5, ft2
+        fnmsub.s ft3, fa1, fa4, ft3
+        flw     fa1, 4(a0)
+        fmul.s  ft4, fa0, fa0
+        fmul.s  ft0, fa0, fa2
+        fmul.s  ft1, fa0, fa3
+        fmadd.s ft4, fa1, fa1, ft4
+        fmadd.s ft0, fa1, fa3, ft0
+        fnmsub.s ft1, fa1, fa2, ft1
+1:
+        vsetvli t0, a2, e32, m2, tu, ma
+        slli    t1, t0, 3
+        sub     a0, a0, t1
+        vlseg2e32.v v16, (a0)
+        sub     a2, a2, t0
+        vfmacc.vv v0, v16, v16
+        vfslide1down.vf v20, v16, fa0
+        vfmacc.vv v4, v16, v20
+        vfslide1down.vf v22, v18, fa1
+        vfmacc.vv v0, v18, v18
+        vfslide1down.vf v24, v20, fa2
+        vfmacc.vv v4, v18, v22
+        vfslide1down.vf v26, v22, fa3
+        vfmacc.vv v6, v16, v22
+        vfmv.f.s fa0, v16
+        vfmacc.vv v8, v16, v24
+        vfmv.f.s fa1, v18
+        vfmacc.vv v10, v16, v26
+        vfmv.f.s fa2, v20
+        vfnmsac.vv v6, v18, v20
+        vfmv.f.s fa3, v22
+        vfmacc.vv v8, v18, v26
+        vfnmsac.vv v10, v18, v24
+        bnez    a2, 1b
+
+        vsetvli t0, zero, e32, m2, ta, ma
+        vfredusum.vs v0, v0, v2
+        vfredusum.vs v4, v4, v2
+        vfmv.f.s fa0, v0
+        vfredusum.vs v6, v6, v2
+        vfmv.f.s fa2, v4
+        fadd.s  ft4, ft4, fa0
+        vfredusum.vs v8, v8, v2
+        vfmv.f.s fa3, v6
+        fadd.s  ft0, ft0, fa2
+        vfredusum.vs v10, v10, v2
+        vfmv.f.s fa4, v8
+        fadd.s  ft1, ft1, fa3
+        vfmv.f.s fa5, v10
+        fsw     ft0, (a1)
+        fadd.s  ft2, ft2, fa4
+        fsw     ft1, 4(a1)
+        fadd.s  ft3, ft3, fa5
+        fsw     ft2, 8(a1)
+        fadd.s  ft6, ft6, fa2
+        fsw     ft3, 12(a1)
+        fadd.s  ft7, ft7, fa3
+        fsw     ft4, 16(a1)
+        fadd.s  ft10, ft10, fa0
+        fsw     ft6, 24(a1)
+        fsw     ft7, 28(a1)
+        fsw     ft10, 40(a1)
+        ret
+endfunc
+
 func ff_sbr_hf_g_filt_rvv, zve32f
         li      t1, 40 * 2 * 4
         sh3add  a1, a4, a1