[FFmpeg-devel,1/2] lavu/fixed_dsp: optimise R-V V fmul_reverse

Message ID 20231119113942.10269-1-remi@remlab.net
State Accepted
Commit 3a134e82994ff49b784056d2dfce0230a8256ebd
Series [FFmpeg-devel,1/2] lavu/fixed_dsp: optimise R-V V fmul_reverse


Context Check Description
yinshiyou/make_loongarch64 success Make finished
yinshiyou/make_fate_loongarch64 success Make fate finished
andriy/make_x86 success Make finished
andriy/make_fate_x86 success Make fate finished

Commit Message

Rémi Denis-Courmont Nov. 19, 2023, 11:39 a.m. UTC
Gathers are (unsurprisingly) a notable exception to the rule that R-V V
code gets faster with larger group multipliers. So reduce the multiplier,
effectively re-rolling the function, to speed it up.
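For context, here is a sketch of what the function computes, following my
reading of the C reference in libavutil/fixed_dsp.c: a Q31 fixed-point
multiply of src0 against src1 traversed in reverse, with rounding. The
function name below is illustrative, not the exact FFmpeg symbol.

```c
#include <stdint.h>

/* Sketch of the scalar reference: dst[i] = src0[i] * src1[len-1-i]
 * in Q31 fixed point, rounded (add 0.5 ulp before the >> 31). */
static void vector_fmul_reverse_fixed(int32_t *dst, const int32_t *src0,
                                      const int32_t *src1, int len)
{
    src1 += len - 1;                /* walk src1 backwards */
    for (int i = 0; i < len; i++)
        dst[i] = (int32_t)(((int64_t)src0[i] * src1[-i] + 0x40000000) >> 31);
}
```

The vector version implements the reversed traversal with a gather
(vrgather), which is exactly the instruction whose latency grows with the
group multiplier.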

Before (e16/m4, e32/m8):
vector_fmul_reverse_fixed_c:       2840.7
vector_fmul_reverse_fixed_rvv_i32: 2430.2

After (e16/m1, e32/m2):
vector_fmul_reverse_fixed_c:       2841.0
vector_fmul_reverse_fixed_rvv_i32:  962.2

It might be possible to further optimise the function by moving the
reverse-subtract out of the loop and adding ad-hoc tail handling.
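The suggested follow-up can be sketched in plain C (not RVV assembly): with
a fixed chunk size standing in for VL, the reverse index table is built once
outside the loop instead of being recomputed by vrsub.vx on every iteration,
and the remainder is handled by an ad-hoc scalar tail. The chunk size 8 and
the function name are hypothetical, chosen only for illustration.

```c
#include <stdint.h>

/* Hypothetical sketch of the proposed optimization: hoist the reverse
 * index computation out of the main loop; full chunks only, scalar tail. */
static void fmul_reverse_fixed_hoisted(int32_t *dst, const int32_t *src0,
                                       const int32_t *src1, int len)
{
    enum { VL = 8 };                    /* stand-in for the vector length */
    int idx[VL];
    for (int j = 0; j < VL; j++)
        idx[j] = VL - 1 - j;            /* built once, outside the loop */

    const int32_t *rev = src1 + len;
    int i = 0;
    for (; i + VL <= len; i += VL) {    /* full chunks only */
        rev -= VL;
        for (int j = 0; j < VL; j++)    /* "gather" with the fixed index */
            dst[i + j] = (int32_t)(((int64_t)src0[i + j] * rev[idx[j]] +
                                    0x40000000) >> 31);
    }
    for (; i < len; i++)                /* ad-hoc scalar tail */
        dst[i] = (int32_t)(((int64_t)src0[i] * src1[len - 1 - i] +
                            0x40000000) >> 31);
}
```

The trade-off is that the main loop runs at a constant VL, so the last
partial chunk can no longer be absorbed by shrinking VL and needs separate
handling.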
 libavutil/riscv/fixed_dsp_rvv.S | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/libavutil/riscv/fixed_dsp_rvv.S b/libavutil/riscv/fixed_dsp_rvv.S
index 2bece88685..46bb591352 100644
--- a/libavutil/riscv/fixed_dsp_rvv.S
+++ b/libavutil/riscv/fixed_dsp_rvv.S
@@ -127,16 +127,17 @@  endfunc
 func ff_vector_fmul_reverse_fixed_rvv, zve32x
         csrwi   vxrm, 0
-        vsetvli t0, zero, e16, m4, ta, ma
+        // e16/m4 and e32/m8 are possible but slow the gathers down.
+        vsetvli t0, zero, e16, m1, ta, ma
         sh2add  a2, a3, a2
         vid.v   v0
         vadd.vi v0, v0, 1
-        vsetvli t0, a3, e16, m4, ta, ma
+        vsetvli t0, a3, e16, m1, ta, ma
         slli    t1, t0, 2
         vrsub.vx v4, v0, t0 // v4[i] = [VL-1, VL-2... 1, 0]
         sub     a2, a2, t1
-        vsetvli zero, zero, e32, m8, ta, ma
+        vsetvli zero, zero, e32, m2, ta, ma
         vle32.v v8, (a2)
         sub     a3, a3, t0
         vle32.v v16, (a1)