From patchwork Wed Apr 24 11:02:16 2019
X-Patchwork-Submitter: Lauri Kasanen
X-Patchwork-Id: 12888
Date: Wed, 24 Apr 2019 14:02:16 +0300
From: Lauri Kasanen
To: ffmpeg-devel@ffmpeg.org
Message-Id: <20190424140216.0e99c634c621882f5bf6a9a9@gmx.com>
X-Mailer: Sylpheed 3.5.0 (GTK+ 2.18.6; x86_64-unknown-linux-gnu)
Subject: [FFmpeg-devel] [PATCH] swscale/ppc: VSX-optimize hscale_fast

./ffmpeg -f lavfi -i yuvtestsrc=duration=1:size=1200x1440 -sws_flags fast_bilinear \
  -s 2400x720 -f rawvideo -vframes 5 -pix_fmt abgr -nostats test.raw

4.27x speedup for hyscale_fast:
24796 UNITS in hyscale_fast,    4096 runs,      0 skips
 5797 UNITS in hyscale_fast,    4096 runs,      0 skips

4.48x speedup for hcscale_fast:
19911 UNITS in hcscale_fast,    4095 runs,      1 skips
 4437 UNITS in hcscale_fast,    4096 runs,      0 skips

Signed-off-by: Lauri Kasanen
---
 libswscale/ppc/swscale_vsx.c | 196 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 196 insertions(+)

This has the same limitation as the x86 version: it only handles scaling
to the same width or larger. Shrinking would require a gather load, which
doesn't exist on PPC and is slow even on x86 AVX. I tried a manual gather
load, and the vector function was 20% slower than the C version.

-- 
2.6.2

diff --git a/libswscale/ppc/swscale_vsx.c b/libswscale/ppc/swscale_vsx.c
index ba00791..2e20ab3 100644
--- a/libswscale/ppc/swscale_vsx.c
+++ b/libswscale/ppc/swscale_vsx.c
@@ -1661,6 +1661,198 @@ YUV2PACKEDWRAPPER(yuv2, 422, yuyv422, AV_PIX_FMT_YUYV422)
 YUV2PACKEDWRAPPER(yuv2, 422, yvyu422, AV_PIX_FMT_YVYU422)
 YUV2PACKEDWRAPPER(yuv2, 422, uyvy422, AV_PIX_FMT_UYVY422)
 
+static void hyscale_fast_vsx(SwsContext *c, int16_t *dst, int dstWidth,
+                             const uint8_t *src, int srcW, int xInc)
+{
+    int i;
+    unsigned int xpos = 0, xx;
+    vector uint8_t vin, vin2, vperm;
+    vector int8_t vmul, valpha;
+    vector int16_t vtmp, vtmp2, vtmp3, vtmp4;
+    vector uint16_t vd_l, vd_r, vcoord16[2];
+    vector uint32_t vcoord[4];
+    const vector uint32_t vadd = (vector uint32_t) {
+        0,
+        xInc * 1,
+        xInc * 2,
+        xInc * 3,
+    };
+    const vector uint16_t vadd16 = (vector uint16_t) { // Modulo math
+        0,
+        xInc * 1,
+        xInc * 2,
+        xInc * 3,
+        xInc * 4,
+        xInc * 5,
+        xInc * 6,
+        xInc * 7,
+    };
+    const vector uint32_t vshift16 = vec_splats((uint32_t) 16);
+    const vector uint16_t vshift9 = vec_splat_u16(9);
+    const vector uint8_t vzero = vec_splat_u8(0);
+    const vector uint16_t vshift = vec_splat_u16(7);
+
+    for (i = 0; i < dstWidth; i += 16) {
+        vcoord16[0] = vec_splats((uint16_t) xpos);
+        vcoord16[1] = vec_splats((uint16_t) (xpos + xInc * 8));
+
+        vcoord16[0] = vec_add(vcoord16[0], vadd16);
+        vcoord16[1] = vec_add(vcoord16[1], vadd16);
+
+        vcoord16[0] = vec_sr(vcoord16[0], vshift9);
+        vcoord16[1] = vec_sr(vcoord16[1], vshift9);
+        valpha = (vector int8_t) vec_pack(vcoord16[0], vcoord16[1]);
+
+        xx = xpos >> 16;
+        vin = vec_vsx_ld(0, &src[xx]);
+
+        vcoord[0] = vec_splats(xpos & 0xffff);
+        vcoord[1] = vec_splats((xpos & 0xffff) + xInc * 4);
+        vcoord[2] = vec_splats((xpos & 0xffff) + xInc * 8);
+        vcoord[3] = vec_splats((xpos & 0xffff) + xInc * 12);
+
+        vcoord[0] = vec_add(vcoord[0], vadd);
+        vcoord[1] = vec_add(vcoord[1], vadd);
+        vcoord[2] = vec_add(vcoord[2], vadd);
+        vcoord[3] = vec_add(vcoord[3], vadd);
+
+        vcoord[0] = vec_sr(vcoord[0], vshift16);
+        vcoord[1] = vec_sr(vcoord[1], vshift16);
+        vcoord[2] = vec_sr(vcoord[2], vshift16);
+        vcoord[3] = vec_sr(vcoord[3], vshift16);
+
+        vcoord16[0] = vec_pack(vcoord[0], vcoord[1]);
+        vcoord16[1] = vec_pack(vcoord[2], vcoord[3]);
+        vperm = vec_pack(vcoord16[0], vcoord16[1]);
+
+        vin = vec_perm(vin, vin, vperm);
+
+        vin2 = vec_vsx_ld(1, &src[xx]);
+        vin2 = vec_perm(vin2, vin2, vperm);
+
+        vmul = (vector int8_t) vec_sub(vin2, vin);
+        vtmp = vec_mule(vmul, valpha);
+        vtmp2 = vec_mulo(vmul, valpha);
+        vtmp3 = vec_mergeh(vtmp, vtmp2);
+        vtmp4 = vec_mergel(vtmp, vtmp2);
+
+        vd_l = (vector uint16_t) vec_mergeh(vin, vzero);
+        vd_r = (vector uint16_t) vec_mergel(vin, vzero);
+        vd_l = vec_sl(vd_l, vshift);
+        vd_r = vec_sl(vd_r, vshift);
+
+        vd_l = vec_add(vd_l, (vector uint16_t) vtmp3);
+        vd_r = vec_add(vd_r, (vector uint16_t) vtmp4);
+
+        vec_st((vector int16_t) vd_l, 0, &dst[i]);
+        vec_st((vector int16_t) vd_r, 0, &dst[i + 8]);
+
+        xpos += xInc * 16;
+    }
+    for (i = dstWidth - 1; (i*xInc)>>16 >= srcW - 1; i--)
+        dst[i] = src[srcW-1] * 128;
+}
+
+#define HCSCALE(in, out) \
+    vin = vec_vsx_ld(0, &in[xx]); \
+    vin = vec_perm(vin, vin, vperm); \
+\
+    vin2 = vec_vsx_ld(1, &in[xx]); \
+    vin2 = vec_perm(vin2, vin2, vperm); \
+\
+    vtmp = vec_mule(vin, valphaxor); \
+    vtmp2 = vec_mulo(vin, valphaxor); \
+    vtmp3 = vec_mergeh(vtmp, vtmp2); \
+    vtmp4 = vec_mergel(vtmp, vtmp2); \
+\
+    vtmp = vec_mule(vin2, valpha); \
+    vtmp2 = vec_mulo(vin2, valpha); \
+    vd_l = vec_mergeh(vtmp, vtmp2); \
+    vd_r = vec_mergel(vtmp, vtmp2); \
+\
+    vd_l = vec_add(vd_l, vtmp3); \
+    vd_r = vec_add(vd_r, vtmp4); \
+\
+    vec_st((vector int16_t) vd_l, 0, &out[i]); \
+    vec_st((vector int16_t) vd_r, 0, &out[i + 8])
+
+static void hcscale_fast_vsx(SwsContext *c, int16_t *dst1, int16_t *dst2,
+                             int dstWidth, const uint8_t *src1,
+                             const uint8_t *src2, int srcW, int xInc)
+{
+    int i;
+    unsigned int xpos = 0, xx;
+    vector uint8_t vin, vin2, vperm;
+    vector uint8_t valpha, valphaxor;
+    vector uint16_t vtmp, vtmp2, vtmp3, vtmp4;
+    vector uint16_t vd_l, vd_r, vcoord16[2];
+    vector uint32_t vcoord[4];
+    const vector uint8_t vxor = vec_splats((uint8_t) 127);
+    const vector uint32_t vadd = (vector uint32_t) {
+        0,
+        xInc * 1,
+        xInc * 2,
+        xInc * 3,
+    };
+    const vector uint16_t vadd16 = (vector uint16_t) { // Modulo math
+        0,
+        xInc * 1,
+        xInc * 2,
+        xInc * 3,
+        xInc * 4,
+        xInc * 5,
+        xInc * 6,
+        xInc * 7,
+    };
+    const vector uint32_t vshift16 = vec_splats((uint32_t) 16);
+    const vector uint16_t vshift9 = vec_splat_u16(9);
+
+    for (i = 0; i < dstWidth; i += 16) {
+        vcoord16[0] = vec_splats((uint16_t) xpos);
+        vcoord16[1] = vec_splats((uint16_t) (xpos + xInc * 8));
+
+        vcoord16[0] = vec_add(vcoord16[0], vadd16);
+        vcoord16[1] = vec_add(vcoord16[1], vadd16);
+
+        vcoord16[0] = vec_sr(vcoord16[0], vshift9);
+        vcoord16[1] = vec_sr(vcoord16[1], vshift9);
+        valpha = vec_pack(vcoord16[0], vcoord16[1]);
+        valphaxor = vec_xor(valpha, vxor);
+
+        xx = xpos >> 16;
+
+        vcoord[0] = vec_splats(xpos & 0xffff);
+        vcoord[1] = vec_splats((xpos & 0xffff) + xInc * 4);
+        vcoord[2] = vec_splats((xpos & 0xffff) + xInc * 8);
+        vcoord[3] = vec_splats((xpos & 0xffff) + xInc * 12);
+
+        vcoord[0] = vec_add(vcoord[0], vadd);
+        vcoord[1] = vec_add(vcoord[1], vadd);
+        vcoord[2] = vec_add(vcoord[2], vadd);
+        vcoord[3] = vec_add(vcoord[3], vadd);
+
+        vcoord[0] = vec_sr(vcoord[0], vshift16);
+        vcoord[1] = vec_sr(vcoord[1], vshift16);
+        vcoord[2] = vec_sr(vcoord[2], vshift16);
+        vcoord[3] = vec_sr(vcoord[3], vshift16);
+
+        vcoord16[0] = vec_pack(vcoord[0], vcoord[1]);
+        vcoord16[1] = vec_pack(vcoord[2], vcoord[3]);
+        vperm = vec_pack(vcoord16[0], vcoord16[1]);
+
+        HCSCALE(src1, dst1);
+        HCSCALE(src2, dst2);
+
+        xpos += xInc * 16;
+    }
+    for (i = dstWidth - 1; (i*xInc)>>16 >= srcW - 1; i--) {
+        dst1[i] = src1[srcW-1] * 128;
+        dst2[i] = src2[srcW-1] * 128;
+    }
+}
+
+#undef HCSCALE
+
 #endif /* !HAVE_BIGENDIAN */
 
 #endif /* HAVE_VSX */
@@ -1677,6 +1869,10 @@ av_cold void ff_sws_init_swscale_vsx(SwsContext *c)
 #if !HAVE_BIGENDIAN
     if (c->srcBpc == 8 && c->dstBpc <= 14) {
         c->hyScale = c->hcScale = hScale_real_vsx;
+        if (c->flags & SWS_FAST_BILINEAR && c->dstW >= c->srcW && c->chrDstW >= c->chrSrcW) {
+            c->hyscale_fast = hyscale_fast_vsx;
+            c->hcscale_fast = hcscale_fast_vsx;
+        }
     }
     if (!is16BPS(dstFormat) && !isNBPS(dstFormat) &&
         dstFormat != AV_PIX_FMT_NV12 && dstFormat != AV_PIX_FMT_NV21 &&
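For readers unfamiliar with the "fast" bilinear path: the VSX code above processes 16 output pixels per iteration of exactly the arithmetic that the scalar C path does one pixel at a time. The sketch below (the function name `hyscale_fast_c` and the standalone harness are illustrative, not taken from this patch) shows that per-pixel formula: a 16.16 fixed-point source position, a 7-bit blend factor between two neighbouring source pixels, and a tail fix-up identical to the one at the end of the vector loop.

```c
#include <stdint.h>

/* Scalar sketch of the bilinear fast horizontal scaler that
 * hyscale_fast_vsx vectorizes. xInc is a 16.16 fixed-point step
 * (srcW/dstW * 65536); src must have at least one byte of padding
 * after src[srcW-1] because the loop reads src[xx + 1]. */
static void hyscale_fast_c(int16_t *dst, int dstWidth,
                           const uint8_t *src, int srcW, int xInc)
{
    unsigned int xpos = 0;
    int i;

    for (i = 0; i < dstWidth; i++) {
        unsigned int xx     = xpos >> 16;           /* integer source index */
        unsigned int xalpha = (xpos & 0xFFFF) >> 9; /* 7-bit blend factor   */

        /* Linear interpolation, output up-shifted by 7 bits. */
        dst[i] = (src[xx] << 7) + (src[xx + 1] - src[xx]) * xalpha;
        xpos += xInc;
    }
    /* Tail fix-up, matching the final loop of the vector version:
     * positions at or past the last source pixel get that pixel. */
    for (i = dstWidth - 1; (i * xInc) >> 16 >= srcW - 1; i--)
        dst[i] = src[srcW - 1] * 128;
}
```

The VSX version computes the sixteen `xalpha` values with `vec_sr`/`vec_pack`, builds a `vec_perm` control from the sixteen `xx` offsets relative to `xpos >> 16` (which is why it cannot shrink: the sixteen source pixels must fit inside one unaligned 16-byte load), and does the multiply with `vec_mule`/`vec_mulo` pairs.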