From patchwork Thu Jan  7 09:41:19 2021
X-Patchwork-Id: 24819
From: Alan Kelly <alankelly@google.com>
To: ffmpeg-devel@ffmpeg.org
Date: Thu, 7 Jan 2021 10:41:19 +0100
Message-Id: <20210107094119.3871253-1-alankelly@google.com>
X-Mailer: git-send-email 2.29.2.729.g45daf8777d-goog
Subject: [FFmpeg-devel] [PATCH] Moves yuv2yuvX_sse3 to yasm, unrolls main loop and other small optimizations for ~20% speedup.
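Background for reviewers: each output pixel of this vertical pass is a
filterSize-tap weighted sum down a column of input rows, plus a dither
bias, saturated to 8 bits. As a scalar model of that operation (a sketch
mirroring the reference function the new checkasm test adds below, not
code from the patch itself; clip_uint8 and the "i & 7" dither indexing
are illustrative assumptions):

    #include <stdint.h>

    static uint8_t clip_uint8(int v)
    {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    /* One output row: dest[i] = clip((bias + sum_j filter[j]*src[j][i]) >> 3) */
    static void yuv2yuvX_model(const int16_t *filter, int filterSize,
                               const int16_t **src, uint8_t *dest, int dstW,
                               const uint8_t *dither)
    {
        for (int i = 0; i < dstW; i++) {
            /* dither bias, matching the ((filterSize - 1) * 8 + dither) >> 4
             * seeding of the SIMD accumulators */
            int val = ((filterSize - 1) * 8 + dither[i & 7]) >> 4;
            for (int j = 0; j < filterSize; j++)
                val += src[j][i] * filter[j];
            dest[i] = clip_uint8(val >> 3); /* final scale and saturate */
        }
    }

The asm below keeps four such accumulators live per outer iteration and
stores 2*mmsize output pixels at a time; that is the unrolling the
subject line refers to.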
---
Replaces mova with movdqu due to alignment issues

 libswscale/x86/Makefile     |   1 +
 libswscale/x86/swscale.c    | 106 +++++++++-----------------------
 libswscale/x86/yuv2yuvX.asm | 117 ++++++++++++++++++++++++++++++++++++
 tests/checkasm/sw_scale.c   |  98 ++++++++++++++++++++++++++++++
 4 files changed, 246 insertions(+), 76 deletions(-)
 create mode 100644 libswscale/x86/yuv2yuvX.asm

diff --git a/libswscale/x86/Makefile b/libswscale/x86/Makefile
index 831d5359aa..bfe383364e 100644
--- a/libswscale/x86/Makefile
+++ b/libswscale/x86/Makefile
@@ -13,3 +13,4 @@ X86ASM-OBJS += x86/input.o \
                x86/scale.o \
                x86/rgb_2_rgb.o \
                x86/yuv_2_rgb.o \
+               x86/yuv2yuvX.o \
diff --git a/libswscale/x86/swscale.c b/libswscale/x86/swscale.c
index 3160fedf04..8cd8713705 100644
--- a/libswscale/x86/swscale.c
+++ b/libswscale/x86/swscale.c
@@ -197,81 +197,30 @@ void ff_updateMMXDitherTables(SwsContext *c, int dstY)
 }
 
 #if HAVE_MMXEXT
-static void yuv2yuvX_sse3(const int16_t *filter, int filterSize,
-                          const int16_t **src, uint8_t *dest, int dstW,
-                          const uint8_t *dither, int offset)
-{
-    if(((uintptr_t)dest) & 15){
-        yuv2yuvX_mmxext(filter, filterSize, src, dest, dstW, dither, offset);
-        return;
-    }
-    filterSize--;
-#define MAIN_FUNCTION \
-        "pxor       %%xmm0, %%xmm0 \n\t" \
-        "punpcklbw  %%xmm0, %%xmm3 \n\t" \
-        "movd           %4, %%xmm1 \n\t" \
-        "punpcklwd  %%xmm1, %%xmm1 \n\t" \
-        "punpckldq  %%xmm1, %%xmm1 \n\t" \
-        "punpcklqdq %%xmm1, %%xmm1 \n\t" \
-        "psllw          $3, %%xmm1 \n\t" \
-        "paddw      %%xmm1, %%xmm3 \n\t" \
-        "psraw          $4, %%xmm3 \n\t" \
-        "movdqa     %%xmm3, %%xmm4 \n\t" \
-        "movdqa     %%xmm3, %%xmm7 \n\t" \
-        "movl           %3, %%ecx \n\t" \
-        "mov            %0, %%"FF_REG_d"         \n\t"\
-        "mov            (%%"FF_REG_d"), %%"FF_REG_S" \n\t"\
-        ".p2align       4 \n\t" /* FIXME Unroll? */\
-        "1: \n\t"\
-        "movddup        8(%%"FF_REG_d"), %%xmm0      \n\t" /* filterCoeff */\
-        "movdqa         (%%"FF_REG_S", %%"FF_REG_c", 2), %%xmm2 \n\t" /* srcData */\
-        "movdqa         16(%%"FF_REG_S", %%"FF_REG_c", 2), %%xmm5 \n\t" /* srcData */\
-        "add            $16, %%"FF_REG_d"            \n\t"\
-        "mov            (%%"FF_REG_d"), %%"FF_REG_S" \n\t"\
-        "test           %%"FF_REG_S", %%"FF_REG_S"   \n\t"\
-        "pmulhw         %%xmm0, %%xmm2               \n\t"\
-        "pmulhw         %%xmm0, %%xmm5               \n\t"\
-        "paddw          %%xmm2, %%xmm3               \n\t"\
-        "paddw          %%xmm5, %%xmm4               \n\t"\
-        " jnz           1b                           \n\t"\
-        "psraw          $3, %%xmm3                   \n\t"\
-        "psraw          $3, %%xmm4                   \n\t"\
-        "packuswb       %%xmm4, %%xmm3               \n\t"\
-        "movntdq        %%xmm3, (%1, %%"FF_REG_c")   \n\t"\
-        "add            $16, %%"FF_REG_c"            \n\t"\
-        "cmp            %2, %%"FF_REG_c"             \n\t"\
-        "movdqa         %%xmm7, %%xmm3               \n\t" \
-        "movdqa         %%xmm7, %%xmm4               \n\t" \
-        "mov            %0, %%"FF_REG_d"             \n\t"\
-        "mov            (%%"FF_REG_d"), %%"FF_REG_S" \n\t"\
-        "jb             1b                           \n\t"
-
-    if (offset) {
-        __asm__ volatile(
-            "movq          %5, %%xmm3  \n\t"
-            "movdqa    %%xmm3, %%xmm4  \n\t"
-            "psrlq        $24, %%xmm3  \n\t"
-            "psllq        $40, %%xmm4  \n\t"
-            "por       %%xmm4, %%xmm3  \n\t"
-            MAIN_FUNCTION
-              :: "g" (filter),
-              "r" (dest-offset), "g" ((x86_reg)(dstW+offset)), "m" (offset),
-              "m"(filterSize), "m"(((uint64_t *) dither)[0])
-              : XMM_CLOBBERS("%xmm0" , "%xmm1" , "%xmm2" , "%xmm3" , "%xmm4" , "%xmm5" , "%xmm7" ,)
-                "%"FF_REG_d, "%"FF_REG_S, "%"FF_REG_c
-              );
-    } else {
-        __asm__ volatile(
-            "movq          %5, %%xmm3  \n\t"
-            MAIN_FUNCTION
-              :: "g" (filter),
-              "r" (dest-offset), "g" ((x86_reg)(dstW+offset)), "m" (offset),
-              "m"(filterSize), "m"(((uint64_t *) dither)[0])
-              : XMM_CLOBBERS("%xmm0" , "%xmm1" , "%xmm2" , "%xmm3" , "%xmm4" , "%xmm5" , "%xmm7" ,)
-                "%"FF_REG_d, "%"FF_REG_S, "%"FF_REG_c
-              );
-    }
+#define YUV2YUVX_FUNC(opt, step) \
+void ff_yuv2yuvX_ ##opt(const int16_t *filter, long filterSize, const int16_t **src, \
+                        uint8_t *dest, int dstW, \
+                        const uint8_t *dither, int offset); \
+static void yuv2yuvX_ ##opt(const int16_t *filter, int filterSize, \
+                            const int16_t **src, uint8_t *dest, int dstW, \
+                            const uint8_t *dither, int offset) \
+{ \
+    int remainder = (dstW % step); \
+    int pixelsProcessed = dstW - remainder; \
+    if(((uintptr_t)dest) & 15){ \
+        yuv2yuvX_mmx(filter, filterSize, src, dest, dstW, dither, offset); \
+        return; \
+    } \
+    ff_yuv2yuvX_ ##opt(filter, filterSize - 1, src, dest - offset, pixelsProcessed + offset, dither, offset); \
+    if(remainder > 0){ \
+        yuv2yuvX_mmx(filter, filterSize, src, dest + pixelsProcessed, remainder, dither, offset + pixelsProcessed); \
+    } \
+    return; \
 }
+
+YUV2YUVX_FUNC(sse3, 32)
+YUV2YUVX_FUNC(avx2, 64)
+
 #endif
 #endif /* HAVE_INLINE_ASM */
@@ -402,9 +351,14 @@ av_cold void ff_sws_init_swscale_x86(SwsContext *c)
 #if HAVE_MMXEXT_INLINE
     if (INLINE_MMXEXT(cpu_flags))
         sws_init_swscale_mmxext(c);
-    if (cpu_flags & AV_CPU_FLAG_SSE3){
-        if(c->use_mmx_vfilter && !(c->flags & SWS_ACCURATE_RND))
+    if (cpu_flags & AV_CPU_FLAG_AVX2){
+        if(c->use_mmx_vfilter && !(c->flags & SWS_ACCURATE_RND)){
+            c->yuv2planeX = yuv2yuvX_avx2;
+        }
+    } else if (cpu_flags & AV_CPU_FLAG_SSE3){
+        if(c->use_mmx_vfilter && !(c->flags & SWS_ACCURATE_RND)){
             c->yuv2planeX = yuv2yuvX_sse3;
+        }
     }
 #endif
diff --git a/libswscale/x86/yuv2yuvX.asm b/libswscale/x86/yuv2yuvX.asm
new file mode 100644
index 0000000000..ac0e9b6884
--- /dev/null
+++ b/libswscale/x86/yuv2yuvX.asm
@@ -0,0 +1,117 @@
+;******************************************************************************
+;* x86-optimized yuv2yuvX
+;* Copyright 2020 Google LLC
+;* Copyright (C) 2001-2011 Michael Niedermayer <michaelni@gmx.at>
+;*
+;* This file is part of FFmpeg.
+;*
+;* FFmpeg is free software; you can redistribute it and/or
+;* modify it under the terms of the GNU Lesser General Public
+;* License as published by the Free Software Foundation; either
+;* version 2.1 of the License, or (at your option) any later version.
+;*
+;* FFmpeg is distributed in the hope that it will be useful,
+;* but WITHOUT ANY WARRANTY; without even the implied warranty of
+;* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+;* Lesser General Public License for more details.
+;*
+;* You should have received a copy of the GNU Lesser General Public
+;* License along with FFmpeg; if not, write to the Free Software
+;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+;******************************************************************************
+
+%include "libavutil/x86/x86util.asm"
+
+SECTION .text
+
+;-----------------------------------------------------------------------------
+; yuv2yuvX
+;
+; void ff_yuv2yuvX_<opt>(const int16_t *filter, int filterSize,
+;                        const int16_t **src, uint8_t *dest, int dstW,
+;                        const uint8_t *dither, int offset);
+;
+;-----------------------------------------------------------------------------
+
+%macro YUV2YUVX_FUNC 0
+cglobal yuv2yuvX, 7, 7, 8, filter, filterSize, src, dest, dstW, dither, offset
+%if ARCH_X86_64
+    movsxd               dstWq, dstWd
+    movsxd             offsetq, offsetd
+%endif ; x86-64
+    movddup                 m0, [filterq + 8]
+%if cpuflag(avx2)
+    vpbroadcastq            m3, [ditherq]
+%else
+    movq                  xmm3, [ditherq]
+%endif ; avx2
+    cmp                offsetd, 0
+    jz                 .offset
+
+    ; offset != 0 path.
+    psrlq                   m5, m3, $18
+    psllq                   m3, m3, $28
+    por                     m3, m3, m5
+
+.offset:
+    movd                  xmm1, filterSized
+%if cpuflag(avx2)
+    vpbroadcastw            m1, xmm1
+%else
+    pshuflw                 m1, m1, q0000
+    punpcklqdq              m1, m1
+%endif ; avx2
+    pxor                    m0, m0, m0
+    mov            filterSizeq, filterq
+    mov                   srcq, [filterSizeq]
+    punpcklbw               m3, m0
+    psllw                   m1, m1, 3
+    paddw                   m3, m3, m1
+    psraw                   m7, m3, 4
+.outerloop:
+    mova                    m4, m7
+    mova                    m3, m7
+    mova                    m6, m7
+    mova                    m1, m7
+.loop:
+%if cpuflag(avx2)
+    vpbroadcastq            m0, [filterSizeq + 8]
+%else
+    movddup                 m0, [filterSizeq + 8]
+%endif
+    pmulhw                  m2, m0, [srcq + offsetq * 2]
+    pmulhw                  m5, m0, [srcq + offsetq * 2 + mmsize]
+    paddw                   m3, m3, m2
+    paddw                   m4, m4, m5
+    pmulhw                  m2, m0, [srcq + offsetq * 2 + 2 * mmsize]
+    pmulhw                  m5, m0, [srcq + offsetq * 2 + 3 * mmsize]
+    paddw                   m6, m6, m2
+    paddw                   m1, m1, m5
+    add            filterSizeq, $10
+    mov                   srcq, [filterSizeq]
+    test                  srcq, srcq
+    jnz                  .loop
+    psraw                   m3, m3, 3
+    psraw                   m4, m4, 3
+    psraw                   m6, m6, 3
+    psraw                   m1, m1, 3
+    packuswb                m3, m3, m4
+    packuswb                m6, m6, m1
+    mov                   srcq, [filterq]
+%if cpuflag(avx2)
+    vpermq                  m3, m3, 216
+    vpermq                  m6, m6, 216
+%endif
+    movdqu  [destq + offsetq], m3
+    movdqu  [destq + offsetq + mmsize], m6
+    add                offsetq, mmsize * 2
+    mov            filterSizeq, filterq
+    cmp                offsetq, dstWq
+    jb              .outerloop
+    REP_RET
+%endmacro
+
+INIT_XMM sse3
+YUV2YUVX_FUNC
+INIT_YMM avx2
+YUV2YUVX_FUNC
diff --git a/tests/checkasm/sw_scale.c b/tests/checkasm/sw_scale.c
index 9efa2b4def..3bc6066f35 100644
--- a/tests/checkasm/sw_scale.c
+++ b/tests/checkasm/sw_scale.c
@@ -37,6 +37,102 @@
 
 #define SRC_PIXELS 128
 
+
+// This reference function is the same approximate algorithm employed by the
+// SIMD functions
+static void ref_function(const int16_t *filter, int filterSize,
+                         const int16_t **src, uint8_t *dest, int dstW,
+                         const uint8_t *dither, int offset)
+{
+    int i, d;
+    d = ((filterSize - 1) * 8 + dither[0]) >> 4;
+    for (i = 0; i < dstW; i++) {
+        int val = d;
+        int j;
+        for (j = 0; j < filterSize; j++)
+            val += src[j][i + offset] * filter[j];
+
+        dest[i] = av_clip_uint8(val >> 3);
+    }
+}
+
+static void check_yuv2yuvX(void)
+{
+    struct SwsContext *ctx;
+    int fsi, osi, i, j;
+#define LARGEST_FILTER 16
+#define FILTER_SIZES 4
+    static const int filter_sizes[FILTER_SIZES] = {1, 4, 8, 16};
+
+    declare_func_emms(AV_CPU_FLAG_MMX, void, const int16_t *filter,
+                      int filterSize, const int16_t **src, uint8_t *dest,
+                      int dstW, const uint8_t *dither, int offset);
+
+    int dstW = SRC_PIXELS;
+    const int16_t **src;
+    LOCAL_ALIGNED_32(int16_t, src_pixels, [LARGEST_FILTER * SRC_PIXELS]);
+    LOCAL_ALIGNED_32(int16_t, filter_coeff, [LARGEST_FILTER]);
+    LOCAL_ALIGNED_32(uint8_t, dst0, [SRC_PIXELS]);
+    LOCAL_ALIGNED_32(uint8_t, dst1, [SRC_PIXELS]);
+    LOCAL_ALIGNED_32(uint8_t, dither, [SRC_PIXELS]);
+    union VFilterData{
+        const int16_t *src;
+        uint16_t coeff[8];
+    } *vFilterData;
+    uint8_t d_val = rnd();
+    randomize_buffers(filter_coeff, LARGEST_FILTER);
+    randomize_buffers(src_pixels, LARGEST_FILTER * SRC_PIXELS);
+    ctx = sws_alloc_context();
+    if (sws_init_context(ctx, NULL, NULL) < 0)
+        fail();
+
+    ff_getSwsFunc(ctx);
+    for(i = 0; i < SRC_PIXELS; ++i){
+        dither[i] = d_val;
+    }
+    for(osi = 0; osi < 64; osi += 16){
+        for(fsi = 0; fsi < FILTER_SIZES; ++fsi){
+            src = av_malloc(sizeof(int16_t*) * filter_sizes[fsi]);
+            vFilterData = av_malloc((filter_sizes[fsi] + 2) * sizeof(union VFilterData));
+            memset(vFilterData, 0, (filter_sizes[fsi] + 2) * sizeof(union VFilterData));
+            for(i = 0; i < filter_sizes[fsi]; ++i){
+                src[i] = &src_pixels[i * SRC_PIXELS];
+                vFilterData[i].src = src[i];
+                for(j = 0; j < 4; ++j){
+                    vFilterData[i].coeff[j + 4] = filter_coeff[i];
+                }
+            }
+            if (check_func(ctx->yuv2planeX, "yuv2yuvX_%d_%d", filter_sizes[fsi], osi)){
+                memset(dst0, 0, SRC_PIXELS * sizeof(dst0[0]));
+                memset(dst1, 0, SRC_PIXELS * sizeof(dst1[0]));
+
+                // The reference function is not the scalar function selected when mmx
+                // is deactivated, as the SIMD functions do not give the same result as
+                // the scalar ones due to rounding. The SIMD functions are only used
+                // when the flag SWS_ACCURATE_RND is not set
+                ref_function(&filter_coeff[0], filter_sizes[fsi], src, dst0, dstW - osi, dither, osi);
+                // There's no point in calling new for the reference function
+                if(ctx->use_mmx_vfilter){
+                    call_new((const int16_t*)vFilterData, filter_sizes[fsi], src, dst1, dstW - osi, dither, osi);
+                    if (memcmp(dst0, dst1, SRC_PIXELS * sizeof(dst0[0]))){
+                        fail();
+                    }
+                    bench_new((const int16_t*)vFilterData, filter_sizes[fsi], src, dst1, dstW - osi, dither, osi);
+                }
+            }
+            av_free(src);
+            av_free(vFilterData);
+        }
+    }
+    sws_freeContext(ctx);
+#undef FILTER_SIZES
+}
+
 static void check_hscale(void)
 {
 #define MAX_FILTER_WIDTH 40
@@ -131,4 +227,6 @@ void checkasm_check_sw_scale(void)
 {
     check_hscale();
     report("hscale");
+    check_yuv2yuvX();
+    report("yuv2yuvX");
 }
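A note for reviewers on the data layout: the asm walks the same 16-byte
filter records that the mmx vertical filter path builds and that the
test's union VFilterData mimics. A struct-style sketch of one record
(my reading of the code, not part of the patch; the type and field
names are made up):

    #include <stdint.h>

    typedef struct {
        const int16_t *src;  /* this tap's input row; a NULL pointer ends the
                                chain, which is what "test srcq, srcq / jnz
                                .loop" checks */
        uint16_t coeff[4];   /* the tap coefficient replicated so that
                                movddup / vpbroadcastq can splat it */
    } VFilterRecord;         /* 16 bytes on x86-64, matching the
                                "add filterSizeq, $10" stride */

Each .loop iteration multiplies 4*mmsize bytes of the current row by the
splatted coefficient (pmulhw keeps the high 16 bits of each product) and
accumulates; the outer loop then shifts, packs and stores 2*mmsize output
pixels. After building checkasm, the new test should be runnable on its
own with something like "tests/checkasm/checkasm --test=sw_scale --bench".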