From patchwork Thu Sep 21 07:15:28 2017
X-Patchwork-Submitter: kaustubh.raste@imgtec.com
X-Patchwork-Id: 5213
From: Kaustubh Raste <kaustubh.raste@imgtec.com>
To: ffmpeg-devel@ffmpeg.org
Date: Thu, 21 Sep 2017 12:45:28 +0530
Message-ID: <1505978128-31209-1-git-send-email-kaustubh.raste@imgtec.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [FFmpeg-devel] [PATCH] avcodec/mips: Remove generic func use in hevc
 non-uni copy mc msa functions
Cc: Kaustubh Raste <kaustubh.raste@imgtec.com>

From: Kaustubh Raste <kaustubh.raste@imgtec.com>

Signed-off-by: Kaustubh Raste <kaustubh.raste@imgtec.com>
---
 libavcodec/mips/hevcdsp_msa.c | 168
 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 160 insertions(+), 8 deletions(-)

diff --git a/libavcodec/mips/hevcdsp_msa.c b/libavcodec/mips/hevcdsp_msa.c
index f2bc748..1a854b2 100644
--- a/libavcodec/mips/hevcdsp_msa.c
+++ b/libavcodec/mips/hevcdsp_msa.c
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2015 Manojkumar Bhosale (Manojkumar.Bhosale@imgtec.com)
+ * Copyright (c) 2015 - 2017 Manojkumar Bhosale (Manojkumar.Bhosale@imgtec.com)
  *
  * This file is part of FFmpeg.
  *
@@ -302,8 +302,34 @@ static void hevc_copy_16w_msa(uint8_t *src, int32_t src_stride,
         ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride);
         ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride);
     } else if (0 == (height % 8)) {
-        hevc_copy_16multx8mult_msa(src, src_stride, dst, dst_stride,
-                                   height, 16);
+        uint32_t loop_cnt;
+        v16i8 src0, src1, src2, src3, src4, src5, src6, src7;
+        v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l;
+
+        for (loop_cnt = (height >> 3); loop_cnt--;) {
+            LD_SB8(src, src_stride, src0, src1, src2, src3, src4, src5, src6,
+                   src7);
+            src += (8 * src_stride);
+            ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_r,
+                       in1_r, in2_r, in3_r);
+            ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_l,
+                       in1_l, in2_l, in3_l);
+            SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+            SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+            ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride);
+            ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride);
+            dst += (4 * dst_stride);
+
+            ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_r,
+                       in1_r, in2_r, in3_r);
+            ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_l,
+                       in1_l, in2_l, in3_l);
+            SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+            SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+            ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride);
+            ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride);
+            dst += (4 * dst_stride);
+        }
     }
 }
 
@@ -311,29 +337,155 @@ static void hevc_copy_24w_msa(uint8_t *src, int32_t src_stride,
                               int16_t *dst, int32_t dst_stride,
                               int32_t height)
 {
-    hevc_copy_16multx8mult_msa(src, src_stride, dst, dst_stride, height, 16);
-    hevc_copy_8w_msa(src + 16, src_stride, dst + 16, dst_stride, height);
+    uint32_t loop_cnt;
+    v16i8 zero = { 0 };
+    v16i8 src0, src1, src2, src3, src4, src5, src6, src7;
+    v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l;
+
+    for (loop_cnt = (height >> 2); loop_cnt--;) {
+        LD_SB4(src, src_stride, src0, src1, src2, src3);
+        LD_SB4((src + 16), src_stride, src4, src5, src6, src7);
+        src += (4 * src_stride);
+        ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_r, in1_r,
+                   in2_r, in3_r);
+        ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_l, in1_l,
+                   in2_l, in3_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        ST_SH4(in0_r, in1_r, in2_r, in3_r, dst, dst_stride);
+        ST_SH4(in0_l, in1_l, in2_l, in3_l, (dst + 8), dst_stride);
+        ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_r, in1_r,
+                   in2_r, in3_r);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        ST_SH4(in0_r, in1_r, in2_r, in3_r, (dst + 16), dst_stride);
+        dst += (4 * dst_stride);
+    }
 }
 
 static void hevc_copy_32w_msa(uint8_t *src, int32_t src_stride,
                               int16_t *dst, int32_t dst_stride,
                               int32_t height)
 {
-    hevc_copy_16multx8mult_msa(src, src_stride, dst, dst_stride, height, 32);
+    uint32_t loop_cnt;
+    v16i8 zero = { 0 };
+    v16i8 src0, src1, src2, src3, src4, src5, src6, src7;
+    v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l;
+
+    for (loop_cnt = (height >> 2); loop_cnt--;) {
+        LD_SB4(src, src_stride, src0, src2, src4, src6);
+        LD_SB4((src + 16), src_stride, src1, src3, src5, src7);
+        src += (4 * src_stride);
+
+        ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_r, in1_r,
+                   in2_r, in3_r);
+        ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3, in0_l, in1_l,
+                   in2_l, in3_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8);
+        dst += dst_stride;
+        ST_SH4(in2_r, in2_l, in3_r, in3_l, dst, 8);
+        dst += dst_stride;
+
+        ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_r, in1_r,
+                   in2_r, in3_r);
+        ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7, in0_l, in1_l,
+                   in2_l, in3_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8);
+        dst += dst_stride;
+        ST_SH4(in2_r, in2_l, in3_r, in3_l, dst, 8);
+        dst += dst_stride;
+    }
 }
 
 static void hevc_copy_48w_msa(uint8_t *src, int32_t src_stride,
                               int16_t *dst, int32_t dst_stride,
                               int32_t height)
 {
-    hevc_copy_16multx8mult_msa(src, src_stride, dst, dst_stride, height, 48);
+    uint32_t loop_cnt;
+    v16i8 zero = { 0 };
+    v16i8 src0, src1, src2, src3, src4, src5, src6, src7;
+    v16i8 src8, src9, src10, src11;
+    v8i16 in0_r, in1_r, in2_r, in3_r, in4_r, in5_r;
+    v8i16 in0_l, in1_l, in2_l, in3_l, in4_l, in5_l;
+
+    for (loop_cnt = (height >> 2); loop_cnt--;) {
+        LD_SB3(src, 16, src0, src1, src2);
+        src += src_stride;
+        LD_SB3(src, 16, src3, src4, src5);
+        src += src_stride;
+        LD_SB3(src, 16, src6, src7, src8);
+        src += src_stride;
+        LD_SB3(src, 16, src9, src10, src11);
+        src += src_stride;
+
+        ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3,
+                   in0_r, in1_r, in2_r, in3_r);
+        ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3,
+                   in0_l, in1_l, in2_l, in3_l);
+        ILVR_B2_SH(zero, src4, zero, src5, in4_r, in5_r);
+        ILVL_B2_SH(zero, src4, zero, src5, in4_l, in5_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        SLLI_4V(in4_r, in5_r, in4_l, in5_l, 6);
+        ST_SH6(in0_r, in0_l, in1_r, in1_l, in2_r, in2_l, dst, 8);
+        dst += dst_stride;
+        ST_SH6(in3_r, in3_l, in4_r, in4_l, in5_r, in5_l, dst, 8);
+        dst += dst_stride;
+
+        ILVR_B4_SH(zero, src6, zero, src7, zero, src8, zero, src9,
+                   in0_r, in1_r, in2_r, in3_r);
+        ILVL_B4_SH(zero, src6, zero, src7, zero, src8, zero, src9,
+                   in0_l, in1_l, in2_l, in3_l);
+        ILVR_B2_SH(zero, src10, zero, src11, in4_r, in5_r);
+        ILVL_B2_SH(zero, src10, zero, src11, in4_l, in5_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        SLLI_4V(in4_r, in5_r, in4_l, in5_l, 6);
+        ST_SH6(in0_r, in0_l, in1_r, in1_l, in2_r, in2_l, dst, 8);
+        dst += dst_stride;
+        ST_SH6(in3_r, in3_l, in4_r, in4_l, in5_r, in5_l, dst, 8);
+        dst += dst_stride;
+    }
 }
 
 static void hevc_copy_64w_msa(uint8_t *src, int32_t src_stride,
                               int16_t *dst, int32_t dst_stride,
                               int32_t height)
 {
-    hevc_copy_16multx8mult_msa(src, src_stride, dst, dst_stride, height, 64);
+    uint32_t loop_cnt;
+    v16i8 zero = { 0 };
+    v16i8 src0, src1, src2, src3, src4, src5, src6, src7;
+    v8i16 in0_r, in1_r, in2_r, in3_r, in0_l, in1_l, in2_l, in3_l;
+
+    for (loop_cnt = (height >> 1); loop_cnt--;) {
+        LD_SB4(src, 16, src0, src1, src2, src3);
+        src += src_stride;
+        LD_SB4(src, 16, src4, src5, src6, src7);
+        src += src_stride;
+
+        ILVR_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3,
+                   in0_r, in1_r, in2_r, in3_r);
+        ILVL_B4_SH(zero, src0, zero, src1, zero, src2, zero, src3,
+                   in0_l, in1_l, in2_l, in3_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8);
+        ST_SH4(in2_r, in2_l, in3_r, in3_l, (dst + 32), 8);
+        dst += dst_stride;
+
+        ILVR_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7,
+                   in0_r, in1_r, in2_r, in3_r);
+        ILVL_B4_SH(zero, src4, zero, src5, zero, src6, zero, src7,
+                   in0_l, in1_l, in2_l, in3_l);
+        SLLI_4V(in0_r, in1_r, in2_r, in3_r, 6);
+        SLLI_4V(in0_l, in1_l, in2_l, in3_l, 6);
+        ST_SH4(in0_r, in0_l, in1_r, in1_l, dst, 8);
+        ST_SH4(in2_r, in2_l, in3_r, in3_l, (dst + 32), 8);
+        dst += dst_stride;
+    }
 }
 
 static void hevc_hz_8t_4w_msa(uint8_t *src, int32_t src_stride,