From patchwork Tue Sep 26 05:10:14 2017
X-Patchwork-Submitter: kaustubh.raste@imgtec.com
X-Patchwork-Id: 5282
From: Kaustubh Raste <kaustubh.raste@imgtec.com>
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 26 Sep 2017 10:40:14 +0530
Message-ID: <1506402614-650-1-git-send-email-kaustubh.raste@imgtec.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [FFmpeg-devel] [PATCH] avcodec/mips: Removed generic function call
 in avc intra msa functions
List-Id: FFmpeg development discussions and patches

Signed-off-by: Kaustubh Raste <kaustubh.raste@imgtec.com>
---
 libavcodec/mips/h264pred_msa.c | 215 +++++++++++++++++-----------------------
 1 file changed, 92 insertions(+), 123 deletions(-)

diff --git a/libavcodec/mips/h264pred_msa.c b/libavcodec/mips/h264pred_msa.c
index c297aec..b9990c1 100644
--- a/libavcodec/mips/h264pred_msa.c
+++ b/libavcodec/mips/h264pred_msa.c
@@ -106,115 +106,6 @@ static void intra_predict_horiz_16x16_msa(uint8_t *src, int32_t src_stride,
                                  dst, dst_stride);
 }
 
-static void intra_predict_dc_8x8_msa(uint8_t *src_top, uint8_t *src_left,
-                                     int32_t src_stride_left,
-                                     uint8_t *dst, int32_t dst_stride,
-                                     uint8_t is_above, uint8_t is_left)
-{
-    uint32_t row;
-    uint32_t out, addition = 0;
-    v16u8 src_above, store;
-    v8u16 sum_above;
-    v4u32 sum_top;
-    v2u64 sum;
-
-    if (is_left && is_above) {
-        src_above = LD_UB(src_top);
-
-        sum_above = __msa_hadd_u_h(src_above, src_above);
-        sum_top = __msa_hadd_u_w(sum_above, sum_above);
-        sum = __msa_hadd_u_d(sum_top, sum_top);
-        addition = __msa_copy_u_w((v4i32) sum, 0);
-
-        for (row = 0; row < 8; row++) {
-            addition += src_left[row * src_stride_left];
-        }
-
-        addition = (addition + 8) >> 4;
-        store = (v16u8) __msa_fill_b(addition);
-    } else if (is_left) {
-        for (row = 0; row < 8; row++) {
-            addition += src_left[row * src_stride_left];
-        }
-
-        addition = (addition + 4) >> 3;
-        store = (v16u8) __msa_fill_b(addition);
-    } else if (is_above) {
-        src_above = LD_UB(src_top);
-
-        sum_above = __msa_hadd_u_h(src_above, src_above);
-        sum_top = __msa_hadd_u_w(sum_above, sum_above);
-        sum = __msa_hadd_u_d(sum_top, sum_top);
-        sum = (v2u64) __msa_srari_d((v2i64) sum, 3);
-        store = (v16u8) __msa_splati_b((v16i8) sum, 0);
-    } else {
-        store = (v16u8) __msa_ldi_b(128);
-    }
-
-    out = __msa_copy_u_w((v4i32) store, 0);
-
-    for (row = 8; row--;) {
-        SW(out, dst);
-        SW(out, (dst + 4));
-        dst += dst_stride;
-    }
-}
-
-static void intra_predict_dc_16x16_msa(uint8_t *src_top, uint8_t *src_left,
-                                       int32_t src_stride_left,
-                                       uint8_t *dst, int32_t dst_stride,
-                                       uint8_t is_above, uint8_t is_left)
-{
-    uint32_t row;
-    uint32_t addition = 0;
-    v16u8 src_above, store;
-    v8u16 sum_above;
-    v4u32 sum_top;
-    v2u64 sum;
-
-    if (is_left && is_above) {
-        src_above = LD_UB(src_top);
-
-        sum_above = __msa_hadd_u_h(src_above, src_above);
-        sum_top = __msa_hadd_u_w(sum_above, sum_above);
-        sum = __msa_hadd_u_d(sum_top, sum_top);
-        sum_top = (v4u32) __msa_pckev_w((v4i32) sum, (v4i32) sum);
-        sum = __msa_hadd_u_d(sum_top, sum_top);
-        addition = __msa_copy_u_w((v4i32) sum, 0);
-
-        for (row = 0; row < 16; row++) {
-            addition += src_left[row * src_stride_left];
-        }
-
-        addition = (addition + 16) >> 5;
-        store = (v16u8) __msa_fill_b(addition);
-    } else if (is_left) {
-        for (row = 0; row < 16; row++) {
-            addition += src_left[row * src_stride_left];
-        }
-
-        addition = (addition + 8) >> 4;
-        store = (v16u8) __msa_fill_b(addition);
-    } else if (is_above) {
-        src_above = LD_UB(src_top);
-
-        sum_above = __msa_hadd_u_h(src_above, src_above);
-        sum_top = __msa_hadd_u_w(sum_above, sum_above);
-        sum = __msa_hadd_u_d(sum_top, sum_top);
-        sum_top = (v4u32) __msa_pckev_w((v4i32) sum, (v4i32) sum);
-        sum = __msa_hadd_u_d(sum_top, sum_top);
-        sum = (v2u64) __msa_srari_d((v2i64) sum, 4);
-        store = (v16u8) __msa_splati_b((v16i8) sum, 0);
-    } else {
-        store = (v16u8) __msa_ldi_b(128);
-    }
-
-    for (row = 16; row--;) {
-        ST_UB(store, dst);
-        dst += dst_stride;
-    }
-}
-
 #define INTRA_PREDICT_VALDC_8X8_MSA(val)                                      \
 static void intra_predict_##val##dc_8x8_msa(uint8_t *dst, int32_t dst_stride) \
 {                                                                             \
@@ -646,8 +537,42 @@ void ff_h264_intra_pred_dc_16x16_msa(uint8_t *src, ptrdiff_t stride)
     uint8_t *src_top = src - stride;
     uint8_t *src_left = src - 1;
     uint8_t *dst = src;
+    uint32_t addition = 0;
+    v16u8 src_above, out;
+    v8u16 sum_above;
+    v4u32 sum_top;
+    v2u64 sum;
 
-    intra_predict_dc_16x16_msa(src_top, src_left, stride, dst, stride, 1, 1);
+    src_above = LD_UB(src_top);
+
+    sum_above = __msa_hadd_u_h(src_above, src_above);
+    sum_top = __msa_hadd_u_w(sum_above, sum_above);
+    sum = __msa_hadd_u_d(sum_top, sum_top);
+    sum_top = (v4u32) __msa_pckev_w((v4i32) sum, (v4i32) sum);
+    sum = __msa_hadd_u_d(sum_top, sum_top);
+    addition = __msa_copy_u_w((v4i32) sum, 0);
+    addition += src_left[ 0 * stride];
+    addition += src_left[ 1 * stride];
+    addition += src_left[ 2 * stride];
+    addition += src_left[ 3 * stride];
+    addition += src_left[ 4 * stride];
+    addition += src_left[ 5 * stride];
+    addition += src_left[ 6 * stride];
+    addition += src_left[ 7 * stride];
+    addition += src_left[ 8 * stride];
+    addition += src_left[ 9 * stride];
+    addition += src_left[10 * stride];
+    addition += src_left[11 * stride];
+    addition += src_left[12 * stride];
+    addition += src_left[13 * stride];
+    addition += src_left[14 * stride];
+    addition += src_left[15 * stride];
+    addition = (addition + 16) >> 5;
+    out = (v16u8) __msa_fill_b(addition);
+
+    ST_UB8(out, out, out, out, out, out, out, out, dst, stride);
+    dst += (8 * stride);
+    ST_UB8(out, out, out, out, out, out, out, out, dst, stride);
 }
 
 void ff_h264_intra_pred_vert_16x16_msa(uint8_t *src, ptrdiff_t stride)
@@ -666,38 +591,82 @@ void ff_h264_intra_pred_horiz_16x16_msa(uint8_t *src, ptrdiff_t stride)
 
 void ff_h264_intra_pred_dc_left_16x16_msa(uint8_t *src, ptrdiff_t stride)
 {
-    uint8_t *src_top = src - stride;
     uint8_t *src_left = src - 1;
     uint8_t *dst = src;
-
-    intra_predict_dc_16x16_msa(src_top, src_left, stride, dst, stride, 0, 1);
+    uint32_t addition;
+    v16u8 out;
+
+    addition = src_left[ 0 * stride];
+    addition += src_left[ 1 * stride];
+    addition += src_left[ 2 * stride];
+    addition += src_left[ 3 * stride];
+    addition += src_left[ 4 * stride];
+    addition += src_left[ 5 * stride];
+    addition += src_left[ 6 * stride];
+    addition += src_left[ 7 * stride];
+    addition += src_left[ 8 * stride];
+    addition += src_left[ 9 * stride];
+    addition += src_left[10 * stride];
+    addition += src_left[11 * stride];
+    addition += src_left[12 * stride];
+    addition += src_left[13 * stride];
+    addition += src_left[14 * stride];
+    addition += src_left[15 * stride];
+
+    addition = (addition + 8) >> 4;
+    out = (v16u8) __msa_fill_b(addition);
+
+    ST_UB8(out, out, out, out, out, out, out, out, dst, stride);
+    dst += (8 * stride);
+    ST_UB8(out, out, out, out, out, out, out, out, dst, stride);
 }
 
 void ff_h264_intra_pred_dc_top_16x16_msa(uint8_t *src, ptrdiff_t stride)
 {
     uint8_t *src_top = src - stride;
-    uint8_t *src_left = src - 1;
     uint8_t *dst = src;
+    v16u8 src_above, out;
+    v8u16 sum_above;
+    v4u32 sum_top;
+    v2u64 sum;
+
+    src_above = LD_UB(src_top);
 
-    intra_predict_dc_16x16_msa(src_top, src_left, stride, dst, stride, 1, 0);
+    sum_above = __msa_hadd_u_h(src_above, src_above);
+    sum_top = __msa_hadd_u_w(sum_above, sum_above);
+    sum = __msa_hadd_u_d(sum_top, sum_top);
+    sum_top = (v4u32) __msa_pckev_w((v4i32) sum, (v4i32) sum);
+    sum = __msa_hadd_u_d(sum_top, sum_top);
+    sum = (v2u64) __msa_srari_d((v2i64) sum, 4);
+    out = (v16u8) __msa_splati_b((v16i8) sum, 0);
+
+    ST_UB8(out, out, out, out, out, out, out, out, dst, stride);
+    dst += (8 * stride);
+    ST_UB8(out, out, out, out, out, out, out, out, dst, stride);
 }
 
 void ff_h264_intra_pred_dc_128_8x8_msa(uint8_t *src, ptrdiff_t stride)
 {
-    uint8_t *src_top = src - stride;
-    uint8_t *src_left = src - 1;
-    uint8_t *dst = src;
+    uint64_t out;
+    v16u8 store;
+
+    store = (v16u8) __msa_fill_b(128);
+    out = __msa_copy_u_d((v2i64) store, 0);
 
-    intra_predict_dc_8x8_msa(src_top, src_left, stride, dst, stride, 0, 0);
+    SD4(out, out, out, out, src, stride);
+    src += (4 * stride);
+    SD4(out, out, out, out, src, stride);
 }
 
 void ff_h264_intra_pred_dc_128_16x16_msa(uint8_t *src, ptrdiff_t stride)
 {
-    uint8_t *src_top = src - stride;
-    uint8_t *src_left = src - 1;
-    uint8_t *dst = src;
+    v16u8 out;
+
+    out = (v16u8) __msa_fill_b(128);
 
-    intra_predict_dc_16x16_msa(src_top, src_left, stride, dst, stride, 0, 0);
+    ST_UB8(out, out, out, out, out, out, out, out, src, stride);
+    src += (8 * stride);
+    ST_UB8(out, out, out, out, out, out, out, out, src, stride);
 }
 
 void ff_vp8_pred8x8_127_dc_8_msa(uint8_t *src, ptrdiff_t stride)
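
For readers unfamiliar with the MSA intrinsics: the specialized ff_h264_intra_pred_dc_16x16_msa path above is the standard H.264 Intra_16x16 DC prediction with both neighbours available -- sum the 16 pixels above and the 16 pixels to the left, round with (sum + 16) >> 5, and fill the 16x16 block. A minimal scalar sketch of that computation follows; the function name dc_pred_16x16_ref is hypothetical (not part of the patch) and it models only the both-neighbours-available case, not the left-only, top-only, or 128-fill variants.

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of H.264 Intra_16x16 DC prediction when both the top row
 * and the left column of neighbours are available.  Mirrors the
 * (sum + 16) >> 5 rounding used by the MSA code above.  The name
 * dc_pred_16x16_ref is hypothetical; it is not part of the patch. */
void dc_pred_16x16_ref(uint8_t *src, ptrdiff_t stride)
{
    uint32_t sum = 0;

    for (int i = 0; i < 16; i++)
        sum += src[i - stride];       /* 16 neighbours above the block   */
    for (int i = 0; i < 16; i++)
        sum += src[i * stride - 1];   /* 16 neighbours left of the block */

    /* Round to nearest, divide by 32, splat across the whole block. */
    uint8_t dc = (uint8_t) ((sum + 16) >> 5);
    for (int r = 0; r < 16; r++)
        for (int c = 0; c < 16; c++)
            src[r * stride + c] = dc;
}
```

The MSA version computes the same top-row sum with a __msa_hadd_u_h / __msa_hadd_u_w / __msa_hadd_u_d reduction chain instead of the scalar loop, and stores the result with two ST_UB8 calls instead of the nested fill loops.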