From patchwork Tue Sep 19 11:14:56 2017
X-Patchwork-Submitter: Manojkumar Bhosale
X-Patchwork-Id: 5192
From: Manojkumar Bhosale
To: FFmpeg development discussions and patches
Date: Tue, 19 Sep 2017 11:14:56 +0000
Message-ID: <70293ACCC3BA6A4E81FFCA024C7A86E1E0591F56@PUMAIL01.pu.imgtec.org>
References: <1505723413-30798-1-git-send-email-kaustubh.raste@imgtec.com>
In-Reply-To: <1505723413-30798-1-git-send-email-kaustubh.raste@imgtec.com>
Subject: Re: [FFmpeg-devel] [PATCH] avcodec/mips: Unrolled loops and expanded functions in avc put mc 10 & 30 msa functions
Cc: Kaustubh Raste

LGTM

-----Original Message-----
From: ffmpeg-devel [mailto:ffmpeg-devel-bounces@ffmpeg.org] On Behalf Of kaustubh.raste@imgtec.com
Sent: Monday, September 18, 2017 2:00 PM
To: ffmpeg-devel@ffmpeg.org
Cc: Kaustubh Raste
Subject: [FFmpeg-devel] [PATCH] avcodec/mips: Unrolled loops and expanded functions in avc put mc 10 & 30 msa functions

From: Kaustubh Raste

Signed-off-by: Kaustubh Raste
---
 libavcodec/mips/h264qpel_msa.c | 284 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 278 insertions(+), 6 deletions(-)
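
For readers less familiar with the MSA intrinsics, the unrolled code below keeps the existing
semantics of the horizontal quarter-pel luma put functions; the following scalar sketch shows the
arithmetic each output pixel goes through, assuming the standard H.264 6-tap luma filter and
rounding. The helper names are illustrative only and are not part of the patch.

#include <stddef.h>
#include <stdint.h>

static inline uint8_t clip_u8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t) v;
}

/* H.264 6-tap luma half-pel filter (1, -5, 20, 20, -5, 1) with +16 rounding and a
 * shift by 5 -- this corresponds to the minus5b/plus20b constants and the
 * SRARI_H*(..., 5) rounding shifts in the MSA code below.  As in the real
 * functions, s must have at least 2 readable pixels of margin on each side. */
static inline uint8_t hpel_h(const uint8_t *s)
{
    int sum = s[-2] - 5 * s[-1] + 20 * s[0] + 20 * s[1] - 5 * s[2] + s[3];
    return clip_u8((sum + 16) >> 5);
}

/* mc10/mc30: rounded average of the half-pel sample with the nearest integer
 * sample (offset 0 for mc10, offset 1 for mc30), which is the role of the
 * __msa_aver_s_b() calls on the 2- or 3-byte-shifted source vectors. */
static void put_qpel_mc10_mc30_ref(uint8_t *dst, const uint8_t *src,
                                   ptrdiff_t stride, int width, int height,
                                   int offset /* 0 = mc10, 1 = mc30 */)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++)
            dst[x] = (hpel_h(src + x) + src[x + offset] + 1) >> 1;
        src += stride;
        dst += stride;
    }
}

Each iteration of the unrolled qpel16 loops below produces four 16-pixel rows of this output at
once; the qpel8 and qpel4 variants handle their whole block directly instead of going through the
shared avc_luma_hz_qrt_*_msa helpers.
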
diff --git a/libavcodec/mips/h264qpel_msa.c b/libavcodec/mips/h264qpel_msa.c
index 05dffea..b7f6c3d 100644
--- a/libavcodec/mips/h264qpel_msa.c
+++ b/libavcodec/mips/h264qpel_msa.c
@@ -3065,37 +3065,309 @@ void ff_avg_h264_qpel4_mc00_msa(uint8_t *dst, const uint8_t *src,
 void ff_put_h264_qpel16_mc10_msa(uint8_t *dst, const uint8_t *src,
                                  ptrdiff_t stride)
 {
-    avc_luma_hz_qrt_16w_msa(src - 2, stride, dst, stride, 16, 0);
+    uint32_t loop_cnt;
+    v16i8 dst0, dst1, dst2, dst3, src0, src1, src2, src3, src4, src5, src6;
+    v16i8 mask0, mask1, mask2, mask3, mask4, mask5, src7, vec11;
+    v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9, vec10;
+    v8i16 res0, res1, res2, res3, res4, res5, res6, res7;
+    v16i8 minus5b = __msa_ldi_b(-5);
+    v16i8 plus20b = __msa_ldi_b(20);
+
+    LD_SB3(&luma_mask_arr[0], 16, mask0, mask1, mask2);
+    mask3 = mask0 + 8;
+    mask4 = mask1 + 8;
+    mask5 = mask2 + 8;
+    src -= 2;
+
+    for (loop_cnt = 4; loop_cnt--;) {
+        LD_SB2(src, 16, src0, src1);
+        src += stride;
+        LD_SB2(src, 16, src2, src3);
+        src += stride;
+        LD_SB2(src, 16, src4, src5);
+        src += stride;
+        LD_SB2(src, 16, src6, src7);
+        src += stride;
+
+        XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7);
+        VSHF_B2_SB(src0, src0, src0, src1, mask0, mask3, vec0, vec3);
+        VSHF_B2_SB(src2, src2, src2, src3, mask0, mask3, vec6, vec9);
+        VSHF_B2_SB(src0, src0, src0, src1, mask1, mask4, vec1, vec4);
+        VSHF_B2_SB(src2, src2, src2, src3, mask1, mask4, vec7, vec10);
+        VSHF_B2_SB(src0, src0, src0, src1, mask2, mask5, vec2, vec5);
+        VSHF_B2_SB(src2, src2, src2, src3, mask2, mask5, vec8, vec11);
+        HADD_SB4_SH(vec0, vec3, vec6, vec9, res0, res1, res2, res3);
+        DPADD_SB4_SH(vec1, vec4, vec7, vec10, minus5b, minus5b, minus5b,
+                     minus5b, res0, res1, res2, res3);
+        DPADD_SB4_SH(vec2, vec5, vec8, vec11, plus20b, plus20b, plus20b,
+                     plus20b, res0, res1, res2, res3);
+        VSHF_B2_SB(src4, src4, src4, src5, mask0, mask3, vec0, vec3);
+        VSHF_B2_SB(src6, src6, src6, src7, mask0, mask3, vec6, vec9);
+        VSHF_B2_SB(src4, src4, src4, src5, mask1, mask4, vec1, vec4);
+        VSHF_B2_SB(src6, src6, src6, src7, mask1, mask4, vec7, vec10);
+        VSHF_B2_SB(src4, src4, src4, src5, mask2, mask5, vec2, vec5);
+        VSHF_B2_SB(src6, src6, src6, src7, mask2, mask5, vec8, vec11);
+        HADD_SB4_SH(vec0, vec3, vec6, vec9, res4, res5, res6, res7);
+        DPADD_SB4_SH(vec1, vec4, vec7, vec10, minus5b, minus5b, minus5b,
+                     minus5b, res4, res5, res6, res7);
+        DPADD_SB4_SH(vec2, vec5, vec8, vec11, plus20b, plus20b, plus20b,
+                     plus20b, res4, res5, res6, res7);
+        SLDI_B2_SB(src1, src3, src0, src2, src0, src2, 2);
+        SLDI_B2_SB(src5, src7, src4, src6, src4, src6, 2);
+        SRARI_H4_SH(res0, res1, res2, res3, 5);
+        SRARI_H4_SH(res4, res5, res6, res7, 5);
+        SAT_SH4_SH(res0, res1, res2, res3, 7);
+        SAT_SH4_SH(res4, res5, res6, res7, 7);
+        PCKEV_B2_SB(res1, res0, res3, res2, dst0, dst1);
+        PCKEV_B2_SB(res5, res4, res7, res6, dst2, dst3);
+        dst0 = __msa_aver_s_b(dst0, src0);
+        dst1 = __msa_aver_s_b(dst1, src2);
+        dst2 = __msa_aver_s_b(dst2, src4);
+        dst3 = __msa_aver_s_b(dst3, src6);
+        XORI_B4_128_SB(dst0, dst1, dst2, dst3);
+        ST_SB4(dst0, dst1, dst2, dst3, dst, stride);
+        dst += (4 * stride);
+    }
 }
 
 void ff_put_h264_qpel16_mc30_msa(uint8_t *dst, const uint8_t *src,
                                  ptrdiff_t stride)
 {
-    avc_luma_hz_qrt_16w_msa(src - 2, stride, dst, stride, 16, 1);
+    uint32_t loop_cnt;
+    v16i8 dst0, dst1, dst2, dst3, src0, src1, src2, src3, src4, src5, src6;
+    v16i8 mask0, mask1, mask2, mask3, mask4, mask5, src7, vec11;
+    v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9, vec10;
+    v8i16 res0, res1, res2, res3, res4, res5, res6, res7;
+    v16i8 minus5b = __msa_ldi_b(-5);
+    v16i8 plus20b = __msa_ldi_b(20);
+
+    LD_SB3(&luma_mask_arr[0], 16, mask0, mask1, mask2);
+    mask3 = mask0 + 8;
+    mask4 = mask1 + 8;
+    mask5 = mask2 + 8;
+    src -= 2;
+
+    for (loop_cnt = 4; loop_cnt--;) {
+        LD_SB2(src, 16, src0, src1);
+        src += stride;
+        LD_SB2(src, 16, src2, src3);
+        src += stride;
+        LD_SB2(src, 16, src4, src5);
+        src += stride;
+        LD_SB2(src, 16, src6, src7);
+        src += stride;
+
+        XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7);
+        VSHF_B2_SB(src0, src0, src0, src1, mask0, mask3, vec0, vec3);
+        VSHF_B2_SB(src2, src2, src2, src3, mask0, mask3, vec6, vec9);
+        VSHF_B2_SB(src0, src0, src0, src1, mask1, mask4, vec1, vec4);
+        VSHF_B2_SB(src2, src2, src2, src3, mask1, mask4, vec7, vec10);
+        VSHF_B2_SB(src0, src0, src0, src1, mask2, mask5, vec2, vec5);
+        VSHF_B2_SB(src2, src2, src2, src3, mask2, mask5, vec8, vec11);
+        HADD_SB4_SH(vec0, vec3, vec6, vec9, res0, res1, res2, res3);
+        DPADD_SB4_SH(vec1, vec4, vec7, vec10, minus5b, minus5b, minus5b,
+                     minus5b, res0, res1, res2, res3);
+        DPADD_SB4_SH(vec2, vec5, vec8, vec11, plus20b, plus20b, plus20b,
+                     plus20b, res0, res1, res2, res3);
+        VSHF_B2_SB(src4, src4, src4, src5, mask0, mask3, vec0, vec3);
+        VSHF_B2_SB(src6, src6, src6, src7, mask0, mask3, vec6, vec9);
+        VSHF_B2_SB(src4, src4, src4, src5, mask1, mask4, vec1, vec4);
+        VSHF_B2_SB(src6, src6, src6, src7, mask1, mask4, vec7, vec10);
+        VSHF_B2_SB(src4, src4, src4, src5, mask2, mask5, vec2, vec5);
+        VSHF_B2_SB(src6, src6, src6, src7, mask2, mask5, vec8, vec11);
+        HADD_SB4_SH(vec0, vec3, vec6, vec9, res4, res5, res6, res7);
+        DPADD_SB4_SH(vec1, vec4, vec7, vec10, minus5b, minus5b, minus5b,
+                     minus5b, res4, res5, res6, res7);
+        DPADD_SB4_SH(vec2, vec5, vec8, vec11, plus20b, plus20b, plus20b,
+                     plus20b, res4, res5, res6, res7);
+        SLDI_B2_SB(src1, src3, src0, src2, src0, src2, 3);
+        SLDI_B2_SB(src5, src7, src4, src6, src4, src6, 3);
+        SRARI_H4_SH(res0, res1, res2, res3, 5);
+        SRARI_H4_SH(res4, res5, res6, res7, 5);
+        SAT_SH4_SH(res0, res1, res2, res3, 7);
+        SAT_SH4_SH(res4, res5, res6, res7, 7);
+        PCKEV_B2_SB(res1, res0, res3, res2, dst0, dst1);
+        PCKEV_B2_SB(res5, res4, res7, res6, dst2, dst3);
+        dst0 = __msa_aver_s_b(dst0, src0);
+        dst1 = __msa_aver_s_b(dst1, src2);
+        dst2 = __msa_aver_s_b(dst2, src4);
+        dst3 = __msa_aver_s_b(dst3, src6);
+        XORI_B4_128_SB(dst0, dst1, dst2, dst3);
+        ST_SB4(dst0, dst1, dst2, dst3, dst, stride);
+        dst += (4 * stride);
+    }
 }
 
 void ff_put_h264_qpel8_mc10_msa(uint8_t *dst, const uint8_t *src,
                                 ptrdiff_t stride)
 {
-    avc_luma_hz_qrt_8w_msa(src - 2, stride, dst, stride, 8, 0);
+    v16i8 src0, src1, src2, src3, src4, src5, src6, src7, mask0, mask1, mask2;
+    v16i8 tmp0, tmp1, tmp2, tmp3, vec11;
+    v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9, vec10;
+    v8i16 res0, res1, res2, res3, res4, res5, res6, res7;
+    v16i8 minus5b = __msa_ldi_b(-5);
+    v16i8 plus20b = __msa_ldi_b(20);
+
+    LD_SB3(&luma_mask_arr[0], 16, mask0, mask1, mask2);
+    LD_SB8(src - 2, stride, src0, src1, src2, src3, src4, src5, src6, src7);
+    XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7);
+    VSHF_B2_SB(src0, src0, src1, src1, mask0, mask0, vec0, vec1);
+    VSHF_B2_SB(src2, src2, src3, src3, mask0, mask0, vec2, vec3);
+    HADD_SB4_SH(vec0, vec1, vec2, vec3, res0, res1, res2, res3);
+    VSHF_B2_SB(src0, src0, src1, src1, mask1, mask1, vec4, vec5);
+    VSHF_B2_SB(src2, src2, src3, src3, mask1, mask1, vec6, vec7);
+    DPADD_SB4_SH(vec4, vec5, vec6, vec7, minus5b, minus5b, minus5b, minus5b,
+                 res0, res1, res2, res3);
+    VSHF_B2_SB(src0, src0, src1, src1, mask2, mask2, vec8, vec9);
+    VSHF_B2_SB(src2, src2, src3, src3, mask2, mask2, vec10, vec11);
+    DPADD_SB4_SH(vec8, vec9, vec10, vec11, plus20b, plus20b, plus20b, plus20b,
+                 res0, res1, res2, res3);
+    VSHF_B2_SB(src4, src4, src5, src5, mask0, mask0, vec0, vec1);
+    VSHF_B2_SB(src6, src6, src7, src7, mask0, mask0, vec2, vec3);
+    HADD_SB4_SH(vec0, vec1, vec2, vec3, res4, res5, res6, res7);
+    VSHF_B2_SB(src4, src4, src5, src5, mask1, mask1, vec4, vec5);
+    VSHF_B2_SB(src6, src6, src7, src7, mask1, mask1, vec6, vec7);
+    DPADD_SB4_SH(vec4, vec5, vec6, vec7, minus5b, minus5b, minus5b, minus5b,
+                 res4, res5, res6, res7);
+    VSHF_B2_SB(src4, src4, src5, src5, mask2, mask2, vec8, vec9);
+    VSHF_B2_SB(src6, src6, src7, src7, mask2, mask2, vec10, vec11);
+    DPADD_SB4_SH(vec8, vec9, vec10, vec11, plus20b, plus20b, plus20b, plus20b,
+                 res4, res5, res6, res7);
+    SLDI_B2_SB(src0, src1, src0, src1, src0, src1, 2);
+    SLDI_B2_SB(src2, src3, src2, src3, src2, src3, 2);
+    SLDI_B2_SB(src4, src5, src4, src5, src4, src5, 2);
+    SLDI_B2_SB(src6, src7, src6, src7, src6, src7, 2);
+    PCKEV_D2_SB(src1, src0, src3, src2, src0, src1);
+    PCKEV_D2_SB(src5, src4, src7, src6, src4, src5);
+    SRARI_H4_SH(res0, res1, res2, res3, 5);
+    SRARI_H4_SH(res4, res5, res6, res7, 5);
+    SAT_SH4_SH(res0, res1, res2, res3, 7);
+    SAT_SH4_SH(res4, res5, res6, res7, 7);
+    PCKEV_B2_SB(res1, res0, res3, res2, tmp0, tmp1);
+    PCKEV_B2_SB(res5, res4, res7, res6, tmp2, tmp3);
+    tmp0 = __msa_aver_s_b(tmp0, src0);
+    tmp1 = __msa_aver_s_b(tmp1, src1);
+    tmp2 = __msa_aver_s_b(tmp2, src4);
+    tmp3 = __msa_aver_s_b(tmp3, src5);
+    XORI_B4_128_SB(tmp0, tmp1, tmp2, tmp3);
+    ST8x8_UB(tmp0, tmp1, tmp2, tmp3, dst, stride);
 }
 
 void ff_put_h264_qpel8_mc30_msa(uint8_t *dst, const uint8_t *src,
                                 ptrdiff_t stride)
 {
-    avc_luma_hz_qrt_8w_msa(src - 2, stride, dst, stride, 8, 1);
+    v16i8 src0, src1, src2, src3, src4, src5, src6, src7, mask0, mask1, mask2;
+    v16i8 tmp0, tmp1, tmp2, tmp3, vec11;
+    v16i8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9, vec10;
+    v8i16 res0, res1, res2, res3, res4, res5, res6, res7;
+    v16i8 minus5b = __msa_ldi_b(-5);
+    v16i8 plus20b = __msa_ldi_b(20);
+
+    LD_SB3(&luma_mask_arr[0], 16, mask0, mask1, mask2);
+    LD_SB8(src - 2, stride, src0, src1, src2, src3, src4, src5, src6, src7);
+    XORI_B8_128_SB(src0, src1, src2, src3, src4, src5, src6, src7);
+    VSHF_B2_SB(src0, src0, src1, src1, mask0, mask0, vec0, vec1);
+    VSHF_B2_SB(src2, src2, src3, src3, mask0, mask0, vec2, vec3);
+    HADD_SB4_SH(vec0, vec1, vec2, vec3, res0, res1, res2, res3);
+    VSHF_B2_SB(src0, src0, src1, src1, mask1, mask1, vec4, vec5);
+    VSHF_B2_SB(src2, src2, src3, src3, mask1, mask1, vec6, vec7);
+    DPADD_SB4_SH(vec4, vec5, vec6, vec7, minus5b, minus5b, minus5b, minus5b,
+                 res0, res1, res2, res3);
+    VSHF_B2_SB(src0, src0, src1, src1, mask2, mask2, vec8, vec9);
+    VSHF_B2_SB(src2, src2, src3, src3, mask2, mask2, vec10, vec11);
+    DPADD_SB4_SH(vec8, vec9, vec10, vec11, plus20b, plus20b, plus20b, plus20b,
+                 res0, res1, res2, res3);
+    VSHF_B2_SB(src4, src4, src5, src5, mask0, mask0, vec0, vec1);
+    VSHF_B2_SB(src6, src6, src7, src7, mask0, mask0, vec2, vec3);
+    HADD_SB4_SH(vec0, vec1, vec2, vec3, res4, res5, res6, res7);
+    VSHF_B2_SB(src4, src4, src5, src5, mask1, mask1, vec4, vec5);
+    VSHF_B2_SB(src6, src6, src7, src7, mask1, mask1, vec6, vec7);
+    DPADD_SB4_SH(vec4, vec5, vec6, vec7, minus5b, minus5b, minus5b, minus5b,
+                 res4, res5, res6, res7);
+    VSHF_B2_SB(src4, src4, src5, src5, mask2, mask2, vec8, vec9);
+    VSHF_B2_SB(src6, src6, src7, src7, mask2, mask2, vec10, vec11);
+    DPADD_SB4_SH(vec8, vec9, vec10, vec11, plus20b, plus20b, plus20b, plus20b,
+                 res4, res5, res6, res7);
+    SLDI_B2_SB(src0, src1, src0, src1, src0, src1, 3);
+    SLDI_B2_SB(src2, src3, src2, src3, src2, src3, 3);
+    SLDI_B2_SB(src4, src5, src4, src5, src4, src5, 3);
+    SLDI_B2_SB(src6, src7, src6, src7, src6, src7, 3);
+    PCKEV_D2_SB(src1, src0, src3, src2, src0, src1);
+    PCKEV_D2_SB(src5, src4, src7, src6, src4, src5);
+    SRARI_H4_SH(res0, res1, res2, res3, 5);
+    SRARI_H4_SH(res4, res5, res6, res7, 5);
+    SAT_SH4_SH(res0, res1, res2, res3, 7);
+    SAT_SH4_SH(res4, res5, res6, res7, 7);
+    PCKEV_B2_SB(res1, res0, res3, res2, tmp0, tmp1);
+    PCKEV_B2_SB(res5, res4, res7, res6, tmp2, tmp3);
+    tmp0 = __msa_aver_s_b(tmp0, src0);
+    tmp1 = __msa_aver_s_b(tmp1, src1);
+    tmp2 = __msa_aver_s_b(tmp2, src4);
+    tmp3 = __msa_aver_s_b(tmp3, src5);
+    XORI_B4_128_SB(tmp0, tmp1, tmp2, tmp3);
+    ST8x8_UB(tmp0, tmp1, tmp2, tmp3, dst, stride);
 }
 
 void ff_put_h264_qpel4_mc10_msa(uint8_t *dst, const uint8_t *src,
                                 ptrdiff_t stride)
 {
-    avc_luma_hz_qrt_4w_msa(src - 2, stride, dst, stride, 4, 0);
+    v16i8 src0, src1, src2, src3, res, mask0, mask1, mask2;
+    v16i8 vec0, vec1, vec2, vec3, vec4, vec5;
+    v8i16 res0, res1;
+    v16i8 minus5b = __msa_ldi_b(-5);
+    v16i8 plus20b = __msa_ldi_b(20);
+
+    LD_SB3(&luma_mask_arr[48], 16, mask0, mask1, mask2);
+    LD_SB4(src - 2, stride, src0, src1, src2, src3);
+    XORI_B4_128_SB(src0, src1, src2, src3);
+    VSHF_B2_SB(src0, src1, src2, src3, mask0, mask0, vec0, vec1);
+    HADD_SB2_SH(vec0, vec1, res0, res1);
+    VSHF_B2_SB(src0, src1, src2, src3, mask1, mask1, vec2, vec3);
+    DPADD_SB2_SH(vec2, vec3, minus5b, minus5b, res0, res1);
+    VSHF_B2_SB(src0, src1, src2, src3, mask2, mask2, vec4, vec5);
+    DPADD_SB2_SH(vec4, vec5, plus20b, plus20b, res0, res1);
+    SRARI_H2_SH(res0, res1, 5);
+    SAT_SH2_SH(res0, res1, 7);
+    res = __msa_pckev_b((v16i8) res1, (v16i8) res0);
+    SLDI_B2_SB(src0, src1, src0, src1, src0, src1, 2);
+    SLDI_B2_SB(src2, src3, src2, src3, src2, src3, 2);
+    src0 = (v16i8) __msa_insve_w((v4i32) src0, 1, (v4i32) src1);
+    src1 = (v16i8) __msa_insve_w((v4i32) src2, 1, (v4i32) src3);
+    src0 = (v16i8) __msa_insve_d((v2i64) src0, 1, (v2i64) src1);
+    res = __msa_aver_s_b(res, src0);
+    res = (v16i8) __msa_xori_b((v16u8) res, 128);
+    ST4x4_UB(res, res, 0, 1, 2, 3, dst, stride);
 }
 
 void ff_put_h264_qpel4_mc30_msa(uint8_t *dst, const uint8_t *src,
                                 ptrdiff_t stride)
 {
-    avc_luma_hz_qrt_4w_msa(src - 2, stride, dst, stride, 4, 1);
+    v16i8 src0, src1, src2, src3, res, mask0, mask1, mask2;
+    v16i8 vec0, vec1, vec2, vec3, vec4, vec5;
+    v8i16 res0, res1;
+    v16i8 minus5b = __msa_ldi_b(-5);
+    v16i8 plus20b = __msa_ldi_b(20);
+
+    LD_SB3(&luma_mask_arr[48], 16, mask0, mask1, mask2);
+    LD_SB4(src - 2, stride, src0, src1, src2, src3);
+    XORI_B4_128_SB(src0, src1, src2, src3);
+    VSHF_B2_SB(src0, src1, src2, src3, mask0, mask0, vec0, vec1);
+    HADD_SB2_SH(vec0, vec1, res0, res1);
+    VSHF_B2_SB(src0, src1, src2, src3, mask1, mask1, vec2, vec3);
+    DPADD_SB2_SH(vec2, vec3, minus5b, minus5b, res0, res1);
+    VSHF_B2_SB(src0, src1, src2, src3, mask2, mask2, vec4, vec5);
+    DPADD_SB2_SH(vec4, vec5, plus20b, plus20b, res0, res1);
+    SRARI_H2_SH(res0, res1, 5);
+    SAT_SH2_SH(res0, res1, 7);
+    res = __msa_pckev_b((v16i8) res1, (v16i8) res0);
+    SLDI_B2_SB(src0, src1, src0, src1, src0, src1, 3);
+    SLDI_B2_SB(src2, src3, src2, src3, src2, src3, 3);
+    src0 = (v16i8) __msa_insve_w((v4i32) src0, 1, (v4i32) src1);
+    src1 = (v16i8) __msa_insve_w((v4i32) src2, 1, (v4i32) src3);
+    src0 = (v16i8) __msa_insve_d((v2i64) src0, 1, (v2i64) src1);
+    res = __msa_aver_s_b(res, src0);
+    res = (v16i8) __msa_xori_b((v16u8) res, 128);
+    ST4x4_UB(res, res, 0, 1, 2, 3, dst, stride);
 }
 
 void ff_put_h264_qpel16_mc20_msa(uint8_t *dst, const uint8_t *src,
-- 
1.7.9.5

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel