From patchwork Wed May 24 16:51:27 2017
X-Patchwork-Submitter: Clément Bœsch
X-Patchwork-Id: 3729
Date: Wed, 24 May 2017 18:51:27 +0200
From: Clément Bœsch
To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
Message-ID: <20170524165126.GE5698@golem.pkh.me>
References: <1495642505-39672-1-git-send-email-rsbultje@gmail.com>
In-Reply-To: <1495642505-39672-1-git-send-email-rsbultje@gmail.com>
User-Agent: Mutt/1.8.2 (2017-04-18)
Subject: Re: [FFmpeg-devel] [PATCH] frame_thread_encoder: extend critical code covered by finished_task_mutex.
On Wed, May 24, 2017 at 12:15:05PM -0400, Ronald S. Bultje wrote:
> Should fix tsan errors in utvideoenc_rgb_left and related tests.
> ---
>  libavcodec/frame_thread_encoder.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/libavcodec/frame_thread_encoder.c b/libavcodec/frame_thread_encoder.c
> index 27ae356..2746964 100644
> --- a/libavcodec/frame_thread_encoder.c
> +++ b/libavcodec/frame_thread_encoder.c
> @@ -273,14 +273,21 @@ int ff_thread_video_encode_frame(AVCodecContext *avctx, AVPacket *pkt, const AVF
>
>          c->task_index = (c->task_index+1) % BUFFER_SIZE;
>
> -        if(!c->finished_tasks[c->finished_task_index].outdata && (c->task_index - c->finished_task_index) % BUFFER_SIZE <= avctx->thread_count)
> +        pthread_mutex_lock(&c->finished_task_mutex);
> +        if(!c->finished_tasks[c->finished_task_index].outdata &&
> +           (c->task_index - c->finished_task_index) % BUFFER_SIZE <= avctx->thread_count) {
> +            pthread_mutex_unlock(&c->finished_task_mutex);
>              return 0;
> +        }
> +    } else {
> +        pthread_mutex_lock(&c->finished_task_mutex);
>      }
>
> -    if(c->task_index == c->finished_task_index)
> +    if(c->task_index == c->finished_task_index) {
> +        pthread_mutex_unlock(&c->finished_task_mutex);
>          return 0;
> +    }
>
> -    pthread_mutex_lock(&c->finished_task_mutex);
>      while (!c->finished_tasks[c->finished_task_index].outdata) {
>          pthread_cond_wait(&c->finished_task_cond, &c->finished_task_mutex);
>      }

LGTM

Alternatively, you could do something like the following (totally untested) to reduce the pthread spaghetti:

diff --git a/libavcodec/frame_thread_encoder.c b/libavcodec/frame_thread_encoder.c
index 27ae356af3..c1899dbe2e 100644
--- a/libavcodec/frame_thread_encoder.c
+++ b/libavcodec/frame_thread_encoder.c
@@ -272,15 +272,16 @@ int ff_thread_video_encode_frame(AVCodecContext *avctx, AVPacket *pkt, const AVF
         pthread_mutex_unlock(&c->task_fifo_mutex);
 
         c->task_index = (c->task_index+1) % BUFFER_SIZE;
-
-        if(!c->finished_tasks[c->finished_task_index].outdata && (c->task_index - c->finished_task_index) % BUFFER_SIZE <= avctx->thread_count)
-            return 0;
     }
 
-    if(c->task_index == c->finished_task_index)
+    pthread_mutex_lock(&c->finished_task_mutex);
+    if(c->task_index == c->finished_task_index ||
+       (frame && !c->finished_tasks[c->finished_task_index].outdata &&
+        (c->task_index - c->finished_task_index) % BUFFER_SIZE <= avctx->thread_count)) {
+        pthread_mutex_unlock(&c->finished_task_mutex);
         return 0;
+    }
 
-    pthread_mutex_lock(&c->finished_task_mutex);
     while (!c->finished_tasks[c->finished_task_index].outdata) {
         pthread_cond_wait(&c->finished_task_cond, &c->finished_task_mutex);
     }
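
Both versions converge on the standard pthread condition-variable discipline: the shared outdata result is only read while holding the same finished_task_mutex the worker holds when it publishes the result and signals finished_task_cond, which is exactly the unlocked read tsan was flagging in the early-return checks. A minimal standalone sketch of that discipline (not FFmpeg code; the names worker and task_done are invented for illustration):

/* Sketch of the lock/check/wait pattern: every read of the shared flag
 * happens under the same mutex the writer holds when setting it. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int task_done;                 /* shared flag, guarded by mutex   */

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mutex);
    task_done = 1;                    /* write the flag under the mutex  */
    pthread_cond_signal(&cond);       /* then wake any waiting consumer  */
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t thread;
    pthread_create(&thread, NULL, worker, NULL);

    pthread_mutex_lock(&mutex);       /* lock before the *first* check   */
    while (!task_done)                /* re-check the flag on each wakeup */
        pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);

    pthread_join(thread, NULL);
    printf("task finished\n");
    return 0;
}

The loop around pthread_cond_wait matters because the wait can wake spuriously; each iteration re-reads the flag with the mutex held, which is the same role the while loop over finished_tasks[...].outdata plays in both diffs above.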