From patchwork Mon Feb 8 12:23:25 2021
X-Patchwork-Submitter: Andreas Rheinhardt
X-Patchwork-Id: 25505
From: Andreas Rheinhardt
To: ffmpeg-devel@ffmpeg.org
Cc: Andreas Rheinhardt
Date: Mon, 8 Feb 2021 13:23:25 +0100
Message-Id: <20210208122330.555354-3-andreas.rheinhardt@gmail.com>
In-Reply-To: <20210208122330.555354-1-andreas.rheinhardt@gmail.com>
References: <20210208122330.555354-1-andreas.rheinhardt@gmail.com>
X-Mailer: git-send-email 2.27.0
Subject: [FFmpeg-devel] [PATCH 3/8] avcodec/frame_thread_encoder: Avoid allocations of AVPackets, fix deadlock

Up until now, when doing frame thread encoding, each worker thread tried
to allocate an AVPacket for every AVFrame to be encoded; said packets
would then be handed back to the main thread, where the content of said
packet is copied into the packet actually destined for output; the
temporary AVPacket is then freed.
Besides being wasteful, this also has another problem: there is a risk
of deadlock, namely if no AVPacket can be allocated at all. In this case
the user gets no error at all; the worker threads simply try to allocate
a packet again and again. If the user has supplied enough frames, the
user's thread will block until a task has been completed, which never
happens if no packet can ever be allocated.

This patch instead modifies the code to allocate the packets during
init; they are then reused again and again.

Signed-off-by: Andreas Rheinhardt
---
 libavcodec/frame_thread_encoder.c | 61 +++++++++++++++++++------------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/libavcodec/frame_thread_encoder.c b/libavcodec/frame_thread_encoder.c
index 9ca34e7ffb..bcd3c94f8b 100644
--- a/libavcodec/frame_thread_encoder.c
+++ b/libavcodec/frame_thread_encoder.c
@@ -32,13 +32,18 @@
 #include "thread.h"
 
 #define MAX_THREADS 64
-#define BUFFER_SIZE (2*MAX_THREADS)
+/* There can be as many as MAX_THREADS + 1 outstanding tasks.
+ * An additional + 1 is needed so that one can distinguish
+ * the case of zero and MAX_THREADS + 1 outstanding tasks modulo
+ * the number of buffers. */
+#define BUFFER_SIZE (MAX_THREADS + 2)
 
 typedef struct{
     AVFrame  *indata;
     AVPacket *outdata;
     int64_t return_code;
     unsigned index;
+    int finished;
 } Task;
 
 typedef struct{
@@ -49,8 +54,9 @@ typedef struct{
     pthread_mutex_t task_fifo_mutex;
     pthread_cond_t task_fifo_cond;
 
-    Task finished_tasks[BUFFER_SIZE];
-    pthread_mutex_t finished_task_mutex;
+    unsigned max_tasks;
+    Task tasks[BUFFER_SIZE];
+    pthread_mutex_t finished_task_mutex; /* Guards tasks[i].finished */
     pthread_cond_t finished_task_cond;
 
     unsigned task_index;
@@ -63,17 +69,13 @@ typedef struct{
 static void * attribute_align_arg worker(void *v){
     AVCodecContext *avctx = v;
     ThreadContext *c = avctx->internal->frame_thread_encoder;
-    AVPacket *pkt = NULL;
 
     while (!atomic_load(&c->exit)) {
         int got_packet = 0, ret;
+        AVPacket *pkt;
         AVFrame *frame;
         Task task;
 
-        if(!pkt) pkt = av_packet_alloc();
-        if(!pkt) continue;
-        av_init_packet(pkt);
-
         pthread_mutex_lock(&c->task_fifo_mutex);
         while (av_fifo_size(c->task_fifo) <= 0 || atomic_load(&c->exit)) {
             if (atomic_load(&c->exit)) {
@@ -84,7 +86,12 @@ static void * attribute_align_arg worker(void *v){
         }
         av_fifo_generic_read(c->task_fifo, &task, sizeof(task), NULL);
         pthread_mutex_unlock(&c->task_fifo_mutex);
+        /* The main thread ensures that any two outstanding tasks have
+         * different indices, ergo each worker thread owns its element
+         * of c->tasks with the exception of finished, which is shared
+         * with the main thread and guarded by finished_task_mutex. */
         frame = task.indata;
+        pkt = c->tasks[task.index].outdata;
 
         ret = avctx->codec->encode2(avctx, pkt, frame, &got_packet);
         if(got_packet) {
@@ -101,13 +108,12 @@ static void * attribute_align_arg worker(void *v){
         pthread_mutex_unlock(&c->buffer_mutex);
         av_frame_free(&frame);
         pthread_mutex_lock(&c->finished_task_mutex);
-        c->finished_tasks[task.index].outdata = pkt; pkt = NULL;
-        c->finished_tasks[task.index].return_code = ret;
+        c->tasks[task.index].return_code = ret;
+        c->tasks[task.index].finished = 1;
         pthread_cond_signal(&c->finished_task_cond);
         pthread_mutex_unlock(&c->finished_task_mutex);
     }
 end:
-    av_free(pkt);
     pthread_mutex_lock(&c->buffer_mutex);
     avcodec_close(avctx);
     pthread_mutex_unlock(&c->buffer_mutex);
@@ -194,6 +200,12 @@ int ff_frame_thread_encoder_init(AVCodecContext *avctx, AVDictionary *options){
     pthread_cond_init(&c->finished_task_cond, NULL);
     atomic_init(&c->exit, 0);
 
+    c->max_tasks = avctx->thread_count + 2;
+    for (unsigned i = 0; i < c->max_tasks; i++) {
+        if (!(c->tasks[i].outdata = av_packet_alloc()))
+            goto fail;
+    }
+
     for(i=0; i<avctx->thread_count ; i++){
         AVDictionary *tmp = NULL;
         int ret;
@@ -261,8 +273,8 @@ void ff_frame_thread_encoder_free(AVCodecContext *avctx){
         av_frame_free(&task.indata);
     }
 
-    for (i=0; i<BUFFER_SIZE; i++) {
-        av_packet_free(&c->finished_tasks[i].outdata);
+    for (unsigned i = 0; i < c->max_tasks; i++) {
+        av_packet_free(&c->tasks[i].outdata);
     }
 
     pthread_mutex_destroy(&c->task_fifo_mutex);
@@ -276,7 +288,7 @@ void ff_frame_thread_encoder_free(AVCodecContext *avctx){
 int ff_thread_video_encode_frame(AVCodecContext *avctx, AVPacket *pkt, const AVFrame *frame, int *got_packet_ptr){
     ThreadContext *c = avctx->internal->frame_thread_encoder;
-    Task task;
+    Task *outtask, task;
     int ret;
 
     av_assert1(!*got_packet_ptr);
@@ -298,27 +310,28 @@ int ff_thread_video_encode_frame(AVCodecContext *avctx, AVPacket *pkt, const AVF
         pthread_cond_signal(&c->task_fifo_cond);
         pthread_mutex_unlock(&c->task_fifo_mutex);
 
-        c->task_index = (c->task_index+1) % BUFFER_SIZE;
+        c->task_index = (c->task_index + 1) % c->max_tasks;
     }
 
+    outtask = &c->tasks[c->finished_task_index];
     pthread_mutex_lock(&c->finished_task_mutex);
     if (c->task_index == c->finished_task_index ||
-        (frame && !c->finished_tasks[c->finished_task_index].outdata &&
-         (c->task_index - c->finished_task_index) % BUFFER_SIZE <= avctx->thread_count)) {
+        (frame && !outtask->finished &&
+         (c->task_index - c->finished_task_index + c->max_tasks) % c->max_tasks <= avctx->thread_count)) {
         pthread_mutex_unlock(&c->finished_task_mutex);
         return 0;
     }
-
-    while (!c->finished_tasks[c->finished_task_index].outdata) {
+    while (!outtask->finished) {
         pthread_cond_wait(&c->finished_task_cond, &c->finished_task_mutex);
     }
-    task = c->finished_tasks[c->finished_task_index];
-    *pkt = *(AVPacket*)(task.outdata);
+    /* We now own outtask completely: No worker thread touches it any more,
+     * because there is no outstanding task with this index. */
+    outtask->finished = 0;
+    av_packet_move_ref(pkt, outtask->outdata);
     if(pkt->data)
         *got_packet_ptr = 1;
-    av_freep(&c->finished_tasks[c->finished_task_index].outdata);
-    c->finished_task_index = (c->finished_task_index+1) % BUFFER_SIZE;
+    c->finished_task_index = (c->finished_task_index + 1) % c->max_tasks;
     pthread_mutex_unlock(&c->finished_task_mutex);
 
-    return task.return_code;
+    return outtask->return_code;
 }