From patchwork Mon Jun 20 19:20:39 2022
X-Patchwork-Submitter: Paolo Prete
X-Patchwork-Id: 36356
Date: Mon, 20 Jun 2022 19:20:39 +0000 (UTC)
From: Paolo Prete
To: FFmpeg Development Discussions and Patches
Message-ID: <859326096.10559933.1655752840062@mail.yahoo.com>
Subject: [FFmpeg-devel] [PATCH v2] doc/examples/muxing: code rewrite with improved readability and fixed issues

Please review it!

Differences from v1:
*) AVCodecParameters allocated with avcodec_parameters_alloc()
*) AVRational objects created with av_make_q() (in the original muxing.c they
   are created through a cast to AVRational); a small illustrative sketch of
   this change is appended after the patch
*) removed unnecessary "else"s
*) fixed the error check for the case where avcodec_alloc_context3() fails

From 2e9cc74d3a8b637d8adff5d78632cf1321d7e2cf Mon Sep 17 00:00:00 2001
From: paolo
Date: Mon, 20 Jun 2022 20:56:47 +0200
Subject: [PATCH v2] doc/examples/muxing: code rewrite with improved readability and fixed issues

Improved readability with functions that have clearer prototypes and that do
not mix logically unrelated blocks of code.
Fixed issues in case of unsupported extensions.
Fixed memory leaks on errors, which are now properly propagated to the main()
function.
Fixed issue on raw images output.
fprintf() replaced with av_log().
Input A/V parameters exposed in the main() function, making them easier to
customize.
---
 doc/examples/muxing.c | 919 +++++++++++++++++++-----------------
 1 file changed, 423 insertions(+), 496 deletions(-)

diff --git a/doc/examples/muxing.c b/doc/examples/muxing.c
index 3acb778322..a91134ac7c 100644
--- a/doc/examples/muxing.c
+++ b/doc/examples/muxing.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2003 Fabrice Bellard
+ * (Code rewrite by Paolo Prete, 2022)
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
@@ -24,625 +25,551 @@
  * @file
  * libavformat API example.
  *
- * Output a media file in any supported libavformat format. The default
+ * Output a media file in a set of supported libavformat formats. The default
  * codecs are used.
* @example muxing.c */ -#include -#include -#include -#include - -#include -#include -#include -#include -#include #include #include -#include +#include #include +#include -#define STREAM_DURATION 10.0 -#define STREAM_FRAME_RATE 25 /* 25 images/s */ -#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */ - -#define SCALE_FLAGS SWS_BICUBIC +#define VIDEO_FRAME_RATE 25 /* 25 images/s */ +#define VIDEO_SCALE_FLAGS SWS_BICUBIC +#define STREAM_DURATION 10.0 /* 10 seconds */ -// a wrapper around a single output AVStream -typedef struct OutputStream { - AVStream *st; - AVCodecContext *enc; +static void log_error(const char *s, int *num) +{ + if (num) + av_log(NULL, AV_LOG_ERROR, "%s (error '%s')\n", s, av_err2str(*num)); + else + av_log(NULL, AV_LOG_ERROR, "%s\n", s); +} - /* pts of the next frame that will be generated */ - int64_t next_pts; - int samples_count; +static int mux_encoded_pkt(AVPacket *out_pkt, AVFormatContext *out_fmt_ctx, + enum AVMediaType type) +{ + int ret; + AVRational enc_time_base, str_time_base; - AVFrame *frame; - AVFrame *tmp_frame; + if (out_fmt_ctx->streams[0]->codecpar->codec_type == type) + out_pkt->stream_index = 0; + else if ((out_fmt_ctx->nb_streams > 1) && (type == AVMEDIA_TYPE_VIDEO)) + out_pkt->stream_index = 1; + str_time_base = out_fmt_ctx->streams[out_pkt->stream_index]->time_base; - AVPacket *tmp_pkt; + if (type == AVMEDIA_TYPE_AUDIO) + enc_time_base = ((AVRational *)out_fmt_ctx->opaque)[0]; + else + enc_time_base = ((AVRational *)out_fmt_ctx->opaque)[1]; - float t, tincr, tincr2; + av_packet_rescale_ts(out_pkt, enc_time_base, str_time_base); - struct SwsContext *sws_ctx; - struct SwrContext *swr_ctx; -} OutputStream; + av_log(NULL, AV_LOG_INFO, "stream_index=%d, size=%d, pts_time=%s\n", + out_pkt->stream_index, + out_pkt->size, av_ts2timestr(out_pkt->pts, &str_time_base)); -static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt) -{ - AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base; + if ((ret = av_interleaved_write_frame(out_fmt_ctx, out_pkt)) < 0) + log_error("Error calling av_interleaved_write_frame()", &ret); - printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n", - av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base), - av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base), - av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base), - pkt->stream_index); + return ret; } -static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c, - AVStream *st, AVFrame *frame, AVPacket *pkt) +static int is_extension_supported(const char *filename) { - int ret; - - // send the frame to the encoder - ret = avcodec_send_frame(c, frame); - if (ret < 0) { - fprintf(stderr, "Error sending a frame to the encoder: %s\n", - av_err2str(ret)); - exit(1); - } - - while (ret >= 0) { - ret = avcodec_receive_packet(c, pkt); - if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) - break; - else if (ret < 0) { - fprintf(stderr, "Error encoding a frame: %s\n", av_err2str(ret)); - exit(1); - } + const char *extensions[] = {".aac", ".avi", ".bmp", ".jpeg", ".mka", + ".mkv", ".mov", ".mp4", ".flv", ".ts"}; + int i, size = sizeof(extensions) / sizeof(extensions[0]); + char *dot = strrchr(filename, '.'); - /* rescale output packet timestamp values from codec to stream timebase */ - av_packet_rescale_ts(pkt, c->time_base, st->time_base); - pkt->stream_index = st->index; - - /* Write the compressed frame to the media file. 
*/ - log_packet(fmt_ctx, pkt); - ret = av_interleaved_write_frame(fmt_ctx, pkt); - /* pkt is now blank (av_interleaved_write_frame() takes ownership of - * its contents and resets pkt), so that no unreferencing is necessary. - * This would be different if one used av_write_frame(). */ - if (ret < 0) { - fprintf(stderr, "Error while writing output packet: %s\n", av_err2str(ret)); - exit(1); - } + for (i = 0; i < size; i++) { + if (dot && !strcmp(dot, extensions[i])) + return 1; } - return ret == AVERROR_EOF ? 1 : 0; + log_error("File extension not supported", NULL); + av_log(NULL, AV_LOG_WARNING, "Please choose one of the following extensions: "); + for (i = 0; i < size - 1; i++) + av_log(NULL, AV_LOG_WARNING, "%s, ", extensions[i]); + av_log(NULL, AV_LOG_WARNING, "%s\n", extensions[size-1]); + + return 0; } -/* Add an output stream. */ -static void add_stream(OutputStream *ost, AVFormatContext *oc, - const AVCodec **codec, - enum AVCodecID codec_id) +static int get_default_enc_params(AVCodecParameters *params, + const char *fname, enum AVMediaType type) { - AVCodecContext *c; - int i; - - /* find the encoder */ - *codec = avcodec_find_encoder(codec_id); - if (!(*codec)) { - fprintf(stderr, "Could not find encoder for '%s'\n", - avcodec_get_name(codec_id)); - exit(1); - } - - ost->tmp_pkt = av_packet_alloc(); - if (!ost->tmp_pkt) { - fprintf(stderr, "Could not allocate AVPacket\n"); - exit(1); - } - - ost->st = avformat_new_stream(oc, NULL); - if (!ost->st) { - fprintf(stderr, "Could not allocate stream\n"); - exit(1); - } - ost->st->id = oc->nb_streams-1; - c = avcodec_alloc_context3(*codec); - if (!c) { - fprintf(stderr, "Could not alloc an encoding context\n"); - exit(1); - } - ost->enc = c; - - switch ((*codec)->type) { - case AVMEDIA_TYPE_AUDIO: - c->sample_fmt = (*codec)->sample_fmts ? - (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP; - c->bit_rate = 64000; - c->sample_rate = 44100; - if ((*codec)->supported_samplerates) { - c->sample_rate = (*codec)->supported_samplerates[0]; - for (i = 0; (*codec)->supported_samplerates[i]; i++) { - if ((*codec)->supported_samplerates[i] == 44100) - c->sample_rate = 44100; - } - } - av_channel_layout_copy(&c->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO); - ost->st->time_base = (AVRational){ 1, c->sample_rate }; - break; - - case AVMEDIA_TYPE_VIDEO: - c->codec_id = codec_id; - - c->bit_rate = 400000; - /* Resolution must be a multiple of two. */ - c->width = 352; - c->height = 288; - /* timebase: This is the fundamental unit of time (in seconds) in terms - * of which frame timestamps are represented. For fixed-fps content, - * timebase should be 1/framerate and timestamp increments should be - * identical to 1. */ - ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE }; - c->time_base = ost->st->time_base; - - c->gop_size = 12; /* emit one intra frame every twelve frames at most */ - c->pix_fmt = STREAM_PIX_FMT; - if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) { - /* just for testing, we also add B-frames */ - c->max_b_frames = 2; - } - if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) { - /* Needed to avoid using macroblocks in which some coeffs overflow. - * This does not happen with normal video, it just happens here as - * the motion of the chroma plane does not match the luma plane. 
*/ - c->mb_decision = 2; - } - break; - - default: - break; + AVFormatContext *tmp_fctx; + enum AVCodecID id; + const AVCodec *c; + int ret = 0; + + if ((ret = avformat_alloc_output_context2(&tmp_fctx, NULL, NULL, fname)) < 0) { + log_error("Could not get default encoder", &ret); + return AVERROR_EXIT; } - /* Some formats want stream headers to be separate. */ - if (oc->oformat->flags & AVFMT_GLOBALHEADER) - c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; -} - -/**************************************************************/ -/* audio output */ - -static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt, - const AVChannelLayout *channel_layout, - int sample_rate, int nb_samples) -{ - AVFrame *frame = av_frame_alloc(); - int ret; + id = (type == AVMEDIA_TYPE_AUDIO) ? tmp_fctx->oformat->audio_codec : + tmp_fctx->oformat->video_codec; - if (!frame) { - fprintf(stderr, "Error allocating an audio frame\n"); - exit(1); + if (!(c = avcodec_find_encoder(id))) { + avformat_free_context(tmp_fctx); + return ret; } - frame->format = sample_fmt; - av_channel_layout_copy(&frame->ch_layout, channel_layout); - frame->sample_rate = sample_rate; - frame->nb_samples = nb_samples; - - if (nb_samples) { - ret = av_frame_get_buffer(frame, 0); - if (ret < 0) { - fprintf(stderr, "Error allocating an audio buffer\n"); - exit(1); - } + params->codec_type = c->type; + params->codec_id = c-> id; + if (c->type == AVMEDIA_TYPE_AUDIO) { + params->format = c->sample_fmts ? + c->sample_fmts[0] : AV_SAMPLE_FMT_FLTP; + params->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; + params->sample_rate = c->supported_samplerates ? + c->supported_samplerates[0] : 44100; + } else if (c->type == AVMEDIA_TYPE_VIDEO) { + params->format = c->pix_fmts ? c->pix_fmts[0] : AV_PIX_FMT_YUV420P; } + avformat_free_context(tmp_fctx); - return frame; + return ret; } -static void open_audio(AVFormatContext *oc, const AVCodec *codec, - OutputStream *ost, AVDictionary *opt_arg) +static int init_encoder(AVCodecContext **enc_ctx, AVCodecParameters *params) { - AVCodecContext *c; - int nb_samples; + const AVCodec *codec = NULL; int ret; - AVDictionary *opt = NULL; - c = ost->enc; - - /* open it */ - av_dict_copy(&opt, opt_arg, 0); - ret = avcodec_open2(c, codec, &opt); - av_dict_free(&opt); - if (ret < 0) { - fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret)); - exit(1); + if (!(codec = avcodec_find_encoder(params->codec_id))) { + log_error("Could not find encoder", NULL); + return AVERROR_EXIT; } - /* init signal generator */ - ost->t = 0; - ost->tincr = 2 * M_PI * 110.0 / c->sample_rate; - /* increment frequency by 110 Hz per second */ - ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate; - - if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE) - nb_samples = 10000; - else - nb_samples = c->frame_size; - - ost->frame = alloc_audio_frame(c->sample_fmt, &c->ch_layout, - c->sample_rate, nb_samples); - ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, &c->ch_layout, - c->sample_rate, nb_samples); - - /* copy the stream parameters to the muxer */ - ret = avcodec_parameters_from_context(ost->st->codecpar, c); - if (ret < 0) { - fprintf(stderr, "Could not copy the stream parameters\n"); - exit(1); + if (!(*enc_ctx = avcodec_alloc_context3(codec))) { + log_error("Could not allocate the encoding context", NULL); + return AVERROR_EXIT; } - /* create resampler context */ - ost->swr_ctx = swr_alloc(); - if (!ost->swr_ctx) { - fprintf(stderr, "Could not allocate resampler context\n"); - exit(1); + 
(*enc_ctx)->codec_id = params->codec_id; + (*enc_ctx)->codec_type = params->codec_type; + if (params->codec_type == AVMEDIA_TYPE_AUDIO) { + (*enc_ctx)->sample_fmt = params->format; + (*enc_ctx)->sample_rate = params->sample_rate; + (*enc_ctx)->time_base = av_make_q(1, params->sample_rate); + (*enc_ctx)->ch_layout = params->ch_layout; + } else if (params->codec_type == AVMEDIA_TYPE_VIDEO) { + (*enc_ctx)->width = params->width; + (*enc_ctx)->height = params->height; + (*enc_ctx)->time_base = av_make_q(1, VIDEO_FRAME_RATE); + (*enc_ctx)->gop_size = 12; + (*enc_ctx)->pix_fmt = params->format; } - /* set options */ - av_opt_set_chlayout (ost->swr_ctx, "in_chlayout", &c->ch_layout, 0); - av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0); - av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0); - av_opt_set_chlayout (ost->swr_ctx, "out_chlayout", &c->ch_layout, 0); - av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0); - av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0); - - /* initialize the resampling context */ - if ((ret = swr_init(ost->swr_ctx)) < 0) { - fprintf(stderr, "Failed to initialize the resampling context\n"); - exit(1); + if ((ret = avcodec_open2(*enc_ctx, codec, NULL)) < 0) { + log_error("Could not open input codec", &ret); + return ret; + } else { + return 0; } } -/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and - * 'nb_channels' channels. */ -static AVFrame *get_audio_frame(OutputStream *ost) +static int init_audio_convert(struct SwrContext **ctx, AVCodecParameters *in_params, + AVCodecParameters *out_params) { - AVFrame *frame = ost->tmp_frame; - int j, i, v; - int16_t *q = (int16_t*)frame->data[0]; - - /* check if we want to generate more frames */ - if (av_compare_ts(ost->next_pts, ost->enc->time_base, - STREAM_DURATION, (AVRational){ 1, 1 }) > 0) - return NULL; - - for (j = 0; j nb_samples; j++) { - v = (int)(sin(ost->t) * 10000); - for (i = 0; i < ost->enc->ch_layout.nb_channels; i++) - *q++ = v; - ost->t += ost->tincr; - ost->tincr += ost->tincr2; + swr_alloc_set_opts2(ctx, + &(out_params->ch_layout), + out_params->format, out_params->sample_rate, + &(in_params->ch_layout), + in_params->format, in_params->sample_rate, + 0, NULL); + if (!*ctx) { + log_error("Could not allocate resample context", NULL); + return AVERROR(ENOMEM); } - frame->pts = ost->next_pts; - ost->next_pts += frame->nb_samples; - - return frame; + return 0; } -/* - * encode one audio frame and send it to the muxer - * return 1 when encoding is finished, 0 otherwise - */ -static int write_audio_frame(AVFormatContext *oc, OutputStream *ost) +static int init_video_convert(struct SwsContext **ctx, AVCodecParameters *in_params, + AVCodecParameters *out_params) { - AVCodecContext *c; - AVFrame *frame; - int ret; - int dst_nb_samples; - - c = ost->enc; - - frame = get_audio_frame(ost); - - if (frame) { - /* convert samples from native format to destination codec format, using the resampler */ - /* compute destination number of samples */ - dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, - c->sample_rate, c->sample_rate, AV_ROUND_UP); - av_assert0(dst_nb_samples == frame->nb_samples); - - /* when we pass a frame to the encoder, it may keep a reference to it - * internally; - * make sure we do not overwrite it here - */ - ret = av_frame_make_writable(ost->frame); - if (ret < 0) - exit(1); - - /* convert to destination format */ - ret = swr_convert(ost->swr_ctx, - 
ost->frame->data, dst_nb_samples, - (const uint8_t **)frame->data, frame->nb_samples); - if (ret < 0) { - fprintf(stderr, "Error while converting\n"); - exit(1); - } - frame = ost->frame; - - frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base); - ost->samples_count += dst_nb_samples; + *ctx = sws_getContext(in_params->width, in_params->height, + in_params->format, + out_params->width, out_params->height, + out_params->codec_id == out_params->format, + VIDEO_SCALE_FLAGS, NULL, NULL, NULL); + if (!*ctx) { + log_error("Could not allocate scale context", NULL); + return AVERROR(ENOMEM); } - return write_frame(oc, c, ost->st, frame, ost->tmp_pkt); + return 0; } -/**************************************************************/ -/* video output */ - -static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height) +static int init_avframe(AVFrame **frame, AVCodecParameters *params) { - AVFrame *picture; int ret; - picture = av_frame_alloc(); - if (!picture) - return NULL; + if (!(*frame = av_frame_alloc())) { + log_error("Could not allocate AVFrame", NULL); + return AVERROR(ENOMEM); + } - picture->format = pix_fmt; - picture->width = width; - picture->height = height; + (*frame)->opaque = ¶ms->codec_type; + if (params->codec_type == AVMEDIA_TYPE_AUDIO) { + (*frame)->nb_samples = params->frame_size; + (*frame)->sample_rate = params->sample_rate; + (*frame)->format = params->format; + (*frame)->ch_layout = params->ch_layout; + } else { + (*frame)->width = params->width; + (*frame)->height = params->height; + (*frame)->format = params->format; + } - /* allocate the buffers for the frame data */ - ret = av_frame_get_buffer(picture, 0); - if (ret < 0) { - fprintf(stderr, "Could not allocate frame data.\n"); - exit(1); + /* Allocate the frame's data buffer */ + if ((ret = av_frame_get_buffer(*frame, 0)) < 0) { + log_error("Could not allocate buffer for AVFrame", &ret); + return AVERROR(ENOMEM); } - return picture; + return 0; } -static void open_video(AVFormatContext *oc, const AVCodec *codec, - OutputStream *ost, AVDictionary *opt_arg) +static int init_muxer(AVFormatContext **out_fmt_ctx, AVCodecContext *audio_enc_ctx, + AVCodecContext *video_enc_ctx, const char *filename) { int ret; - AVCodecContext *c = ost->enc; - AVDictionary *opt = NULL; + AVStream *out_audio_str, *out_video_str; - av_dict_copy(&opt, opt_arg, 0); + if ((ret = avformat_alloc_output_context2(out_fmt_ctx, NULL, NULL, filename)) < 0) { + log_error("Could not create output context", &ret); + return ret; + } - /* open the codec */ - ret = avcodec_open2(c, codec, &opt); - av_dict_free(&opt); - if (ret < 0) { - fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret)); - exit(1); - } - - /* allocate and init a re-usable frame */ - ost->frame = alloc_picture(c->pix_fmt, c->width, c->height); - if (!ost->frame) { - fprintf(stderr, "Could not allocate video frame\n"); - exit(1); - } - - /* If the output format is not YUV420P, then a temporary YUV420P - * picture is needed too. It is then converted to the required - * output format. 
*/ - ost->tmp_frame = NULL; - if (c->pix_fmt != AV_PIX_FMT_YUV420P) { - ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height); - if (!ost->tmp_frame) { - fprintf(stderr, "Could not allocate temporary picture\n"); - exit(1); + /* open the output file, if needed */ + if (!((*out_fmt_ctx)->oformat->flags & AVFMT_NOFILE)) { + if ((ret = avio_open(&(*out_fmt_ctx)->pb, filename, AVIO_FLAG_WRITE)) < 0) { + log_error("Could not open output file", &ret); + return ret; } } - /* copy the stream parameters to the muxer */ - ret = avcodec_parameters_from_context(ost->st->codecpar, c); - if (ret < 0) { - fprintf(stderr, "Could not copy the stream parameters\n"); - exit(1); + if (audio_enc_ctx) { + if (!(out_audio_str = avformat_new_stream(*out_fmt_ctx, NULL))) { + log_error("Could not create new stream", NULL); + return AVERROR(ENOMEM); + } + out_audio_str->id = (*out_fmt_ctx)->nb_streams - 1; + avcodec_parameters_from_context(out_audio_str->codecpar, audio_enc_ctx); + } + + if (video_enc_ctx) { + if (!(out_video_str = avformat_new_stream(*out_fmt_ctx, NULL))) { + log_error("Could not create new stream", NULL); + return AVERROR(ENOMEM); + } + out_video_str->id = (*out_fmt_ctx)->nb_streams - 1; + avcodec_parameters_from_context(out_video_str->codecpar, video_enc_ctx); } + + av_dump_format(*out_fmt_ctx, 0, filename, 1); + + /* Write the stream header, if any. */ + if (avformat_write_header(*out_fmt_ctx, NULL) < 0) { + log_error("avformat_write_header() error", NULL); + return AVERROR_EXIT; + } + + return 0; } -/* Prepare a dummy image. */ -static void fill_yuv_image(AVFrame *pict, int frame_index, - int width, int height) +static void fill_dummy_s16_frame(AVFrame *frame) { - int x, y, i; + int j, i, v; + static float t, tincr, tincr2; + int16_t *data = (int16_t*)frame->data[0]; + static int frame_ctr; + + if (!tincr) { + t = 0; + tincr = 2 * M_PI * 110.0 / frame->sample_rate; + /* increment frequency by 110 Hz per second */ + tincr2 = tincr / frame->sample_rate; + } + for (j = 0; j nb_samples; j++) { + v = (int)(sin(t) * 10000); + for (i = 0; i < frame->ch_layout.nb_channels; i++) + *data++ = v; + t += tincr; + tincr += tincr2; + } + frame->pts = frame->nb_samples*(++frame_ctr); +} - i = frame_index; +static void fill_dummy_yuv420p_frame(AVFrame *frame) +{ + int x, y; + static int idx; /* Y */ - for (y = 0; y < height; y++) - for (x = 0; x < width; x++) - pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3; + for (y = 0; y < frame->width; y++) + for (x = 0; x < frame->width; x++) + frame->data[0][y * frame->linesize[0] + x] = x + y + idx * 3; /* Cb and Cr */ - for (y = 0; y < height / 2; y++) { - for (x = 0; x < width / 2; x++) { - pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2; - pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5; + for (y = 0; y < frame->height / 2; y++) { + for (x = 0; x < frame->width / 2; x++) { + frame->data[1][y * frame->linesize[1] + x] = 128 + y + idx * 2; + frame->data[2][y * frame->linesize[2] + x] = 64 + x + idx * 5; } } + + frame->pts = idx++; } -static AVFrame *get_video_frame(OutputStream *ost) +static int convert_frame(void *convert_ctx, AVFrame *in_frame, AVFrame *out_frame) { - AVCodecContext *c = ost->enc; - - /* check if we want to generate more frames */ - if (av_compare_ts(ost->next_pts, c->time_base, - STREAM_DURATION, (AVRational){ 1, 1 }) > 0) - return NULL; - - /* when we pass a frame to the encoder, it may keep a reference to it - * internally; make sure we do not overwrite it here */ - if 
(av_frame_make_writable(ost->frame) < 0) - exit(1); - - if (c->pix_fmt != AV_PIX_FMT_YUV420P) { - /* as we only generate a YUV420P picture, we must convert it - * to the codec pixel format if needed */ - if (!ost->sws_ctx) { - ost->sws_ctx = sws_getContext(c->width, c->height, - AV_PIX_FMT_YUV420P, - c->width, c->height, - c->pix_fmt, - SCALE_FLAGS, NULL, NULL, NULL); - if (!ost->sws_ctx) { - fprintf(stderr, - "Could not initialize the conversion context\n"); - exit(1); - } + int ret; + enum AVMediaType *type = (enum AVMediaType *)(in_frame->opaque); + + if (av_frame_make_writable(out_frame) < 0) { + log_error("av_frame_make_writable() error", NULL); + return AVERROR_EXIT; + } + + if (*type == AVMEDIA_TYPE_AUDIO) { + if ((ret = swr_convert_frame((struct SwrContext *)convert_ctx, out_frame, + (const AVFrame *)in_frame)) != 0) { + log_error("Error converting AVFrame", &ret); + return ret; } - fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height); - sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data, - ost->tmp_frame->linesize, 0, c->height, ost->frame->data, - ost->frame->linesize); } else { - fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height); + sws_scale((struct SwsContext *)convert_ctx, (const uint8_t * const *)in_frame->data, + in_frame->linesize, 0, in_frame->height, out_frame->data, + out_frame->linesize); } - ost->frame->pts = ost->next_pts++; - - return ost->frame; + out_frame->pts = in_frame->pts; + return 0; } -/* - * encode one video frame and send it to the muxer - * return 1 when encoding is finished, 0 otherwise - */ -static int write_video_frame(AVFormatContext *oc, OutputStream *ost) +static int encode_frame(AVCodecContext *ctx, AVFrame *in_frame, AVPacket *out_pkt) { - return write_frame(oc, ost->enc, ost->st, get_video_frame(ost), ost->tmp_pkt); + static int is_flushing_audio = 0, is_flushing_video = 0; + int ret = 0; + int is_audio = ctx->codec->type == AVMEDIA_TYPE_AUDIO; + + if ((is_audio && !is_flushing_audio) || (!is_audio && !is_flushing_video)) { + ret = avcodec_send_frame(ctx, in_frame); + } + if (ret < 0) { + log_error("Error sending frame to the encoder (error '%s')\n", &ret); + return ret; + } else if (ret == 0) { + ret = avcodec_receive_packet(ctx, out_pkt); + if ((ret < 0) && (ret != AVERROR(EAGAIN)) && (ret != AVERROR_EOF)) { + log_error("Error receiving encoded packet (error '%s')\n", &ret); + return ret; + } + } + + if (is_audio) + is_flushing_audio = (in_frame == NULL); + else + is_flushing_video = (in_frame == NULL); + + return ret; } -static void close_stream(AVFormatContext *oc, OutputStream *ost) +static int frame_exceeds_stream_duration(AVFrame *fr) { - avcodec_free_context(&ost->enc); - av_frame_free(&ost->frame); - av_frame_free(&ost->tmp_frame); - av_packet_free(&ost->tmp_pkt); - sws_freeContext(ost->sws_ctx); - swr_free(&ost->swr_ctx); + enum AVMediaType *type = (enum AVMediaType *)(fr->opaque); + AVRational tb = (*type == AVMEDIA_TYPE_AUDIO) ? 
av_make_q(1, fr->sample_rate) : + av_make_q(1, VIDEO_FRAME_RATE); + + return av_compare_ts(fr->pts, tb ,STREAM_DURATION, av_make_q(1, 1)) > 0; } -/**************************************************************/ -/* media file output */ +static enum AVMediaType media_type_of_earlier_frame(AVFrame *audio_fr, + AVFrame *video_fr) +{ + if (!audio_fr) + return AVMEDIA_TYPE_VIDEO; + if (!video_fr) + return AVMEDIA_TYPE_AUDIO; + + if (av_compare_ts(audio_fr->pts, av_make_q(1, audio_fr->sample_rate), + video_fr->pts, av_make_q(1, VIDEO_FRAME_RATE)) < 0) + return AVMEDIA_TYPE_AUDIO; + else + return AVMEDIA_TYPE_VIDEO; +} int main(int argc, char **argv) { - OutputStream video_st = { 0 }, audio_st = { 0 }; - const AVOutputFormat *fmt; - const char *filename; - AVFormatContext *oc; - const AVCodec *audio_codec, *video_codec; - int ret; - int have_video = 0, have_audio = 0; - int encode_video = 0, encode_audio = 0; - AVDictionary *opt = NULL; - int i; - - if (argc < 2) { + const char *fname; + AVCodecContext *audio_enc_ctx = NULL, *video_enc_ctx = NULL, *enc_ctx = NULL; + + /* NOTE: if you want to modify the audio/video input ".format" parameter, + * you need to modify the corresponding fill_dummy_XXX_frame() function(s) too */ + AVCodecParameters *audio_in_params = avcodec_parameters_alloc(), + *audio_enc_params = avcodec_parameters_alloc(), + *video_in_params = avcodec_parameters_alloc(), + *video_enc_params = avcodec_parameters_alloc(); + struct AVRational enc_timebases[2]; + AVFrame *in_audio_frame = NULL, *converted_audio_frame = NULL, + *in_video_frame = NULL, *converted_video_frame = NULL, + *frame_to_encode = NULL; + struct SwrContext *audio_convert_ctx = NULL; + struct SwsContext *video_convert_ctx = NULL; + enum AVMediaType media_type; + AVFormatContext *out_fmt_ctx = NULL; + AVPacket *out_pkt = av_packet_alloc(); + int ret = 0, process_audio = 0, process_video = 0; + + if (argc != 2) { printf("usage: %s output_file\n" "API example program to output a media file with libavformat.\n" - "This program generates a synthetic audio and video stream, encodes and\n" + "This program generates a synthetic audio and/or video stream, encodes and\n" "muxes them into a file named output_file.\n" "The output format is automatically guessed according to the file extension.\n" - "Raw images can also be output by using '%%d' in the filename.\n" + "BMP or JPEG images can also be output by using '%%d' in the filename.\n" "\n", argv[0]); - return 1; + ret = 1; + goto end; } - filename = argv[1]; - for (i = 2; i+1 < argc; i+=2) { - if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags")) - av_dict_set(&opt, argv[i]+1, argv[i+1], 0); + if (!audio_in_params || !audio_enc_params || + !video_in_params || !video_enc_params || !out_pkt) { + log_error("could not allocate resources", NULL); + ret = AVERROR(ENOMEM); + goto end; } - /* allocate the output media context */ - avformat_alloc_output_context2(&oc, NULL, NULL, filename); - if (!oc) { - printf("Could not deduce output format from file extension: using MPEG.\n"); - avformat_alloc_output_context2(&oc, NULL, "mpeg", filename); - } - if (!oc) - return 1; + audio_in_params->codec_type = AVMEDIA_TYPE_AUDIO; + audio_in_params->format = AV_SAMPLE_FMT_S16; + audio_in_params->sample_rate = 44100; + audio_in_params->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; - fmt = oc->oformat; + video_in_params->codec_type = AVMEDIA_TYPE_VIDEO; + video_in_params->width = 352; + video_in_params->height = 288; + video_in_params->format = AV_PIX_FMT_YUV420P; - /* Add the 
audio and video streams using the default format codecs - * and initialize the codecs. */ - if (fmt->video_codec != AV_CODEC_ID_NONE) { - add_stream(&video_st, oc, &video_codec, fmt->video_codec); - have_video = 1; - encode_video = 1; + fname = argv[1]; + if (!is_extension_supported(fname)) { + ret = AVERROR_EXIT; + goto end; } - if (fmt->audio_codec != AV_CODEC_ID_NONE) { - add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec); - have_audio = 1; - encode_audio = 1; - } - - /* Now that all the parameters are set, we can open the audio and - * video codecs and allocate the necessary encode buffers. */ - if (have_video) - open_video(oc, video_codec, &video_st, opt); - if (have_audio) - open_audio(oc, audio_codec, &audio_st, opt); - - av_dump_format(oc, 0, filename, 1); + /* Desume the default codecs and their default parameters from the filename */ + if ((ret = get_default_enc_params(audio_enc_params, fname, AVMEDIA_TYPE_AUDIO)) < 0) + goto end; + if ((ret = get_default_enc_params(video_enc_params, fname, AVMEDIA_TYPE_VIDEO)) < 0) + goto end; + process_audio = audio_enc_params->codec_id != AV_CODEC_ID_NONE; + process_video = video_enc_params->codec_id != AV_CODEC_ID_NONE; + if (!process_audio && !process_video) { + log_error("Could not get default encoder(s)", NULL); + ret = AVERROR_EXIT; + goto end; + } - /* open the output file, if needed */ - if (!(fmt->flags & AVFMT_NOFILE)) { - ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE); - if (ret < 0) { - fprintf(stderr, "Could not open '%s': %s\n", filename, - av_err2str(ret)); - return 1; - } + if (process_audio) { + /* Prepare the audio encoder*/ + if ((ret = init_encoder(&audio_enc_ctx, audio_enc_params)) < 0) + goto end; + enc_timebases[0] = audio_enc_ctx->time_base; + audio_in_params->frame_size = audio_enc_params->frame_size = audio_enc_ctx->frame_size; + + /* Allocate an audio resampler and its input and output AVFrames */ + if ((ret = init_audio_convert(&audio_convert_ctx, audio_in_params, + audio_enc_params)) < 0) + goto end; + if ((ret = init_avframe(&in_audio_frame, audio_in_params)) < 0) + goto end; + if ((ret = init_avframe(&converted_audio_frame, audio_enc_params)) < 0) + goto end; } - /* Write the stream header, if any. 
*/ - ret = avformat_write_header(oc, &opt); - if (ret < 0) { - fprintf(stderr, "Error occurred when opening output file: %s\n", - av_err2str(ret)); - return 1; + if (process_video) { + video_enc_params->width = video_in_params->width; + video_enc_params->height = video_in_params->height; + if ((ret = init_encoder(&video_enc_ctx, video_enc_params)) < 0) + goto end; + enc_timebases[1] = video_enc_ctx->time_base; + if ((ret = init_video_convert(&video_convert_ctx,video_in_params, + video_enc_params)) < 0) + goto end; + if ((ret = init_avframe(&in_video_frame, video_in_params)) < 0) + goto end; + if ((ret = init_avframe(&converted_video_frame, video_enc_params)) < 0) + goto end; } - while (encode_video || encode_audio) { - /* select the stream to encode */ - if (encode_video && - (!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base, - audio_st.next_pts, audio_st.enc->time_base) <= 0)) { - encode_video = !write_video_frame(oc, &video_st); + /* Create the output container for the encoded frames */ + if ((ret = init_muxer(&out_fmt_ctx, audio_enc_ctx, video_enc_ctx, fname)) < 0) + goto end; + out_fmt_ctx->opaque = &enc_timebases; + + while (process_audio || process_video) { + + frame_to_encode = NULL; + media_type = media_type_of_earlier_frame(in_audio_frame, in_video_frame); + + /* fill and convert the input frames */ + if (media_type == AVMEDIA_TYPE_AUDIO) { + enc_ctx = audio_enc_ctx; + fill_dummy_s16_frame(in_audio_frame); + if ((ret = convert_frame(audio_convert_ctx, in_audio_frame, + converted_audio_frame)) != 0) + goto end; + if (!frame_exceeds_stream_duration(converted_audio_frame)) + frame_to_encode = converted_audio_frame; } else { - encode_audio = !write_audio_frame(oc, &audio_st); + enc_ctx = video_enc_ctx; + fill_dummy_yuv420p_frame(in_video_frame); + if ((ret = convert_frame(video_convert_ctx, in_video_frame, + converted_video_frame)) != 0) + goto end; + if (!frame_exceeds_stream_duration(in_video_frame)) + frame_to_encode = converted_video_frame; } - } - - av_write_trailer(oc); - /* Close each codec. */ - if (have_video) - close_stream(oc, &video_st); - if (have_audio) - close_stream(oc, &audio_st); + /* encode the converted frames and mux the encoded packets */ + if ((ret = encode_frame(enc_ctx, frame_to_encode, out_pkt)) == 0) { + if ((ret = mux_encoded_pkt(out_pkt, out_fmt_ctx, media_type)) < 0) + goto end; + } - if (!(fmt->flags & AVFMT_NOFILE)) - /* Close the output file. */ - avio_closep(&oc->pb); + /* check if the encoders have been fully flushed */ + process_audio &= !((ret == AVERROR_EOF) && (media_type == AVMEDIA_TYPE_AUDIO)); + process_video &= !((ret == AVERROR_EOF) && (media_type == AVMEDIA_TYPE_VIDEO)); - /* free the stream */ - avformat_free_context(oc); + } - return 0; + av_write_trailer(out_fmt_ctx); + ret = 0; + +end: + + avcodec_parameters_free(&audio_in_params); + avcodec_parameters_free(&video_in_params); + avcodec_parameters_free(&audio_enc_params); + avcodec_parameters_free(&video_enc_params); + avcodec_free_context(&audio_enc_ctx); + avcodec_free_context(&video_enc_ctx); + av_frame_free(&in_audio_frame); + av_frame_free(&in_video_frame); + av_frame_free(&converted_audio_frame); + av_frame_free(&converted_video_frame); + swr_free(&audio_convert_ctx); + sws_freeContext(video_convert_ctx); + if (out_fmt_ctx) + avio_closep(&out_fmt_ctx->pb); + avformat_free_context(out_fmt_ctx); + av_packet_free(&out_pkt); + + return ret; } -- 2.32.0
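
P.S. (not part of the patch): below is a minimal stand-alone sketch of the two
API usages mentioned in the v1 -> v2 notes above, i.e. av_make_q() instead of a
compound-literal cast to AVRational, and AVCodecParameters managed with
avcodec_parameters_alloc()/avcodec_parameters_free(). The concrete values
(1/25 time base, stereo audio parameters at 44100 Hz) are placeholders only.

/* sketch.c -- illustrative only, not part of muxing.c */
#include <libavcodec/avcodec.h>
#include <libavutil/rational.h>

int main(void)
{
    /* v1 style: compound-literal cast to AVRational */
    AVRational tb_cast = (AVRational){ 1, 25 };
    /* v2 style: helper function, same value */
    AVRational tb_made = av_make_q(1, 25);

    /* v2 style: heap-allocated parameters, released with the matching call */
    AVCodecParameters *par = avcodec_parameters_alloc();
    if (!par)
        return 1;
    par->codec_type  = AVMEDIA_TYPE_AUDIO;
    par->sample_rate = 44100;
    avcodec_parameters_free(&par);

    /* av_cmp_q() returns 0 because both rationals hold the same value */
    return av_cmp_q(tb_cast, tb_made);
}

It can be built with, e.g.,
gcc sketch.c $(pkg-config --cflags --libs libavcodec libavutil)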