From patchwork Thu Nov 17 10:16:36 2022
X-Patchwork-Submitter: Anton Khirnov
X-Patchwork-Id: 39314
From: Anton Khirnov
To: ffmpeg-devel@ffmpeg.org
Date: Thu, 17 Nov 2022 11:16:36 +0100
Message-Id: <20221117101640.6789-6-anton@khirnov.net>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20221117101640.6789-1-anton@khirnov.net>
References: <20221117101640.6789-1-anton@khirnov.net>
MIME-Version: 1.0
Subject: [FFmpeg-devel] [PATCH 06/10] fftools/ffmpeg: remove the input_streams global

Replace it with an array of streams in each InputFile. This is a more
accurate reflection of the actual relationship between InputStream and
InputFile.

Analogous to what was previously done to output streams in
7ef7a22251b852faab9404c85399ba8ac5dfbdc3.
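[Not part of the patch itself: a minimal standalone sketch of the iteration
pattern this change introduces, for readers following along. The structs
below are simplified stand-ins for the real fftools types (st_index plays
the role of ist->st->index in the actual code), but the ist_iter() body
mirrors the one added to ffmpeg.c in this patch.]

    #include <stddef.h>

    /* Simplified stand-ins for the real fftools structs. */
    typedef struct InputStream {
        int file_index;   /* which InputFile owns this stream */
        int st_index;     /* index of the stream within that file */
    } InputStream;

    typedef struct InputFile {
        InputStream **streams;   /* replaces the old global input_streams[] */
        int           nb_streams;
    } InputFile;

    static InputFile **input_files;
    static int         nb_input_files;

    /* Walk every stream of every input file; pass NULL to start,
     * returns NULL when iteration is done. */
    static InputStream *ist_iter(InputStream *prev)
    {
        int if_idx  = prev ? prev->file_index   : 0;
        int ist_idx = prev ? prev->st_index + 1 : 0;

        for (; if_idx < nb_input_files; if_idx++) {
            InputFile *f = input_files[if_idx];
            if (ist_idx < f->nb_streams)
                return f->streams[ist_idx];
            ist_idx = 0;
        }
        return NULL;
    }

    /* Callers that used to loop over the global array now do: */
    static void for_each_input_stream(void (*fn)(InputStream *))
    {
        for (InputStream *ist = ist_iter(NULL); ist; ist = ist_iter(ist))
            fn(ist);
    }
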
---
 fftools/ffmpeg.c          |  86 +++++++++++----------------
 fftools/ffmpeg.h          |  16 +++--
 fftools/ffmpeg_demux.c    |  71 ++++++++++++++--------
 fftools/ffmpeg_filter.c   |   7 +--
 fftools/ffmpeg_mux_init.c | 121 ++++++++++++++++++++------------------
 fftools/ffmpeg_opt.c      |   4 +-
 6 files changed, 161 insertions(+), 144 deletions(-)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index 0be2715687..0944f56b80 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -141,8 +141,6 @@ unsigned nb_output_dumped = 0;
 static BenchmarkTimeStamps current_time;
 AVIOContext *progress_avio = NULL;
 
-InputStream **input_streams = NULL;
-int        nb_input_streams = 0;
 InputFile   **input_files   = NULL;
 int        nb_input_files   = 0;
 
@@ -279,7 +277,7 @@ static void sub2video_heartbeat(InputStream *ist, int64_t pts)
        video frames could be accumulating in the filter graph while a filter
        (possibly overlay) is desperately waiting for a subtitle frame. */
     for (i = 0; i < infile->nb_streams; i++) {
-        InputStream *ist2 = input_streams[infile->ist_index + i];
+        InputStream *ist2 = infile->streams[i];
         if (!ist2->sub2video.frame)
             continue;
         /* subtitles seem to be usually muxed ahead of other streams;
@@ -502,28 +500,6 @@ static int decode_interrupt_cb(void *ctx)
 
 const AVIOInterruptCB int_cb = { decode_interrupt_cb, NULL };
 
-static void ist_free(InputStream **pist)
-{
-    InputStream *ist = *pist;
-
-    if (!ist)
-        return;
-
-    av_frame_free(&ist->decoded_frame);
-    av_packet_free(&ist->pkt);
-    av_dict_free(&ist->decoder_opts);
-    avsubtitle_free(&ist->prev_sub.subtitle);
-    av_frame_free(&ist->sub2video.frame);
-    av_freep(&ist->filters);
-    av_freep(&ist->hwaccel_device);
-    av_freep(&ist->dts_buffer);
-
-    avcodec_free_context(&ist->dec_ctx);
-    avcodec_parameters_free(&ist->par);
-
-    av_freep(pist);
-}
-
 static void ffmpeg_cleanup(int ret)
 {
     int i, j;
@@ -580,9 +556,6 @@ static void ffmpeg_cleanup(int ret)
     for (i = 0; i < nb_input_files; i++)
         ifile_close(&input_files[i]);
 
-    for (i = 0; i < nb_input_streams; i++)
-        ist_free(&input_streams[i]);
-
     if (vstats_file) {
         if (fclose(vstats_file))
             av_log(NULL, AV_LOG_ERROR,
@@ -592,7 +565,6 @@ static void ffmpeg_cleanup(int ret)
     av_freep(&vstats_filename);
     av_freep(&filter_nbthreads);
 
-    av_freep(&input_streams);
     av_freep(&input_files);
     av_freep(&output_files);
 
@@ -628,6 +600,22 @@ static OutputStream *ost_iter(OutputStream *prev)
     return NULL;
 }
 
+InputStream *ist_iter(InputStream *prev)
+{
+    int if_idx  = prev ? prev->file_index : 0;
+    int ist_idx = prev ? prev->st->index + 1 : 0;
+
+    for (; if_idx < nb_input_files; if_idx++) {
+        InputFile *f = input_files[if_idx];
+        if (ist_idx < f->nb_streams)
+            return f->streams[ist_idx];
+
+        ist_idx = 0;
+    }
+
+    return NULL;
+}
+
 void remove_avoptions(AVDictionary **a, AVDictionary *b)
 {
     const AVDictionaryEntry *t = NULL;
@@ -1410,7 +1398,7 @@ static void print_final_stats(int64_t total_size)
                i, f->ctx->url);
 
         for (j = 0; j < f->nb_streams; j++) {
-            InputStream *ist = input_streams[f->ist_index + j];
+            InputStream *ist = f->streams[j];
             enum AVMediaType type = ist->par->codec_type;
 
             total_size += ist->data_size;
@@ -2541,10 +2529,9 @@ static enum AVPixelFormat get_format(AVCodecContext *s, const enum AVPixelFormat
     return *p;
 }
 
-static int init_input_stream(int ist_index, char *error, int error_len)
+static int init_input_stream(InputStream *ist, char *error, int error_len)
 {
     int ret;
-    InputStream *ist = input_streams[ist_index];
 
     if (ist->decoding_needed) {
         const AVCodec *codec = ist->dec;
@@ -3163,12 +3150,12 @@ static int transcode_init(void)
         InputFile *ifile = input_files[i];
         if (ifile->readrate || ifile->rate_emu)
             for (j = 0; j < ifile->nb_streams; j++)
-                input_streams[j + ifile->ist_index]->start = av_gettime_relative();
+                ifile->streams[j]->start = av_gettime_relative();
     }
 
     /* init input streams */
-    for (i = 0; i < nb_input_streams; i++)
-        if ((ret = init_input_stream(i, error, sizeof(error))) < 0)
+    for (ist = ist_iter(NULL); ist; ist = ist_iter(ist))
+        if ((ret = init_input_stream(ist, error, sizeof(error))) < 0)
             goto dump_format;
 
     /*
@@ -3199,7 +3186,7 @@ static int transcode_init(void)
             int discard = AVDISCARD_ALL;
 
             for (k = 0; k < p->nb_stream_indexes; k++)
-                if (!input_streams[ifile->ist_index + p->stream_index[k]]->discard) {
+                if (!ifile->streams[p->stream_index[k]]->discard) {
                     discard = AVDISCARD_DEFAULT;
                     break;
                 }
@@ -3210,9 +3197,7 @@ static int transcode_init(void)
  dump_format:
     /* dump the stream mapping */
     av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
-    for (i = 0; i < nb_input_streams; i++) {
-        ist = input_streams[i];
-
+    for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
         for (j = 0; j < ist->nb_filters; j++) {
             if (!filtergraph_is_simple(ist->filters[j]->graph)) {
                 av_log(NULL, AV_LOG_INFO, "  Stream #%d:%d (%s) -> %s",
@@ -3416,7 +3401,7 @@ static int check_keyboard_interaction(int64_t cur_time)
     if (key == 'd' || key == 'D'){
         int debug=0;
         if(key == 'D') {
-            debug = input_streams[0]->dec_ctx->debug << 1;
+            debug = ist_iter(NULL)->dec_ctx->debug << 1;
             if(!debug) debug = 1;
             while (debug & FF_DEBUG_DCT_COEFF) //unsupported, would just crash
                 debug += debug;
@@ -3434,9 +3419,8 @@ static int check_keyboard_interaction(int64_t cur_time)
             if (k <= 0 || sscanf(buf, "%d", &debug)!=1)
                 fprintf(stderr,"error parsing debug value\n");
         }
-        for(i=0;i<nb_input_streams;i++) {
-            input_streams[i]->dec_ctx->debug = debug;
-        }
+        for (InputStream *ist = ist_iter(NULL); ist; ist = ist_iter(ist))
+            ist->dec_ctx->debug = debug;
         for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
             if (ost->enc_ctx)
                 ost->enc_ctx->debug = debug;
@@ -3480,7 +3464,7 @@ static void reset_eagain(void)
 static void decode_flush(InputFile *ifile)
 {
     for (int i = 0; i < ifile->nb_streams; i++) {
-        InputStream *ist = input_streams[ifile->ist_index + i];
+        InputStream *ist = ifile->streams[i];
         int ret;
 
         if (!ist->processing_needed)
@@ -3627,7 +3611,7 @@ static int process_input(int file_index)
         }
 
         for (i = 0; i < ifile->nb_streams; i++) {
-            ist = input_streams[ifile->ist_index + i];
+            ist = ifile->streams[i];
             if (ist->processing_needed) {
                 ret = process_input_packet(ist, NULL, 0);
                 if (ret>0)
@@ -3650,7 +3634,7 @@ static int process_input(int file_index)
 
     reset_eagain();
 
-    ist = input_streams[ifile->ist_index + pkt->stream_index];
+    ist = ifile->streams[pkt->stream_index];
 
     ist->data_size += pkt->size;
     ist->nb_packets++;
@@ -3682,8 +3666,8 @@ static int process_input(int file_index)
         ts_discontinuity_process(ifile, ist, pkt);
 
     if (debug_ts) {
-        av_log(NULL, AV_LOG_INFO, "demuxer+ffmpeg -> ist_index:%d type:%s pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s duration:%s duration_time:%s off:%s off_time:%s\n",
-               ifile->ist_index + pkt->stream_index,
+        av_log(NULL, AV_LOG_INFO, "demuxer+ffmpeg -> ist_index:%d:%d type:%s pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s duration:%s duration_time:%s off:%s off_time:%s\n",
+               ifile->index, pkt->stream_index,
               av_get_media_type_string(ist->par->codec_type),
               av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, &ist->st->time_base),
              av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, &ist->st->time_base),
@@ -3887,8 +3871,7 @@ static int transcode(void)
         }
 
         /* at the end of stream, we must flush the decoder buffers */
-        for (i = 0; i < nb_input_streams; i++) {
-            ist = input_streams[i];
+        for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
             if (!input_files[ist->file_index]->eof_reached) {
                 process_input_packet(ist, NULL, 0);
             }
@@ -3924,8 +3907,7 @@ static int transcode(void)
     }
 
     /* close each decoder */
-    for (i = 0; i < nb_input_streams; i++) {
-        ist = input_streams[i];
+    for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
         if (ist->decoding_needed) {
             avcodec_close(ist->dec_ctx);
         }
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index b1e466406d..5fc3b2d3d4 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -437,7 +437,6 @@ typedef struct InputFile {
     AVFormatContext *ctx;
     int eof_reached;      /* true if eof reached */
    int eagain;           /* true if last read attempt returned EAGAIN */
-    int ist_index;        /* index of first stream in input_streams */
     int64_t input_ts_offset;
     int input_sync_ref;
     /**
@@ -452,8 +451,13 @@ typedef struct InputFile {
     int64_t last_ts;
     int64_t start_time;   /* user-specified start time in AV_TIME_BASE or AV_NOPTS_VALUE */
     int64_t recording_time;
-    int nb_streams;       /* number of stream that ffmpeg is aware of; may be different
-                             from ctx.nb_streams if new streams appear during av_read_frame() */
+
+    /* streams that ffmpeg is aware of;
+     * there may be extra streams in ctx that are not mapped to an InputStream
+     * if new streams appear dynamically during demuxing */
+    InputStream **streams;
+    int        nb_streams;
+
     int rate_emu;
     float readrate;
     int accurate_seek;
@@ -629,8 +633,6 @@ typedef struct OutputFile {
     int bitexact;
 } OutputFile;
 
-extern InputStream **input_streams;
-extern int        nb_input_streams;
 extern InputFile   **input_files;
 extern int        nb_input_files;
 
@@ -766,6 +768,10 @@ void ifile_close(InputFile **f);
  */
 int ifile_get_packet(InputFile *f, AVPacket **pkt);
 
+/* iterate over all input streams in all input files;
+ * pass NULL to start iteration */
+InputStream *ist_iter(InputStream *prev);
+
 #define SPECIFIER_OPT_FMT_str  "%s"
 #define SPECIFIER_OPT_FMT_i    "%i"
 #define SPECIFIER_OPT_FMT_i64  "%"PRId64
diff --git a/fftools/ffmpeg_demux.c b/fftools/ffmpeg_demux.c
index 76778d774d..33035cd4d5 100644
--- a/fftools/ffmpeg_demux.c
+++ b/fftools/ffmpeg_demux.c
@@ -140,13 +140,13 @@ static int seek_to_start(Demuxer *d)
                 return ret;
 
             got_durations++;
-            ist = input_streams[ifile->ist_index + dur.stream_idx];
+            ist = ifile->streams[dur.stream_idx];
             ifile_duration_update(d, ist, dur.duration);
         }
     } else {
         for (int i = 0; i < ifile->nb_streams; i++) {
            int64_t duration = 0;
-            ist = input_streams[ifile->ist_index + i];
+            ist = ifile->streams[i];
             if (ist->framerate.num) {
                 duration = av_rescale_q(1, av_inv_q(ist->framerate),
                                         ist->st->time_base);
@@ -169,14 +169,14 @@
 static void ts_fixup(Demuxer *d, AVPacket *pkt, int *repeat_pict)
 {
     InputFile *ifile = &d->f;
-    InputStream *ist = input_streams[ifile->ist_index + pkt->stream_index];
+    InputStream *ist = ifile->streams[pkt->stream_index];
     const int64_t start_time = ifile->start_time_effective;
     int64_t duration;
 
     if (debug_ts) {
-        av_log(NULL, AV_LOG_INFO, "demuxer -> ist_index:%d type:%s "
+        av_log(NULL, AV_LOG_INFO, "demuxer -> ist_index:%d:%d type:%s "
                "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s duration:%s duration_time:%s\n",
-               ifile->ist_index + pkt->stream_index,
+               ifile->index, pkt->stream_index,
               av_get_media_type_string(ist->st->codecpar->codec_type),
              av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, &ist->st->time_base),
              av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, &ist->st->time_base),
@@ -381,7 +381,7 @@ static int thread_start(Demuxer *d)
         int nb_audio_dec = 0;
 
         for (int i = 0; i < f->nb_streams; i++) {
-            InputStream *ist = input_streams[f->ist_index + i];
+            InputStream *ist = f->streams[i];
             nb_audio_dec += !!(ist->decoding_needed &&
                                ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO);
         }
@@ -428,7 +428,7 @@ int ifile_get_packet(InputFile *f, AVPacket **pkt)
                           );
         float scale = f->rate_emu ? 1.0 : f->readrate;
         for (i = 0; i < f->nb_streams; i++) {
-            InputStream *ist = input_streams[f->ist_index + i];
+            InputStream *ist = f->streams[i];
             int64_t stream_ts_offset, pts, now;
             if (!ist->nb_packets || (ist->decoding_needed && !ist->got_output)) continue;
             stream_ts_offset = FFMAX(ist->first_dts != AV_NOPTS_VALUE ? ist->first_dts : 0, file_start);
@@ -447,13 +447,35 @@ int ifile_get_packet(InputFile *f, AVPacket **pkt)
     if (msg.looping)
         return 1;
 
-    ist = input_streams[f->ist_index + msg.pkt->stream_index];
+    ist = f->streams[msg.pkt->stream_index];
     ist->last_pkt_repeat_pict = msg.repeat_pict;
 
     *pkt = msg.pkt;
     return 0;
 }
 
+static void ist_free(InputStream **pist)
+{
+    InputStream *ist = *pist;
+
+    if (!ist)
+        return;
+
+    av_frame_free(&ist->decoded_frame);
+    av_packet_free(&ist->pkt);
+    av_dict_free(&ist->decoder_opts);
+    avsubtitle_free(&ist->prev_sub.subtitle);
+    av_frame_free(&ist->sub2video.frame);
+    av_freep(&ist->filters);
+    av_freep(&ist->hwaccel_device);
+    av_freep(&ist->dts_buffer);
+
+    avcodec_free_context(&ist->dec_ctx);
+    avcodec_parameters_free(&ist->par);
+
+    av_freep(pist);
+}
+
 void ifile_close(InputFile **pf)
 {
     InputFile *f = *pf;
@@ -464,6 +486,10 @@ void ifile_close(InputFile **pf)
 
     thread_stop(d);
 
+    for (int i = 0; i < f->nb_streams; i++)
+        ist_free(&f->streams[i]);
+    av_freep(&f->streams);
+
     avformat_close_input(&f->ctx);
 
     av_freep(pf);
@@ -562,10 +588,11 @@ static void add_display_matrix_to_stream(const OptionsContext *o,
                              vflip_set ? vflip : 0);
 }
 
-/* Add all the streams from the given input file to the global
- * list of input streams. */
-static void add_input_streams(const OptionsContext *o, AVFormatContext *ic)
+/* Add all the streams from the given input file to the demuxer */
+static void add_input_streams(const OptionsContext *o, Demuxer *d)
 {
+    InputFile *f = &d->f;
+    AVFormatContext *ic = f->ctx;
     int i, ret;
 
     for (i = 0; i < ic->nb_streams; i++) {
@@ -582,9 +609,9 @@ static void add_input_streams(const OptionsContext *o, AVFormatContext *ic)
         const AVOption *discard_opt = av_opt_find(&cc, "skip_frame", NULL,
                                                   0, AV_OPT_SEARCH_FAKE_OBJ);
 
-        ist = ALLOC_ARRAY_ELEM(input_streams, nb_input_streams);
+        ist = ALLOC_ARRAY_ELEM(f->streams, f->nb_streams);
         ist->st = st;
-        ist->file_index = nb_input_files;
+        ist->file_index = f->index;
         ist->discard = 1;
         st->discard  = AVDISCARD_ALL;
         ist->nb_samples = 0;
@@ -1013,24 +1040,16 @@ int ifile_open(const OptionsContext *o, const char *filename)
         }
     }
 
-    /* update the current parameters so that they match the one of the input stream */
-    add_input_streams(o, ic);
-
-    /* dump the file content */
-    av_dump_format(ic, nb_input_files, filename, 0);
-
     d = allocate_array_elem(&input_files, sizeof(*d), &nb_input_files);
     f = &d->f;
 
     f->ctx        = ic;
     f->index      = nb_input_files - 1;
-    f->ist_index  = nb_input_streams - ic->nb_streams;
     f->start_time = start_time;
     f->recording_time = recording_time;
     f->input_sync_ref = o->input_sync_ref;
     f->input_ts_offset = o->input_ts_offset;
     f->ts_offset  = o->input_ts_offset - (copy_ts ? (start_at_zero && ic->start_time != AV_NOPTS_VALUE ? ic->start_time : 0) : timestamp);
-    f->nb_streams = ic->nb_streams;
     f->rate_emu   = o->rate_emu;
     f->accurate_seek = o->accurate_seek;
     d->loop = o->loop;
@@ -1049,11 +1068,17 @@ int ifile_open(const OptionsContext *o, const char *filename)
 
     d->thread_queue_size = o->thread_queue_size;
 
+    /* update the current parameters so that they match the one of the input stream */
+    add_input_streams(o, d);
+
+    /* dump the file content */
+    av_dump_format(ic, f->index, filename, 0);
+
     /* check if all codec options have been used */
     unused_opts = strip_specifiers(o->g->codec_opts);
-    for (i = f->ist_index; i < nb_input_streams; i++) {
+    for (i = 0; i < f->nb_streams; i++) {
         e = NULL;
-        while ((e = av_dict_get(input_streams[i]->decoder_opts, "", e,
+        while ((e = av_dict_get(f->streams[i]->decoder_opts, "", e,
                                 AV_DICT_IGNORE_SUFFIX)))
             av_dict_set(&unused_opts, e->key, NULL, 0);
     }
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 718826a485..5d50092b72 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -271,7 +271,7 @@ static void init_input_filter(FilterGraph *fg, AVFilterInOut *in)
                    "matches no streams.\n", p, fg->graph_desc);
             exit_program(1);
         }
-        ist = input_streams[input_files[file_idx]->ist_index + st->index];
+        ist = input_files[file_idx]->streams[st->index];
         if (ist->user_set_discard == AVDISCARD_ALL) {
             av_log(NULL, AV_LOG_FATAL, "Stream specifier '%s' in filtergraph description %s "
                    "matches a disabled input stream.\n", p, fg->graph_desc);
@@ -279,14 +279,13 @@ static void init_input_filter(FilterGraph *fg, AVFilterInOut *in)
         }
     } else {
         /* find the first unused stream of corresponding type */
-        for (i = 0; i < nb_input_streams; i++) {
-            ist = input_streams[i];
+        for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
             if (ist->user_set_discard == AVDISCARD_ALL)
                 continue;
             if (ist->dec_ctx->codec_type == type && ist->discard)
                 break;
         }
-        if (i == nb_input_streams) {
+        if (!ist) {
             av_log(NULL, AV_LOG_FATAL, "Cannot find a matching stream for "
                    "unlabeled input pad %d on filter %s\n", in->pad_idx,
                    in->filter_ctx->name);
diff --git a/fftools/ffmpeg_mux_init.c b/fftools/ffmpeg_mux_init.c
index 6db70cc852..b6782b49aa 100644
--- a/fftools/ffmpeg_mux_init.c
+++ b/fftools/ffmpeg_mux_init.c
@@ -171,7 +171,7 @@ static int get_preset_file_2(const char *preset_name, const char *codec_name, AV
 }
 
 static OutputStream *new_output_stream(Muxer *mux, const OptionsContext *o,
-                                       enum AVMediaType type, int source_index)
+                                       enum AVMediaType type, InputStream *ist)
 {
     AVFormatContext *oc = mux->fc;
     MuxStream *ms;
@@ -354,8 +354,8 @@ static OutputStream *new_output_stream(Muxer *mux, const OptionsContext *o,
     if (ost->enc_ctx && av_get_exact_bits_per_sample(ost->enc_ctx->codec_id) == 24)
         av_dict_set(&ost->swr_opts, "output_sample_bits", "24", 0);
 
-    if (source_index >= 0) {
-        ost->ist = input_streams[source_index];
+    if (ist) {
+        ost->ist = ist;
         ost->ist->discard = 0;
         ost->ist->st->discard = ost->ist->user_set_discard;
     }
@@ -419,14 +419,14 @@ static void parse_matrix_coeffs(uint16_t *dest, const char *str)
     }
 }
 
-static OutputStream *new_video_stream(Muxer *mux, const OptionsContext *o, int source_index)
+static OutputStream *new_video_stream(Muxer *mux, const OptionsContext *o, InputStream *ist)
 {
     AVFormatContext *oc = mux->fc;
     AVStream *st;
     OutputStream *ost;
     char *frame_rate = NULL, *max_frame_rate = NULL, *frame_aspect_ratio = NULL;
 
-    ost = new_output_stream(mux, o, AVMEDIA_TYPE_VIDEO, source_index);
+    ost = new_output_stream(mux, o, AVMEDIA_TYPE_VIDEO, ist);
     st  = ost->st;
 
     MATCH_PER_STREAM_OPT(frame_rates, str, frame_rate, oc, st);
@@ -657,13 +657,13 @@ static OutputStream *new_video_stream(Muxer *mux, const OptionsContext *o, int s
     return ost;
 }
 
-static OutputStream *new_audio_stream(Muxer *mux, const OptionsContext *o, int source_index)
+static OutputStream *new_audio_stream(Muxer *mux, const OptionsContext *o, InputStream *ist)
 {
     AVFormatContext *oc = mux->fc;
     AVStream *st;
     OutputStream *ost;
 
-    ost = new_output_stream(mux, o, AVMEDIA_TYPE_AUDIO, source_index);
+    ost = new_output_stream(mux, o, AVMEDIA_TYPE_AUDIO, ist);
 
     st  = ost->st;
 
@@ -755,11 +755,11 @@ static OutputStream *new_audio_stream(Muxer *mux, const OptionsContext *o, int s
     return ost;
 }
 
-static OutputStream *new_data_stream(Muxer *mux, const OptionsContext *o, int source_index)
+static OutputStream *new_data_stream(Muxer *mux, const OptionsContext *o, InputStream *ist)
 {
     OutputStream *ost;
 
-    ost = new_output_stream(mux, o, AVMEDIA_TYPE_DATA, source_index);
+    ost = new_output_stream(mux, o, AVMEDIA_TYPE_DATA, ist);
     if (ost->enc_ctx) {
         av_log(NULL, AV_LOG_FATAL, "Data stream encoding not supported yet (only streamcopy)\n");
         exit_program(1);
@@ -768,11 +768,11 @@ static OutputStream *new_data_stream(Muxer *mux, const OptionsContext *o, int so
     return ost;
 }
 
-static OutputStream *new_unknown_stream(Muxer *mux, const OptionsContext *o, int source_index)
+static OutputStream *new_unknown_stream(Muxer *mux, const OptionsContext *o, InputStream *ist)
 {
     OutputStream *ost;
 
-    ost = new_output_stream(mux, o, AVMEDIA_TYPE_UNKNOWN, source_index);
+    ost = new_output_stream(mux, o, AVMEDIA_TYPE_UNKNOWN, ist);
     if (ost->enc_ctx) {
         av_log(NULL, AV_LOG_FATAL, "Unknown stream encoding not supported yet (only streamcopy)\n");
         exit_program(1);
@@ -781,19 +781,19 @@ static OutputStream *new_unknown_stream(Muxer *mux, const OptionsContext *o, int
     return ost;
 }
 
-static OutputStream *new_attachment_stream(Muxer *mux, const OptionsContext *o, int source_index)
+static OutputStream *new_attachment_stream(Muxer *mux, const OptionsContext *o, InputStream *ist)
 {
-    OutputStream *ost = new_output_stream(mux, o, AVMEDIA_TYPE_ATTACHMENT, source_index);
+    OutputStream *ost = new_output_stream(mux, o, AVMEDIA_TYPE_ATTACHMENT, ist);
     ost->finished    = 1;
     return ost;
 }
 
-static OutputStream *new_subtitle_stream(Muxer *mux, const OptionsContext *o, int source_index)
+static OutputStream *new_subtitle_stream(Muxer *mux, const OptionsContext *o, InputStream *ist)
 {
     AVStream *st;
     OutputStream *ost;
 
-    ost = new_output_stream(mux, o, AVMEDIA_TYPE_SUBTITLE, source_index);
+    ost = new_output_stream(mux, o, AVMEDIA_TYPE_SUBTITLE, ist);
     st  = ost->st;
 
     if (ost->enc_ctx) {
@@ -816,8 +816,8 @@ static void init_output_filter(OutputFilter *ofilter, const OptionsContext *o,
     OutputStream *ost;
 
     switch (ofilter->type) {
-    case AVMEDIA_TYPE_VIDEO: ost = new_video_stream(mux, o, -1); break;
-    case AVMEDIA_TYPE_AUDIO: ost = new_audio_stream(mux, o, -1); break;
+    case AVMEDIA_TYPE_VIDEO: ost = new_video_stream(mux, o, NULL); break;
+    case AVMEDIA_TYPE_AUDIO: ost = new_audio_stream(mux, o, NULL); break;
    default:
         av_log(NULL, AV_LOG_FATAL, "Only video and audio filters are supported "
                "currently.\n");
@@ -854,8 +854,8 @@ static void init_output_filter(OutputFilter *ofilter, const OptionsContext *o,
 static void map_auto_video(Muxer *mux, const OptionsContext *o)
 {
     AVFormatContext *oc = mux->fc;
-    InputStream *ist;
-    int best_score = 0, idx = -1;
+    InputStream *best_ist = NULL;
+    int best_score = 0;
     int qcr;
 
     /* video: highest resolution */
@@ -865,10 +865,11 @@ static void map_auto_video(Muxer *mux, const OptionsContext *o)
     qcr = avformat_query_codec(oc->oformat, oc->oformat->video_codec, 0);
     for (int j = 0; j < nb_input_files; j++) {
         InputFile *ifile = input_files[j];
-        int file_best_score = 0, file_best_idx = -1;
+        InputStream *file_best_ist = NULL;
+        int file_best_score = 0;
         for (int i = 0; i < ifile->nb_streams; i++) {
+            InputStream *ist = ifile->streams[i];
             int score;
-            ist = input_streams[ifile->ist_index + i];
 
             if (ist->user_set_discard == AVDISCARD_ALL ||
                 ist->st->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
@@ -884,27 +885,28 @@ static void map_auto_video(Muxer *mux, const OptionsContext *o)
                 if((qcr==MKTAG('A', 'P', 'I', 'C')) && !(ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC))
                     continue;
                 file_best_score = score;
-                file_best_idx = ifile->ist_index + i;
+                file_best_ist   = ist;
             }
         }
-        if (file_best_idx >= 0) {
-            if((qcr == MKTAG('A', 'P', 'I', 'C')) || !(ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC))
-                file_best_score -= 5000000*!!(input_streams[file_best_idx]->st->disposition & AV_DISPOSITION_DEFAULT);
+        if (file_best_ist) {
+            if((qcr == MKTAG('A', 'P', 'I', 'C')) ||
+               !(file_best_ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC))
+                file_best_score -= 5000000*!!(file_best_ist->st->disposition & AV_DISPOSITION_DEFAULT);
             if (file_best_score > best_score) {
                 best_score = file_best_score;
-                idx = file_best_idx;
+                best_ist   = file_best_ist;
             }
         }
     }
-    if (idx >= 0)
-        new_video_stream(mux, o, idx);
+    if (best_ist)
+        new_video_stream(mux, o, best_ist);
 }
 
 static void map_auto_audio(Muxer *mux, const OptionsContext *o)
 {
     AVFormatContext *oc = mux->fc;
-    InputStream *ist;
-    int best_score = 0, idx = -1;
+    InputStream *best_ist = NULL;
+    int best_score = 0;
 
     /* audio: most channels */
     if (av_guess_codec(oc->oformat, NULL, oc->url, NULL, AVMEDIA_TYPE_AUDIO) == AV_CODEC_ID_NONE)
@@ -912,10 +914,11 @@ static void map_auto_audio(Muxer *mux, const OptionsContext *o)
 
     for (int j = 0; j < nb_input_files; j++) {
         InputFile *ifile = input_files[j];
-        int file_best_score = 0, file_best_idx = -1;
+        InputStream *file_best_ist = NULL;
+        int file_best_score = 0;
         for (int i = 0; i < ifile->nb_streams; i++) {
+            InputStream *ist = ifile->streams[i];
             int score;
-            ist = input_streams[ifile->ist_index + i];
 
             if (ist->user_set_discard == AVDISCARD_ALL ||
                 ist->st->codecpar->codec_type != AVMEDIA_TYPE_AUDIO)
@@ -926,19 +929,19 @@ static void map_auto_audio(Muxer *mux, const OptionsContext *o)
                   + 5000000*!!(ist->st->disposition & AV_DISPOSITION_DEFAULT);
             if (score > file_best_score) {
                 file_best_score = score;
-                file_best_idx = ifile->ist_index + i;
+                file_best_ist   = ist;
             }
         }
-        if (file_best_idx >= 0) {
-            file_best_score -= 5000000*!!(input_streams[file_best_idx]->st->disposition & AV_DISPOSITION_DEFAULT);
+        if (file_best_ist) {
+            file_best_score -= 5000000*!!(file_best_ist->st->disposition & AV_DISPOSITION_DEFAULT);
             if (file_best_score > best_score) {
                 best_score = file_best_score;
-                idx = file_best_idx;
+                best_ist   = file_best_ist;
             }
         }
     }
-    if (idx >= 0)
-        new_audio_stream(mux, o, idx);
+    if (best_ist)
+        new_audio_stream(mux, o, best_ist);
 }
 
 static void map_auto_subtitle(Muxer *mux, const OptionsContext *o)
@@ -951,15 +954,15 @@
     if (!avcodec_find_encoder(oc->oformat->subtitle_codec) && !subtitle_codec_name)
         return;
 
-    for (int i = 0; i < nb_input_streams; i++)
-        if (input_streams[i]->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) {
+    for (InputStream *ist = ist_iter(NULL); ist; ist = ist_iter(ist))
+        if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) {
             AVCodecDescriptor const *input_descriptor =
-                avcodec_descriptor_get(input_streams[i]->st->codecpar->codec_id);
+                avcodec_descriptor_get(ist->st->codecpar->codec_id);
             AVCodecDescriptor const *output_descriptor = NULL;
             AVCodec const *output_codec =
                 avcodec_find_encoder(oc->oformat->subtitle_codec);
             int input_props = 0, output_props = 0;
-            if (input_streams[i]->user_set_discard == AVDISCARD_ALL)
+            if (ist->user_set_discard == AVDISCARD_ALL)
                 continue;
             if (output_codec)
                 output_descriptor = avcodec_descriptor_get(output_codec->id);
@@ -973,7 +976,7 @@
                 input_descriptor && output_descriptor &&
                 (!input_descriptor->props ||
                  !output_descriptor->props)) {
-                new_subtitle_stream(mux, o, i);
+                new_subtitle_stream(mux, o, ist);
                 break;
             }
         }
@@ -984,12 +987,16 @@ static void map_auto_data(Muxer *mux, const OptionsContext *o)
     AVFormatContext *oc = mux->fc;
     /* Data only if codec id match */
     enum AVCodecID codec_id = av_guess_codec(oc->oformat, NULL, oc->url, NULL, AVMEDIA_TYPE_DATA);
-    for (int i = 0; codec_id != AV_CODEC_ID_NONE && i < nb_input_streams; i++) {
-        if (input_streams[i]->user_set_discard == AVDISCARD_ALL)
+
+    if (codec_id == AV_CODEC_ID_NONE)
+        return;
+
+    for (InputStream *ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
+        if (ist->user_set_discard == AVDISCARD_ALL)
             continue;
-        if (input_streams[i]->st->codecpar->codec_type == AVMEDIA_TYPE_DATA
-            && input_streams[i]->st->codecpar->codec_id == codec_id )
-            new_data_stream(mux, o, i);
+        if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_DATA &&
+            ist->st->codecpar->codec_id == codec_id )
+            new_data_stream(mux, o, ist);
     }
 }
 
@@ -1023,9 +1030,7 @@ loop_end:
         }
         init_output_filter(ofilter, o, mux);
     } else {
-        int src_idx = input_files[map->file_index]->ist_index + map->stream_index;
-
-        ist = input_streams[input_files[map->file_index]->ist_index + map->stream_index];
+        ist = input_files[map->file_index]->streams[map->stream_index];
         if (ist->user_set_discard == AVDISCARD_ALL) {
             av_log(NULL, AV_LOG_FATAL, "Stream #%d:%d is disabled and cannot be mapped.\n",
                    map->file_index, map->stream_index);
@@ -1041,14 +1046,14 @@ loop_end:
             return;
 
         switch (ist->st->codecpar->codec_type) {
-        case AVMEDIA_TYPE_VIDEO:      new_video_stream     (mux, o, src_idx); break;
-        case AVMEDIA_TYPE_AUDIO:      new_audio_stream     (mux, o, src_idx); break;
-        case AVMEDIA_TYPE_SUBTITLE:   new_subtitle_stream  (mux, o, src_idx); break;
-        case AVMEDIA_TYPE_DATA:       new_data_stream      (mux, o, src_idx); break;
-        case AVMEDIA_TYPE_ATTACHMENT: new_attachment_stream(mux, o, src_idx); break;
+        case AVMEDIA_TYPE_VIDEO:      new_video_stream     (mux, o, ist); break;
+        case AVMEDIA_TYPE_AUDIO:      new_audio_stream     (mux, o, ist); break;
+        case AVMEDIA_TYPE_SUBTITLE:   new_subtitle_stream  (mux, o, ist); break;
+        case AVMEDIA_TYPE_DATA:       new_data_stream      (mux, o, ist); break;
+        case AVMEDIA_TYPE_ATTACHMENT: new_attachment_stream(mux, o, ist); break;
         case AVMEDIA_TYPE_UNKNOWN:
             if (copy_unknown_streams) {
-                new_unknown_stream   (mux, o, src_idx);
+                new_unknown_stream   (mux, o, ist);
                 break;
             }
         default:
@@ -1096,7 +1101,7 @@ static void of_add_attachments(Muxer *mux, const OptionsContext *o)
         avio_read(pb, attachment, len);
         memset(attachment + len, 0, AV_INPUT_BUFFER_PADDING_SIZE);
 
-        ost = new_attachment_stream(mux, o, -1);
+        ost = new_attachment_stream(mux, o, NULL);
         ost->attachment_filename       = o->attachments[i];
         ost->st->codecpar->extradata      = attachment;
         ost->st->codecpar->extradata_size = len;
diff --git a/fftools/ffmpeg_opt.c b/fftools/ffmpeg_opt.c
index 61aa0be0ab..a9dcf0e088 100644
--- a/fftools/ffmpeg_opt.c
+++ b/fftools/ffmpeg_opt.c
@@ -417,7 +417,7 @@ static int opt_map(void *optctx, const char *opt, const char *arg)
             if (check_stream_specifier(input_files[file_idx]->ctx, input_files[file_idx]->ctx->streams[i],
                         *p == ':' ? p + 1 : p) <= 0)
                 continue;
-            if (input_streams[input_files[file_idx]->ist_index + i]->user_set_discard == AVDISCARD_ALL) {
+            if (input_files[file_idx]->streams[i]->user_set_discard == AVDISCARD_ALL) {
                 disabled = 1;
                 continue;
             }
@@ -523,7 +523,7 @@ static int opt_map_channel(void *optctx, const char *opt, const char *arg)
     if (allow_unused = strchr(mapchan, '?'))
         *allow_unused = 0;
     if (m->channel_idx < 0 || m->channel_idx >= st->codecpar->ch_layout.nb_channels ||
-        input_streams[input_files[m->file_idx]->ist_index + m->stream_idx]->user_set_discard == AVDISCARD_ALL) {
+        input_files[m->file_idx]->streams[m->stream_idx]->user_set_discard == AVDISCARD_ALL) {
         if (allow_unused) {
            av_log(NULL, AV_LOG_VERBOSE, "mapchan: invalid audio channel #%d.%d.%d\n",
                   m->file_idx, m->stream_idx, m->channel_idx);