
[FFmpeg-devel] frame: add a time_base field

Message ID Mj0IlBo--3-2@lynne.ee
State New
Series [FFmpeg-devel] frame: add a time_base field

Checks

Context Check Description
andriy/make_x86 success Make finished
andriy/make_fate_x86 success Make fate finished
andriy/make_ppc success Make finished
andriy/make_fate_ppc success Make fate finished

Commit Message

Lynne Sept. 7, 2021, 5:29 p.m. UTC
This adds a time_base field (currently unused), analogous to the
AVPacket.time_base field.

Patch attached. doc/APIchanges and version bump to be done at push time.
Subject: [PATCH] frame: add a time_base field

This adds a time_base field (currently unused), analogous to the
AVPacket.time_base field.
---
 libavutil/frame.c | 2 ++
 libavutil/frame.h | 6 ++++++
 2 files changed, 8 insertions(+)

Comments

Paul B Mahol Sept. 7, 2021, 5:48 p.m. UTC | #1
On Tue, Sep 7, 2021 at 7:37 PM Andreas Rheinhardt
<andreas.rheinhardt@outlook.com> wrote:

> Lynne:
> > This adds a time_base field (currently unused), analogous to the
> > AVPacket.time_base field.
> >
> > Patch attached. doc/APIchanges and version bump to be done at push time.
> >
>
> > +    /**
> > +     * Time base for the timestamps in this frame. May be 0, in which case the
> > +     * time_base from the frame source should be used.
> > +     */
> > +    AVRational time_base;
> > +
>
> Does the pkt_duration count as a timestamp or not?
>

While here, please add frame duration too.


>
> - Andreas
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
>
Lynne Sept. 7, 2021, 6:38 p.m. UTC | #2
7 Sept 2021, 19:36 by andreas.rheinhardt@outlook.com:

> Lynne:
>
>> This adds a time_base field (currently unused), analogous to the
>> AVPacket.time_base field.
>>
>> Patch attached. doc/APIchanges and version bump to be done at push time.
>>
>> +    /**
>> +     * Time base for the timestamps in this frame. May be 0, in which case the
>> +     * time_base from the frame source should be used.
>> +     */
>> +    AVRational time_base;
>> +
>>
>
> Does the pkt_duration count as a timestamp or not?
>

Good point, it does if you think of it starting from 0 for every frame :)
Reworded to:
>    /** 
>     * Time base for the timestamps and durations in this frame.
>     * May be 0, in which case the time_base from the frame source should be used.
>     */
Marton Balint Sept. 7, 2021, 11:25 p.m. UTC | #3
On Tue, 7 Sep 2021, Lynne wrote:

> 7 Sept 2021, 19:36 by andreas.rheinhardt@outlook.com:
>
>> Lynne:
>>
>>> This adds a time_base field (currently unused), analogous to the
>>> AVPacket.time_base field.
>>>
>>> Patch attached. doc/APIchanges and version bump to be done at push time.
>>>
>>> +    /**
>>> +     * Time base for the timestamps in this frame. May be 0, in which case the
>>> +     * time_base from the frame source should be used.
>>> +     */
>>> +    AVRational time_base;

So how is this going to work? Will e.g. avcodec_send_frame check
the time base of the frame and recalculate pts/duration to the encoder
time base? And does the same go for every function that receives frames?

Also I would like to point out that until the actual support for this is 
added to the libraries I don't see why we should add this, because 
sizeof(AVFrame) is not part of the ABI, so this can be added anytime.

Regards,
Marton
Lynne Sept. 8, 2021, 12:13 a.m. UTC | #4
8 Sept 2021, 01:25 by cus@passwd.hu:

>
>
> On Tue, 7 Sep 2021, Lynne wrote:
>
>> 7 Sept 2021, 19:36 by andreas.rheinhardt@outlook.com:
>>
>>> Lynne:
>>>
>>>> This adds a time_base field (currently unused), analogous to the AVPacket.time_base field.
>>>>
>>>> Patch attached. doc/APIchanges and version bump to be done at push time.
>>>>
>>>> +    /**
>>>> +     * Time base for the timestamps in this frame. May be 0, in which case the
>>>> +     * time_base from the frame source should be used.
>>>> +     */
>>>> +    AVRational time_base;
>>>>
>
> So how is this going to work? Will e.g. avcodec_send_frame check the time base of the frame and recalculate pts/duration to the encoder time base? And does the same go for every function that receives frames?
>

If it's zero, it'll use the initial avctx timebase. If it's non-zero and different, IMO it should
warn the user that the timebase has changed and carry on as we currently do, with the
output packets still using the initial timebase and the input timestamps unchanged.
New AV_CODEC_CAP_DYNAMIC_TIMEBASE and
AV_CODEC_FLAG2_DYNAMIC_TIMEBASE flags could be introduced which, if supplied,
would either adapt the timestamps to the initial timebase if the encoder does not
indicate support via the new codec cap flag (most wrappers), or generate packets
with a new timebase if the encoder supports it (most native encoders).


> Also I would like to point out that until the actual support for this is added to the libraries I don't see why we should add this, because sizeof(AVFrame) is not part of the ABI, so this can be added anytime.
>

Right, fair enough. Consider it an RFC then.
Nicolas George Sept. 9, 2021, 10:02 a.m. UTC | #5
Marton Balint (12021-09-08):
> So how is this going to work? Will e.g. avcodec_send_frame check the time
> base of the frame and recalculate pts/duration to the encoder time base?
> And does the same go for every function that receives frames?


Thanks for raising this question.

If we add this field without being very clear on its semantics, then
different parts of the code and applications will quickly start making
slightly different assumptions about it, and the result will be more
complexity but little benefit since we cannot assume the code does the
right thing. And we will have to support it until we decide on an API
break.

Among the questions that need answering before considering adding it:

- Is it input or output or both? Is it used by libav* to tell the
  application what time base is used for a frame it produces (output),
  as a reminder and clarification? Or is it also used for the
  application to tell libav* the time base it wants to use (input)?

- If it is input, then is libav* forced to accept it, any value? Or can
  it reject it? Alter it? We have this in libavformat: the application
  can set any time base it wants for AVStream before writing the header,
  but write_header can say "I'm Afraid I Can't Do That, Dave" and choose
  a time base supported by the format.

- If it is input, when is it allowed to change? And again, what happens
  if the change is not supported?

- Who is responsible for keeping it up to date? We have parts of the
  code that do:

  frame->pts = av_rescale_q(frame->pts, in->time_base, out->time_base);

  and obviously they do not bother to change frame->time_base, because
  there is no frame->time_base now. Some of the changes can be done
  automatically, but not all. Who will go over the code to check? It
  seems to me we will be finding bugs where frame->time_base is not
  updated for at least the next ten years.

All in all, this addition seems to make everything more complex. It
really means that all timestamps become full-fledged rationals.
Everywhere we used to store a timestamp we will have to add a time_base
field. Everywhere we used to just compare timestamps with < we will have
to use a rational comparison function.

It is doable, but it is a significant cost. So before considering it, on
top of clarifying the semantics, we also need an answer to the question:
What problem is this helping to solve? And: is there no simpler
solution?


As for a duration field, it is not as bad as a time_base field because
it does not make everything more complex, but it still needs much
clarification of its semantics:

- Is it input, output or both?

- If it is input, what happens if it is not consistent with the
  timestamp of the next frame?

- What happens if it is not consistent with the sample count?

Speaking as the maintainer of libavfilter, and without closing any
further discussion, I say: the duration of a frame is currently defined
as the difference with the timestamp of the next frame, and it should
continue to be. If a duration field is added to the frame, it will only
be used by the framework and/or a select few utility filters to fill in
information that may be missing.

Regards,
Lynne Sept. 9, 2021, 10:41 a.m. UTC | #6
9 Sept 2021, 12:02 by george@nsup.org:

> Marton Balint (12021-09-08):
>
>> So how is this going to work? Will e.g. avcodec_send_frame check the time
>> base of the frame and recalculate pts/duration to the encoder time base?
>> And does the same go for every function that receives frames?
>>
>
>
> Thanks for raising this question.
>
> If we add this field without being very clear on its semantics, then
> different parts of the code and applications will quickly start making
> slightly different assumptions about it, and the result will be more
> complexity but little benefit since we cannot assume the code does the
> right thing. And we will have to support it until we decide on an API
> break.
>

Did you read what I wrote? I think I was very clear.
It's an output, unless a specific flag is set to accept it as an input.
API users don't have to maintain it without the flag set.
It helps solve the problem of piping timebases for API users
who isolate all of their contexts and only relay frames throughout,
without them needing to use the opaque fields.
We already have the same field for AVPackets, so not introducing
one for frames would make no sense.
Lynne Sept. 9, 2021, 10:46 a.m. UTC | #7
9 Sept 2021, 12:02 by george@nsup.org:

> Speaking as the maintainer of libavfilter, and without closing any
> further discussion, I say: the duration of a frame is currently defined
> as the difference with the timestamp of the next frame, and it should
> continue to be. If a duration field is added to the frame, it will only
> be used by the framework and/or a select few utility filters to fill in
> information that may be missing.
>

That's fine, no argument about this. We talked about this on IRC
and I agreed that duration fields on frames make no sense and
are difficult to guarantee.
One thing though, "Speaking as the maintainer of libavfilter"?
Please don't discredit Paul, he maintains and cares about lavfi
just as much as you do.
Nicolas George Sept. 9, 2021, 10:48 a.m. UTC | #8
Lynne (12021-09-09):
> Did you read what I wrote? I think I was very clear.

I did. And you, did you read what I wrote? You are only answering one of
the questions.

> It's an output, unless a specific flag is set to accept it as an input.
> API users don't have to maintain it without the flag set.
> It helps solve the problem of piping timebases for API users
> who isolate all of their contexts and only relay frames throughout,
> without them needing to use the opaque fields.

So this is the problem it is trying to solve? Applications that do not
bother to have a time_base field where they need one, so we have to add
one ourselves, and take on all the maintenance nightmare of keeping it up
to date.

No, thank you.

> We already have the same field for AVPackets, so not introducing
> one for frames would make no sense.

I already thought it was a bad idea when it was added to AVPacket. I
should have spoken up at the time, but I did not. So let us remove it
instead. Or at least not continue in that direction.

Neither frames nor packets exist in a void. They are all related to a
context. And I do not mean an AVSomethingContext in terms of libav*
structures, but a logical context that explains a lot of things about
them, and the time_base is only a tiny part of it.

Regards,
Nicolas George Sept. 9, 2021, 10:51 a.m. UTC | #9
Lynne (12021-09-09):
> That's fine, no argument about this. We talked about this on IRC
> and I agreed that duration fields on frames make no sense and
> are difficult to guarantee.

Thank you for mentioning this.

Not everybody can spend time synchronously on IRC.

> One thing though, "Speaking as the maintainer of libavfilter"?
> Please don't discredit Paul, he maintains and cares about lavfi
> just as much as you do.

I am not discrediting Paul. He maintains a lot of useful filters, and he
is the expert on intra-frame threading, but the global framework,
including negotiation and scheduling, is really my thing ever since
Stefano stopped working on the code.

Regards,
Lynne Sept. 9, 2021, 12:41 p.m. UTC | #10
9 Sept 2021, 12:48 by george@nsup.org:

> Lynne (12021-09-09):
>
>> Did you read what I wrote? I think I was very clear.
>>
>
> I did. And you, did you read what I wrote? You are only answering one of
> the questions.
>
>> It's an output, unless a specific flag is set to accept it as an input.
>> API users don't have to maintain it without the flag set.
>> It helps solve the problem of piping timebases for API users
>> who isolate all of their contexts and only relay frames throughout,
>> without them needing to use the opaque fields.
>>
>
> So this is the problem it is trying to solve? Applications that do not
> bother to have a time_base field where they need one, so we have to add
> one ourselves, and take on all the maintenance nightmare of keeping it up
> to date.
>
> No, thank you.
>

It's not a maintenance nightmare, it's a single optional field. Please don't
reject my ideas and needs outright, I'm not the only API user who would
benefit from this. Browsers have had issues with sandboxing for ages,
and they'd love to have all state be maintained in the frames. But since
that's unreasonable, I think just having the timebase used for the timestamps
makes a large difference to usability.


>> We already have the same field for AVPackets, so not introducing
>> one for frames would make no sense.
>>
>
> I already thought it was a bad idea when it was added to AVPacket. I
> should have spoken up at the time, but I did not. So let us remove it
> instead. Or at least not continue in that direction.
>

What direction? It's completely reasonable to have timebases for the
currently-isolated timestamps in AVFrames in the same way it was
reasonable for packets.


> Neither frames nor packets exist in a void. They are all related to a
> context. And I do not mean an AVSomethingContext in terms of libav*
> structures, but a logical context that explains a lot of things about
> them, and the time_base is only a tiny part of it.
>

But they do exist in a void; I've been using them within a void with
only AVFrames serving as links between components. It's really
neat. Only AVCodecParameters breaks that.

You say you don't understand this patch, but on the other hand,
I don't understand your reasoning at all. It's not an over-arching
API design change, it's a single optional field that API users
were already including in their opaque context anyway.
Nicolas George Sept. 9, 2021, 12:53 p.m. UTC | #11
Lynne (12021-09-09):
> It's not a maintenance nightmare, it's a single optional field. Please don't

Then please answer this simple question: How do you guarantee that this
new field will always be correctly kept updated?

> reject my ideas and needs outright, I'm not the only API user who would
> benefit from this. Browsers have had issues with sandboxing for ages,
> and they'd love to have all state be maintained in the frames. But since
> that's unreasonable, I think just having the timebase used for the timestamps
> makes a large difference to usability.

Software design 101: if you need some information along with an object,
you define a structure to store that object and that information
together.

In other words:

typedef struct My_frame {
    AVFrame *frame;
    AVRational time_base;
    whatever else I need;
} My_frame;

> What direction? It's completely reasonable to have timebases for the
> currently-isolated timestamps in AVFrames in the same way it was
> reasonable for packets.

Isolated timestamps make no sense.

> But they do exist in a void; I've been using them within a void with
> only AVFrames serving as links between components. It's really
> neat. Only AVCodecParameters breaks that.

No, they do not exist in a void: not existing in the void is the
difference between a frame and just an image.

> You say you don't understand this patch, but on the other hand,

I have not said I do not understand the patch. I have asked that you (1)
clarify the semantics and (2) specify what problem it is trying to solve
so we can see if there is a simpler solution.

If the problem is that people want to use AVFrame in isolation, then
there is a simpler solution: we answer that it is not how it is meant to
work.

I will add: if fftools/*.c could work without time_base in AVFrame, then
any application can.

> I don't understand your reasoning at all. It's not an over-arching
> API design change, it's a single optional field that API users
> were already including in their opaque context anyway.

And something extra we have to maintain and keep updated.

Regards,
Lynne Sept. 9, 2021, 1:42 p.m. UTC | #12
9 Sept 2021, 14:53 by george@nsup.org:

> Lynne (12021-09-09):
>
>> It's not a maintenance nightmare, it's a single optional field. Please don't
>>
>
> Then please answer this simple question: How do you guarantee that this
> new field will always be correctly kept updated?
>

Because all of our codecs pass their frames through a wrapper function before
they get to the user. So, we just set the field there, add a FATE test, and now
they're guaranteed to be correctly kept updated.


>> reject my ideas and needs outright, I'm not the only API user who would
>> benefit from this. Browsers have had issues with sandboxing for ages,
>> and they'd love to have all state be maintained in the frames. But since
>> that's unreasonable, I think just having the timebase used for the timestamps
>> makes a large difference to usability.
>>
>
> Software design 101: if you need some information along with an object,
> you define a structure to store that object and that information
> together.
>

So why are timestamps part of frames at all then?


> In other words:
>
> typedef struct My_frame {
>  AVFrame *frame;
>  AVRational time_base;
>  whatever else I need;
> } My_frame;
>
>> What direction? It's completely reasonable to have timebases for the
>> currently-isolated timestamps in AVFrames in the same way it was
>> reasonable for packets.
>>
>
> Isolated timestamps make no sense.
>

They're not isolated, they're part of the frame. And to make any sense
of them, you need a timebase.


>> But they do exist in a void; I've been using them within a void with
>> only AVFrames serving as links between components. It's really
>> neat. Only AVCodecParameters breaks that.
>>
>
> No, they do not exist in a void: not existing in the void is the
> difference between a frame and just an image.
>
>> You say you don't understand this patch, but on the other hand,
>>
>
> I have not said I do not understand the patch. I have asked that you (1)
> clarify the semantics and (2) specify what problem it is trying to solve
> so we can see if there is a simpler solution.
>

I did clarify the semantics. In technicolor. The message I sent earlier
reads like a spec. I don't mind sending a new RFC patch which includes
said clarifications in the comment section of the field.


> If the problem is that people want to use AVFrame in isolation, then
> there is a simpler solution: we answer that it is not how it is meant to
> work.
>
> I will add: if fftools/*.c could work without time_base in AVFrame, then
> any application can.
>
>> I don't understand your reasoning at all. It's not an over-arching
>> API design change, it's a single optional field that API users
>> were already including in their opaque context anyway.
>>
>
> And something extra we have to maintain and keep updated.
>

It's a single optional field! The way the comment is worded now,
we don't even have any obligation to update it.

I still don't understand your reasoning, other than "I don't like it".
Nicolas George Sept. 9, 2021, 1:53 p.m. UTC | #13
Lynne (12021-09-09):
> Because all of our codecs pass their frames through a wrapper function before
> they get to the user. So, we just set the field there, add a FATE test, and now
> they're guaranteed to be correctly kept updated.

This is wrong and not enough. Codecs are not the only origin for frames.
To ensure this field is always up-to-date, you need to find all the
places where it can go out-of-sync, and make sure there is some kind of
wrapper.

> So why are timestamps part of frames at all then?

Seriously? Because the timestamps, unlike the time base, change from
frame to frame.

> They're not isolated, they're part of the frame. And to make any sense
> of them, you need a timebase.

No, one frame does not need a timestamp. Timestamps become necessary
when there are several frames.

> It's a single optional field! The way the comment is worded now,
> we don't even have any obligation to update it.

It is a single optional field, but keeping it up to date requires a lot
of code.

It is a single REDUNDANT field, and not having redundant information
should be one of the first things an experienced developer has learned.

Regards,
Paul B Mahol Sept. 9, 2021, 5:40 p.m. UTC | #14
On Thu, Sep 9, 2021 at 12:51 PM Nicolas George <george@nsup.org> wrote:

> Lynne (12021-09-09):
> > That's fine, no argument about this. We talked about this on IRC
> > and I agreed that duration fields on frames make no sense and
> > are difficult to guarantee.
>
> Thank you for mentioning this.
>
> Not everybody can spend time synchronously on IRC.
>

Such people should then leave the project.


> > One thing though, "Speaking as the maintainer of libavfilter"?
> > Please don't discredit Paul, he maintains and cares about lavfi
> > just as much as you do.
>
> I am not discrediting Paul. He maintains a lot of useful filters, and he
> is the expert on intra-frame threading, but the global framework,
> including negotiation and scheduling, is really my thing ever since
> Stefano stopped working on the code.
>

I read this as a personal attack.

Tell me how one is supposed to fix the last frame duration when muxing &
encoding?


>
> Regards,
>
> --
>   Nicolas George
Lynne Sept. 9, 2021, 7:09 p.m. UTC | #15
9 Sept 2021, 15:53 by george@nsup.org:

> Lynne (12021-09-09):
>
>> Because all of our codecs pass their frames through a wrapper function before
>> they get to the user. So, we just set the field there, add a FATE test, and now
>> they're guaranteed to be correctly kept updated.
>>
>
> This is wrong and not enough. Codecs are not the only origin for frames.
> To ensure this field is always up-to-date, you need to find all the
> places where it can go out-of-sync, and make sure there is some kind of
> wrapper.
>

No, you don't, there's nothing special about this!


>> So why are timestamps part of frames at all then?
>>
>
> Seriously? Because the timestamps, unlike the time base, change from
> frame to frame.
>

For now. There's no reason why the time base can't change from frame to
frame in the future, specifically with codecs that embed timebases in their bitstream.


>> They're not isolated, they're part of the frame. And to make any sense
>> of them, you need a timebase.
>>
>
> No, one frame does not need a timestamp. Timestamps become necessary
> when there are several frames.
>

Well, no, single-frame video also exists. We already have a packet duration
field, so we are able to express exactly that! And to make sense of that field,
you also need a timebase.


>> It's a single optional field! The way the comment is worded now,
>> we don't even have any obligation to update it.
>>
>
> It is a single optional field, but keeping it up to date requires a lot
> of code.
>

It's 10 lines of code. No.
Nicolas George Sept. 9, 2021, 7:15 p.m. UTC | #16
Lynne (12021-09-09):
> No, you don't, there's nothing special about this!

There is no need for something special; it is a universal fact of
programming that if several redundant pieces of information are supposed
to be in sync, unless there are strong systems to keep them in sync,
they will eventually get out of sync because of a code inconsistency.

Avoiding redundancy is one of the most important principles of code
hygiene.

This is why I oppose this change, and this is why I propose to remove
the time_base field in AVPacket.
Lynne Sept. 9, 2021, 7:17 p.m. UTC | #17
9 Sept 2021, 19:40 by onemda@gmail.com:

> On Thu, Sep 9, 2021 at 12:51 PM Nicolas George <george@nsup.org> wrote:
>
>> Lynne (12021-09-09):
>> > That's fine, no argument about this. We talked about this on IRC
>> > and I agreed that duration fields on frames make no sense and
>> > are difficult to guarantee.
>>
>> Thank you for mentioning this.
>>
>> Not everybody can spend time synchronously on IRC.
>>
>
> Such people should then leave the project.
>
>
>> > One thing though, "Speaking as the maintainer of libavfilter"?
>> > Please don't discredit Paul, he maintains and cares about lavfi
>> > just as much as you do.
>>
>> I am not discrediting Paul. He maintains a lot of useful filters, and he
>> is the expert on intra-frame threading, but the global framework,
>> including negotiation and scheduling, is really my thing ever since
> >> Stefano stopped working on the code.
>>
>
> I read this as a personal attack.
>
> Tell me how one is supposed to fix the last frame duration when muxing &
> encoding?
>

A duration field that's set to the previous frame's duration would be
a good start. It'll work with CFR and most VFR video. If it's a static
image, the duration could be set to infinity or NOPTS to flag this.

I don't mind a patch which introduces that, although we already have
a pkt_duration field, which may fit. But setting it as above may be
an API-breaking change, so a dedicated field would be better.
Paul B Mahol Sept. 9, 2021, 7:21 p.m. UTC | #18
On Thu, Sep 9, 2021 at 9:18 PM Lynne <dev@lynne.ee> wrote:

> 9 Sept 2021, 19:40 by onemda@gmail.com:
>
> > On Thu, Sep 9, 2021 at 12:51 PM Nicolas George <george@nsup.org> wrote:
> >
> >> Lynne (12021-09-09):
> >> > That's fine, no argument about this. We talked about this on IRC
> >> > and I agreed that duration fields on frames make no sense and
> >> > are difficult to guarantee.
> >>
> >> Thank you for mentioning this.
> >>
> >> Not everybody can spend time synchronously on IRC.
> >>
> >
> > Such people should then leave the project.
> >
> >
> >> > One thing though, "Speaking as the maintainer of libavfilter"?
> >> > Please don't discredit Paul, he maintains and cares about lavfi
> >> > just as much as you do.
> >>
> >> I am not discrediting Paul. He maintains a lot of useful filters, and he
> >> is the expert on intra-frame threading, but the global framework,
> >> including negotiation and scheduling, is really my thing ever since
> >> Stefano stopped working on the code.
> >>
> >
> > I read this as a personal attack.
> >
> > Tell me how one is supposed to fix the last frame duration when muxing &
> > encoding?
> >
>
> A duration field that's set to the previous frame's duration would be
> a good start. It'll work with CFR and most VFR video. If it's a static
> image, the duration could be set to infinity or NOPTS to flag this.
>
> I don't mind a patch which introduces that, although we already have
> a pkt_duration field, which may fit. But setting it as above may be
> an API-breaking change, so a dedicated field would be better.
>


Look at the gif bug report that has been open for years.

Looks like you are never interested in fixing bugs.

The duration of the last frame of a gif is simply lost.


Nicolas George Sept. 9, 2021, 7:23 p.m. UTC | #19
Paul B Mahol (12021-09-09):
> Such people should then leave the project.

I will choose to ignore that useless remark.

> I read this as personal attack.

This was not my intent; I am sorry you took it that way. If you would
please explain to me how you read a personal attack in a sentence that
affirms our different areas of expertise, I will try to make my intent
clearer in the future.

> Tell me how one is supposed to fix the last frame duration when muxing &
> encoding?

It seems to me the solution I implemented for libavfilter works for
these cases as well: include a final timestamp when flushing the encoder
or muxer.

Regards,
Lynne Sept. 9, 2021, 7:35 p.m. UTC | #20
9 Sept 2021, 21:15 by george@nsup.org:

> Lynne (12021-09-09):
>
>> No, you don't, there's nothing special about this!
>>
>
> There is no need for something special; it is a universal fact of
> programming that if several redundant pieces of information are supposed
> to be in sync, unless there are strong systems to keep them in sync,
> they will eventually get out of sync because of a code inconsistency.
>
> Avoiding redundancy is one of the most important principles of code
> hygiene.
>
> This is why I oppose this change, and this is why I propose to remove
> the time_base field in AVPacket.
>

It's a necessary piece of information pertinent to the correct
presentation of each frame. Moreover, it simplifies the API,
which new users are still finding difficult to use. Take, for example,
timebase negotiation in lavf, which requires a complicated dance
to perform and is not documented anywhere. And which
timebase field are you supposed to use from lavf? The global
context's? The stream's? The codec parameters'? This
field eliminates any doubt about which timebase to use.
And this isn't about the difference between frames and packets
either; frames can be wrapped as packets too.
Additionally, this will allow us to deal with stream switching and
stream splicing correctly without needing to reinitialize components.

Right now, all the arguments you've given are "it's redundant"
(it isn't, you __need__ a timebase to make sense of any timestamps,
and if a timebase isn't found in the frame but halfway across Jupiter,
it's simply missing), it's complicated (it isn't, it's a 10-line patch,
maximum), it's hard to keep in sync (it isn't, it's a frame field like
any other which will be printed by ffprobe and tested by FATE).
The only honest argument you stated has been an implicit "I don't
like this".
Nicolas George Sept. 9, 2021, 7:55 p.m. UTC | #21
Lynne (12021-09-09):
> It's a necessary piece of information pertinent to the correct
> presentation of each frame. Moreover, it simplifies the API,

That piece of information is already present along with all the other
pieces of information necessary to make sense of a frame.

> which new users are still finding difficult to use. Take, for example,
> timebase negotiation in lavf, which requires a complicated dance
> to perform and is not documented anywhere. And which
> timebase field are you supposed to use from lavf? The global
> context's? The stream's? The codec parameters'? This

This is already documented. Maybe the documentation can be made clearer.

> field eliminates any doubt about which timebase to use.
> And this isn't about the difference between frames and packets
> either; frames can be wrapped as packets too.
> Additionally, this will allow us to deal with stream switching and
> stream splicing correctly without needing to reinitialize components.

This is an interesting remark, but please consider that relying on the
change of a field that used to be constant is a terrible API break.

> Right now, all the arguments you've given are "it's redundant"
> (it isn't, you __need__ a timebase to make sense of any timestamps,
> and if a timebase isn't found in the frame but halfway across Jupiter,
> it's simply missing), it's complicated (it isn't, it's a 10-line patch,
> maximum), it's hard to keep in sync (it isn't, it's a frame field like
> any other which will be printed by ffprobe and tested by FATE).

And yes, that is all the arguments I have, or you can even say it in
singular: argument, because it is one and the same: redundancy is a bad
idea because it is hard to keep in sync.

As for "halfway across Jupiter", stop this ridiculous hyperbole. The
fact is that all frames are relative to a certain context that gives
meaning to the timestamps: an AVStream when it comes from a demuxer or
goes to a muxer, an AVFilterLink in libavfilter, an OutputStream in
fftools, another application-defined data structure for any application.

There is never a frame in isolation, so designing an API as if there
were is a waste of time.

> The only honest argument you stated has been an implicit "I don't
> like this".

Please have the courtesy of not accusing me of dishonesty. This is an
absolute prerequisite for constructive discussion.
Andreas Rheinhardt Sept. 9, 2021, 8:18 p.m. UTC | #22
Lynne:
> 9 Sept 2021, 21:15 by george@nsup.org:
> 
>> Lynne (12021-09-09):
>>
>>> No, you don't, there's nothing special about this!
>>>
>>
>> There is no need for something special, it is a universal fact of
>> programming that if several redundant pieces of information are supposed
>> to be in sync, unless there are strong systems to keep them in sync,
>> they will eventually get out of sync because of a code inconsistency.
>>
>> Avoiding redundancy is one of the most important principles of code
>> hygiene.
>>
>> This is why I oppose this change, and this is why I propose to remove
>> the time_base field in AVPacket.
>>
> 
> It's a necessary piece of information pertinent to the correct
> presenting of each frame. Moreover, it simplifies the API,
> which new users are still finding difficult to use. Like for example
> timebase negotiation in lavf, which requires a complicated dance
> to perform, and is not documented anywhere. And which

What is undocumented?

> timebase field are you supposed to use from lavf? The global
> context's? The stream's? The codecparameter's? This

Global context? You mean AVFormatContext? It does not contain a timebase
at all; the timestamps and durations contained in it are in
microseconds. And AVCodecParameters doesn't have a timebase either. Only
the stream has one.

> field eliminates any source of doubt which timebase to use.
> And this isn't about the difference between frames and packets
> either; frames can be wrapped as packets too.
> Additionally, this will allow us to deal with stream switching and
> stream splicing correctly without needing to reinitialize components.
> 
> Right now, all the arguments you've given are "it's redundant"
> (it isn't, you __need__ a timebase to make sense of any timestamps,
> and if a timebase isn't found in the frame but halfway across Jupiter,
> it's simply missing), it's complicated (it isn't, it's a 10-line patch,
> maximum), it's hard to keep in sync (it isn't, it's a frame field like
> any other which will be printed by ffprobe and tested by FATE).
> The only honest argument you stated has been an implicit "I don't
> like this".
Lynne Sept. 9, 2021, 9:45 p.m. UTC | #23
9 Sept 2021, 22:18 by andreas.rheinhardt@outlook.com:

> Lynne:
>
>> 9 Sept 2021, 21:15 by george@nsup.org:
>>
>>> Lynne (12021-09-09):
>>>
>>>> No, you don't, there's nothing special about this!
>>>>
>>>
>>> There is no need for something special, it is a universal fact of
>>> programming that if several redundant pieces of information are supposed
>>> to be in sync, unless there are strong systems to keep them in sync,
>>> they will eventually get out of sync because of a code inconsistency.
>>>
>>> Avoiding redundancy is one of the most important principles of code
>>> hygiene.
>>>
>>> This is why I oppose this change, and this is why I propose to remove
>>> the time_base field in AVPacket.
>>>
>>
>> It's a necessary piece of information pertinent to the correct
>> presenting of each frame. Moreover, it simplifies the API,
>> which new users are still finding difficult to use. Like for example
>> timebase negotiation in lavf, which requires a complicated dance
>> to perform, and is not documented anywhere. And which
>>
>
> What is undocumented?
>

Muxing's timebase selection. Yes, it's documented. No, it __really__ doesn't help.
I got utterly confused, and other API users have felt the same way about it.
Populate the stream's time_base with your input timebase. Initialize the muxer.
That field then gets overwritten with the timebase lavf wants. Then you
have to rescale all packet timestamps to it.
It's by far the most insane API we have, and the most insane I've ever seen.


>> timebase field are you supposed to use from lavf? The global
>> context's? The stream's? The codecparameter's? This
>>
>
> Global context? You mean AVFormatContext? It does not contain a timebase
> at all; the timestamps and durations contained in it are in
> microseconds. And AVCodecParameters also hasn't a timebase either. Only
> the stream has one.
>

Yes, I forgot about that. Still, chapters have their own timebase, and
packets have their own timebase now, so there's no reason why frames
shouldn't get one too.
Patch

diff --git a/libavutil/frame.c b/libavutil/frame.c
index b0ceaf7145..1922e4e5c2 100644
--- a/libavutil/frame.c
+++ b/libavutil/frame.c
@@ -63,6 +63,7 @@  static void get_frame_defaults(AVFrame *frame)
     frame->pkt_duration        = 0;
     frame->pkt_pos             = -1;
     frame->pkt_size            = -1;
+    frame->time_base           = (AVRational){ 0, 1 };
     frame->key_frame           = 1;
     frame->sample_aspect_ratio = (AVRational){ 0, 1 };
     frame->format              = -1; /* unknown */
@@ -278,6 +279,7 @@  static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy)
     dst->pkt_pos                = src->pkt_pos;
     dst->pkt_size               = src->pkt_size;
     dst->pkt_duration           = src->pkt_duration;
+    dst->time_base              = src->time_base;
     dst->reordered_opaque       = src->reordered_opaque;
     dst->quality                = src->quality;
     dst->best_effort_timestamp  = src->best_effort_timestamp;
diff --git a/libavutil/frame.h b/libavutil/frame.h
index ff2540a20f..93740ee0f7 100644
--- a/libavutil/frame.h
+++ b/libavutil/frame.h
@@ -402,6 +402,12 @@  typedef struct AVFrame {
      */
     int64_t pkt_dts;
 
+    /**
+     * Time base for the timestamps in this frame. May be 0, in which case the
+     * time_base from the frame source should be used.
+     */
+    AVRational time_base;
+
     /**
      * picture number in bitstream order
      */