
[FFmpeg-devel,3/3] avdevice: use av_gettime_relative() for timestamps

Message ID 20210207000423.8948-3-cus@passwd.hu
State New
Headers show
Series [FFmpeg-devel,1/3] avfilter/avf_showcqt: use av_gettime_relative() instead of av_gettime() | expand

Checks

Context Check Description
andriy/x86_make success Make finished
andriy/x86_make_fate success Make fate finished
andriy/PPC64_make success Make finished
andriy/PPC64_make_fate success Make fate finished

Commit Message

Marton Balint Feb. 7, 2021, 12:04 a.m. UTC
av_gettime_relative() uses the monotonic clock and is therefore more suitable as a
timestamp source and for relative time calculations.

Probably fixes ticket #9089.

Signed-off-by: Marton Balint <cus@passwd.hu>
---
 libavdevice/alsa_dec.c        | 2 +-
 libavdevice/bktr.c            | 6 +++---
 libavdevice/fbdev_dec.c       | 4 ++--
 libavdevice/gdigrab.c         | 4 ++--
 libavdevice/jack.c            | 2 +-
 libavdevice/kmsgrab.c         | 4 ++--
 libavdevice/openal-dec.c      | 2 +-
 libavdevice/oss_dec.c         | 2 +-
 libavdevice/pulse_audio_dec.c | 2 +-
 libavdevice/sndio_dec.c       | 2 +-
 libavdevice/xcbgrab.c         | 4 ++--
 11 files changed, 17 insertions(+), 17 deletions(-)

Comments

Nicolas George Feb. 7, 2021, 1:21 p.m. UTC | #1
Marton Balint (12021-02-07):
> av_gettime_relative() uses the monotonic clock and is therefore more suitable as a
> timestamp source and for relative time calculations.
> 
> Probably fixes ticket #9089.

Not ok.

This is a user-visible change of behavior, we cannot do it just like
that.

Furthermore, for devices, timestamps are not only needed to synchronize
between devices within the same computer, they may be needed to
synchronize between devices connected to different computers, or between
devices and other network streams (IP webcams are the obvious example).

The best course of action is to generalize what was done with v4l: make
it an option.

Regards,
Kieran Kunhya Feb. 7, 2021, 3:22 p.m. UTC | #2
On Sun, 7 Feb 2021, 14:21 Nicolas George, <george@nsup.org> wrote:

> Marton Balint (12021-02-07):
> > av_gettime_relative() uses the monotonic clock and is therefore more suitable as a
> > timestamp source and for relative time calculations.
> >
> > Probably fixes ticket #9089.
>
> Not ok.
>
> This is a user-visible change of behavior, we cannot do it just like
> that.
>
> Furthermore, for devices, timestamps are not only needed to synchronize
> between devices within the same computer, they may be needed to
> synchronize between devices connected to different computers, or between
> devices and other network streams (IP webcams are the obvious example).
>

It is wrong to suggest that the clock of an IP webcam has any relation at
all to the PC clock, either absolute or relative.

> The best course of action is to generalize what was done with v4l: make
> it an option.
>

This is, as usual, a major flaw in v4l2. The monotonic clock should always be
used; it is the least-worst clock.

Kieran

Regards,
>
> --
>   Nicolas George
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
Nicolas George Feb. 7, 2021, 3:30 p.m. UTC | #3
Kieran Kunhya (12021-02-07):
> It is wrong to suggest that the clock of an IP webcam has any relation at
> all to the PC clock both absolute or relative.

It is not wrong to let users observe that their particular model of IP
webcam uses Unix timestamps and is correctly synchronized. This is a
common practice, after all.

> This is as usual a major flaw in v4l2. The monotonic clock should always be
> used. It's the least-worst clock.

Care to justify how giving the user an option is worse than not?

Care to explain how you propose to synchronize devices captured on
different computers?

Regards,
Kieran Kunhya Feb. 7, 2021, 3:51 p.m. UTC | #4
Sent from my mobile device

On Sun, 7 Feb 2021, 16:31 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > It is wrong to suggest that the clock of an IP webcam has any relation at
> > all to the PC clock both absolute or relative.
>
> It is not wrong to let users observe that their particular model of IP
> webcam uses Unix timestamps and is correctly synchronized. This is a
> common practice, after all.


This is nonsense. The clock on any device comes from the oscillator onboard
the device, not from any PC clock on the motherboard. These clocks are
almost certainly not the same model and run at different temperatures, so
they drift differently.

> > This is as usual a major flaw in v4l2. The monotonic clock should always be
> > used. It's the least-worst clock.
>
> Care to justify how giving the user an option is worse than not?


Give the user the option not to use the monotonic clock.

> Care to explain how you propose to synchronize devices captured on
> different computers?


This is not possible without high-end capture equipment to synchronise the
clocks on each camera. Everything else is guesswork.

I encourage you to read the section on clocks here:
https://haasn.xyz/posts/2016-12-25-falsehoods-programmers-believe-about-%5Bvideo-stuff%5D.html

Kieran

Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 3:55 p.m. UTC | #5
Kieran Kunhya (12021-02-07):
> Give the user the option not to use the monotonic clock.

So this should be an option. Thank you very much.

> This is not possible without high end capture equipment to synchronise the
> clocks on each camera. Everything else is guesswork.

This is nonsense. A PC wall-time clock synchronized with NTP is plenty
accurate for most practical purposes.

Regards,
Kieran Kunhya Feb. 7, 2021, 4 p.m. UTC | #6

On Sun, 7 Feb 2021, 16:55 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > Give the user the option not to use the monotonic clock.
>
> So this should be an option. Thank you very much.


Yes, the monotonic clock should be the default.

> > This is not possible without high end capture equipment to synchronise the
> > clocks on each camera.
>
> This is nonsense. The accuracy of a PC wall time clock with NTP is
> plenty accurate enough for most practical purposes.
>

You misunderstand greatly. How does the clock rate of NTP magically make
its way to the camera? It doesn't, except in very high-end setups where the
camera's oscillator is disciplined to match an external time source.

Kieran

Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 4:14 p.m. UTC | #7
Kieran Kunhya (12021-02-07):
> Yes, the monotonic clock should be the default.

Changing the default is acceptable to me. Probably after a transition
period.

> You misunderstand greatly. How does the clock rate of NTP magically make
> its way to the camera?

I do not know, I do not care, I am not a driver developer. All I know is
that the kernel can give me a timestamp based on the NTP-synchronized
wall time clock, and that it has better than 1/100 second accuracy,
which is plenty for most use cases.

Regards,
Kieran Kunhya Feb. 7, 2021, 4:22 p.m. UTC | #8
On Sun, 7 Feb 2021, 17:15 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > Yes, the monotonic clock should be the default.
>
> Changing the default is acceptable to me. Probably after a transition
> period.
>
> > You misunderstand greatly. How does the clock rate of NTP magically make
> > its way to the camera?
>
> I do not know, I do not care, I am not a driver developer. All I know is
> that the kernel can give me a timestamp based on the NTP-synchronized
> wall time clock, and that it has better than 1/100 second accuracy,
> which is plenty for most use cases.
>

You do care, because different sources, whether generated by a PC or by
different physical cameras, will drift. E.g. at 25 fps, after one hour you
might get 90000 frames from one and 90002 from another, and of course the
timestamps will drift slowly. Unfortunately you seem to share the same
fundamental misunderstandings as the kernel developers.

Applications like VLC and upipe have sophisticated filtering and drift
compensation to correct all of this.

Kieran

Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 4:28 p.m. UTC | #9
Kieran Kunhya (12021-02-07):
> You do care because different sources generated either by a PC or different
> physical cameras will drift. E.g at 25fps after one hour you might get
> 90000 frames from one and 90002 or whatever frames from another etc. And of
> course the timestamps will drift slowly. Unfortunately you seem to share
> the same fundamental misunderstandings as the kernel developers.

Your reasoning is backwards: different time bases are precisely the
reason we need the option of timestamps with a common reference frame.

With the situation you describe, the application that captures the two
cameras can detect that one is faster than the other and duplicate or
skip frames as necessary. Without a common timestamp, it is not
possible.

Regards,
Kieran Kunhya Feb. 7, 2021, 4:32 p.m. UTC | #10
On Sun, 7 Feb 2021, 17:29 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > You do care because different sources generated either by a PC or different
> > physical cameras will drift. E.g at 25fps after one hour you might get
> > 90000 frames from one and 90002 or whatever frames from another etc. And of
> > course the timestamps will drift slowly. Unfortunately you seem to share
> > the same fundamental misunderstandings as the kernel developers.
>
> Your reasoning is backwards: different time bases is precisely the
> reason we need the option of timestamps with a common reference frame.
>
> With the situation you describe, the application that captures the two
> cameras can detect that one is faster than the other and duplicate or
> skip frames as necessary. Without a common timestamp, it is not
> possible.
>

Yes, and thus you need a monotonic clock as a starting point for proper
filtering. FFmpeg has neither.

Kieran


> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 4:35 p.m. UTC | #11
Kieran Kunhya (12021-02-07):
> Yes, and thus you need a monotonic clock as a starting point for proper
> filtering. FFmpeg has neither.

The kernel provides timestamps that are accurate enough. I do not
understand why you insist on the contrary in the face of facts.

Also, for a properly configured system, the wall time clock is
monotonic.

Regards,
Kieran Kunhya Feb. 7, 2021, 5:02 p.m. UTC | #12

On Sun, 7 Feb 2021, 17:35 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > Yes and thus you need a monotonic clock as a starting point for proper
> > filtering. Ffmpeg has neither.
>
> The kernel provides timestamps that are accurate enough. I do not
> understand why you insist on the contrary in the face of facts.
>
> Also, for a properly configured system, the wall time clock is
> monotonic.
>

This time can jump forwards or backwards; it is trivial to make it do so. The
monotonic clock cannot.

Kieran

Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 5:34 p.m. UTC | #13
Kieran Kunhya (12021-02-07):
> This time can jump forward or backwards. It is trivial to do this.

Not on a correctly configured system, it does not.

> The monotonic clock cannot do that.

The monotonic clock cannot synchronize captures from different computers.

Each has limitations the other has not; this is why we need both as
options. Because YOU do not know if the user's computers are correctly
synchronized, and YOU do not know if the user needs to synchronise between
different computers. But the user knows. Hence the option. For the
user, because the user knows.

Regards,
Kieran Kunhya Feb. 7, 2021, 5:44 p.m. UTC | #14
On Sun, 7 Feb 2021, 18:34 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > This time can jump forward or backwards. It is trivial to do this.
>
> Not on a correctly configured system, it does not.
>

NTP is allowed to step the clock when it wants, for example. It's not
trivial for the user to guarantee that it won't.

> The monotonic clock cannot do that.
>
> The monotonic clock cannot synchronize from different computers.
>
> Each has limitations the other has not, this is why we need both as
> options. Because YOU do not know if the user's computers are correctly
> synchronized, YOU do not know if the user needs to synchronise between
> different computers. But the users knows. Hence the option. For the
> user, because the user knows.
>

Exactly, so by default you must give the user a clock which is guaranteed
not to jump, since you don't know. If they need to synchronise between PCs
using your naive process, they can choose gettimeofday. Hopefully they
understand the consequences of that, unlike you. It's still a flawed
process because of timestamp jitter etc.

Kieran


Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 6:21 p.m. UTC | #15
Kieran Kunhya (12021-02-07):
> NTP is allowed to jump when it wants for example. It's not trivial for the
> user to guarantee this.

In practice, it does not.

> Exactly, so by default you must give the user a clock which is guaranteed
> not to jump as you don't know. If they need to synchronise between PCs
> using your naive process they can choose gettimeofday.

So why are you still arguing this?

> Hopefully they understand the consequences of that unlike you. It's
> still a flawed process because of timestamp jitter etc.

I obviously understand the consequences better than you, since you did
not even realize why the wall time clock was needed.

I think this should be the end of the discussion.

Regards,
Kieran Kunhya Feb. 7, 2021, 7:17 p.m. UTC | #16

On Sun, 7 Feb 2021, 19:21 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > NTP is allowed to jump when it wants for example. It's not trivial for the
> > user to guarantee this.
>
> In practice, it does not.
>

Read the docs where it explains where it can jump.


> > Exactly, so by default you must give the user a clock which is guaranteed
> > not to jump as you don't know. If they need to synchronise between PCs
> > using your naive process they can choose gettimeofday.
>
> So why are you still arguing this?
>
> > Hopefully they understand the consequences of that unlike you. It's
> > still a flawed process because of timestamp jitter etc.
>
> I obviously understand the consequences better than you, since you did
> not even realize why the wall time clock was needed.
>

Your scenario is contrived. It's impossible to sync exactly between
different machines, for all the reasons I explained and you didn't
understand.

I think this should be the end of the discussion.
>

Yes, since you are not interested in reading the NTP docs or understanding
the purpose of a monotonic clock.

Kieran


> Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 7:20 p.m. UTC | #17
Kieran Kunhya (12021-02-07):
> Your scenario is contrived. It's impossible to sync exactly between
> different machines for all the reasons I explained and you didn't
> understand.

I understand very well. What you refuse to understand is that all this
theoretical stuff does not matter. What matters is that in practice it
works, full stop.

Regards,
Kieran Kunhya Feb. 7, 2021, 7:26 p.m. UTC | #18
On Sun, 7 Feb 2021, 20:20 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > Your scenario is contrived. Its impossible to sync exactly between
> > different machines for all the reasons I explained and you didn't
> > understand.
>
> I understand very well. What you refuse to understand is that all this
> theoretical stuff does not matter. What matters is that in practice it
> works, full stop.
>


It doesn't work in practice.
It's why every Android phone captures video at 29.970 fps ± jitter instead
of the correct 30000/1001.

Kieran


> Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 7:27 p.m. UTC | #19
Kieran Kunhya (12021-02-07):
> It doesn't work in practice.
> It's why every Android phone captures video at 29.970fps ± jitter instead
> of the correct 30000/1001

Good enough for most uses. Let the user decide.

Regards,
Kieran Kunhya Feb. 7, 2021, 7:30 p.m. UTC | #20

On Sun, 7 Feb 2021, 20:27 Nicolas George, <george@nsup.org> wrote:

> Kieran Kunhya (12021-02-07):
> > It doesn't work in practice.
> > It's why every Android phone captures video at 29.970fps ± jitter instead
> > of the correct 30000/1001
>
> Good enough for most uses. Let the user decide.
>

QED that you don't understand the problem whatsoever.

Kieran


> Regards,
>
> --
>   Nicolas George
Nicolas George Feb. 7, 2021, 7:33 p.m. UTC | #21
Kieran Kunhya (12021-02-07):
> QED that you don't understand the problem whatsoever.

This is the third time you've been rude today.

I am sorry, but it's you who do not understand what people actually
need. Most use cases do not need better precision than a few hundredths
of a second, if that.

Patch

diff --git a/libavdevice/alsa_dec.c b/libavdevice/alsa_dec.c
index 36494e921c..04875af590 100644
--- a/libavdevice/alsa_dec.c
+++ b/libavdevice/alsa_dec.c
@@ -125,7 +125,7 @@  static int audio_read_packet(AVFormatContext *s1, AVPacket *pkt)
         ff_timefilter_reset(s->timefilter);
     }
 
-    dts = av_gettime();
+    dts = av_gettime_relative();
     snd_pcm_delay(s->h, &delay);
     dts -= av_rescale(delay + res, 1000000, s->sample_rate);
     pkt->pts = ff_timefilter_update(s->timefilter, dts, s->last_period);
diff --git a/libavdevice/bktr.c b/libavdevice/bktr.c
index 2601adbba8..8e66fd64d6 100644
--- a/libavdevice/bktr.c
+++ b/libavdevice/bktr.c
@@ -225,14 +225,14 @@  static void bktr_getframe(uint64_t per_frame)
 {
     uint64_t curtime;
 
-    curtime = av_gettime();
+    curtime = av_gettime_relative();
     if (!last_frame_time
         || ((last_frame_time + per_frame) > curtime)) {
         if (!usleep(last_frame_time + per_frame + per_frame / 8 - curtime)) {
             if (!nsignals)
                 av_log(NULL, AV_LOG_INFO,
                        "SLEPT NO signals - %d microseconds late\n",
-                       (int)(av_gettime() - last_frame_time - per_frame));
+                       (int)(av_gettime_relative() - last_frame_time - per_frame));
         }
     }
     nsignals = 0;
@@ -250,7 +250,7 @@  static int grab_read_packet(AVFormatContext *s1, AVPacket *pkt)
 
     bktr_getframe(s->per_frame);
 
-    pkt->pts = av_gettime();
+    pkt->pts = av_gettime_relative();
     memcpy(pkt->data, video_buf, video_buf_size);
 
     return video_buf_size;
diff --git a/libavdevice/fbdev_dec.c b/libavdevice/fbdev_dec.c
index 6a51816868..a25f0ff7c1 100644
--- a/libavdevice/fbdev_dec.c
+++ b/libavdevice/fbdev_dec.c
@@ -157,11 +157,11 @@  static int fbdev_read_packet(AVFormatContext *avctx, AVPacket *pkt)
     uint8_t *pin, *pout;
 
     if (fbdev->time_frame == AV_NOPTS_VALUE)
-        fbdev->time_frame = av_gettime();
+        fbdev->time_frame = av_gettime_relative();
 
     /* wait based on the frame rate */
     while (1) {
-        curtime = av_gettime();
+        curtime = av_gettime_relative();
         delay = fbdev->time_frame - curtime;
         av_log(avctx, AV_LOG_TRACE,
                 "time_frame:%"PRId64" curtime:%"PRId64" delay:%"PRId64"\n",
diff --git a/libavdevice/gdigrab.c b/libavdevice/gdigrab.c
index f4444406fa..814a1914a6 100644
--- a/libavdevice/gdigrab.c
+++ b/libavdevice/gdigrab.c
@@ -394,7 +394,7 @@  gdigrab_read_header(AVFormatContext *s1)
     gdigrab->header_size = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) +
                            (bpp <= 8 ? (1 << bpp) : 0) * sizeof(RGBQUAD) /* palette size */;
     gdigrab->time_base   = av_inv_q(gdigrab->framerate);
-    gdigrab->time_frame  = av_gettime() / av_q2d(gdigrab->time_base);
+    gdigrab->time_frame  = av_gettime_relative() / av_q2d(gdigrab->time_base);
 
     gdigrab->hwnd       = hwnd;
     gdigrab->source_hdc = source_hdc;
@@ -551,7 +551,7 @@  static int gdigrab_read_packet(AVFormatContext *s1, AVPacket *pkt)
 
     /* wait based on the frame rate */
     for (;;) {
-        curtime = av_gettime();
+        curtime = av_gettime_relative();
         delay = time_frame * av_q2d(time_base) - curtime;
         if (delay <= 0) {
             if (delay < INT64_C(-1000000) * av_q2d(time_base)) {
diff --git a/libavdevice/jack.c b/libavdevice/jack.c
index 34f1c6de97..7b973b8478 100644
--- a/libavdevice/jack.c
+++ b/libavdevice/jack.c
@@ -77,7 +77,7 @@  static int process_callback(jack_nframes_t nframes, void *arg)
 
     /* Retrieve filtered cycle time */
     cycle_time = ff_timefilter_update(self->timefilter,
-                                      av_gettime() / 1000000.0 - (double) cycle_delay / self->sample_rate,
+                                      av_gettime_relative() / 1000000.0 - (double) cycle_delay / self->sample_rate,
                                       self->buffer_size);
 
     /* Check if an empty packet is available, and if there's enough space to send it back once filled */
diff --git a/libavdevice/kmsgrab.c b/libavdevice/kmsgrab.c
index 94e32b9cae..58ad9b6de3 100644
--- a/libavdevice/kmsgrab.c
+++ b/libavdevice/kmsgrab.c
@@ -268,7 +268,7 @@  static int kmsgrab_read_packet(AVFormatContext *avctx, AVPacket *pkt)
     int64_t now;
     int err;
 
-    now = av_gettime();
+    now = av_gettime_relative();
     if (ctx->frame_last) {
         int64_t delay;
         while (1) {
@@ -276,7 +276,7 @@  static int kmsgrab_read_packet(AVFormatContext *avctx, AVPacket *pkt)
             if (delay <= 0)
                 break;
             av_usleep(delay);
-            now = av_gettime();
+            now = av_gettime_relative();
         }
     }
     ctx->frame_last = now;
diff --git a/libavdevice/openal-dec.c b/libavdevice/openal-dec.c
index 57de665eb6..72320b2c6f 100644
--- a/libavdevice/openal-dec.c
+++ b/libavdevice/openal-dec.c
@@ -201,7 +201,7 @@  static int read_packet(AVFormatContext* ctx, AVPacket *pkt)
     /* Create a packet of appropriate size */
     if ((error = av_new_packet(pkt, nb_samples*ad->sample_step)) < 0)
         goto fail;
-    pkt->pts = av_gettime();
+    pkt->pts = av_gettime_relative();
 
     /* Fill the packet with the available samples */
     alcCaptureSamples(ad->device, pkt->data, nb_samples);
diff --git a/libavdevice/oss_dec.c b/libavdevice/oss_dec.c
index 13ace7000d..bd8b34d7cd 100644
--- a/libavdevice/oss_dec.c
+++ b/libavdevice/oss_dec.c
@@ -87,7 +87,7 @@  static int audio_read_packet(AVFormatContext *s1, AVPacket *pkt)
     pkt->size = ret;
 
     /* compute pts of the start of the packet */
-    cur_time = av_gettime();
+    cur_time = av_gettime_relative();
     bdelay = ret;
     if (ioctl(s->fd, SNDCTL_DSP_GETISPACE, &abufi) == 0) {
         bdelay += abufi.bytes;
diff --git a/libavdevice/pulse_audio_dec.c b/libavdevice/pulse_audio_dec.c
index 50a3c971ae..932ca4252f 100644
--- a/libavdevice/pulse_audio_dec.c
+++ b/libavdevice/pulse_audio_dec.c
@@ -304,7 +304,7 @@  static int pulse_read_packet(AVFormatContext *s, AVPacket *pkt)
         goto unlock_and_fail;
     }
 
-    dts = av_gettime();
+    dts = av_gettime_relative();
     pa_operation_unref(pa_stream_update_timing_info(pd->stream, NULL, NULL));
 
     if (pa_stream_get_latency(pd->stream, &latency, &negative) >= 0) {
diff --git a/libavdevice/sndio_dec.c b/libavdevice/sndio_dec.c
index ebb485a2c7..d321fa3031 100644
--- a/libavdevice/sndio_dec.c
+++ b/libavdevice/sndio_dec.c
@@ -75,7 +75,7 @@  static int audio_read_packet(AVFormatContext *s1, AVPacket *pkt)
     s->softpos += ret;
 
     /* compute pts of the start of the packet */
-    cur_time = av_gettime();
+    cur_time = av_gettime_relative();
 
     bdelay = ret + s->hwpos - s->softpos;
 
diff --git a/libavdevice/xcbgrab.c b/libavdevice/xcbgrab.c
index 95bdc8ab9d..a8a2a1fd76 100644
--- a/libavdevice/xcbgrab.c
+++ b/libavdevice/xcbgrab.c
@@ -206,7 +206,7 @@  static int64_t wait_frame(AVFormatContext *s, AVPacket *pkt)
     c->time_frame += c->frame_duration;
 
     for (;;) {
-        curtime = av_gettime();
+        curtime = av_gettime_relative();
         delay   = c->time_frame - curtime;
         if (delay <= 0)
             break;
@@ -591,7 +591,7 @@  static int create_stream(AVFormatContext *s)
     c->time_base  = (AVRational){ st->avg_frame_rate.den,
                                   st->avg_frame_rate.num };
     c->frame_duration = av_rescale_q(1, c->time_base, AV_TIME_BASE_Q);
-    c->time_frame = av_gettime();
+    c->time_frame = av_gettime_relative();
 
     ret = pixfmt_from_pixmap_format(s, geo->depth, &st->codecpar->format, &c->bpp);
     free(geo);