Message ID | 20200116002016.4528-4-cus@passwd.hu
---|---
State | New
Series | [FFmpeg-devel,1/4] avformat/udp: add newline after warning
Context | Check | Description
---|---|---
andriy/ffmpeg-patchwork | success | Make fate finished
On Thu, Jan 16, 2020 at 01:20:16AM +0100, Marton Balint wrote:
> It is not supported by every threading implementation, and the only thing we
> gain with it is an immediate shutdown of receiving packets on close and
> avoiding the poll call before reading the data.
>
> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> stream. Back when this was introduced the delay was 1 sec, which was indeed
> noticeable.
>
> And anybody who needs performance-sensitive UDP should not use the fifo buffer
> in the first place, because the kernel can buffer the data much more
> effectively.
>
> Signed-off-by: Marton Balint <cus@passwd.hu>
> ---
>  libavformat/udp.c | 57 +++++++++++++++++++++++++------------------------------
>  1 file changed, 26 insertions(+), 31 deletions(-)

This breaks the build on mingw64:

src/libavformat/udp.c: In function ‘udp_read’:
src/libavformat/udp.c:980:17: error: implicit declaration of function ‘pthread_cond_timedwait’ [-Werror=implicit-function-declaration]
                 int err = pthread_cond_timedwait(&s->cond, &s->mutex, &tv);
                 ^
cc1: some warnings being treated as errors
make: *** [libavformat/udp.o] Error 1
make: *** Waiting for unfinished jobs....

[...]
On Fri, Jan 17, 2020 at 12:30 AM Michael Niedermayer
<michael@niedermayer.cc> wrote:
>
> On Thu, Jan 16, 2020 at 01:20:16AM +0100, Marton Balint wrote:
> > It is not supported by every threading implementation, and the only thing we
> > gain with it is an immediate shutdown of receiving packets on close and
> > avoiding the poll call before reading the data.
> >
> > I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> > stream. Back when this was introduced the delay was 1 sec, which was indeed
> > noticeable.
> >
> > And anybody who needs performance-sensitive UDP should not use the fifo buffer
> > in the first place, because the kernel can buffer the data much more
> > effectively.
> >
> > Signed-off-by: Marton Balint <cus@passwd.hu>
> > ---
> >  libavformat/udp.c | 57 +++++++++++++++++++++++++------------------------------
> >  1 file changed, 26 insertions(+), 31 deletions(-)
>
> This breaks the build on mingw64:
>
> src/libavformat/udp.c: In function ‘udp_read’:
> src/libavformat/udp.c:980:17: error: implicit declaration of function ‘pthread_cond_timedwait’ [-Werror=implicit-function-declaration]
>                  int err = pthread_cond_timedwait(&s->cond, &s->mutex, &tv);
>                  ^

We could add that to our own w32 pthread wrapper. Unfortunately, whoever
designed pthread_cond_timedwait was on drugs: the absolute timespec is
just crazy. Instead of taking a delay in (milli|micro|nano)seconds, it
takes a deadline, so we'll practically have to undo the math the caller
has already done to add its timeout to the current time. But oh well.

- Hendrik
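To make Hendrik's complaint concrete: POSIX requires the caller to convert its
relative timeout into an absolute CLOCK_REALTIME deadline before calling
pthread_cond_timedwait(), and a Win32 wrapper around SleepConditionVariable*()
then has to reverse exactly that arithmetic to get a relative wait back. A
minimal sketch of the caller-side math, assuming the default (realtime)
condition clock; the helper name cond_timedwait_rel and the microsecond
granularity are illustrative, not from the patch:

    #include <errno.h>
    #include <stdint.h>
    #include <pthread.h>
    #include <sys/time.h>

    /* Wait on "cond" for at most "timeout_us" microseconds. */
    static int cond_timedwait_rel(pthread_cond_t *cond, pthread_mutex_t *mutex,
                                  int64_t timeout_us)
    {
        struct timeval now;
        struct timespec abstime;

        gettimeofday(&now, NULL);  /* default condattr clock is CLOCK_REALTIME */
        abstime.tv_sec  = now.tv_sec + timeout_us / 1000000;
        abstime.tv_nsec = (now.tv_usec + timeout_us % 1000000) * 1000;
        if (abstime.tv_nsec >= 1000000000L) {  /* carry overflow into seconds */
            abstime.tv_sec++;
            abstime.tv_nsec -= 1000000000L;
        }
        return pthread_cond_timedwait(cond, mutex, &abstime);  /* 0 or ETIMEDOUT */
    }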
On Fri, 17 Jan 2020 at 18:44, Hendrik Leppkes <h.leppkes@gmail.com> wrote:
> On Fri, Jan 17, 2020 at 12:30 AM Michael Niedermayer
> <michael@niedermayer.cc> wrote:
> >
> > On Thu, Jan 16, 2020 at 01:20:16AM +0100, Marton Balint wrote:
> > > It is not supported by every threading implementation, and the only thing we
> > > gain with it is an immediate shutdown of receiving packets on close and
> > > avoiding the poll call before reading the data.
> > >
> > > I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> > > stream. Back when this was introduced the delay was 1 sec, which was indeed
> > > noticeable.
> > >
> > > And anybody who needs performance-sensitive UDP should not use the fifo buffer
> > > in the first place, because the kernel can buffer the data much more
> > > effectively.
> > >
> > > Signed-off-by: Marton Balint <cus@passwd.hu>
> > > ---
> > >  libavformat/udp.c | 57 +++++++++++++++++++++++++------------------------------
> > >  1 file changed, 26 insertions(+), 31 deletions(-)
> >
> > This breaks the build on mingw64:
> >
> > src/libavformat/udp.c: In function ‘udp_read’:
> > src/libavformat/udp.c:980:17: error: implicit declaration of function ‘pthread_cond_timedwait’ [-Werror=implicit-function-declaration]
> >                  int err = pthread_cond_timedwait(&s->cond, &s->mutex, &tv);
> >                  ^
>
> We could add that to our own w32 pthread wrapper. Unfortunately, whoever
> designed pthread_cond_timedwait was on drugs: the absolute timespec is
> just crazy. Instead of taking a delay in (milli|micro|nano)seconds, it
> takes a deadline, so we'll practically have to undo the math the caller
> has already done to add its timeout to the current time. But oh well.
>
> - Hendrik

I had an old patch for adding a pthread_cond_timedwait wrapper for win32.
I haven't checked if it still applies, but it's attached if it is useful.
On 1/17/2020 4:51 AM, Matt Oliver wrote:
> On Fri, 17 Jan 2020 at 18:44, Hendrik Leppkes <h.leppkes@gmail.com> wrote:
>> On Fri, Jan 17, 2020 at 12:30 AM Michael Niedermayer
>> <michael@niedermayer.cc> wrote:
>>>
>>> On Thu, Jan 16, 2020 at 01:20:16AM +0100, Marton Balint wrote:
>>>> It is not supported by every threading implementation, and the only thing we
>>>> gain with it is an immediate shutdown of receiving packets on close and
>>>> avoiding the poll call before reading the data.
>>>>
>>>> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
>>>> stream. Back when this was introduced the delay was 1 sec, which was indeed
>>>> noticeable.
>>>>
>>>> And anybody who needs performance-sensitive UDP should not use the fifo buffer
>>>> in the first place, because the kernel can buffer the data much more
>>>> effectively.
>>>>
>>>> Signed-off-by: Marton Balint <cus@passwd.hu>
>>>> ---
>>>>  libavformat/udp.c | 57 +++++++++++++++++++++++++------------------------------
>>>>  1 file changed, 26 insertions(+), 31 deletions(-)
>>>
>>> This breaks the build on mingw64:
>>>
>>> src/libavformat/udp.c: In function ‘udp_read’:
>>> src/libavformat/udp.c:980:17: error: implicit declaration of function ‘pthread_cond_timedwait’ [-Werror=implicit-function-declaration]
>>>                  int err = pthread_cond_timedwait(&s->cond, &s->mutex, &tv);
>>>                  ^
>>
>> We could add that to our own w32 pthread wrapper. Unfortunately, whoever
>> designed pthread_cond_timedwait was on drugs: the absolute timespec is
>> just crazy. Instead of taking a delay in (milli|micro|nano)seconds, it
>> takes a deadline, so we'll practically have to undo the math the caller
>> has already done to add its timeout to the current time. But oh well.
>>
>> - Hendrik
>
> I had an old patch for adding a pthread_cond_timedwait wrapper for win32.
> I haven't checked if it still applies, but it's attached if it is useful.

[...]

> diff --git a/compat/w32pthreads.h b/compat/w32pthreads.h
> index 4ac2a99..abd54b2 100644
> --- a/compat/w32pthreads.h
> +++ b/compat/w32pthreads.h
> @@ -38,6 +38,7 @@
>  #define WIN32_LEAN_AND_MEAN
>  #include <windows.h>
>  #include <process.h>
> +#include <time.h>
>
>  #if _WIN32_WINNT < 0x0600 && defined(__MINGW32__)
>  #undef MemoryBarrier
> @@ -48,6 +49,7 @@
>  #include "libavutil/common.h"
>  #include "libavutil/internal.h"
>  #include "libavutil/mem.h"
> +#include "libavutil/time.h"
>
>  typedef struct pthread_t {
>      void *handle;
> @@ -171,6 +173,17 @@ static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex
>      return 0;
>  }
>
> +static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
> +                                         const struct timespec *abstime)
> +{
> +    int64_t abs_milli = abstime->tv_nsec * 1000LL + abstime->tv_nsec / 1.0e6;
> +    DWORD t = FFMAX(abs_milli - av_gettime(), 0LL);
> +
> +    if (SleepConditionVariableCS(cond, mutex, t) == WAIT_TIMEOUT)

Needs to use SleepConditionVariableSRW() instead, and when it fails it
needs a call to GetLastError() to convert ERROR_TIMEOUT to ETIMEDOUT, or
return EPERM or EINVAL otherwise (no point converting all error codes).

> +        return ETIMEDOUT;
> +    return 0;
> +}
> +
>  static inline int pthread_cond_signal(pthread_cond_t *cond)
>  {
>      WakeConditionVariable(cond);
> @@ -367,6 +380,44 @@ static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mu
>      return pthread_mutex_lock(mutex);
>  }
>
> +static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
> +                                         const struct timespec *abstime)
> +{

This one isn't needed anymore. We don't support Windows <= NT5 anymore.

No comment about the os2threads implementation.
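Putting the review comments together, the corrected wrapper might look like the
sketch below. It assumes pthread_mutex_t maps to SRWLOCK and pthread_cond_t to
CONDITION_VARIABLE (the Vista+ primitives compat/w32pthreads.h is built on),
computes the deadline from tv_sec rather than reading tv_nsec twice (which
looks like a typo in the posted patch), and compares milliseconds against
av_gettime()'s microseconds divided down to milliseconds:

    static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
                                             const struct timespec *abstime)
    {
        /* Undo the caller's math: absolute realtime deadline -> relative ms. */
        int64_t abs_ms = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000;
        DWORD t = (DWORD)FFMAX(abs_ms - av_gettime() / 1000, 0LL);

        if (!SleepConditionVariableSRW(cond, mutex, t, 0)) {
            DWORD err = GetLastError();
            return (err == ERROR_TIMEOUT) ? ETIMEDOUT : EINVAL;
        }
        return 0;
    }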
On Fri, 17 Jan 2020, Michael Niedermayer wrote:

> On Thu, Jan 16, 2020 at 01:20:16AM +0100, Marton Balint wrote:
>> It is not supported by every threading implementation, and the only thing we
>> gain with it is an immediate shutdown of receiving packets on close and
>> avoiding the poll call before reading the data.
>>
>> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
>> stream. Back when this was introduced the delay was 1 sec, which was indeed
>> noticeable.
>>
>> And anybody who needs performance-sensitive UDP should not use the fifo buffer
>> in the first place, because the kernel can buffer the data much more
>> effectively.

This patch also fixes ticket #5717, by the way.

>> Signed-off-by: Marton Balint <cus@passwd.hu>
>> ---
>>  libavformat/udp.c | 57 +++++++++++++++++++++++++------------------------------
>>  1 file changed, 26 insertions(+), 31 deletions(-)
>
> This breaks the build on mingw64:
>
> src/libavformat/udp.c: In function ‘udp_read’:
> src/libavformat/udp.c:980:17: error: implicit declaration of function ‘pthread_cond_timedwait’ [-Werror=implicit-function-declaration]
>                  int err = pthread_cond_timedwait(&s->cond, &s->mutex, &tv);
>                  ^
> cc1: some warnings being treated as errors
> make: *** [libavformat/udp.o] Error 1
> make: *** Waiting for unfinished jobs....

With the pthread_cond_timedwait compat patch applied, this should no
longer be an issue.

Regards,
Marton
On Tue, 21 Jan 2020 at 22:56, Marton Balint <cus@passwd.hu> wrote:
>
> On Fri, 17 Jan 2020, Michael Niedermayer wrote:
>
> > On Thu, Jan 16, 2020 at 01:20:16AM +0100, Marton Balint wrote:
> >> It is not supported by every threading implementation, and the only thing we
> >> gain with it is an immediate shutdown of receiving packets on close and
> >> avoiding the poll call before reading the data.
> >>
> >> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> >> stream. Back when this was introduced the delay was 1 sec, which was indeed
> >> noticeable.
> >>
> >> And anybody who needs performance-sensitive UDP should not use the fifo buffer
> >> in the first place, because the kernel can buffer the data much more
> >> effectively.
>
> This patch also fixes ticket #5717, by the way.

Please mention the ticket number in the commit message.

Carl Eugen
Marton Balint (12020-01-16):
> It is not supported by every threading implementation, and the only thing we
> gain with it is an immediate shutdown of receiving packets on close and
> avoiding the poll call before reading the data.
>
> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> stream. Back when this was introduced the delay was 1 sec, which was indeed
> noticeable.

I don't like this. 0.1 s is less noticeable than 1 s for interactive
human users, but for applications it may be way too much.

We really should start thinking about a better solution to handle I/O.
Unfortunately, seeing the cold-feet reaction to something as benign as
AVWriter, I am not confident this project has what it takes to go in the
right direction.

Regards,
On Wed, 22 Jan 2020, Nicolas George wrote:

> Marton Balint (12020-01-16):
>> It is not supported by every threading implementation, and the only thing we
>> gain with it is an immediate shutdown of receiving packets on close and
>> avoiding the poll call before reading the data.
>>
>> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
>> stream. Back when this was introduced the delay was 1 sec, which was indeed
>> noticeable.
>
> I don't like this. 0.1 s is less noticeable than 1 s for interactive
> human users, but for applications it may be way too much.

pthread_cancel is not implemented properly on Win32, and it is generally
not considered good practice anyway. A 0.1 sec delay is still the lesser
evil compared to a lockup, as shown in ticket #5717.

An alternate approach might be to call shutdown() from the main thread and
hope that this interrupts the recvfrom() call in the receiver thread. It
tends to work on Linux; I am not sure how other platforms would react.
What do you think?

Thanks,
Marton
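For reference, the shutdown() idea amounts to the sketch below; the helper
name is hypothetical, and whether a blocked recvfrom() actually returns is
exactly the platform question raised here:

    #include <sys/socket.h>

    /* Called from the closing thread while the receiver thread is blocked
     * in recvfrom() on udp_fd. On Linux this tends to make the pending
     * receive return immediately so the thread can notice the close
     * request and exit; other platforms behave differently (see the
     * replies below). */
    static void udp_interrupt_receiver(int udp_fd)
    {
        shutdown(udp_fd, SHUT_RD);  /* disallow further receives */
    }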
On Wed, Jan 22, 2020 at 10:12 PM Marton Balint <cus@passwd.hu> wrote:
>
> On Wed, 22 Jan 2020, Nicolas George wrote:
>
> > Marton Balint (12020-01-16):
> >> It is not supported by every threading implementation, and the only thing we
> >> gain with it is an immediate shutdown of receiving packets on close and
> >> avoiding the poll call before reading the data.
> >>
> >> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> >> stream. Back when this was introduced the delay was 1 sec, which was indeed
> >> noticeable.
> >
> > I don't like this. 0.1 s is less noticeable than 1 s for interactive
> > human users, but for applications it may be way too much.
>
> pthread_cancel is not implemented properly on Win32, and it is generally
> not considered good practice anyway. A 0.1 sec delay is still the lesser
> evil compared to a lockup, as shown in ticket #5717.
>
> An alternate approach might be to call shutdown() from the main thread and
> hope that this interrupts the recvfrom() call in the receiver thread. It
> tends to work on Linux; I am not sure how other platforms would react.
> What do you think?

This is how it works without practical delay on Windows right now, with
pthread_cancel as a no-op: calling closesocket() from the main thread
unblocks recvfrom() calls with an error return, allowing the thread to
close immediately. Not sure how reliable this is on other systems, but on
Win32 it is specifically documented to unblock waiting worker threads.

- Hendrik
On Thu, 23 Jan 2020 at 08:12, Marton Balint <cus@passwd.hu> wrote:
>
> On Wed, 22 Jan 2020, Nicolas George wrote:
>
> > Marton Balint (12020-01-16):
> >> It is not supported by every threading implementation, and the only thing we
> >> gain with it is an immediate shutdown of receiving packets on close and
> >> avoiding the poll call before reading the data.
> >>
> >> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
> >> stream. Back when this was introduced the delay was 1 sec, which was indeed
> >> noticeable.
> >
> > I don't like this. 0.1 s is less noticeable than 1 s for interactive
> > human users, but for applications it may be way too much.
>
> pthread_cancel is not implemented properly on Win32, and it is generally
> not considered good practice anyway. A 0.1 sec delay is still the lesser
> evil compared to a lockup, as shown in ticket #5717.
>
> An alternate approach might be to call shutdown() from the main thread and
> hope that this interrupts the recvfrom() call in the receiver thread. It
> tends to work on Linux; I am not sure how other platforms would react.
> What do you think?

I actually had an old patch for that too. On Win32 the shutdown function
will cause any existing recv calls in other threads to instantly return
with an error code. I haven't tested the patch recently with current
ffmpeg master, but it should serve as a proof of concept.
On Thu, 23 Jan 2020, Matt Oliver wrote:

> On Thu, 23 Jan 2020 at 08:12, Marton Balint <cus@passwd.hu> wrote:
>>
>> On Wed, 22 Jan 2020, Nicolas George wrote:
>>
>>> Marton Balint (12020-01-16):
>>>> It is not supported by every threading implementation, and the only thing we
>>>> gain with it is an immediate shutdown of receiving packets on close and
>>>> avoiding the poll call before reading the data.
>>>>
>>>> I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
>>>> stream. Back when this was introduced the delay was 1 sec, which was indeed
>>>> noticeable.
>>>
>>> I don't like this. 0.1 s is less noticeable than 1 s for interactive
>>> human users, but for applications it may be way too much.
>>
>> pthread_cancel is not implemented properly on Win32, and it is generally
>> not considered good practice anyway. A 0.1 sec delay is still the lesser
>> evil compared to a lockup, as shown in ticket #5717.
>>
>> An alternate approach might be to call shutdown() from the main thread and
>> hope that this interrupts the recvfrom() call in the receiver thread. It
>> tends to work on Linux; I am not sure how other platforms would react.
>> What do you think?
>
> I actually had an old patch for that too. On Win32 the shutdown function
> will cause any existing recv calls in other threads to instantly return
> with an error code.

I tested this on Windows 7/10 and it does not seem to work: closesocket()
indeed aborts the pending recv() call, but shutdown() does not.
CancelIoEx() seems to work, though.

Regards,
Marton
Marton Balint (12020-01-24):
> I tested this on Windows 7/10 and it does not seem to work: closesocket()
> indeed aborts the pending recv() call, but shutdown() does not.
> CancelIoEx() seems to work, though.

Using one of these to implement a pthread_cancel() that works for our
situation seems like the best option, then, does it not?

Regards,
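A sketch of what such a Win32 cancellation helper could look like; the
function name is hypothetical, but CancelIoEx() is a documented Win32 call
that takes the SOCKET as a HANDLE, and a NULL OVERLAPPED pointer cancels all
I/O pending on that handle regardless of which thread issued it:

    #include <winsock2.h>
    #include <windows.h>

    /* Abort a recvfrom() blocked in the receiver thread; the cancelled
     * call fails (typically with WSA_OPERATION_ABORTED), letting the
     * thread observe the close request and exit. */
    static int udp_cancel_pending_io(SOCKET fd)
    {
        if (!CancelIoEx((HANDLE)fd, NULL)) {
            /* ERROR_NOT_FOUND just means no I/O was in flight. */
            return (GetLastError() == ERROR_NOT_FOUND) ? 0 : -1;
        }
        return 0;
    }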
diff --git a/libavformat/udp.c b/libavformat/udp.c
index 11af9b2500..75fa8c5e31 100644
--- a/libavformat/udp.c
+++ b/libavformat/udp.c
@@ -61,8 +61,8 @@
 #define IPPROTO_UDPLITE 136
 #endif
 
-#if HAVE_PTHREAD_CANCEL
-#include <pthread.h>
+#if HAVE_THREADS
+#include "libavutil/thread.h"
 #endif
 
 #ifndef IPV6_ADD_MEMBERSHIP
@@ -98,7 +98,7 @@ typedef struct UDPContext {
     int64_t bitrate;   /* number of bits to send per second */
     int64_t burst_bits;
     int close_req;
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
     pthread_t circular_buffer_thread;
     pthread_mutex_t mutex;
     pthread_cond_t cond;
@@ -454,32 +454,34 @@ static int udp_get_file_handle(URLContext *h)
     return s->udp_fd;
 }
 
-#if HAVE_PTHREAD_CANCEL
-static void *circular_buffer_task_rx( void *_URLContext)
+#if HAVE_THREADS
+static void *circular_buffer_task_rx(void *_URLContext)
 {
     URLContext *h = _URLContext;
     UDPContext *s = h->priv_data;
-    int old_cancelstate;
 
-    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old_cancelstate);
     pthread_mutex_lock(&s->mutex);
     if (ff_socket_nonblock(s->udp_fd, 0) < 0) {
         av_log(h, AV_LOG_ERROR, "Failed to set blocking mode");
         s->circular_buffer_error = AVERROR(EIO);
         goto end;
     }
-    while(1) {
-        int len;
+    while (!s->close_req) {
+        int ret, len;
         struct sockaddr_storage addr;
         socklen_t addr_len = sizeof(addr);
 
         pthread_mutex_unlock(&s->mutex);
-        /* Blocking operations are always cancellation points;
-           see "General Information" / "Thread Cancelation Overview"
-           in Single Unix. */
-        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &old_cancelstate);
+        ret = ff_network_wait_fd(s->udp_fd, 0);
+        if (ret < 0) {
+            pthread_mutex_lock(&s->mutex);
+            if (ret != AVERROR(EAGAIN) && ret != AVERROR(EINTR)) {
+                s->circular_buffer_error = ret;
+                goto end;
+            }
+            continue;
+        }
         len = recvfrom(s->udp_fd, s->tmp+4, sizeof(s->tmp)-4, 0, (struct sockaddr *)&addr, &addr_len);
-        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old_cancelstate);
         pthread_mutex_lock(&s->mutex);
         if (len < 0) {
             if (ff_neterrno() != AVERROR(EAGAIN) && ff_neterrno() != AVERROR(EINTR)) {
@@ -520,14 +522,12 @@ static void *circular_buffer_task_tx( void *_URLContext)
 {
     URLContext *h = _URLContext;
     UDPContext *s = h->priv_data;
-    int old_cancelstate;
     int64_t target_timestamp = av_gettime_relative();
     int64_t start_timestamp = av_gettime_relative();
     int64_t sent_bits = 0;
     int64_t burst_interval = s->bitrate ? (s->burst_bits * 1000000 / s->bitrate) : 0;
     int64_t max_delay = s->bitrate ?  ((int64_t)h->max_packet_size * 8 * 1000000 / s->bitrate + 1) : 0;
 
-    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old_cancelstate);
     pthread_mutex_lock(&s->mutex);
 
     if (ff_socket_nonblock(s->udp_fd, 0) < 0) {
@@ -562,7 +562,6 @@ static void *circular_buffer_task_tx( void *_URLContext)
         av_fifo_generic_read(s->fifo, s->tmp, len, NULL);
 
         pthread_mutex_unlock(&s->mutex);
-        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &old_cancelstate);
 
         if (s->bitrate) {
             timestamp = av_gettime_relative();
@@ -608,7 +607,6 @@ static void *circular_buffer_task_tx( void *_URLContext)
             }
         }
 
-        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old_cancelstate);
         pthread_mutex_lock(&s->mutex);
     }
 
@@ -667,7 +665,7 @@ static int udp_open(URLContext *h, const char *uri, int flags)
             /* assume if no digits were found it is a request to enable it */
             if (buf == endptr)
                 s->overrun_nonfatal = 1;
-            if (!HAVE_PTHREAD_CANCEL)
+            if (!HAVE_THREADS)
                 av_log(h, AV_LOG_WARNING,
                        "'overrun_nonfatal' option was set but it is not supported "
                        "on this build (pthread support is required)\n");
@@ -695,14 +693,14 @@ static int udp_open(URLContext *h, const char *uri, int flags)
         }
         if (av_find_info_tag(buf, sizeof(buf), "fifo_size", p)) {
             s->circular_buffer_size = strtol(buf, NULL, 10);
-            if (!HAVE_PTHREAD_CANCEL)
+            if (!HAVE_THREADS)
                 av_log(h, AV_LOG_WARNING,
                        "'circular_buffer_size' option was set but it is not supported "
                        "on this build (pthread support is required)\n");
         }
         if (av_find_info_tag(buf, sizeof(buf), "bitrate", p)) {
             s->bitrate = strtoll(buf, NULL, 10);
-            if (!HAVE_PTHREAD_CANCEL)
+            if (!HAVE_THREADS)
                 av_log(h, AV_LOG_WARNING,
                        "'bitrate' option was set but it is not supported "
                        "on this build (pthread support is required)\n");
@@ -877,7 +875,7 @@ static int udp_open(URLContext *h, const char *uri, int flags)
 
     s->udp_fd = udp_fd;
 
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
     /*
       Create thread in case of:
       1. Input and circular_buffer_size is set
@@ -914,7 +912,7 @@ static int udp_open(URLContext *h, const char *uri, int flags)
 #endif
 
     return 0;
 
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
  thread_fail:
     pthread_cond_destroy(&s->cond);
  cond_fail:
@@ -944,7 +942,7 @@ static int udp_read(URLContext *h, uint8_t *buf, int size)
     int ret;
     struct sockaddr_storage addr;
     socklen_t addr_len = sizeof(addr);
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
     int avail, nonblock = h->flags & AVIO_FLAG_NONBLOCK;
 
     if (s->fifo) {
@@ -1008,7 +1006,7 @@ static int udp_write(URLContext *h, const uint8_t *buf, int size)
     UDPContext *s = h->priv_data;
     int ret;
 
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
     if (s->fifo) {
         uint8_t tmp[4];
 
@@ -1057,9 +1055,9 @@ static int udp_close(URLContext *h)
 {
     UDPContext *s = h->priv_data;
 
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
     // Request close once writing is finished
-    if (s->thread_started && !(h->flags & AVIO_FLAG_READ)) {
+    if (s->thread_started) {
         pthread_mutex_lock(&s->mutex);
         s->close_req = 1;
         pthread_cond_signal(&s->cond);
@@ -1069,12 +1067,9 @@ static int udp_close(URLContext *h)
     if (s->is_multicast && (h->flags & AVIO_FLAG_READ))
         udp_leave_multicast_group(s->udp_fd, (struct sockaddr *)&s->dest_addr,(struct sockaddr *)&s->local_addr_storage);
 
-#if HAVE_PTHREAD_CANCEL
+#if HAVE_THREADS
     if (s->thread_started) {
         int ret;
-        // Cancel only read, as write has been signaled as success to the user
-        if (h->flags & AVIO_FLAG_READ)
-            pthread_cancel(s->circular_buffer_thread);
         ret = pthread_join(s->circular_buffer_thread, NULL);
         if (ret != 0)
             av_log(h, AV_LOG_ERROR, "pthread_join(): %s\n", strerror(ret));
It is not supported by every threading implementation, and the only thing we
gain with it is an immediate shutdown of receiving packets on close and
avoiding the poll call before reading the data.

I don't think it is a big issue if it takes 0.1 sec of delay to close a UDP
stream. Back when this was introduced the delay was 1 sec, which was indeed
noticeable.

And anybody who needs performance-sensitive UDP should not use the fifo buffer
in the first place, because the kernel can buffer the data much more
effectively.

Fixes ticket #5717.

Signed-off-by: Marton Balint <cus@passwd.hu>
---
 libavformat/udp.c | 57 +++++++++++++++++++++++++------------------------------
 1 file changed, 26 insertions(+), 31 deletions(-)
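The reader side that replaces the cancellation point is the code the mingw64
error above tripped over: instead of being cancelled, udp_read() blocks on the
condition variable for at most 100 ms at a time and then re-checks the fifo
and the close state, which is where the commit message's 0.1 sec worst-case
close delay comes from. A rough sketch of that wait (field names follow the
patch; the surrounding udp_read() loop is abbreviated):

    /* Inside udp_read(), with s->mutex held: wake up at least every 0.1 s
     * so a pending close is never stalled longer than that. */
    int64_t t = av_gettime() + 100000;
    struct timespec tv = { .tv_sec  = t / 1000000,
                           .tv_nsec = (t % 1000000) * 1000 };
    int err = pthread_cond_timedwait(&s->cond, &s->mutex, &tv);
    if (err && err != ETIMEDOUT) {  /* a timeout is just the normal tick */
        pthread_mutex_unlock(&s->mutex);
        return AVERROR(err);
    }
    /* ...then re-check av_fifo_size(s->fifo) and s->circular_buffer_error... */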