
[FFmpeg-devel] HTTP: optimize forward seek performance

Message ID 5C6D12FC-B117-4EF8-AA7C-320426C12D5D@me.com
State Superseded

Commit Message

Joel Cunningham Jan. 11, 2017, 11:01 p.m. UTC
Hi,

I’ve been working on optimizing HTTP forward seek performance for ffmpeg and would like to contribute this patch into mainline ffmpeg.  Please see the below patch for an explanation of the issue and proposed fix.  I have provided evidence of the current performance issue and my sample MP4 so others can reproduce and observe the behavior.

Files are located in Dropbox here: https://www.dropbox.com/sh/4q4ru8isdv22joj/AABU3XyXmgLMiEFqucf1LdZ3a?dl=0

GRMT0003.MP4 - test video file
mac-ffplay-baseline.pcapng - wireshark capture of ffplay (49abd) playing the above test file on MacOS 10.12.2 from a remote NGINX server
mac-ffplay-optimize-patch.pcapng - same ffplay setup but with patch applied 
ffplay_output.log - console output of ffplay with patch (loglevel debug)

I’m happy to discuss this issue further if the below description doesn’t fully communicate the issue.

Thanks,

Joel

From 89a3ed8aab9168313b4f7e83c00857f9b715ba4e Mon Sep 17 00:00:00 2001
From: Joel Cunningham <joel.cunningham@me.com>
Date: Wed, 11 Jan 2017 13:55:02 -0600
Subject: [PATCH] HTTP: optimize forward seek performance

This commit optimizes HTTP forward seeks by advancing the stream on
the current connection when the seek amount is within the current
TCP window rather than closing the connection and opening a new one.
This improves performance because with TCP flow control, a window's
worth of data is always either in the local socket buffer already or
in-flight from the sender.

The previous behavior of closing the connection, then opening a new one
with a new HTTP range value, results in massive amounts of discarded
and re-sent data when large TCP windows are used.  This has been observed
on MacOS/iOS, which starts with an initial window of 256KB and grows it up
to 1MB depending on the bandwidth-delay product.

When we seek forward within a window's worth of data by closing the
connection and opening a new one, we discard everything from the current
offset to the end of the window.  On the new connection the server then
ends up re-sending the data from the new offset to the end of the old
window.

Example:

TCP window size: 64KB
Position: 32KB
Forward seek position: 40KB

      *                      (Next window)
32KB |--------------| 96KB |---------------| 160KB
        *
  40KB |---------------| 104KB

Re-sent amount: 96KB - 40KB = 56KB
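
Put generally (an illustrative calculation only, not code from the patch;
the variable names are made up):

/* Bytes the server re-sends when we drop the connection and issue a new
 * Range request from seek_target instead of reading ahead. */
int64_t old_window_edge = cur_off + window_size;          /* 32KB + 64KB = 96KB */
int64_t resent_bytes    = old_window_edge - seek_target;  /* 96KB - 40KB = 56KB */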

For a real-world test example, I have an MP4 file of ~25MB, of which ffplay
only reads ~16MB while performing 177 seeks.  With current ffmpeg, this
results in 177 HTTP GETs and ~73MB worth of TCP data communication.  With
this patch, ffmpeg issues 4 HTTP GETs for a total of ~20MB of TCP data
communication.

To support this feature, a new URL function has been added to get the
stream buffer size from the TCP protocol.  The stream advancement logic
has been implemented in the HTTP layer since this is the layer in charge
of seeking and of creating/destroying the TCP connections.

This feature has been tested on Windows 7 and MacOS/iOS.  Windows support
is slightly complicated by the fact that when TCP window auto-tuning is
enabled, SO_RCVBUF doesn't report the real window size, but it does if
SO_RCVBUF was manually set (which disables auto-tuning).  So we can only
use this optimization on Windows in the latter case.
---
 libavformat/avio.c |  7 ++++++
 libavformat/http.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 libavformat/tcp.c  | 21 +++++++++++++++++
 libavformat/url.h  |  8 +++++++
 4 files changed, 105 insertions(+)

Comments

Ronald S. Bultje Jan. 12, 2017, 12:47 p.m. UTC | #1
Hi Joel,

On Wed, Jan 11, 2017 at 6:01 PM, Joel Cunningham <joel.cunningham@me.com>
wrote:

> Hi,
>
> I’ve been working on optimizing HTTP forward seek performance for ffmpeg
> and would like to contribute this patch into mainline ffmpeg.  Please see
> the below patch for an explanation of the issue and proposed fix.  I have
> provided evidence of the current performance issue and my sample MP4 so
> others can reproduce and observe the behavior.
>
> Files are located in Dropbox here: https://www.dropbox.com/sh/
> 4q4ru8isdv22joj/AABU3XyXmgLMiEFqucf1LdZ3a?dl=0
>
> GRMT0003.MP4 - test video file
> mac-ffplay-baseline.pcapng - wireshark capture of ffplay (49abd) playing
> the above test file on MacOS 10.12.2 from a remote NGINX server
> mac-ffplay-optimize-patch.pcapng - same ffplay setup but with patch
> applied
> ffplay_output.log - console output of ffplay with patch (loglevel debug)
>
> I’m happy to discuss this issue further if the below description doesn’t
> fully communicate the issue.
>
> Thanks,
>
> Joel
>
> From 89a3ed8aab9168313b4f7e83c00857f9b715ba4e Mon Sep 17 00:00:00 2001
> From: Joel Cunningham <joel.cunningham@me.com>
> Date: Wed, 11 Jan 2017 13:55:02 -0600
> Subject: [PATCH] HTTP: optimize forward seek performance
>
> This commit optimizes HTTP forward seeks by advancing the stream on
> the current connection when the seek amount is within the current
> TCP window rather than closing the connection and opening a new one.
> This improves performance because with TCP flow control, a window's
> worth of data is always either in the local socket buffer already or
> in-flight from the sender.
>
> The previous behavior of closing the connection, then opening a new
> with a new HTTP range value results in a massive amounts of discarded
> and re-sent data when large TCP windows are used.  This has been observed
> on MacOS/iOS which starts with an inital window of 256KB and grows up to
> 1MB depending on the bandwidth-product delay.
>
> When seeking within a window's worth of data and we close the connection,
> then open a new one within the same window's worth of data, we discard
> from the current offset till the end of the window.  Then on the new
> connection the server ends up re-sending the previous data from new
> offset till the end of old window.
>
> Example:
>
> TCP window size: 64KB
> Position: 32KB
> Forward seek position: 40KB
>
>       *                      (Next window)
> 32KB |--------------| 96KB |---------------| 160KB
>         *
>   40KB |---------------| 104KB
>
> Re-sent amount: 96KB - 40KB = 56KB
>
> For a real world test example, I have MP4 file of ~25MB, which ffplay
> only reads ~16MB and performs 177 seeks. With current ffmpeg, this results
> in 177 HTTP GETs and ~73MB worth of TCP data communication.  With this
> patch, ffmpeg issues 4 HTTP GETs for a total of ~20MB of TCP data
> communication.
>
> To support this feature, a new URL function has been added to get the
> stream buffer size from the TCP protocol.  The stream advancement logic
> has been implemented in the HTTP layer since this the layer in charge of
> the seek and creating/destroying the TCP connections.
>
> This feature has been tested on Windows 7 and MacOS/iOS.  Windows support
> is slightly complicated by the fact that when TCP window auto-tuning is
> enabled, SO_RCVBUF doesn't report the real window size, but it does if
> SO_RCVBUF was manually set (disabling auto-tuning). So we can only use
> this optimization on Windows in the later case
> ---
>  libavformat/avio.c |  7 ++++++
>  libavformat/http.c | 69 ++++++++++++++++++++++++++++++
> ++++++++++++++++++++++++
>  libavformat/tcp.c  | 21 +++++++++++++++++
>  libavformat/url.h  |  8 +++++++
>  4 files changed, 105 insertions(+)


Very interesting. There are some minor stylistic nits (double brackets where
there shouldn't be; I didn't check super-closely for that), but overall
this is a pretty thoughtful patch in what it's trying to do.

As for Windows, I'm surprised that there wouldn't be any API to get the
real current window size. I mean, I understand that they would decrease
window size on-demand, but I would expect them to do that w/o throwing out
already-downloaded data - so essentially I expect delayed auto-tuning. In
that case, getting the current window size should be trivial and valid
until the next read. Very strange. I suppose no workarounds (perhaps with
windows-specific API) exist? Also, what is the practical implication? Does
it work as before? Or better but not optimal? Or worse?

I'm wondering if there's some way to make the interaction between tcp and
http less awkward. Similar problems exist between rtp/udp, where we want
deeper protocol integration and the generic API basically inhibits us.
Probably out of scope for this patch and not a terribly big deal because
the URL API is currently not exposed, but we eventually want to expose this
API (IMO), so at some point this will need some more thought.

If nobody else has comments, I'm inclined to accept this patch (I'll fix
the cosmetics myself).

Ronald
Nicolas George Jan. 12, 2017, 12:53 p.m. UTC | #2
On duodi, 22 Nivôse, year CCXXV, Joel Cunningham wrote:
> This commit optimizes HTTP forward seeks by advancing the stream on
> the current connection when the seek amount is within the current
> TCP window rather than closing the connection and opening a new one.
> This improves performance because with TCP flow control, a window's
> worth of data is always either in the local socket buffer already or
> in-flight from the sender.

Thanks for the patch. You may not have noticed, but there is already
similar logic in the higher-level API, aviobuf. See the code for
avio_seek(). I suspect it would be a better place to implement the logic
of avoiding seeks on slow-ish protocols.

Regards,
Joel Cunningham Jan. 12, 2017, 3:25 p.m. UTC | #3
> On Jan 12, 2017, at 6:47 AM, Ronald S. Bultje <rsbultje@gmail.com> wrote:
> 
> Hi Joel,
> 
> On Wed, Jan 11, 2017 at 6:01 PM, Joel Cunningham <joel.cunningham@me.com>
> wrote:
> 
>> Hi,
>> 
>> I’ve been working on optimizing HTTP forward seek performance for ffmpeg
>> and would like to contribute this patch into mainline ffmpeg.  Please see
>> the below patch for an explanation of the issue and proposed fix.  I have
>> provided evidence of the current performance issue and my sample MP4 so
>> others can reproduce and observe the behavior.
>> 
>> Files are located in Dropbox here: https://www.dropbox.com/sh/
>> 4q4ru8isdv22joj/AABU3XyXmgLMiEFqucf1LdZ3a?dl=0
>> 
>> GRMT0003.MP4 - test video file
>> mac-ffplay-baseline.pcapng - wireshark capture of ffplay (49abd) playing
>> the above test file on MacOS 10.12.2 from a remote NGINX server
>> mac-ffplay-optimize-patch.pcapng - same ffplay setup but with patch
>> applied
>> ffplay_output.log - console output of ffplay with patch (loglevel debug)
>> 
>> I’m happy to discuss this issue further if the below description doesn’t
>> fully communicate the issue.
>> 
>> Thanks,
>> 
>> Joel
>> 
>> From 89a3ed8aab9168313b4f7e83c00857f9b715ba4e Mon Sep 17 00:00:00 2001
>> From: Joel Cunningham <joel.cunningham@me.com>
>> Date: Wed, 11 Jan 2017 13:55:02 -0600
>> Subject: [PATCH] HTTP: optimize forward seek performance
>> 
>> This commit optimizes HTTP forward seeks by advancing the stream on
>> the current connection when the seek amount is within the current
>> TCP window rather than closing the connection and opening a new one.
>> This improves performance because with TCP flow control, a window's
>> worth of data is always either in the local socket buffer already or
>> in-flight from the sender.
>> 
>> The previous behavior of closing the connection, then opening a new
>> with a new HTTP range value results in a massive amounts of discarded
>> and re-sent data when large TCP windows are used.  This has been observed
>> on MacOS/iOS which starts with an inital window of 256KB and grows up to
>> 1MB depending on the bandwidth-product delay.
>> 
>> When seeking within a window's worth of data and we close the connection,
>> then open a new one within the same window's worth of data, we discard
>> from the current offset till the end of the window.  Then on the new
>> connection the server ends up re-sending the previous data from new
>> offset till the end of old window.
>> 
>> Example:
>> 
>> TCP window size: 64KB
>> Position: 32KB
>> Forward seek position: 40KB
>> 
>>      *                      (Next window)
>> 32KB |--------------| 96KB |---------------| 160KB
>>        *
>>  40KB |---------------| 104KB
>> 
>> Re-sent amount: 96KB - 40KB = 56KB
>> 
>> For a real world test example, I have MP4 file of ~25MB, which ffplay
>> only reads ~16MB and performs 177 seeks. With current ffmpeg, this results
>> in 177 HTTP GETs and ~73MB worth of TCP data communication.  With this
>> patch, ffmpeg issues 4 HTTP GETs for a total of ~20MB of TCP data
>> communication.
>> 
>> To support this feature, a new URL function has been added to get the
>> stream buffer size from the TCP protocol.  The stream advancement logic
>> has been implemented in the HTTP layer since this the layer in charge of
>> the seek and creating/destroying the TCP connections.
>> 
>> This feature has been tested on Windows 7 and MacOS/iOS.  Windows support
>> is slightly complicated by the fact that when TCP window auto-tuning is
>> enabled, SO_RCVBUF doesn't report the real window size, but it does if
>> SO_RCVBUF was manually set (disabling auto-tuning). So we can only use
>> this optimization on Windows in the later case
>> ---
>> libavformat/avio.c |  7 ++++++
>> libavformat/http.c | 69 ++++++++++++++++++++++++++++++
>> ++++++++++++++++++++++++
>> libavformat/tcp.c  | 21 +++++++++++++++++
>> libavformat/url.h  |  8 +++++++
>> 4 files changed, 105 insertions(+)
> 
> 
> Very interesting. There's some minor stylistic nits (double brackets where
> there shouldn't be, I didn't check super-closely for that), but overall
> this is a pretty thoughtful patch in what it's trying to do.

My apologies for any stylistic issues; this is my first ffmpeg change and I’m still getting familiar with the style.

> 
> As for Windows, I'm surprised that there wouldn't be any API to get the
> real current window size. I mean, I understand that they would decrease
> window size on-demand, but I would expect them to do that w/o throwing out
> already-downloaded data - so essentially I expect delayed auto-tuning. In
> that case, getting the current window size should be trivial and valid
> until the next read. Very strange. I suppose no workarounds (perhaps with
> windows-specific API) exist? Also, what is the practical implication? Does
> it work as before? Or better but not optimal? Or worse?

For Windows, when TCP receive window auto-tuning is enabled, SO_RCVBUF returns 8192 regardless of what the window value is.  According to TCP RFCs, a receiver is not allowed to decrease the window, so the stack will only grow the window.  Typically on Windows it starts out at 17KB (on a WiFi link) and then grows upwards of 256KB depending on the bandwidth-delay product.  If you manually set SO_RCVBUF via setsockopt, the TCP window is set to the specified value, auto-tuning is disabled and you can retrieve the correct value at any time.  I didn’t find any Windows-specific API that worked when auto-tuning is enabled.  This issue isn’t as bad on Windows platforms because the window starts out small and grows very conservatively, thus less data is discarded and re-sent during each seek.

In terms of practical implications, Windows platforms will behave the same as before this patch.  The call to ffurl_get_stream_size() returns an error and http_seek_internal then proceeds with closing the connection and opening a new one at the new offset.  I did add support for checking whether s->recv_buffer_size had been set in the TCP layer, which would mean the window had been manually set and retrieving the buffer size should be correct.  Unfortunately there is not currently a way to set recv_buffer_size for an HTTP protocol via options, so that would need an additional patch, but I hardcoded s->recv_buffer_size to test the feature on Windows.
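
For anyone who wants to reproduce this outside ffmpeg, the query boils down to something like the following (a minimal sketch using the POSIX socket API; the fd and the requested size are placeholders, and with winsock the option buffers are char * and the length argument is an int):

#include <sys/socket.h>   /* winsock2.h on Windows */

/* Sketch only: query the TCP receive buffer, which bounds the receive
 * window.  On Windows the returned value only reflects the real window
 * after SO_RCVBUF has been set explicitly, which disables auto-tuning. */
static int get_recv_window(int fd, int requested)
{
    int actual = 0;
    socklen_t len = sizeof(actual);

    if (requested > 0)  /* optionally pin the buffer, disabling auto-tuning */
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len) < 0)
        return -1;
    return actual;  /* ~8192 on Windows while auto-tuning is still enabled */
}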

> 
> I'm wondering if there's some way to make the interaction between tcp and
> http less awkward. Similar problems exist between rtp/udp, where we want
> deeper protocol integration and the generic API basically inhibits us.
> Probably out of scope for this patch and not a terribly big deal because
> the URL API is currently not exposed, but we eventually want to expose this
> API (IMO) so at some point this will need some more thoughts.
> 

I agree there is some clunkiness in the abstraction, since TCP-specific things have to be generalized into a concept that could fit other protocols; that is how I came up with the “stream size” nomenclature for the function.

> If nobody else has comments, I'm inclined to accept this patch (I'll fix
> the cosmetics myself).
> 

Joel
Joel Cunningham Jan. 12, 2017, 3:29 p.m. UTC | #4
> On Jan 12, 2017, at 6:53 AM, Nicolas George <george@nsup.org> wrote:
> 
> On duodi, 22 Nivôse, year CCXXV, Joel Cunningham wrote:
>> This commit optimizes HTTP forward seeks by advancing the stream on
>> the current connection when the seek amount is within the current
>> TCP window rather than closing the connection and opening a new one.
>> This improves performance because with TCP flow control, a window's
>> worth of data is always either in the local socket buffer already or
>> in-flight from the sender.
> 
> Thanks for the patch. You may have not noticed, but there is already a
> similar logic in the higher-level API, aviobuf. See the code for
> avio_seek(). I suspect it would be a better place to implement the logic
> of avoiding seeks on slow-ish protocols.

Thanks for the feedback; I’ll take a look at how the forwarding logic could be implemented in avio_seek().  I do want to be careful that this logic doesn’t execute for non-HTTP or non-stream-based protocols which don’t have flow control/automatic buffering of data.

Joel
Andy Furniss Jan. 12, 2017, 4:08 p.m. UTC | #5
Joel Cunningham wrote:
> Hi,
>
> I’ve been working on optimizing HTTP forward seek performance for ffmpeg and would like to contribute this patch into mainline ffmpeg.  Please see the below patch for an explanation of the issue and proposed fix.  I have provided evidence of the current performance issue and my sample MP4 so others can reproduce and observe the behavior.
>
> Files are located in Dropbox here: https://www.dropbox.com/sh/4q4ru8isdv22joj/AABU3XyXmgLMiEFqucf1LdZ3a?dl=0
>
> GRMT0003.MP4 - test video file
> mac-ffplay-baseline.pcapng - wireshark capture of ffplay (49abd) playing the above test file on MacOS 10.12.2 from a remote NGINX server
> mac-ffplay-optimize-patch.pcapng - same ffplay setup but with patch applied
> ffplay_output.log - console output of ffplay with patch (loglevel debug)
>
> I’m happy to discuss this issue further if the below description doesn’t fully communicate the issue.
>
> Thanks,
>
> Joel
>
>  From 89a3ed8aab9168313b4f7e83c00857f9b715ba4e Mon Sep 17 00:00:00 2001
> From: Joel Cunningham <joel.cunningham@me.com>
> Date: Wed, 11 Jan 2017 13:55:02 -0600
> Subject: [PATCH] HTTP: optimize forward seek performance
>
> This commit optimizes HTTP forward seeks by advancing the stream on
> the current connection when the seek amount is within the current
> TCP window rather than closing the connection and opening a new one.
> This improves performance because with TCP flow control, a window's
> worth of data is always either in the local socket buffer already or
> in-flight from the sender.

Not saying there is anything wrong with this patch, but this statement
doesn't seem quite right.

I may be wrong, but IIRC what's in flight + buffered will also depend on
the state of the sender's congestion control window.
Joel Cunningham Jan. 12, 2017, 4:43 p.m. UTC | #6
> On Jan 12, 2017, at 10:08 AM, Andy Furniss <adf.lists@gmail.com> wrote:
> 
> Joel Cunningham wrote:
>> Hi,
>> 
>> I’ve been working on optimizing HTTP forward seek performance for ffmpeg and would like to contribute this patch into mainline ffmpeg.  Please see the below patch for an explanation of the issue and proposed fix.  I have provided evidence of the current performance issue and my sample MP4 so others can reproduce and observe the behavior.
>> 
>> Files are located in Dropbox here: https://www.dropbox.com/sh/4q4ru8isdv22joj/AABU3XyXmgLMiEFqucf1LdZ3a?dl=0
>> 
>> GRMT0003.MP4 - test video file
>> mac-ffplay-baseline.pcapng - wireshark capture of ffplay (49abd) playing the above test file on MacOS 10.12.2 from a remote NGINX server
>> mac-ffplay-optimize-patch.pcapng - same ffplay setup but with patch applied
>> ffplay_output.log - console output of ffplay with patch (loglevel debug)
>> 
>> I’m happy to discuss this issue further if the below description doesn’t fully communicate the issue.
>> 
>> Thanks,
>> 
>> Joel
>> 
>> From 89a3ed8aab9168313b4f7e83c00857f9b715ba4e Mon Sep 17 00:00:00 2001
>> From: Joel Cunningham <joel.cunningham@me.com>
>> Date: Wed, 11 Jan 2017 13:55:02 -0600
>> Subject: [PATCH] HTTP: optimize forward seek performance
>> 
>> This commit optimizes HTTP forward seeks by advancing the stream on
>> the current connection when the seek amount is within the current
>> TCP window rather than closing the connection and opening a new one.
>> This improves performance because with TCP flow control, a window's
>> worth of data is always either in the local socket buffer already or
>> in-flight from the sender.
> 
> Not saying there is anything wrong with this patch but this statement
> doesn't seem quite right.
> 
> I may be wrong, but IIRC what's in flight + buffer will also depend on
> the state of the senders congestion control window.
> 

You’re correct that the sender's congestion control algorithm will gate how much is in flight, and while the sender is discovering the available network bandwidth it won’t be sending a full window’s worth at a time.  I didn’t really bring this into the conversation (and kind of hand-waved it) because there isn’t any way for the receiver to know this information, so we can’t use it to reduce our “read-ahead/discard” size, which is based on the receiver’s window size.

If we wanted to be conservative about our read-ahead/discard size, taking into account the possibility of congestion control preventing a full window’s worth of data from being in flight, we could take the value from SO_RCVBUF and scale it by 1/2 or 1/4, but then we’d still have some possibility of discarded and re-sent data for larger seeks within the window once the window has fully opened up.  This kind of information isn’t available via socket APIs and typically isn’t needed by TCP applications (ffmpeg really has a unique use case, issuing HTTP range requests from the current position to the end and then aborting at some point during the HTTP response when it needs to seek).
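
To make that trade-off concrete, a scaled-down limit would look something like this (hypothetical, not part of the patch; the function name is made up):

/* Hypothetical helper: derive a conservative forward-seek limit from
 * SO_RCVBUF, to allow for a congestion window that has not yet opened
 * up to the full receive window. */
static int conservative_seek_limit(int rcvbuf_size)
{
    if (rcvbuf_size <= 0)
        return -1;          /* unknown window: fall back to reconnecting */
    return rcvbuf_size / 4; /* e.g. 256KB window -> 64KB read-ahead budget */
}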

Joel
Andy Furniss Jan. 12, 2017, 4:59 p.m. UTC | #7
Joel Cunningham wrote:

> According TCP RFCs, a receiver is not allowed to decrease the window,

Not sure that's true.

It's certainly possible to do it illegally; on a Linux box TCP will log
a message something like "heresy <ip address> shrank window".
That, IME, is where some middle box is "interfering".

I've seen plenty of "legal" shrinks looking at tcpdumps - usually the
app is throttling its read speed.

Just had a look between two Linux boxes on my LAN, and

wget --limit-rate=10k is enough to see the receiver's advertised window
shrink in tcpdump.
Joel Cunningham Jan. 12, 2017, 4:59 p.m. UTC | #8
Nicolas,

I’ve found existing “read-ahead logic” in avio_seek() that does what I’ve implemented in http_stream_forward().  This is controlled by SHORT_SEEK_THRESHOLD, currently set to 4KB.  I prototyped increasing this to 256KB (matching the initial TCP window in my test setup) and saw the same reduction in the number of HTTP GETs and seeks!  Thanks for the heads up; this should reduce my patch size!

I could plumb this setting (s->short_seek_threshold) through to a URL function that would get the desired value from HTTP/TCP.  Looking at how ffio_init_context() is implemented, it doesn’t appear to have access to the URLContext pointer.  Any guidance on how I can connect the protocol layer to aviobuf without a public API change?
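
For reference, the decision I’m relying on boils down to something like this (a simplified illustration of avio_seek()’s forward short-seek handling, not a verbatim excerpt from aviobuf.c; the function and parameter names are made up):

#include <stdint.h>

#define SHORT_SEEK_THRESHOLD (4 * 1024)   /* current aviobuf default */

/* Illustrative only: returns 1 if a forward seek can be satisfied by
 * reading ahead and discarding on the existing connection, 0 if a
 * protocol-level seek (an HTTP reconnect) is performed instead. */
static int use_read_ahead(int64_t pos, int64_t target,
                          int64_t buffered_ahead, int64_t threshold)
{
    return target > pos &&
           target - pos <= buffered_ahead + threshold;
}

Raising the threshold passed in from 4KB to something on the order of the TCP window is what my prototype effectively did.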

Thanks,

Joel

> On Jan 12, 2017, at 9:29 AM, Joel Cunningham <joel.cunningham@me.com> wrote:
> 
> 
>> On Jan 12, 2017, at 6:53 AM, Nicolas George <george@nsup.org> wrote:
>> 
>> On duodi, 22 Nivôse, year CCXXV, Joel Cunningham wrote:
>>> This commit optimizes HTTP forward seeks by advancing the stream on
>>> the current connection when the seek amount is within the current
>>> TCP window rather than closing the connection and opening a new one.
>>> This improves performance because with TCP flow control, a window's
>>> worth of data is always either in the local socket buffer already or
>>> in-flight from the sender.
>> 
>> Thanks for the patch. You may have not noticed, but there is already a
>> similar logic in the higher-level API, aviobuf. See the code for
>> avio_seek(). I suspect it would be a better place to implement the logic
>> of avoiding seeks on slow-ish protocols.
> 
> Thanks for the feedback, I’ll take a look into how the forwarding logic could be implemented in avio_seek().  I do want to be careful that this logic doesn’t execute for non-HTTP or non-stream based protocols which don’t have flow control/automatic buffering of data.
> 
> Joel
> 
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Steinar H. Gunderson Jan. 12, 2017, 5:04 p.m. UTC | #9
On Thu, Jan 12, 2017 at 04:59:33PM +0000, Andy Furniss wrote:
> I've seen plenty of "legal" shrinks looking at tcpdumps - usually the
> app is throttling it's read speed.

You're not really allowed to shrink by more than you've received, though,
are you? Typically the buffer going down is just that the app hasn't read all
the data from the socket -- but that's a different case.

The point of the “no-shrink” rule is that once you've advertised a window to
the sender, the sender should be allowed to send that much data with no ack,
without keeping a copy, and without worrying it might get lost. If you could
shrink the window, that guarantee would disappear.

/* Steinar */
Andy Furniss Jan. 12, 2017, 5:22 p.m. UTC | #10
Steinar H. Gunderson wrote:
> On Thu, Jan 12, 2017 at 04:59:33PM +0000, Andy Furniss wrote:
>> I've seen plenty of "legal" shrinks looking at tcpdumps - usually the
>> app is throttling it's read speed.
>
> You're not really allowed to shrink by more than you've received, though,
> are you? Typically the buffer going down is just that the app hasn't read all
> the data from the socket -- but that's a different case.

Correct - I was probably being over pedantic about one statement in the
context of buffer sizes.

> The point of the “no-shrink” rule is that once you've advertised a window to
> the sender, the sender should be allowed to send that much data with no ack,

Yes.

> without keeping a copy, and without worrying it might get lost. If you could
> shrink the window, that guarantee would disappear.

Not so sure about this bit - when the ack comes it may well indicate that
a re-transmit is needed.
Joel Cunningham Jan. 12, 2017, 5:23 p.m. UTC | #11
> On Jan 12, 2017, at 11:04 AM, Steinar H. Gunderson <steinar+ffmpeg@gunderson.no> wrote:
> 
> On Thu, Jan 12, 2017 at 04:59:33PM +0000, Andy Furniss wrote:
>> I've seen plenty of "legal" shrinks looking at tcpdumps - usually the
>> app is throttling it's read speed.
> 
> You're not really allowed to shrink by more than you've received, though,
> are you? Typically the buffer going down is just that the app hasn't read all
> the data from the socket -- but that's a different case.
> 

You’re correct: decreasing the advertised window by more than you’ve received (relative to the last advertised window value) is what the term “shrinking the window” refers to.  Check out RFC 793, section 3.7, Data Communication.

http://www.faqs.org/rfcs/rfc793.html

All that aside, I don’t think illegal window shrinking is that relevant to this patch because we are fetching the value of SO_RCVBUF each time before determining how to perform the seek (either read-ahead/discard or close/open new connection).

> The point of the “no-shrink” rule is that once you've advertised a window to
> the sender, the sender should be allowed to send that much data with no ack,
> without keeping a copy, and without worrying it might get lost. If you could
> shrink the window, that guarantee would disappear.
> 

Joel
Michael Niedermayer Jan. 12, 2017, 11:45 p.m. UTC | #12
On Thu, Jan 12, 2017 at 10:59:56AM -0600, Joel Cunningham wrote:
> Nicolas,
> 
> I’ve found existing “read-ahead logic” in avio_seek to do what I’ve implemented in http_stream_forward().  This is controlled by SHORT_SEEK_THRESHOLD, currently set to 4KB.  I proto-typed increasing this to the 256KB (matches the initial TCP window in my test setup) and saw the same number of reduction in HTTP GETs and the number of seeks!  Thanks for the heads up, this should reduce my patch size!
> 
> I could plumb this setting (s->short_seek_threshold) to a URL function that would get the desired value from HTTP/TCP.  Looking at how ffio_init_context() is implemented, it doesn’t appear to have access to the URLContext pointer.  Any guidance on how I can plumb the protocol layer with aviobuf without a public API change?

I just quickly read this thread and didn't think deeply about it, but
remembered this old (somewhat ugly) patchset:

88066 O   0327  0:24 Michael Niederm (1.1K) [FFmpeg-devel] [PATCH 1/2] avformat/aviobuf: Use a 512kb IO buffer for http* instead of 32kb
88067     0327  0:24 Michael Niederm (0.7K) ├─>[FFmpeg-devel] [PATCH 2/2] avformat/aviobuf: increase SHORT_SEEK_THRESHOLD from 4096 to 8192

[...]

Patch

diff --git a/libavformat/avio.c b/libavformat/avio.c
index 3606eb0..34dcf09 100644
--- a/libavformat/avio.c
+++ b/libavformat/avio.c
@@ -645,6 +645,13 @@  int ffurl_get_multi_file_handle(URLContext *h, int **handles, int *numhandles)
     return h->prot->url_get_multi_file_handle(h, handles, numhandles);
 }
 
+int ffurl_get_stream_size(URLContext *h)
+{
+    if (!h->prot->url_get_stream_size)
+        return -1;
+    return h->prot->url_get_stream_size(h);
+}
+
 int ffurl_shutdown(URLContext *h, int flags)
 {
     if (!h->prot->url_shutdown)
diff --git a/libavformat/http.c b/libavformat/http.c
index 944a6cf..0026168 100644
--- a/libavformat/http.c
+++ b/libavformat/http.c
@@ -1462,6 +1462,61 @@  static int http_close(URLContext *h)
     return ret;
 }
 
+#define DISCARD_BUFFER_SIZE (256 * 1024)
+static int http_stream_forward(HTTPContext *s, int amount)
+{
+    int len;
+    int ret;
+    int discard = amount;
+    int discard_buf_size;
+    uint8_t * discard_buf;
+
+    /* advance local buffer first */
+    len = FFMIN(discard, s->buf_end - s->buf_ptr);
+    if (len > 0) {
+        av_log(s, AV_LOG_DEBUG, "advancing input buffer by %d bytes\n", len);
+        s->buf_ptr += len;
+        discard -= len;
+        if (discard == 0) {
+            return amount;
+        }
+    }
+
+    /* if underlying protocol is a stream with flow control and we are seeking
+    within the stream buffer size, it performs better to advance through the stream
+    rather than abort the connection and open a new one */
+    len = ffurl_get_stream_size(s->hd);
+    av_log(s, AV_LOG_DEBUG, "stream buffer size: %d\n", len);
+    if ((len <= 0) || (discard > len)) {
+        return AVERROR(EINVAL);
+    }
+
+    /* forward stream by discarding data till new position */
+    discard_buf_size = FFMIN(discard, DISCARD_BUFFER_SIZE);
+    discard_buf = av_malloc(discard_buf_size);
+    if (!discard_buf) {
+        return AVERROR(ENOMEM);
+    }
+
+    av_log(s, AV_LOG_DEBUG, "advancing stream by discarding %d bytes\n", discard);
+    while (discard > 0) {
+        ret = ffurl_read(s->hd, discard_buf, FFMIN(discard, discard_buf_size));
+        if (ret > 0) {
+            discard -= ret;
+        } else {
+            ret = (!ret ? AVERROR_EOF : ff_neterrno());
+            av_log(s, AV_LOG_DEBUG, "read error: %d\n", ret);
+            break;
+        }
+    }
+    av_freep(&discard_buf);
+
+    if (discard == 0) {
+        return amount;
+    }
+    return ret;
+}
+
 static int64_t http_seek_internal(URLContext *h, int64_t off, int whence, int force_reconnect)
 {
     HTTPContext *s = h->priv_data;
@@ -1493,6 +1548,20 @@  static int64_t http_seek_internal(URLContext *h, int64_t off, int whence, int fo
     if (s->off && h->is_streamed)
         return AVERROR(ENOSYS);
 
+    /* if seeking forward, see if we can satisfy the seek with our current connection */
+    if ((s->off > old_off) &&
+        (s->off - old_off <= INT32_MAX)) {
+        int amount = (int)(s->off - old_off);
+        int res;
+
+        av_log(s, AV_LOG_DEBUG, "attempting stream forwarding, old: %lld, new: %lld bytes: %d\n",
+           old_off, s->off, amount);
+        res = http_stream_forward(s, amount);
+        if (res == amount) {
+            return off;
+        }
+    }
+
     /* we save the old context in case the seek fails */
     old_buf_size = s->buf_end - s->buf_ptr;
     memcpy(old_buf, s->buf_ptr, old_buf_size);
diff --git a/libavformat/tcp.c b/libavformat/tcp.c
index 25abafc..b73e500 100644
--- a/libavformat/tcp.c
+++ b/libavformat/tcp.c
@@ -265,6 +265,26 @@  static int tcp_get_file_handle(URLContext *h)
     return s->fd;
 }
 
+static int tcp_get_window_size(URLContext *h)
+{
+    TCPContext *s = h->priv_data;
+    int avail;
+    int avail_len = sizeof(avail);
+
+    #if HAVE_WINSOCK2_H
+    /* SO_RCVBUF with winsock only reports the actual TCP window size when
+    auto-tuning has been disabled via setting SO_RCVBUF */
+    if (s->recv_buffer_size < 0) {
+        return AVERROR(EINVAL);
+    }
+    #endif
+
+    if (getsockopt(s->fd, SOL_SOCKET, SO_RCVBUF, &avail, &avail_len)) {
+           return ff_neterrno();
+    }
+    return avail;
+}
+
 const URLProtocol ff_tcp_protocol = {
     .name                = "tcp",
     .url_open            = tcp_open,
@@ -273,6 +293,7 @@  const URLProtocol ff_tcp_protocol = {
     .url_write           = tcp_write,
     .url_close           = tcp_close,
     .url_get_file_handle = tcp_get_file_handle,
+    .url_get_stream_size = tcp_get_window_size,
     .url_shutdown        = tcp_shutdown,
     .priv_data_size      = sizeof(TCPContext),
     .flags               = URL_PROTOCOL_FLAG_NETWORK,
diff --git a/libavformat/url.h b/libavformat/url.h
index 5c50245..5e607c9 100644
--- a/libavformat/url.h
+++ b/libavformat/url.h
@@ -84,6 +84,7 @@  typedef struct URLProtocol {
     int (*url_get_file_handle)(URLContext *h);
     int (*url_get_multi_file_handle)(URLContext *h, int **handles,
                                      int *numhandles);
+    int (*url_get_stream_size)(URLContext *h);
     int (*url_shutdown)(URLContext *h, int flags);
     int priv_data_size;
     const AVClass *priv_data_class;
@@ -249,6 +250,13 @@  int ffurl_get_file_handle(URLContext *h);
 int ffurl_get_multi_file_handle(URLContext *h, int **handles, int *numhandles);
 
 /**
+ * Return sizes (in bytes) of stream based URL buffer.
+ *
+ * @return size on success or <0 on error.
+ */
+int ffurl_get_stream_size(URLContext *h);
+
+/**
  * Signal the URLContext that we are done reading or writing the stream.
  *
  * @param h pointer to the resource