
[FFmpeg-devel] avformat/hls: Disallow local file access by default

Message ID 20170530225214.25864-1-michael@niedermayer.cc
State New

Commit Message

Michael Niedermayer May 30, 2017, 10:52 p.m. UTC
This prevents an exploit leading to an information leak

The existing exploit depends on a specific decoder as well.
It does appear though that the exploit should be possible with any decoder.
The problem is that as long as sensitive information gets into the decoder,
the output of the decoder becomes sensitive as well.
The only obvious solution is to prevent access to sensitive information, or to
disable hls or possibly some of its features. More complex solutions, like
checking the path to limit access to only subdirectories of the hls path, may
work as an alternative. But such solutions are fragile and tricky to implement
portably, and they would not stop every possible attack nor work with all
valid hls files.

Found-by: Emil Lerner and Pavel Cheremushkin
Reported-by: Thierry Foucu <tfoucu@google.com>

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
---
 libavformat/hls.c           | 13 ++++++++++++-
 tests/fate/avformat.mak     |  4 ++--
 tests/fate/filter-audio.mak |  4 ++--
 3 files changed, 16 insertions(+), 5 deletions(-)

Comments

Hendrik Leppkes May 30, 2017, 11:14 p.m. UTC | #1
On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
<michael@niedermayer.cc> wrote:
> This prevents an exploit leading to an information leak
>
> The existing exploit depends on a specific decoder as well.
> It does appear though that the exploit should be possible with any decoder.
> The problem is that as long as sensitive information gets into the decoder,
> the output of the decoder becomes sensitive as well.
> The only obvious solution is to prevent access to sensitive information. Or to
> disable hls or possibly some of its feature. More complex solutions like
> checking the path to limit access to only subdirectories of the hls path may
> work as an alternative. But such solutions are fragile and tricky to implement
> portably and would not stop every possible attack nor would they work with all
> valid hls files.
>
> Found-by: Emil Lerner and Pavel Cheremushkin
> Reported-by: Thierry Foucu <tfoucu@google.com>
>
>

I don't particularly like this. Being able to dump a HLS stream (i.e.
all its files) onto disk and simply open it again is a good thing.
Maybe it should just be smarter and only allow the segments to use the
same protocol that was already used for the m3u8 file, so that a
local m3u8 allows opening a local file (plus http(s), in case I only
saved the playlist), but an http HLS playlist only allows http
segments?

- Hendrik
Michael Niedermayer May 31, 2017, 12:09 a.m. UTC | #2
On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:
> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> <michael@niedermayer.cc> wrote:
> > This prevents an exploit leading to an information leak
> >
> > The existing exploit depends on a specific decoder as well.
> > It does appear though that the exploit should be possible with any decoder.
> > The problem is that as long as sensitive information gets into the decoder,
> > the output of the decoder becomes sensitive as well.
> > The only obvious solution is to prevent access to sensitive information. Or to
> > disable hls or possibly some of its feature. More complex solutions like
> > checking the path to limit access to only subdirectories of the hls path may
> > work as an alternative. But such solutions are fragile and tricky to implement
> > portably and would not stop every possible attack nor would they work with all
> > valid hls files.
> >
> > Found-by: Emil Lerner and Pavel Cheremushkin
> > Reported-by: Thierry Foucu <tfoucu@google.com>
> >
> >
> 

> I don't particularly like this. Being able to dump a HLS stream (ie.
> all its file) onto disk and simply open it again is a good thing.
> Maybe it should just be smarter and only allow using the same protocol
> for the segments then it already used for the m3u8 file, so that a
> local m3u8 allows opening a local file (plus http(s), in case I only
> saved the playlist), but a http HLS playlist only allows http
> segments?

we already prevent every protocol except file and crypto for local
hls files. We also already block http* in local hls files by default
through the default whitelists (file,crypto for local files).

This is not sufficient: the existing exploit successfully puts the
content of a readable file chosen by the attacker into the output
video, which, if it is given back to the attacker, leaks this information.


[...]
Hendrik Leppkes May 31, 2017, 7:03 a.m. UTC | #3
On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
<michael@niedermayer.cc> wrote:
> On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:
>> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
>> <michael@niedermayer.cc> wrote:
>> > This prevents an exploit leading to an information leak
>> >
>> > The existing exploit depends on a specific decoder as well.
>> > It does appear though that the exploit should be possible with any decoder.
>> > The problem is that as long as sensitive information gets into the decoder,
>> > the output of the decoder becomes sensitive as well.
>> > The only obvious solution is to prevent access to sensitive information. Or to
>> > disable hls or possibly some of its feature. More complex solutions like
>> > checking the path to limit access to only subdirectories of the hls path may
>> > work as an alternative. But such solutions are fragile and tricky to implement
>> > portably and would not stop every possible attack nor would they work with all
>> > valid hls files.
>> >
>> > Found-by: Emil Lerner and Pavel Cheremushkin
>> > Reported-by: Thierry Foucu <tfoucu@google.com>
>> >
>> >
>>
>
>> I don't particularly like this. Being able to dump a HLS stream (ie.
>> all its file) onto disk and simply open it again is a good thing.
>> Maybe it should just be smarter and only allow using the same protocol
>> for the segments then it already used for the m3u8 file, so that a
>> local m3u8 allows opening a local file (plus http(s), in case I only
>> saved the playlist), but a http HLS playlist only allows http
>> segments?
>
> we already prevent every protocol except file and crypto for local
> hls files. We also already block http* in local hls files by default
> thorugh default whitelists (file,crypto for local files)
>
> This is not sufficient, the exploit there is successfully puts the
> content of a readable file choosen by the attacker into the output
> video, which if its given back to the attacker leaks this information.
>

Well, I want to be able to store a HLS playlist and its segments
locally and play it, without specifying some obscure flag. So how can
we make that work?

In general, information disclosure is always a risk when you process
unvalidated HLS streams. Even if you load a remote http playlist that
only contains HTTP links, it could reference something on your
intranet and gain access to something otherwise unavailable to the
attacker.
I would put the responsibility for ensuring this doesn't happen on
people creating transcoding services, rather than making our protocols
barely usable.

- Hendrik
Michael Niedermayer May 31, 2017, 9:29 a.m. UTC | #4
On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:
> On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> <michael@niedermayer.cc> wrote:
> > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:
> >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> >> <michael@niedermayer.cc> wrote:
> >> > This prevents an exploit leading to an information leak
> >> >
> >> > The existing exploit depends on a specific decoder as well.
> >> > It does appear though that the exploit should be possible with any decoder.
> >> > The problem is that as long as sensitive information gets into the decoder,
> >> > the output of the decoder becomes sensitive as well.
> >> > The only obvious solution is to prevent access to sensitive information. Or to
> >> > disable hls or possibly some of its feature. More complex solutions like
> >> > checking the path to limit access to only subdirectories of the hls path may
> >> > work as an alternative. But such solutions are fragile and tricky to implement
> >> > portably and would not stop every possible attack nor would they work with all
> >> > valid hls files.
> >> >
> >> > Found-by: Emil Lerner and Pavel Cheremushkin
> >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> >> >
> >> >
> >>
> >
> >> I don't particularly like this. Being able to dump a HLS stream (ie.
> >> all its file) onto disk and simply open it again is a good thing.
> >> Maybe it should just be smarter and only allow using the same protocol
> >> for the segments then it already used for the m3u8 file, so that a
> >> local m3u8 allows opening a local file (plus http(s), in case I only
> >> saved the playlist), but a http HLS playlist only allows http
> >> segments?
> >
> > we already prevent every protocol except file and crypto for local
> > hls files. We also already block http* in local hls files by default
> > thorugh default whitelists (file,crypto for local files)
> >
> > This is not sufficient, the exploit there is successfully puts the
> > content of a readable file choosen by the attacker into the output
> > video, which if its given back to the attacker leaks this information.
> >
> 

> Well, I want to be able to store a HLS playlist and its segments
> locally and play it, without specifying some obscure flag. So how can
> we make that work?

What you ask for is to use vulnerable code (hls with local files is
pretty much vulnerable by design).
Enabling this by default is a bad idea, and it would also be an
exception to how this is handled in other demuxers.
For example, mov has drefs disabled unless the enable_drefs flag is set.

Other means than a flag could be used to let the user enable it.
Is that what you had in mind? If so, what did you have in mind
exactly?


> 
> In general, information disclosure is always a risk when you process
> unvalidated HLS streams, even if you load a remote http playlist which
> only contains HTTP links, it could reference something on your
> intranet and get access to something otherwise unavailable to the
> attacker.

> I would put the responsibility of ensuring this doesn't happen on
> people creating transcoding services, not making our protocols barely
> usable.

According to the authors of the exploit (which they kindly seem to
have sent to every affected company but not to us), most if not all
of the big names had hls enabled and were vulnerable.
So "it's the user's responsibility" alone clearly did not lead to
secure code.

Also, users cannot review our ever-growing feature set for
security-sensitive features and disable them; that's not practical,
nor would it be reasonable for us to ask them to do that.

If you see a better solution than disabling hls, or local file use in
hls, by default, then please explain what you suggest.

thx

[...]
wm4 May 31, 2017, 9:52 a.m. UTC | #5
On Wed, 31 May 2017 11:29:56 +0200
Michael Niedermayer <michael@niedermayer.cc> wrote:

> On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:
> > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > <michael@niedermayer.cc> wrote:  
> > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:  
> > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > >> <michael@niedermayer.cc> wrote:  
> > >> > This prevents an exploit leading to an information leak
> > >> >
> > >> > The existing exploit depends on a specific decoder as well.
> > >> > It does appear though that the exploit should be possible with any decoder.
> > >> > The problem is that as long as sensitive information gets into the decoder,
> > >> > the output of the decoder becomes sensitive as well.
> > >> > The only obvious solution is to prevent access to sensitive information. Or to
> > >> > disable hls or possibly some of its feature. More complex solutions like
> > >> > checking the path to limit access to only subdirectories of the hls path may
> > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > >> > portably and would not stop every possible attack nor would they work with all
> > >> > valid hls files.
> > >> >
> > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> > >> >
> > >> >  
> > >>  
> > >  
> > >> I don't particularly like this. Being able to dump a HLS stream (ie.
> > >> all its file) onto disk and simply open it again is a good thing.
> > >> Maybe it should just be smarter and only allow using the same protocol
> > >> for the segments then it already used for the m3u8 file, so that a
> > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > >> saved the playlist), but a http HLS playlist only allows http
> > >> segments?  
> > >
> > > we already prevent every protocol except file and crypto for local
> > > hls files. We also already block http* in local hls files by default
> > > thorugh default whitelists (file,crypto for local files)
> > >
> > > This is not sufficient, the exploit there is successfully puts the
> > > content of a readable file choosen by the attacker into the output
> > > video, which if its given back to the attacker leaks this information.
> > >  
> >   
> 
> > Well, I want to be able to store a HLS playlist and its segments
> > locally and play it, without specifying some obscure flag. So how can
> > we make that work?  
> 
> What you ask for is to use vulnerable code (hls with local files is
> pretty much vulnerable by design).
> Enabling this by default is a bad idea and it would be also an
> exception to how its handled in other demuxers.
> For example mov has drefs disabled without the enable_drefs flag.
> 
> Other means than a flag could be used to let the user enable it.
> Would this be what you had in mind ? If so what did you had in mind
> exactly ?
> 
> 
> > 
> > In general, information disclosure is always a risk when you process
> > unvalidated HLS streams, even if you load a remote http playlist which
> > only contains HTTP links, it could reference something on your
> > intranet and get access to something otherwise unavailable to the
> > attacker.  
> 
> > I would put the responsibility of ensuring this doesn't happen on
> > people creating transcoding services, not making our protocols barely
> > usable.  
> 
> According to the authors of the exploit, which they kindly seem to
> have sent to every affected company but not to us.
> Most if not all the big names had hls enabled and were vulnerable.
> So "its the users responsibility" alone did clearly not lead to secure
> code.
> 
> Also users cannot review the ever growing feature set we have for
> security sensitive features and disable them, thats not practical nor
> would it be reasonable from us to ask them to do that.
> 
> If you see a better solution than to disable hls or local file use in
> hls by default then please explain what you suggest ?

How is this "vulnerable"? If it's via the completely useless and
annoying bullshit tty decoder, I'm going to throw a party.
Michael Niedermayer May 31, 2017, 10:51 a.m. UTC | #6
On Wed, May 31, 2017 at 11:52:06AM +0200, wm4 wrote:
> On Wed, 31 May 2017 11:29:56 +0200
> Michael Niedermayer <michael@niedermayer.cc> wrote:
> 
> > On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:
> > > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > > <michael@niedermayer.cc> wrote:  
> > > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:  
> > > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > > >> <michael@niedermayer.cc> wrote:  
> > > >> > This prevents an exploit leading to an information leak
> > > >> >
> > > >> > The existing exploit depends on a specific decoder as well.
> > > >> > It does appear though that the exploit should be possible with any decoder.
> > > >> > The problem is that as long as sensitive information gets into the decoder,
> > > >> > the output of the decoder becomes sensitive as well.
> > > >> > The only obvious solution is to prevent access to sensitive information. Or to
> > > >> > disable hls or possibly some of its feature. More complex solutions like
> > > >> > checking the path to limit access to only subdirectories of the hls path may
> > > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > > >> > portably and would not stop every possible attack nor would they work with all
> > > >> > valid hls files.
> > > >> >
> > > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > > >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> > > >> >
> > > >> >  
> > > >>  
> > > >  
> > > >> I don't particularly like this. Being able to dump a HLS stream (ie.
> > > >> all its file) onto disk and simply open it again is a good thing.
> > > >> Maybe it should just be smarter and only allow using the same protocol
> > > >> for the segments then it already used for the m3u8 file, so that a
> > > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > > >> saved the playlist), but a http HLS playlist only allows http
> > > >> segments?  
> > > >
> > > > we already prevent every protocol except file and crypto for local
> > > > hls files. We also already block http* in local hls files by default
> > > > thorugh default whitelists (file,crypto for local files)
> > > >
> > > > This is not sufficient, the exploit there is successfully puts the
> > > > content of a readable file choosen by the attacker into the output
> > > > video, which if its given back to the attacker leaks this information.
> > > >  
> > >   
> > 
> > > Well, I want to be able to store a HLS playlist and its segments
> > > locally and play it, without specifying some obscure flag. So how can
> > > we make that work?  
> > 
> > What you ask for is to use vulnerable code (hls with local files is
> > pretty much vulnerable by design).
> > Enabling this by default is a bad idea and it would be also an
> > exception to how its handled in other demuxers.
> > For example mov has drefs disabled without the enable_drefs flag.
> > 
> > Other means than a flag could be used to let the user enable it.
> > Would this be what you had in mind ? If so what did you had in mind
> > exactly ?
> > 
> > 
> > > 
> > > In general, information disclosure is always a risk when you process
> > > unvalidated HLS streams, even if you load a remote http playlist which
> > > only contains HTTP links, it could reference something on your
> > > intranet and get access to something otherwise unavailable to the
> > > attacker.  
> > 
> > > I would put the responsibility of ensuring this doesn't happen on
> > > people creating transcoding services, not making our protocols barely
> > > usable.  
> > 
> > According to the authors of the exploit, which they kindly seem to
> > have sent to every affected company but not to us.
> > Most if not all the big names had hls enabled and were vulnerable.
> > So "its the users responsibility" alone did clearly not lead to secure
> > code.
> > 
> > Also users cannot review the ever growing feature set we have for
> > security sensitive features and disable them, thats not practical nor
> > would it be reasonable from us to ask them to do that.
> > 
> > If you see a better solution than to disable hls or local file use in
> > hls by default then please explain what you suggest ?
> 
> How is this "vulnerable"? If it's via the completely useless and
> annoying bullshit tty decoder, I'm going to throw a party.

As stated in the commit message of the patch
"The existing exploit depends on a specific decoder as well.
 It does appear though that the exploit should be possible with any decoder."

The exploit uses a decoder which we could disable for hls, but that
would not do anything to stop the vulnerability.
We do not have a tty decoder. We have a tty demuxer, and the tty
demuxer is not used in the exploit.

Also from the commit message:
"The problem is that as long as sensitive information gets into the decoder,
 the output of the decoder becomes sensitive as well."

This should be quite obvious: if you feed some headers plus sensitive
data into a decoder, the output can be used to reconstruct the
sensitive data.

It's easier with some decoders and harder with others, but it will work
with almost all decoders given some effort. The existing exploit, not
surprisingly, uses a decoder with which the effort to reconstruct the
data is minimal.

[...]
wm4 May 31, 2017, 11:13 a.m. UTC | #7
On Wed, 31 May 2017 12:51:35 +0200
Michael Niedermayer <michael@niedermayer.cc> wrote:

> On Wed, May 31, 2017 at 11:52:06AM +0200, wm4 wrote:
> > On Wed, 31 May 2017 11:29:56 +0200
> > Michael Niedermayer <michael@niedermayer.cc> wrote:
> >   
> > > On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:  
> > > > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > > > <michael@niedermayer.cc> wrote:    
> > > > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:    
> > > > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > > > >> <michael@niedermayer.cc> wrote:    
> > > > >> > This prevents an exploit leading to an information leak
> > > > >> >
> > > > >> > The existing exploit depends on a specific decoder as well.
> > > > >> > It does appear though that the exploit should be possible with any decoder.
> > > > >> > The problem is that as long as sensitive information gets into the decoder,
> > > > >> > the output of the decoder becomes sensitive as well.
> > > > >> > The only obvious solution is to prevent access to sensitive information. Or to
> > > > >> > disable hls or possibly some of its feature. More complex solutions like
> > > > >> > checking the path to limit access to only subdirectories of the hls path may
> > > > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > > > >> > portably and would not stop every possible attack nor would they work with all
> > > > >> > valid hls files.
> > > > >> >
> > > > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > > > >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> > > > >> >
> > > > >> >    
> > > > >>    
> > > > >    
> > > > >> I don't particularly like this. Being able to dump a HLS stream (ie.
> > > > >> all its file) onto disk and simply open it again is a good thing.
> > > > >> Maybe it should just be smarter and only allow using the same protocol
> > > > >> for the segments then it already used for the m3u8 file, so that a
> > > > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > > > >> saved the playlist), but a http HLS playlist only allows http
> > > > >> segments?    
> > > > >
> > > > > we already prevent every protocol except file and crypto for local
> > > > > hls files. We also already block http* in local hls files by default
> > > > > thorugh default whitelists (file,crypto for local files)
> > > > >
> > > > > This is not sufficient, the exploit there is successfully puts the
> > > > > content of a readable file choosen by the attacker into the output
> > > > > video, which if its given back to the attacker leaks this information.
> > > > >    
> > > >     
> > >   
> > > > Well, I want to be able to store a HLS playlist and its segments
> > > > locally and play it, without specifying some obscure flag. So how can
> > > > we make that work?    
> > > 
> > > What you ask for is to use vulnerable code (hls with local files is
> > > pretty much vulnerable by design).
> > > Enabling this by default is a bad idea and it would be also an
> > > exception to how its handled in other demuxers.
> > > For example mov has drefs disabled without the enable_drefs flag.
> > > 
> > > Other means than a flag could be used to let the user enable it.
> > > Would this be what you had in mind ? If so what did you had in mind
> > > exactly ?
> > > 
> > >   
> > > > 
> > > > In general, information disclosure is always a risk when you process
> > > > unvalidated HLS streams, even if you load a remote http playlist which
> > > > only contains HTTP links, it could reference something on your
> > > > intranet and get access to something otherwise unavailable to the
> > > > attacker.    
> > >   
> > > > I would put the responsibility of ensuring this doesn't happen on
> > > > people creating transcoding services, not making our protocols barely
> > > > usable.    
> > > 
> > > According to the authors of the exploit, which they kindly seem to
> > > have sent to every affected company but not to us.
> > > Most if not all the big names had hls enabled and were vulnerable.
> > > So "its the users responsibility" alone did clearly not lead to secure
> > > code.
> > > 
> > > Also users cannot review the ever growing feature set we have for
> > > security sensitive features and disable them, thats not practical nor
> > > would it be reasonable from us to ask them to do that.
> > > 
> > > If you see a better solution than to disable hls or local file use in
> > > hls by default then please explain what you suggest ?  
> > 
> > How is this "vulnerable"? If it's via the completely useless and
> > annoying bullshit tty decoder, I'm going to throw a party.  
> 
> As stated in the commit message of the patch
> "The existing exploit depends on a specific decoder as well.
>  It does appear though that the exploit should be possible with any decoder."
> 
> The exploit uses a decoder which we can disable for hls but it would
> not do anything to stop the vulnerability.
> We do not have a tty decoder. We have a tty demuxer, the tty demuxer is
> not used in the exploit.

Well, the tty demuxer / ansi decoder. The combination allows you to
dump any text file as video. This would assume the attacker gets the
video back somehow (transcoder service?).

> Also from the commit message:
> "The problem is that as long as sensitive information gets into the decoder,
>  the output of the decoder becomes sensitive as well."
> 
> This should be quite obvious. If you feed some headers + sensitive
> data into a decoder, the output can be used to reconstruct the
> sensitive data

The commit message doesn't really explain how that is a security issue.

> Its easier with some decoders, harder with others but it will work
> with almost all decoders with some effort. The existing exploit, not
> suprisingly uses  a decoder with which the effort to reconstruct the
> data is minimal.
> 
> [...]

I'm not sure why you keep talking about decoders. Decoders just
transform data (unless they have bugs, which should be fixed). Isn't
the problem that the "leaked" data from local filesystem access is
going somewhere it shouldn't? (Again, I can only imagine a transcoder
service that accepts anything without being sandboxed. That's pretty
stupid in itself.)

As Hendrik already said, you can get the same with http access. An
attacker could poke around in your local network, which could provide
access to normally firewalled and private services. Or does your
attack scenario randomly stop here?

Also, this doesn't prevent an HLS file on disk from accessing the
network, which could be another problem for services that consume user
files without further security measures. An attacker could, for
example, upload a media file (that is really an m3u8 file) to make the
server connect to an attacker's server.

I'm also not very welcoming of the added message. How is a user of
software built on FFmpeg supposed to know how to set a libavformat option?

The real fix is to leave sanitation of external accesses to the API
user by overriding the AVFormatContext.io_open callback.
Michael Niedermayer May 31, 2017, 12:49 p.m. UTC | #8
On Wed, May 31, 2017 at 01:13:50PM +0200, wm4 wrote:
> On Wed, 31 May 2017 12:51:35 +0200
> Michael Niedermayer <michael@niedermayer.cc> wrote:
> 
> > On Wed, May 31, 2017 at 11:52:06AM +0200, wm4 wrote:
> > > On Wed, 31 May 2017 11:29:56 +0200
> > > Michael Niedermayer <michael@niedermayer.cc> wrote:
> > >   
> > > > On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:  
> > > > > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > > > > <michael@niedermayer.cc> wrote:    
> > > > > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:    
> > > > > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > > > > >> <michael@niedermayer.cc> wrote:    
> > > > > >> > This prevents an exploit leading to an information leak
> > > > > >> >
> > > > > >> > The existing exploit depends on a specific decoder as well.
> > > > > >> > It does appear though that the exploit should be possible with any decoder.
> > > > > >> > The problem is that as long as sensitive information gets into the decoder,
> > > > > >> > the output of the decoder becomes sensitive as well.
> > > > > >> > The only obvious solution is to prevent access to sensitive information. Or to
> > > > > >> > disable hls or possibly some of its feature. More complex solutions like
> > > > > >> > checking the path to limit access to only subdirectories of the hls path may
> > > > > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > > > > >> > portably and would not stop every possible attack nor would they work with all
> > > > > >> > valid hls files.
> > > > > >> >
> > > > > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > > > > >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> > > > > >> >
> > > > > >> >    
> > > > > >>    
> > > > > >    
> > > > > >> I don't particularly like this. Being able to dump a HLS stream (ie.
> > > > > >> all its file) onto disk and simply open it again is a good thing.
> > > > > >> Maybe it should just be smarter and only allow using the same protocol
> > > > > >> for the segments then it already used for the m3u8 file, so that a
> > > > > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > > > > >> saved the playlist), but a http HLS playlist only allows http
> > > > > >> segments?    
> > > > > >
> > > > > > we already prevent every protocol except file and crypto for local
> > > > > > hls files. We also already block http* in local hls files by default
> > > > > > thorugh default whitelists (file,crypto for local files)
> > > > > >
> > > > > > This is not sufficient, the exploit there is successfully puts the
> > > > > > content of a readable file choosen by the attacker into the output
> > > > > > video, which if its given back to the attacker leaks this information.
> > > > > >    
> > > > >     
> > > >   
> > > > > Well, I want to be able to store a HLS playlist and its segments
> > > > > locally and play it, without specifying some obscure flag. So how can
> > > > > we make that work?    
> > > > 
> > > > What you ask for is to use vulnerable code (hls with local files is
> > > > pretty much vulnerable by design).
> > > > Enabling this by default is a bad idea and it would be also an
> > > > exception to how its handled in other demuxers.
> > > > For example mov has drefs disabled without the enable_drefs flag.
> > > > 
> > > > Other means than a flag could be used to let the user enable it.
> > > > Would this be what you had in mind ? If so what did you had in mind
> > > > exactly ?
> > > > 
> > > >   
> > > > > 
> > > > > In general, information disclosure is always a risk when you process
> > > > > unvalidated HLS streams, even if you load a remote http playlist which
> > > > > only contains HTTP links, it could reference something on your
> > > > > intranet and get access to something otherwise unavailable to the
> > > > > attacker.    
> > > >   
> > > > > I would put the responsibility of ensuring this doesn't happen on
> > > > > people creating transcoding services, not making our protocols barely
> > > > > usable.    
> > > > 
> > > > According to the authors of the exploit, which they kindly seem to
> > > > have sent to every affected company but not to us.
> > > > Most if not all the big names had hls enabled and were vulnerable.
> > > > So "its the users responsibility" alone did clearly not lead to secure
> > > > code.
> > > > 
> > > > Also users cannot review the ever growing feature set we have for
> > > > security sensitive features and disable them, thats not practical nor
> > > > would it be reasonable from us to ask them to do that.
> > > > 
> > > > If you see a better solution than to disable hls or local file use in
> > > > hls by default then please explain what you suggest ?  
> > > 
> > > How is this "vulnerable"? If it's via the completely useless and
> > > annoying bullshit tty decoder, I'm going to throw a party.  
> > 
> > As stated in the commit message of the patch
> > "The existing exploit depends on a specific decoder as well.
> >  It does appear though that the exploit should be possible with any decoder."
> > 
> > The exploit uses a decoder which we can disable for hls but it would
> > not do anything to stop the vulnerability.
> > We do not have a tty decoder. We have a tty demuxer, the tty demuxer is
> > not used in the exploit.
> 
> Well, the tty demuxer / ansi decoder. The combination allows you to
> dump any text file as video. This would assume the attacker gets the
> video back somehow (transcoder service?).
> 
> > Also from the commit message:
> > "The problem is that as long as sensitive information gets into the decoder,
> >  the output of the decoder becomes sensitive as well."
> > 
> > This should be quite obvious. If you feed some headers + sensitive
> > data into a decoder, the output can be used to reconstruct the
> > sensitive data
> 

> The commit message doesn't really explain how that is a security issue.

hls is a playlist with links to files to play.
If hls can refer to local files, then the output of transcoding a
malicious hls file can contain any file readable by the user.
That can be his passwords, private keys, and so on.

This renders the output of the transcoder unusable for many uses, and
the user is unaware of the potentially sensitive content.
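To make the attack shape concrete, a malicious playlist of this kind could be as small as the following sketch (the referenced path is attacker-chosen and purely illustrative; this is not the actual exploit):

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
file:///home/user/.ssh/id_rsa
#EXT-X-ENDLIST
```

The "segment" data then flows through the demuxer and decoder and ends up, possibly transformed, in the transcoded output.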


> 
> > Its easier with some decoders, harder with others but it will work
> > with almost all decoders with some effort. The existing exploit, not
> > suprisingly uses  a decoder with which the effort to reconstruct the
> > data is minimal.
> > 
> > [...]
> 
> I'm not sure why you keep talking about decoders. Decoders just
> transform data (unless they have bugs, which should be fixed). Isn't
> the problem that the "leaked" data from local filesystem access is
> going somewhere it shouldn't. (Again, I can only imagine a transcoder
> service that accepts anything without being sandboxed. That's pretty
> stupid in itself.)

There's no question that automated transcoding services should be using
a sandbox; that of course is very true.
Still, this problem is not limited to automated transcoding services.

A user downloading a malicious file (which does not need to end in .m3u8)
and then transcoding it can put private data of the user into the
output file, that being unknown to the user. If she shares this file,
she also shares the data embedded in the file.
This is both a security and a privacy issue.


> 
> As as Hendrik already said, you can get the same with http access. An
> attacker could poke around in your local network, which could provide
> access to normally firewalled and private services. Or does your
> attack scenario randomly stop here.
> 
> Also, this doesn't prevent that a HLS file on disk accesses network,
> which could be another problem with services that consume user files
> without further security measures. An attacker could for example upload
> a media file (that really is a m3u8 file) to make the server connect to
> an attacker's server.

The default whitelist for local hls does not contain any network
protocols. So unless the user application or user overrides this, or
there's a bug unknown to us, there is no network access from local hls.


> 
> I'm also not very welcoming of the added message. How is a user of a
> software using FFmpeg supposed to know how to set a libavformat option?

by reading the manual of the software she is using.


> 
> The real fix is to leave sanitation of external accesses to the API
> user by overriding the AVFormatContext.io_open callback.

There are problems with this:
1. There is no io_open callback in all affected and supported releases.

2. The io_open callback, where it is supported, did not stop the exploit
   in practice.

3. How would an application like ffmpeg or ffplay actually do the
   sanitation?
   We need to allow access if the user enabled mov drefs, for example,
   as well as in any other such cases.
   We also need to allow accesses to any manually specified paths and
   files, images with wildcard expansion, ...
   This would result in a rather messy and complex callback
   with many special cases. Security fixes should be as simple as
   possible.

If people want, I can limit the local file check to the case where
the io_open callback is not set.
That way user applications which do their own sanitation would not be
affected by the check or error message and would stay in full control of
what access is allowed.

[...]
wm4 May 31, 2017, 1:42 p.m. UTC | #9
On Wed, 31 May 2017 14:49:19 +0200
Michael Niedermayer <michael@niedermayer.cc> wrote:

> On Wed, May 31, 2017 at 01:13:50PM +0200, wm4 wrote:
> > On Wed, 31 May 2017 12:51:35 +0200
> > Michael Niedermayer <michael@niedermayer.cc> wrote:
> >   
> > > On Wed, May 31, 2017 at 11:52:06AM +0200, wm4 wrote:  
> > > > On Wed, 31 May 2017 11:29:56 +0200
> > > > Michael Niedermayer <michael@niedermayer.cc> wrote:
> > > >     
> > > > > On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:    
> > > > > > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > > > > > <michael@niedermayer.cc> wrote:      
> > > > > > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:      
> > > > > > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > > > > > >> <michael@niedermayer.cc> wrote:      
> > > > > > >> > This prevents an exploit leading to an information leak
> > > > > > >> >
> > > > > > >> > The existing exploit depends on a specific decoder as well.
> > > > > > >> > It does appear though that the exploit should be possible with any decoder.
> > > > > > >> > The problem is that as long as sensitive information gets into the decoder,
> > > > > > >> > the output of the decoder becomes sensitive as well.
> > > > > > >> > The only obvious solution is to prevent access to sensitive information. Or to
> > > > > > >> > disable hls or possibly some of its feature. More complex solutions like
> > > > > > >> > checking the path to limit access to only subdirectories of the hls path may
> > > > > > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > > > > > >> > portably and would not stop every possible attack nor would they work with all
> > > > > > >> > valid hls files.
> > > > > > >> >
> > > > > > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > > > > > >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> > > > > > >> >
> > > > > > >> >      
> > > > > > >>      
> > > > > > >      
> > > > > > >> I don't particularly like this. Being able to dump a HLS stream (ie.
> > > > > > >> all its file) onto disk and simply open it again is a good thing.
> > > > > > >> Maybe it should just be smarter and only allow using the same protocol
> > > > > > >> for the segments then it already used for the m3u8 file, so that a
> > > > > > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > > > > > >> saved the playlist), but a http HLS playlist only allows http
> > > > > > >> segments?      
> > > > > > >
> > > > > > > we already prevent every protocol except file and crypto for local
> > > > > > > hls files. We also already block http* in local hls files by default
> > > > > > > thorugh default whitelists (file,crypto for local files)
> > > > > > >
> > > > > > > This is not sufficient, the exploit there is successfully puts the
> > > > > > > content of a readable file choosen by the attacker into the output
> > > > > > > video, which if its given back to the attacker leaks this information.
> > > > > > >      
> > > > > >       
> > > > >     
> > > > > > Well, I want to be able to store a HLS playlist and its segments
> > > > > > locally and play it, without specifying some obscure flag. So how can
> > > > > > we make that work?      
> > > > > 
> > > > > What you ask for is to use vulnerable code (hls with local files is
> > > > > pretty much vulnerable by design).
> > > > > Enabling this by default is a bad idea and it would be also an
> > > > > exception to how its handled in other demuxers.
> > > > > For example mov has drefs disabled without the enable_drefs flag.
> > > > > 
> > > > > Other means than a flag could be used to let the user enable it.
> > > > > Would this be what you had in mind ? If so what did you had in mind
> > > > > exactly ?
> > > > > 
> > > > >     
> > > > > > 
> > > > > > In general, information disclosure is always a risk when you process
> > > > > > unvalidated HLS streams, even if you load a remote http playlist which
> > > > > > only contains HTTP links, it could reference something on your
> > > > > > intranet and get access to something otherwise unavailable to the
> > > > > > attacker.      
> > > > >     
> > > > > > I would put the responsibility of ensuring this doesn't happen on
> > > > > > people creating transcoding services, not making our protocols barely
> > > > > > usable.      
> > > > > 
> > > > > According to the authors of the exploit, which they kindly seem to
> > > > > have sent to every affected company but not to us.
> > > > > Most if not all the big names had hls enabled and were vulnerable.
> > > > > So "its the users responsibility" alone did clearly not lead to secure
> > > > > code.
> > > > > 
> > > > > Also users cannot review the ever growing feature set we have for
> > > > > security sensitive features and disable them, thats not practical nor
> > > > > would it be reasonable from us to ask them to do that.
> > > > > 
> > > > > If you see a better solution than to disable hls or local file use in
> > > > > hls by default then please explain what you suggest ?    
> > > > 
> > > > How is this "vulnerable"? If it's via the completely useless and
> > > > annoying bullshit tty decoder, I'm going to throw a party.    
> > > 
> > > As stated in the commit message of the patch
> > > "The existing exploit depends on a specific decoder as well.
> > >  It does appear though that the exploit should be possible with any decoder."
> > > 
> > > The exploit uses a decoder which we can disable for hls but it would
> > > not do anything to stop the vulnerability.
> > > We do not have a tty decoder. We have a tty demuxer, the tty demuxer is
> > > not used in the exploit.  
> > 
> > Well, the tty demuxer / ansi decoder. The combination allows you to
> > dump any text file as video. This would assume the attacker gets the
> > video back somehow (transcoder service?).
> >   
> > > Also from the commit message:
> > > "The problem is that as long as sensitive information gets into the decoder,
> > >  the output of the decoder becomes sensitive as well."
> > > 
> > > This should be quite obvious. If you feed some headers + sensitive
> > > data into a decoder, the output can be used to reconstruct the
> > > sensitive data  
> >   
> 
> > The commit message doesn't really explain how that is a security issue.  
> 
> hls is a playlist with links to files to play
> if hls can refer to local files then the output of transcoding a
> mallicous hls file can contain any file readable to the user.
> That can be his passwords, private keys, and so on
> 
> This renders the output of the transcoder unusable for many uses and
> the user is unaware of the potential sensitive content.

OK... though I know all that, and it changes nothing about what I said
in the previous mail.

> 
> >   
> > > Its easier with some decoders, harder with others but it will work
> > > with almost all decoders with some effort. The existing exploit, not
> > > suprisingly uses  a decoder with which the effort to reconstruct the
> > > data is minimal.
> > > 
> > > [...]  
> > 
> > I'm not sure why you keep talking about decoders. Decoders just
> > transform data (unless they have bugs, which should be fixed). Isn't
> > the problem that the "leaked" data from local filesystem access is
> > going somewhere it shouldn't. (Again, I can only imagine a transcoder
> > service that accepts anything without being sandboxed. That's pretty
> > stupid in itself.)  
> 
> Theres no question that automated transcoding services should be using
> a sandbox, that of course is very true.
> Still this problem is not limited to automated transcoding services.
> 
> A user downloading a mallicous file (which does not need to end in .hls)
> and then transcoding it can put private data of the user into the
> output file. That being unknown to the user. if she shares this file
> she also shares the data embeded in the file.
> This is both a security and privacy issue.

Why would the user not look at the output? What kind of scenario is
that?

> 
> > 
> > As as Hendrik already said, you can get the same with http access. An
> > attacker could poke around in your local network, which could provide
> > access to normally firewalled and private services. Or does your
> > attack scenario randomly stop here.
> > 
> > Also, this doesn't prevent that a HLS file on disk accesses network,
> > which could be another problem with services that consume user files
> > without further security measures. An attacker could for example upload
> > a media file (that really is a m3u8 file) to make the server connect to
> > an attacker's server.  
> 
> The default whitelist for local hls does not contain any network
> protocols. So unless the user application or user overrides this or
> theres a bug unknown to us there is no network access from local hls
> 

What is considered local? What if the protocol is samba, or the thing
on the filesystem is a mounted network FS? I'm fairly sure you could
come up with loads of ideas that subtly break the existing "security".

Anyway, a remote URL could still access the local network. Isn't this
worse?

> > 
> > I'm also not very welcoming of the added message. How is a user of a
> > software using FFmpeg supposed to know how to set a libavformat option?  
> 
> by reading the manual of the software she is using.

So the manual is supposed to contain information about random ffmpeg
error messages that are randomly being added?

> 
> > 
> > The real fix is to leave sanitation of external accesses to the API
> > user by overriding the AVFormatContext.io_open callback.  
> 
> There are problems with this
> 1. There is no io_open callback in all affected and supported releases.

Make them update? I don't think the issue is so critical. And it's
very, very, very obvious, and most were probably aware of it.

You might as well disable the http protocol by default because a user
could enter a URL instead of a file name in a web form, and the
transcoder would unintentionally perform a network access. This is on a
similar level of ridiculousness.

It's a playlist protocol, and a network protocol that reads references
from somewhere else. Of course it's to be expected that it can access
arbitrary locations.

> 
> 2. the io_open callback where it is supported did not stop the exploit.
>    in practice.

Failures of the API user. (Or the API user doesn't consider it a valid
attack scenario.)

> 3. How would an application like ffmpeg and ffplay actually do the
>    sanitation ?
>    We need to allow access if the user enabled mov dref for example.
>    As well as any other such cases.
>    also we need to allow accesses to any manually specified pathes and
>    files. images with wildcard expansion, ...
>    This would result in a rather messy and complex callback
>    with many special cases.

If they want security, they need to sandbox it fully, severely restrict
use-cases, or something similar.

> Security fixes should be as simple as
>    possible.

Well, your fix isn't simple. It adds yet another exception with
questionable effect. It makes it more complex and harder to predict
what will actually happen, not simpler.

> If people want, I can limit the local file check to the case where
> the io_open callback is not set?
> That way user applications which do their own sanitation would not be
> affected by the check or error message and stay in full control of
> what access is allowed.

That would have little value and would make it more complex too.

I'd say a good way to make this secure would be disabling the hls
protocol in builds which are security sensitive.

In general there doesn't seem to be a good way. Feel free to prove me
wrong. (I tried something similar, but in addition to the security vs.
convenience tradeoff, it just didn't work.)
Tobias Rapp May 31, 2017, 3:18 p.m. UTC | #10
On 31.05.2017 15:42, wm4 wrote:
> On Wed, 31 May 2017 14:49:19 +0200
> Michael Niedermayer <michael@niedermayer.cc> wrote:
>
 >> [...]
 >>
>> Security fixes should be as simple as
>>    possible.
>
> Well, your fix isn't simple. It adds yet another exception with
> questionable effect. It makes it more complex and harder to predict
> what will actually happen, not simpler.
>
>> If people want, I can limit the local file check to the case where
>> the io_open callback is not set?
>> That way user applications which do their own sanitation would not be
>> affected by the check or error message and stay in full control of
>> what access is allowed.
>
> That would have little value and would make it more complex too.
>
> I'd say a good way to make this secure would be disabling the hls
> protocol in builds which are security sensitive.

We already have "protocol_whitelist", --disable-protocol, and application 
sandboxing as supported and generic options. I agree with wm4 that some 
special-case handling here just adds complexity.

> In general there doesn't seem to be a good way. Feel free to prove me
> wrong. (I tried something similar, but in addition to the security vs.
> convenience tradeoff, it just didn't work.)

Regards,
Tobias
Michael Niedermayer May 31, 2017, 4:17 p.m. UTC | #11
On Wed, May 31, 2017 at 03:42:41PM +0200, wm4 wrote:
> On Wed, 31 May 2017 14:49:19 +0200
> Michael Niedermayer <michael@niedermayer.cc> wrote:
> 
> > On Wed, May 31, 2017 at 01:13:50PM +0200, wm4 wrote:
> > > On Wed, 31 May 2017 12:51:35 +0200
> > > Michael Niedermayer <michael@niedermayer.cc> wrote:
> > >   
> > > > On Wed, May 31, 2017 at 11:52:06AM +0200, wm4 wrote:  
> > > > > On Wed, 31 May 2017 11:29:56 +0200
> > > > > Michael Niedermayer <michael@niedermayer.cc> wrote:
> > > > >     
> > > > > > On Wed, May 31, 2017 at 09:03:34AM +0200, Hendrik Leppkes wrote:    
> > > > > > > On Wed, May 31, 2017 at 2:09 AM, Michael Niedermayer
> > > > > > > <michael@niedermayer.cc> wrote:      
> > > > > > > > On Wed, May 31, 2017 at 01:14:58AM +0200, Hendrik Leppkes wrote:      
> > > > > > > >> On Wed, May 31, 2017 at 12:52 AM, Michael Niedermayer
> > > > > > > >> <michael@niedermayer.cc> wrote:      
> > > > > > > >> > This prevents an exploit leading to an information leak
> > > > > > > >> >
> > > > > > > >> > The existing exploit depends on a specific decoder as well.
> > > > > > > >> > It does appear though that the exploit should be possible with any decoder.
> > > > > > > >> > The problem is that as long as sensitive information gets into the decoder,
> > > > > > > >> > the output of the decoder becomes sensitive as well.
> > > > > > > >> > The only obvious solution is to prevent access to sensitive information. Or to
> > > > > > > >> > disable hls or possibly some of its feature. More complex solutions like
> > > > > > > >> > checking the path to limit access to only subdirectories of the hls path may
> > > > > > > >> > work as an alternative. But such solutions are fragile and tricky to implement
> > > > > > > >> > portably and would not stop every possible attack nor would they work with all
> > > > > > > >> > valid hls files.
> > > > > > > >> >
> > > > > > > >> > Found-by: Emil Lerner and Pavel Cheremushkin
> > > > > > > >> > Reported-by: Thierry Foucu <tfoucu@google.com>
> > > > > > > >> >
> > > > > > > >> >      
> > > > > > > >>      
> > > > > > > >      
> > > > > > > >> I don't particularly like this. Being able to dump a HLS stream (ie.
> > > > > > > >> all its file) onto disk and simply open it again is a good thing.
> > > > > > > >> Maybe it should just be smarter and only allow using the same protocol
> > > > > > > >> for the segments then it already used for the m3u8 file, so that a
> > > > > > > >> local m3u8 allows opening a local file (plus http(s), in case I only
> > > > > > > >> saved the playlist), but a http HLS playlist only allows http
> > > > > > > >> segments?      
> > > > > > > >
> > > > > > > > we already prevent every protocol except file and crypto for local
> > > > > > > > hls files. We also already block http* in local hls files by default
> > > > > > > > thorugh default whitelists (file,crypto for local files)
> > > > > > > >
> > > > > > > > This is not sufficient, the exploit there is successfully puts the
> > > > > > > > content of a readable file choosen by the attacker into the output
> > > > > > > > video, which if its given back to the attacker leaks this information.
> > > > > > > >      
> > > > > > >       
> > > > > >     
> > > > > > > Well, I want to be able to store a HLS playlist and its segments
> > > > > > > locally and play it, without specifying some obscure flag. So how can
> > > > > > > we make that work?      
> > > > > > 
> > > > > > What you ask for is to use vulnerable code (hls with local files is
> > > > > > pretty much vulnerable by design).
> > > > > > Enabling this by default is a bad idea and it would be also an
> > > > > > exception to how its handled in other demuxers.
> > > > > > For example mov has drefs disabled without the enable_drefs flag.
> > > > > > 
> > > > > > Other means than a flag could be used to let the user enable it.
> > > > > > Would this be what you had in mind ? If so what did you had in mind
> > > > > > exactly ?
> > > > > > 
> > > > > >     
> > > > > > > 
> > > > > > > In general, information disclosure is always a risk when you process
> > > > > > > unvalidated HLS streams, even if you load a remote http playlist which
> > > > > > > only contains HTTP links, it could reference something on your
> > > > > > > intranet and get access to something otherwise unavailable to the
> > > > > > > attacker.      
> > > > > >     
> > > > > > > I would put the responsibility of ensuring this doesn't happen on
> > > > > > > people creating transcoding services, not making our protocols barely
> > > > > > > usable.      
> > > > > > 
> > > > > > According to the authors of the exploit, which they kindly seem to
> > > > > > have sent to every affected company but not to us.
> > > > > > Most if not all the big names had hls enabled and were vulnerable.
> > > > > > So "its the users responsibility" alone did clearly not lead to secure
> > > > > > code.
> > > > > > 
> > > > > > Also users cannot review the ever growing feature set we have for
> > > > > > security sensitive features and disable them, thats not practical nor
> > > > > > would it be reasonable from us to ask them to do that.
> > > > > > 
> > > > > > If you see a better solution than to disable hls or local file use in
> > > > > > hls by default then please explain what you suggest ?    
> > > > > 
> > > > > How is this "vulnerable"? If it's via the completely useless and
> > > > > annoying bullshit tty decoder, I'm going to throw a party.    
> > > > 
> > > > As stated in the commit message of the patch
> > > > "The existing exploit depends on a specific decoder as well.
> > > >  It does appear though that the exploit should be possible with any decoder."
> > > > 
> > > > The exploit uses a decoder which we can disable for hls but it would
> > > > not do anything to stop the vulnerability.
> > > > We do not have a tty decoder. We have a tty demuxer, the tty demuxer is
> > > > not used in the exploit.  
> > > 
> > > Well, the tty demuxer / ansi decoder. The combination allows you to
> > > dump any text file as video. This would assume the attacker gets the
> > > video back somehow (transcoder service?).
> > >   
> > > > Also from the commit message:
> > > > "The problem is that as long as sensitive information gets into the decoder,
> > > >  the output of the decoder becomes sensitive as well."
> > > > 
> > > > This should be quite obvious. If you feed some headers + sensitive
> > > > data into a decoder, the output can be used to reconstruct the
> > > > sensitive data  
> > >   
> > 
> > > The commit message doesn't really explain how that is a security issue.  
> > 
> > hls is a playlist with links to files to play
> > if hls can refer to local files then the output of transcoding a
> > mallicous hls file can contain any file readable to the user.
> > That can be his passwords, private keys, and so on
> > 
> > This renders the output of the transcoder unusable for many uses and
> > the user is unaware of the potential sensitive content.
> 
> OK... though I know all that and changes nothing about what I said in
> the previous mail.
> 
> > 
> > >   
> > > > Its easier with some decoders, harder with others but it will work
> > > > with almost all decoders with some effort. The existing exploit, not
> > > > suprisingly uses  a decoder with which the effort to reconstruct the
> > > > data is minimal.
> > > > 
> > > > [...]  
> > > 
> > > I'm not sure why you keep talking about decoders. Decoders just
> > > transform data (unless they have bugs, which should be fixed). Isn't
> > > the problem that the "leaked" data from local filesystem access is
> > > going somewhere it shouldn't. (Again, I can only imagine a transcoder
> > > service that accepts anything without being sandboxed. That's pretty
> > > stupid in itself.)  
> > 
> > Theres no question that automated transcoding services should be using
> > a sandbox, that of course is very true.
> > Still this problem is not limited to automated transcoding services.
> > 
> > A user downloading a mallicous file (which does not need to end in .hls)
> > and then transcoding it can put private data of the user into the
> > output file. That being unknown to the user. if she shares this file
> > she also shares the data embeded in the file.
> > This is both a security and privacy issue.
> 
> Why would the user not look at the output? What kind of scenario is
> that?

The user obtains a file from an untrusted source, she transcodes it,
she uploads it.

She can look at the file, but the attacker can make the file contain
any video he chooses.
Depending on the sophistication of the attacker, this can be done without
visible clues in the video.


> 
> > 
> > > 
> > > As as Hendrik already said, you can get the same with http access. An
> > > attacker could poke around in your local network, which could provide
> > > access to normally firewalled and private services. Or does your
> > > attack scenario randomly stop here.
> > > 
> > > Also, this doesn't prevent that a HLS file on disk accesses network,
> > > which could be another problem with services that consume user files
> > > without further security measures. An attacker could for example upload
> > > a media file (that really is a m3u8 file) to make the server connect to
> > > an attacker's server.  
> > 
> > The default whitelist for local hls does not contain any network
> > protocols. So unless the user application or user overrides this or
> > theres a bug unknown to us there is no network access from local hls
> > 
> 
> What is considered local? What if the protocol is samba, or the thing
> on the filesystem is a mounted network FS? I'm fairly sure you could
> come up with loads of ideas that subtly break the existing "security".

The existing code in hls only allows the file: and http: protocols;
no samba or any other protocol is allowed, unless there's a bug unknown
to us.
The patch as posted disables "file:" by default; this also blocks all
network filesystems that are mounted in the file system.


> 
> Anyway, a remote URL could still access the local network. Isn't this
> worse?

Accessing remote addresses from a remote hls file via http is how hls
works. We can disable hls by default if that is preferred.

Either way, this is not the issue this thread or this vulnerability is
about, which is local files.


> 
> > > 
> > > I'm also not very welcoming of the added message. How is a user of a
> > > software using FFmpeg supposed to know how to set a libavformat option?  
> > 
> > by reading the manual of the software she is using.
> 
> So the manual is supposed to contain information about random ffmpeg
> error messages that are randomly being added?

The manual of a user application using libavformat should contain
information about how the user can change the settings for libavformat.


> 
> > 
> > > 
> > > The real fix is to leave sanitation of external accesses to the API
> > > user by overriding the AVFormatContext.io_open callback.  
> > 
> > There are problems with this
> > 1. There is no io_open callback in all affected and supported releases.
> 
> Make them update? I don't think the issue is so critical. And it's
> very, very, very obvious and most were probably aware by it.
> 
> You might as well disable the http protocol by default because a user
> could enter an URL instead of a file name in a web forumlar, and the
> transcoder would unintentionally do a network access. This is on a
> similar level of ridiculousness.
> 
> It's a playlist protocol, and a network protocol that reads references
> from somewhere else. Of course it's to be expected that it can access
> arbitrary locations.
> 
> > 
> > 2. the io_open callback where it is supported did not stop the exploit.
> >    in practice.
> 
> Failures of the API user. (Or the API user doesn't consider it a valid
> attack scenario.)
> 
> > 3. How would an application like ffmpeg and ffplay actually do the
> >    sanitation ?
> >    We need to allow access if the user enabled mov dref for example.
> >    As well as any other such cases.
> >    also we need to allow accesses to any manually specified pathes and
> >    files. images with wildcard expansion, ...
> >    This would result in a rather messy and complex callback
> >    with many special cases.
> 
> If they want security, they need to sandbox it fully, severely restrict
> use-cases, or something similar.
> 
> > Security fixes should be as simple as
> >    possible.
> 
> Well, your fix isn't simple. It adds yet another exception with
> questionable effect. It makes it more complex and harder to predict
> what will actually happen, not simpler.
> 
> > If people want, I can limit the local file check to the case where
> > the io_open callback is not set?
> > That way user applications which do their own sanitation would not be
> > affected by the check or error message and stay in full control of
> > what access is allowed.
> 
> That would have little value and would make it more complex too.
> 


> I'd say a good way to make this secure would be disabling the hls
> protocol in builds which are security sensitive.

The vulnerability can be used against common users, not just a handful
of automated transcoding services. So this is sadly not an option.
I had thought about this too.


> 
> In general there doesn't seem to be a good way. Feel free to prove me
> wrong. (I tried something similar, but in addition to the security vs.
> convenience tradeoff, it just didn't work.)

The patch as is fixes the issue at the cost of some inconvenience
to users. That is, they need to override the default if they want to
use downloaded hls files.

We can alternatively disable hls by default and require a runtime
option to enable it, but that would be less convenient.

What do you prefer?
Do you want to submit an alternative fix? (for master and affected
releases)

I don't think you object to fixing this issue, or do you?

I am happy to work toward a fix you prefer, but I would like to see
this fixed sooner rather than later.

thanks

[...]
Michael Niedermayer May 31, 2017, 4:33 p.m. UTC | #12
On Wed, May 31, 2017 at 05:18:57PM +0200, Tobias Rapp wrote:
> On 31.05.2017 15:42, wm4 wrote:
> >[...]
> 
> We already have "protocol_whitelist", --disable-protocol and
> application sandboxing as supported and generic options. I agree
> with wm4 that some special case-handling here just adds complexity.

"--disable-protocol" does not allow fixing this, the vulnerability
only needs the file protocol ultimatly.

similarly protocol_whitelist only helps if "file" is not on it,
no other protocol is really required for this.

I just confirmed the exploit works with
-protocol_whitelist file,crypto
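The point can be sketched outside FFmpeg: a protocol whitelist is just a comma-separated membership test, so any list containing "file" (as "file,crypto" above does) still admits the local file access the exploit needs. Below is a minimal standalone reimplementation of such a membership test; the helper name is illustrative, not FFmpeg's actual whitelist code.

```c
#include <string.h>

/* Return 1 if proto appears as a complete token in the comma-separated
 * whitelist, 0 otherwise. Illustrative sketch only. */
int proto_in_whitelist(const char *proto, const char *whitelist)
{
    size_t len = strlen(proto);
    const char *p = whitelist;

    while (p) {
        /* match the token and make sure it ends exactly here */
        if (!strncmp(p, proto, len) && (p[len] == ',' || p[len] == '\0'))
            return 1;
        p = strchr(p, ',');     /* advance to the next token, if any */
        if (p)
            p++;
    }
    return 0;
}
```

With a whitelist of "file,crypto", proto_in_whitelist("file", "file,crypto") yields 1, which is why that whitelist cannot stop playlist entries referring to local files.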

Sandboxing is the answer for automated transcoding services, but
the average Joe end user cannot use sandboxing.

What do you suggest?


[...]
Tobias Rapp June 1, 2017, 9:02 a.m. UTC | #13
On 31.05.2017 18:33, Michael Niedermayer wrote:
> On Wed, May 31, 2017 at 05:18:57PM +0200, Tobias Rapp wrote:
>> [...]
>
> "--disable-protocol" does not allow fixing this, the vulnerability
> only needs the file protocol ultimatly.
>
> similarly protocol_whitelist only helps if "file" is not on it,
> no other protocol is really required for this.
>
> I just confirmed the exploit works with
> -protocol_whitelist file,crypto
>
> sandboxing is the awnser for automated transcoding services but
> the average joe end user cannot use sandboxing
>
> What do you suggest ?

Well, as far as I understand, the user must
(a1) be tricked into opening a playlist file with FFmpeg
or
(a2) with some software based on FFmpeg libraries, then
(b) be tricked into uploading the output file to a server under control 
of the attacker.

Adding another command-line argument will not help much for (a1), as 
user Joe Average might not find/understand it in the long list of 
options (or the options are simply hidden behind some script - but if 
a script is executed we have lost already anyway).

For (a2) the application should put playlist muxers on the blacklist, if 
not required for normal usage. When being extra cautious this could be 
the library option's default.

Now (b) is the biggest part of the security breach. Uploading 
non-verified binary data to unknown third parties is bad. That's the 
reason why Joe Average should also opt out of all that telemetry or 
crash-report uploading done by other applications.

But if Joe is willing to upload some data which has been generated via 
some untrusted commands, he is not far from uploading other readable 
local files anyway - I see no meaningful way to prevent that from code 
within FFmpeg.

Regards,
Tobias
Michael Niedermayer June 1, 2017, 10:13 a.m. UTC | #14
On Thu, Jun 01, 2017 at 11:02:09AM +0200, Tobias Rapp wrote:
> On 31.05.2017 18:33, Michael Niedermayer wrote:
> >On Wed, May 31, 2017 at 05:18:57PM +0200, Tobias Rapp wrote:
> >>[...]
> >
> >"--disable-protocol" does not allow fixing this, the vulnerability
> >only needs the file protocol ultimatly.
> >
> >similarly protocol_whitelist only helps if "file" is not on it,
> >no other protocol is really required for this.
> >
> >I just confirmed the exploit works with
> >-protocol_whitelist file,crypto
> >
> >sandboxing is the awnser for automated transcoding services but
> >the average joe end user cannot use sandboxing
> >
> >What do you suggest ?
> 
> Well as far as I understand the user must

> (a1) be tricked into opening a playlist file with FFmpeg
> or
> (a2) some software based on FFmpeg libraries, then

Yes, though
a1/a2 are required for any attack involving input data. The user
must open a file/stream provided by an attacker.
She doesn't know it's a playlist; in fact, the exploit we have is not
identifiable as a playlist, neither by its file extension nor by the
file tool.


> (b) be tricked into uploading the output file to a server under
> control of the attacker.

Depending on what goal the attacker has, half of this is not needed.

For example, if the attacker just wants to harm his victim, he does
not need access to the file. Making her spread her private keys or
passwords visibly in the metadata or another part of the videos is all
that's needed.

For another attack, the attacker just needs to upload the exploit,
well covered up, at various random places. Random people downloading
the videos will then notice that the files are unexpectedly large and
unwieldy for the quality; some will transcode them and some will share
them, especially when the video is a bit "viral".
The shared videos would contain their private keys.
At this point the attacker could, with some luck, just google for the
video title and collect the private keys of random people from the
resulting files. (For this the attacker would of course not place
the private keys in the metadata visibly, but put them elsewhere,
where they require some elaborate processing to extract, so they
aren't found by anyone else.)
That's one potential attack with this vulnerability that doesn't
require the more difficult step of getting a specific individual to
do something. I very much doubt it's the only or worst possible attack.


> 
> Adding another command-line argument will not help much for (a1) as
> user Joe Average might not find/understand it in the long list of
> options

The patch causes a note to be printed pointing to the option if it is
the cause of a file being rejected.
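Reduced to its control flow, the gate the patch adds to open_url() is a three-way branch on the protocol name. The following is a simplified, self-contained sketch of that branch; the function name and return values are illustrative (the real code returns AVERROR_INVALIDDATA and logs through av_log), but the logic and the printed note mirror the patch.

```c
#include <stdio.h>
#include <string.h>

/* Simplified model of the patch's check: http(s) always passes,
 * file passes only when allow_file is set, everything else is
 * rejected. Returns 0 on success, -1 on rejection. */
int check_hls_proto(const char *proto_name, int allow_file)
{
    if (!strncmp(proto_name, "http", 4))
        return 0;                      /* http and https are allowed */

    if (!strncmp(proto_name, "file", 4)) {
        if (!allow_file) {
            /* mirrors the patch's av_log() note pointing at the option */
            fprintf(stderr, "Local file access is disabled by default "
                    "for security reasons. To override this set "
                    "hls_allow_file to 1\n");
            return -1;
        }
        return 0;
    }

    return -1;                         /* any other protocol */
}
```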


> (or the options are simply hidden behind some script - but
> in case a script is executed we have lost already anyway).

We are not responsible for security issues in 3rd-party scripts,
nor are we really able to do anything about them.
If a script does something insecure, that's the script author's fault.
There are also scripts that randomly set things world-writable
to fix permission issues ...

But we are responsible for security in our software, that is FFmpeg.


> 
> For (a2) the application should put playlist muxers on the
> blacklist, if not required for normal usage. When being extra
> cautious this could be the library option's default.

There is a protocol blacklist but no demuxer blacklist.
Even if we add such a list, it would be missing in all releases.


> 
> Now (b) is the biggest part of the security breach.

> Uploading
> non-verified binary data to unknown third parties is bad. Thats the
> reason why Joe Average should also opt-out to all that telemetry or
> crash report uploading done by other applications.

I fully agree, but there's no way Joe Average can verify an output
file to be clean before upload. Nor does Joe (even if he is quite
smart) expect the video he transcoded to contain bits of private
local files.

This vulnerability is something that we need to fix.
Do you have an idea that's better than what the patch in this thread
does?

thanks

[...]
Michael Niedermayer June 1, 2017, 11:49 a.m. UTC | #15
On Thu, Jun 01, 2017 at 12:13:35PM +0200, Michael Niedermayer wrote:
> On Thu, Jun 01, 2017 at 11:02:09AM +0200, Tobias Rapp wrote:
> > On 31.05.2017 18:33, Michael Niedermayer wrote:
> > >[...]
> >
> > Well as far as I understand the user must
>
> > (a1) be tricked into opening a playlist file with FFmpeg
> > or
> > (a2) some software based on FFmpeg libraries, then
>
> yes, though
> a1/a2 are required for any attack involving input data. The user
> must open a file/stream provided by an attacker.
> She doesnt know its a playlist, in fact the exploit we have is not
> identifyable as a playlist, neither by file extension nor by the
> file tool.
[...]
> >
> > For (a2) the application should put playlist muxers on the
> > blacklist, if not required for normal usage. When being extra
> > cautious this could be the library option's default.
> 
> There is a protocol blacklist but no demuxer blacklist.
> even if we add such list, it would be missing in all releases

I've fixed some bugs and inconsistencies in the whitelists and then
implemented something very similar. Patch sent for that.

That might work with releases up to 2.5
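A demuxer whitelist has one wrinkle a protocol whitelist does not: FFmpeg demuxer names can themselves be comma-separated alias lists (e.g. "mov,mp4,m4a,3gp,3g2,mj2"), so both sides have to be tokenized. A standalone sketch of such a match follows; the helper names are hypothetical (the real matching in FFmpeg is done by the avstring helpers in libavutil), and "applehttp" is used here purely as an illustrative alias.

```c
#include <string.h>

/* Return 1 if the token [tok, tok+len) appears as a complete entry
 * in the comma-separated list, 0 otherwise. */
int token_in_list(const char *tok, size_t len, const char *list)
{
    const char *p = list;
    while (p) {
        if (!strncmp(p, tok, len) && (p[len] == ',' || p[len] == '\0'))
            return 1;
        p = strchr(p, ',');
        if (p)
            p++;
    }
    return 0;
}

/* A demuxer name may itself be a comma-separated alias list, so the
 * whitelist matches if any alias of the name is whitelisted. */
int demuxer_in_whitelist(const char *name, const char *whitelist)
{
    const char *p = name;
    while (p) {
        const char *end = strchr(p, ',');
        size_t len = end ? (size_t)(end - p) : strlen(p);
        if (token_in_list(p, len, whitelist))
            return 1;
        p = end ? end + 1 : NULL;
    }
    return 0;
}
```

The alias handling is why a naive exact-string comparison is not enough for a format whitelist, while it would suffice for protocols.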



[...]
diff mbox

Patch

diff --git a/libavformat/hls.c b/libavformat/hls.c
index 4b8fb19a52..3a82855ab7 100644
--- a/libavformat/hls.c
+++ b/libavformat/hls.c
@@ -204,6 +204,7 @@  typedef struct HLSContext {
     char *http_proxy;                    ///< holds the address of the HTTP proxy server
     AVDictionary *avio_opts;
     int strict_std_compliance;
+    int allow_file;
 } HLSContext;
 
 static int read_chomp_line(AVIOContext *s, char *buf, int maxlen)
@@ -618,8 +619,16 @@  static int open_url(AVFormatContext *s, AVIOContext **pb, const char *url,
         return AVERROR_INVALIDDATA;
 
     // only http(s) & file are allowed
-    if (!av_strstart(proto_name, "http", NULL) && !av_strstart(proto_name, "file", NULL))
+    if (av_strstart(proto_name, "http", NULL))
+        ;
+    else if (av_strstart(proto_name, "file", NULL)) {
+        if (!c->allow_file) {
+            av_log(s, AV_LOG_ERROR, "Local file access is disabled by default for security reasons. To override this set hls_allow_file to 1\n");
+            return AVERROR_INVALIDDATA;
+        }
+    } else
         return AVERROR_INVALIDDATA;
+
     if (!strncmp(proto_name, url, strlen(proto_name)) && url[strlen(proto_name)] == ':')
         ;
     else if (av_strstart(url, "crypto", NULL) && !strncmp(proto_name, url + 7, strlen(proto_name)) && url[7 + strlen(proto_name)] == ':')
@@ -2134,6 +2143,8 @@  static int hls_probe(AVProbeData *p)
 static const AVOption hls_options[] = {
     {"live_start_index", "segment index to start live streams at (negative values are from the end)",
         OFFSET(live_start_index), AV_OPT_TYPE_INT, {.i64 = -3}, INT_MIN, INT_MAX, FLAGS},
+    {"hls_allow_file", "Allow to refer to files instead of http only",
+        OFFSET(allow_file), AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, FLAGS},
     {NULL}
 };
 
diff --git a/tests/fate/avformat.mak b/tests/fate/avformat.mak
index 82a531c7a5..0ca907ab7f 100644
--- a/tests/fate/avformat.mak
+++ b/tests/fate/avformat.mak
@@ -119,12 +119,12 @@  tests/data/adts-to-mkv-cated-%.mkv: tests/data/adts-to-mkv-header.mkv tests/data
 
 FATE_SEGMENT += fate-segment-mp4-to-ts
 fate-segment-mp4-to-ts: tests/data/mp4-to-ts.m3u8
-fate-segment-mp4-to-ts: CMD = framecrc -flags +bitexact -i $(TARGET_PATH)/tests/data/mp4-to-ts.m3u8 -c copy
+fate-segment-mp4-to-ts: CMD = framecrc -flags +bitexact -hls_allow_file 1 -i $(TARGET_PATH)/tests/data/mp4-to-ts.m3u8 -c copy
 FATE_SEGMENT-$(call ALLYES, MOV_DEMUXER H264_MP4TOANNEXB_BSF MPEGTS_MUXER MATROSKA_DEMUXER SEGMENT_MUXER HLS_DEMUXER) += fate-segment-mp4-to-ts
 
 FATE_SEGMENT += fate-segment-adts-to-mkv
 fate-segment-adts-to-mkv: tests/data/adts-to-mkv.m3u8
-fate-segment-adts-to-mkv: CMD = framecrc -flags +bitexact -i $(TARGET_PATH)/tests/data/adts-to-mkv.m3u8 -c copy
+fate-segment-adts-to-mkv: CMD = framecrc -flags +bitexact -hls_allow_file 1 -i $(TARGET_PATH)/tests/data/adts-to-mkv.m3u8 -c copy
 fate-segment-adts-to-mkv: REF = $(SRC_PATH)/tests/ref/fate/segment-adts-to-mkv-header-all
 FATE_SEGMENT-$(call ALLYES, AAC_DEMUXER AAC_ADTSTOASC_BSF MATROSKA_MUXER MATROSKA_DEMUXER SEGMENT_MUXER HLS_DEMUXER) += fate-segment-adts-to-mkv
 
diff --git a/tests/fate/filter-audio.mak b/tests/fate/filter-audio.mak
index 5d15b31e0b..0c229ccc51 100644
--- a/tests/fate/filter-audio.mak
+++ b/tests/fate/filter-audio.mak
@@ -150,7 +150,7 @@  tests/data/hls-list.m3u8: ffmpeg$(PROGSSUF)$(EXESUF) | tests/data
 
 FATE_AFILTER-$(call ALLYES, HLS_DEMUXER MPEGTS_MUXER MPEGTS_DEMUXER AEVALSRC_FILTER LAVFI_INDEV MP2FIXED_ENCODER) += fate-filter-hls
 fate-filter-hls: tests/data/hls-list.m3u8
-fate-filter-hls: CMD = framecrc -flags +bitexact -i $(TARGET_PATH)/tests/data/hls-list.m3u8
+fate-filter-hls: CMD = framecrc -flags +bitexact -hls_allow_file 1 -i $(TARGET_PATH)/tests/data/hls-list.m3u8
 
 tests/data/hls-list-append.m3u8: TAG = GEN
 tests/data/hls-list-append.m3u8: ffmpeg$(PROGSSUF)$(EXESUF) | tests/data
@@ -164,7 +164,7 @@  tests/data/hls-list-append.m3u8: ffmpeg$(PROGSSUF)$(EXESUF) | tests/data
 
 FATE_AFILTER-$(call ALLYES, HLS_DEMUXER MPEGTS_MUXER MPEGTS_DEMUXER AEVALSRC_FILTER LAVFI_INDEV MP2FIXED_ENCODER) += fate-filter-hls-append
 fate-filter-hls-append: tests/data/hls-list-append.m3u8
-fate-filter-hls-append: CMD = framecrc -flags +bitexact -i $(TARGET_PATH)/tests/data/hls-list-append.m3u8 -af asetpts=RTCTIME
+fate-filter-hls-append: CMD = framecrc -flags +bitexact -hls_allow_file 1 -i $(TARGET_PATH)/tests/data/hls-list-append.m3u8 -af asetpts=RTCTIME
 
 FATE_AMIX += fate-filter-amix-simple
 fate-filter-amix-simple: CMD = ffmpeg -filter_complex amix -i $(SRC) -ss 3 -i $(SRC1) -f f32le -