Message ID: 20191209222604.28920-1-andreas.rheinhardt@gmail.com
State: New
On Tue, Dec 10, 2019 at 11:12 PM Andriy Gelman <andriy.gelman@gmail.com> wrote:
> On Mon, 09. Dec 23:25, Andreas Rheinhardt wrote:
> > While CBS itself uses size_t for sizes, it relies on other APIs that use
> > int for their sizes; in particular, AVBuffer uses int for their size
> > parameters and so does GetBitContext with their number of bits. While
> > CBS aims to be a safe API, the checks it employed were not sufficient to
> > prevent overflows: E.g. if the size of a unit was > UINT_MAX / 8, 8 *
> > said size may be truncated to a positive integer before being passed to
> > init_get_bits() in which case its return value would not indicate an
> > error.
> >
> > These checks have been improved to really capture these kinds of errors;
> > furthermore, although the sizes are still size_t, they are now de-facto
> > restricted to 0..INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE.
> >
> > Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
> > ---
> > The check in cbs_insert_unit() can currently not be triggered, because
> > av_malloc_array makes sure that it doesn't allocate more than INT_MAX;
> > so the allocation will fail long before we get even close to INT_MAX.
> >
> > av1 and H.26x have not been changed, because their split functions
> > already check the size (in case of H.264 and H.265 this happens in
> > ff_h2645_packet_split()).
> >
> > It would btw be possible to open the GetBitContext generically. The
> > read_unit function would then get the already opened GetBitContext
> > as a parameter.
> >
> >  libavcodec/cbs.c       | 6 ++++++
> >  libavcodec/cbs_jpeg.c  | 2 +-
> >  libavcodec/cbs_mpeg2.c | 2 +-
> >  libavcodec/cbs_vp9.c   | 2 +-
> >  4 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/libavcodec/cbs.c b/libavcodec/cbs.c
> > index 0badb192d9..805049404b 100644
> > --- a/libavcodec/cbs.c
> > +++ b/libavcodec/cbs.c
> > @@ -206,6 +206,9 @@ static int cbs_fill_fragment_data(CodedBitstreamContext *ctx,
> >  {
> >      av_assert0(!frag->data && !frag->data_ref);
> >
> > +    if (size > INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE)
> > +        return AVERROR(ERANGE);
> > +
> >      frag->data_ref =
> >          av_buffer_alloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
> >      if (!frag->data_ref)
> > @@ -693,6 +696,9 @@ static int cbs_insert_unit(CodedBitstreamContext *ctx,
> >          memmove(units + position + 1, units + position,
> >                  (frag->nb_units - position) * sizeof(*units));
> >      } else {
> > +        if (frag->nb_units == INT_MAX)
> > +            return AVERROR(ERANGE);
> > +
>
> Why did you decide to include this? As you say in your comments it cannot
> be triggered.

It can currently not be triggered; but the parameters of av_malloc_array
are size_t and a future version of said function might remove the 2 GB
allocation limit.

- Andreas
On 09/12/2019 22:25, Andreas Rheinhardt wrote:
> While CBS itself uses size_t for sizes, it relies on other APIs that use
> int for their sizes; in particular, AVBuffer uses int for their size
> parameters and so does GetBitContext with their number of bits. While
> CBS aims to be a safe API, the checks it employed were not sufficient to
> prevent overflows: E.g. if the size of a unit was > UINT_MAX / 8, 8 *
> said size may be truncated to a positive integer before being passed to
> init_get_bits() in which case its return value would not indicate an
> error.
>
> These checks have been improved to really capture these kinds of errors;
> furthermore, although the sizes are still size_t, they are now de-facto
> restricted to 0..INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE.
>
> Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
> ---
> The check in cbs_insert_unit() can currently not be triggered, because
> av_malloc_array makes sure that it doesn't allocate more than INT_MAX;
> so the allocation will fail long before we get even close to INT_MAX.
>
> av1 and H.26x have not been changed, because their split functions
> already check the size (in case of H.264 and H.265 this happens in
> ff_h2645_packet_split()).
>
> It would btw be possible to open the GetBitContext generically. The
> read_unit function would then get the already opened GetBitContext
> as a parameter.
>
>  libavcodec/cbs.c       | 6 ++++++
>  libavcodec/cbs_jpeg.c  | 2 +-
>  libavcodec/cbs_mpeg2.c | 2 +-
>  libavcodec/cbs_vp9.c   | 2 +-
>  4 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/libavcodec/cbs.c b/libavcodec/cbs.c
> index 0badb192d9..805049404b 100644
> --- a/libavcodec/cbs.c
> +++ b/libavcodec/cbs.c
> @@ -206,6 +206,9 @@ static int cbs_fill_fragment_data(CodedBitstreamContext *ctx,
>  {
>      av_assert0(!frag->data && !frag->data_ref);
>
> +    if (size > INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE)
> +        return AVERROR(ERANGE);

For this and the following patch I wonder if it would be nicer to pick
sensible upper bounds for fragments (something like 2^30 B total in at most
2^20 units?), name them (CBS_MAX_DATA_SIZE, CBS_MAX_UNITS?) and then use
those in all checks?

No real case should get anywhere near these bounds, and indeed anything
which even gets close was likely crafted by a malicious adversary
specifically to do so.

> +
>      frag->data_ref =
>          av_buffer_alloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
>      if (!frag->data_ref)
> @@ -693,6 +696,9 @@ static int cbs_insert_unit(CodedBitstreamContext *ctx,
>          memmove(units + position + 1, units + position,
>                  (frag->nb_units - position) * sizeof(*units));
>      } else {
> +        if (frag->nb_units == INT_MAX)
> +            return AVERROR(ERANGE);
> +
>          units = av_malloc_array(frag->nb_units + 1, sizeof(*units));
>          if (!units)
>              return AVERROR(ENOMEM);
> diff --git a/libavcodec/cbs_jpeg.c b/libavcodec/cbs_jpeg.c
> index b189cbd9b7..2bb6c8d18c 100644
> --- a/libavcodec/cbs_jpeg.c
> +++ b/libavcodec/cbs_jpeg.c
> @@ -246,7 +246,7 @@ static int cbs_jpeg_read_unit(CodedBitstreamContext *ctx,
>      GetBitContext gbc;
>      int err;
>
> -    err = init_get_bits(&gbc, unit->data, 8 * unit->data_size);
> +    err = init_get_bits8(&gbc, unit->data, unit->data_size);
>      if (err < 0)
>          return err;
>
> diff --git a/libavcodec/cbs_mpeg2.c b/libavcodec/cbs_mpeg2.c
> index 13d871cc89..255f033734 100644
> --- a/libavcodec/cbs_mpeg2.c
> +++ b/libavcodec/cbs_mpeg2.c
> @@ -227,7 +227,7 @@ static int cbs_mpeg2_read_unit(CodedBitstreamContext *ctx,
>      GetBitContext gbc;
>      int err;
>
> -    err = init_get_bits(&gbc, unit->data, 8 * unit->data_size);
> +    err = init_get_bits8(&gbc, unit->data, unit->data_size);
>      if (err < 0)
>          return err;
>
> diff --git a/libavcodec/cbs_vp9.c b/libavcodec/cbs_vp9.c
> index 42e4dcf5ac..f6cfaa3b36 100644
> --- a/libavcodec/cbs_vp9.c
> +++ b/libavcodec/cbs_vp9.c
> @@ -488,7 +488,7 @@ static int cbs_vp9_read_unit(CodedBitstreamContext *ctx,
>      GetBitContext gbc;
>      int err, pos;
>
> -    err = init_get_bits(&gbc, unit->data, 8 * unit->data_size);
> +    err = init_get_bits8(&gbc, unit->data, unit->data_size);
>      if (err < 0)
>          return err;

There are more of these init_get_bits(..., 8 * something) hanging around.
Perhaps change all of them as one patch, even if there isn't any danger of
overflow?

Thanks,

- Mark
diff --git a/libavcodec/cbs.c b/libavcodec/cbs.c
index 0badb192d9..805049404b 100644
--- a/libavcodec/cbs.c
+++ b/libavcodec/cbs.c
@@ -206,6 +206,9 @@ static int cbs_fill_fragment_data(CodedBitstreamContext *ctx,
 {
     av_assert0(!frag->data && !frag->data_ref);
 
+    if (size > INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE)
+        return AVERROR(ERANGE);
+
     frag->data_ref =
         av_buffer_alloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
     if (!frag->data_ref)
@@ -693,6 +696,9 @@ static int cbs_insert_unit(CodedBitstreamContext *ctx,
         memmove(units + position + 1, units + position,
                 (frag->nb_units - position) * sizeof(*units));
     } else {
+        if (frag->nb_units == INT_MAX)
+            return AVERROR(ERANGE);
+
         units = av_malloc_array(frag->nb_units + 1, sizeof(*units));
         if (!units)
             return AVERROR(ENOMEM);
diff --git a/libavcodec/cbs_jpeg.c b/libavcodec/cbs_jpeg.c
index b189cbd9b7..2bb6c8d18c 100644
--- a/libavcodec/cbs_jpeg.c
+++ b/libavcodec/cbs_jpeg.c
@@ -246,7 +246,7 @@ static int cbs_jpeg_read_unit(CodedBitstreamContext *ctx,
     GetBitContext gbc;
     int err;
 
-    err = init_get_bits(&gbc, unit->data, 8 * unit->data_size);
+    err = init_get_bits8(&gbc, unit->data, unit->data_size);
     if (err < 0)
         return err;
 
diff --git a/libavcodec/cbs_mpeg2.c b/libavcodec/cbs_mpeg2.c
index 13d871cc89..255f033734 100644
--- a/libavcodec/cbs_mpeg2.c
+++ b/libavcodec/cbs_mpeg2.c
@@ -227,7 +227,7 @@ static int cbs_mpeg2_read_unit(CodedBitstreamContext *ctx,
     GetBitContext gbc;
     int err;
 
-    err = init_get_bits(&gbc, unit->data, 8 * unit->data_size);
+    err = init_get_bits8(&gbc, unit->data, unit->data_size);
     if (err < 0)
         return err;
 
diff --git a/libavcodec/cbs_vp9.c b/libavcodec/cbs_vp9.c
index 42e4dcf5ac..f6cfaa3b36 100644
--- a/libavcodec/cbs_vp9.c
+++ b/libavcodec/cbs_vp9.c
@@ -488,7 +488,7 @@ static int cbs_vp9_read_unit(CodedBitstreamContext *ctx,
     GetBitContext gbc;
     int err, pos;
 
-    err = init_get_bits(&gbc, unit->data, 8 * unit->data_size);
+    err = init_get_bits8(&gbc, unit->data, unit->data_size);
     if (err < 0)
         return err;
While CBS itself uses size_t for sizes, it relies on other APIs that use
int for their sizes; in particular, AVBuffer uses int for their size
parameters and so does GetBitContext with their number of bits. While
CBS aims to be a safe API, the checks it employed were not sufficient to
prevent overflows: E.g. if the size of a unit was > UINT_MAX / 8, 8 *
said size may be truncated to a positive integer before being passed to
init_get_bits() in which case its return value would not indicate an
error.

These checks have been improved to really capture these kinds of errors;
furthermore, although the sizes are still size_t, they are now de-facto
restricted to 0..INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
---
The check in cbs_insert_unit() can currently not be triggered, because
av_malloc_array makes sure that it doesn't allocate more than INT_MAX;
so the allocation will fail long before we get even close to INT_MAX.

av1 and H.26x have not been changed, because their split functions
already check the size (in case of H.264 and H.265 this happens in
ff_h2645_packet_split()).

It would btw be possible to open the GetBitContext generically. The
read_unit function would then get the already opened GetBitContext
as a parameter.

 libavcodec/cbs.c       | 6 ++++++
 libavcodec/cbs_jpeg.c  | 2 +-
 libavcodec/cbs_mpeg2.c | 2 +-
 libavcodec/cbs_vp9.c   | 2 +-
 4 files changed, 9 insertions(+), 3 deletions(-)