Message ID | 20210627135307.14008-1-michael@niedermayer.cc
---|---
State | New |
Series | [FFmpeg-devel] avcodec/mjpegdec: Adjustable discard threshold
Context | Check | Description
---|---|---
andriy/x86_make | success | Make finished |
andriy/x86_make_fate | success | Make fate finished |
andriy/PPC64_make | success | Make finished |
andriy/PPC64_make_fate | success | Make fate finished |
On Sun, 27. Jun 15:53, Michael Niedermayer wrote:
> Fixes regression
> Fixes: last frame of Ticket9287
>
> Analysed-by: Andriy Gelman
> Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
> ---
>  libavcodec/mjpegdec.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/libavcodec/mjpegdec.c b/libavcodec/mjpegdec.c
> index 02a987fd0c..fbc94c46d7 100644
> --- a/libavcodec/mjpegdec.c
> +++ b/libavcodec/mjpegdec.c
> @@ -338,7 +338,7 @@ int ff_mjpeg_decode_sof(MJpegDecodeContext *s)
>      av_log(s->avctx, AV_LOG_DEBUG, "sof0: picture: %dx%d\n", width, height);
>      if (av_image_check_size(width, height, 0, s->avctx) < 0)
>          return AVERROR_INVALIDDATA;
> -    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) > s->buf_size * 4LL)
> +    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) * (100LL - s->avctx->discard_damaged_percentage) > s->buf_size * 4LL * 100)
>          return AVERROR_INVALIDDATA;
>
>      nb_components = get_bits(&s->gb, 8);

Would a check for discard_damaged_percentage be more accurate in the mjpeg_decode_scan() function?
Because mapping buf_size to the number of decoded pixels seems only an estimate at this point.

As I understand it, the goal of this check was initially to prevent timeouts from the fuzzer. The
timeouts were caused because there were lots of SOF markers with large frames (so there were many
calls to ff_get_buffer()), without SOS markers being called.

If the goal is to prevent timeouts, would it be better to somehow delay calling ff_get_buffer()
until we actually start to decode the pixels?
I think this was actually done in c8197f73e684b0edc450f3dc2b2b4b3fb9dedd0d, but was reverted recently.
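For illustration, below is a minimal, hedged sketch of the deferred-allocation idea raised in the question above. The ToyDecoder type and the toy_* functions are invented for this example and are not FFmpeg code; the real decoder's control flow is considerably more involved.

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Illustrative sketch only -- not the actual FFmpeg code paths or API.
 * The idea: record the geometry at SOF time and defer the frame
 * allocation until pixel data (an SOS scan) actually arrives, so a
 * stream of SOF markers alone never triggers allocations.
 */
typedef struct ToyDecoder {
    int      width, height;
    uint8_t *frame;              /* stays NULL until the first scan */
} ToyDecoder;

static int toy_handle_sof(ToyDecoder *d, int width, int height)
{
    d->width  = width;           /* only remember the dimensions here */
    d->height = height;
    return 0;                    /* no buffer allocation yet          */
}

static int toy_handle_sos(ToyDecoder *d)
{
    if (!d->frame) {             /* allocate lazily on the first scan */
        d->frame = calloc((size_t)d->width * d->height, 1);
        if (!d->frame)
            return -1;
    }
    /* ... decode coefficients into d->frame here ... */
    return 0;
}
```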
On Sun, Jun 27, 2021 at 01:38:01PM -0400, Andriy Gelman wrote:
> On Sun, 27. Jun 15:53, Michael Niedermayer wrote:
> > Fixes regression
> > Fixes: last frame of Ticket9287
> >
> > Analysed-by: Andriy Gelman
> > Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
> > ---
> >  libavcodec/mjpegdec.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/libavcodec/mjpegdec.c b/libavcodec/mjpegdec.c
> > index 02a987fd0c..fbc94c46d7 100644
> > --- a/libavcodec/mjpegdec.c
> > +++ b/libavcodec/mjpegdec.c
> > @@ -338,7 +338,7 @@ int ff_mjpeg_decode_sof(MJpegDecodeContext *s)
> >      av_log(s->avctx, AV_LOG_DEBUG, "sof0: picture: %dx%d\n", width, height);
> >      if (av_image_check_size(width, height, 0, s->avctx) < 0)
> >          return AVERROR_INVALIDDATA;
> > -    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) > s->buf_size * 4LL)
> > +    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) * (100LL - s->avctx->discard_damaged_percentage) > s->buf_size * 4LL * 100)
> >          return AVERROR_INVALIDDATA;
> >
> >      nb_components = get_bits(&s->gb, 8);
>
> Would a check for discard_damaged_percentage be more accurate in the mjpeg_decode_scan() function?
> Because mapping buf_size to the number of decoded pixels seems only an estimate at
> this point.

yes

> As I understand it, the goal of this check was initially to prevent timeouts from
> the fuzzer. The timeouts were caused because there were lots of SOF markers with
> large frames (so there were many calls to ff_get_buffer()), without SOS markers being called.
>
> If the goal is to prevent timeouts, would it be better to somehow delay calling ff_get_buffer()
> until we actually start to decode the pixels?
> I think this was actually done in c8197f73e684b0edc450f3dc2b2b4b3fb9dedd0d, but was reverted recently.

The timeout thing is a bit more annoying: doing the allocation only once the
first pixel is encountered doesn't really fix it. It's not wrong to do that if
it were easy and clean to do, but it seems this isn't so clean and easy.

The problem is if we consider a maximum-sized frame with just one coded block
and then repeat that; I would guess that's not much better. There's still O(1)
data causing O(width*height) computations. But when we limit the input data to
width*height * constant, then at least in theory we are no longer causing
O(width*height) computations with O(1) input.

That's what that line of code is really trying to achieve: it tries to ensure
that there is at least a reasonable fragment of a valid frame before spending
CPU and memory. If we check later it's much more accurate, but we have then
already spent the CPU and memory.

Thanks

[...]
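To make the complexity argument concrete, here is a minimal sketch in the spirit of that check. The function name and parameters are assumptions invented for this illustration, not FFmpeg API; only the block count and the inequality mirror the patched condition.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative sketch only -- names are invented, not FFmpeg API.
 * It restates the argument above: before doing O(width*height) work,
 * require that the coded input itself is at least proportional to
 * width*height, so O(1) bytes can no longer trigger that work.
 */
static bool frame_worth_decoding(int width, int height, size_t coded_bytes,
                                 int discard_damaged_percentage)
{
    /* number of 8x8 blocks, as in the check in ff_mjpeg_decode_sof() */
    int64_t blocks = (width + 7) / 8 * (int64_t)((height + 7) / 8);

    /* demand roughly one coded byte per four blocks, relaxed in
     * proportion to how much damage the caller tolerates */
    return (int64_t)coded_bytes * 4 * 100 >=
           blocks * (100 - discard_damaged_percentage);
}
```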
diff --git a/libavcodec/mjpegdec.c b/libavcodec/mjpegdec.c
index 02a987fd0c..fbc94c46d7 100644
--- a/libavcodec/mjpegdec.c
+++ b/libavcodec/mjpegdec.c
@@ -338,7 +338,7 @@ int ff_mjpeg_decode_sof(MJpegDecodeContext *s)
     av_log(s->avctx, AV_LOG_DEBUG, "sof0: picture: %dx%d\n", width, height);
     if (av_image_check_size(width, height, 0, s->avctx) < 0)
         return AVERROR_INVALIDDATA;
-    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) > s->buf_size * 4LL)
+    if (s->buf_size && (width + 7) / 8 * ((height + 7) / 8) * (100LL - s->avctx->discard_damaged_percentage) > s->buf_size * 4LL * 100)
         return AVERROR_INVALIDDATA;
 
     nb_components = get_bits(&s->gb, 8);
Fixes regression
Fixes: last frame of Ticket9287

Analysed-by: Andriy Gelman
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
---
 libavcodec/mjpegdec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
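As a hedged worked example of what the new condition demands, the small standalone program below recomputes the threshold for an assumed 1920x1080 frame and an assumed discard_damaged_percentage of 95. Both values are illustrative and not taken from the patch or the ticket.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int width = 1920, height = 1080;        /* assumed frame size   */
    int discard_damaged_percentage = 95;    /* assumed option value */

    /* same 8x8 block count as the patched check in ff_mjpeg_decode_sof() */
    int64_t blocks = (width + 7) / 8 * (int64_t)((height + 7) / 8);

    /* the SOF is rejected when
     *   blocks * (100 - discard_damaged_percentage) > buf_size * 4 * 100,
     * i.e. buf_size must be at least roughly blocks * (100 - pct) / 400 */
    int64_t min_buf_size =
        blocks * (100 - discard_damaged_percentage) / 400;

    printf("8x8 blocks: %" PRId64 ", required buf_size: ~%" PRId64 " bytes\n",
           blocks, min_buf_size);
    return 0;
}
```

With these assumed numbers the program reports 32400 blocks and a required buf_size of roughly 405 bytes: even a small fraction of a real frame's coded data passes the check, while a bare SOF marker with almost no payload is rejected before any buffer is allocated.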