Message ID | 20161211164000.2858-1-george@nsup.org |
---|---|
State | New |
On Sun, Dec 11, 2016 at 05:39:58PM +0100, Nicolas George wrote:
> This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.
>
> It was rushed and not properly designed.

I strongly object to this patchset; we need these limit parameters for
fuzzing to be practical.

[...]
Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> we need these limit parameters for fuzzing to be practical.
Maybe, but we can take a few days to do it properly instead of rushing a
badly-designed "fix" that does not fix anything and then being forced
into a maintenance nightmare because of it.
First step: these commits and options must go away immediately;
Second step: you show us exactly what problem you are trying to solve.
Third step: we discuss and implement a proper solution.
Regards,
On Sun, Dec 11, 2016 at 05:47:34PM +0100, Nicolas George wrote:
> Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> > we need these limit parameters for fuzzing to be practical.
>
> Maybe, but we can take a few days to do it properly instead of rushing a
> badly-designed "fix" that does not fix anything and then being forced
> into a maintenance nightmare because of it.
>
> First step: these commits and options must go away immediately;

Well, we need these options for efficient fuzzing.

> Second step: you show us exactly what problem you are trying to solve.

There are multiple independent things these options solve. I believe I
explained all of them already, but:

Fuzzers search for and find issues like out-of-array accesses, but also
hangs and OOM conditions.
Fuzzers find hangs and OOM conditions by executing code until it runs
out of memory or reaches a timeout.
These cases are then reported and need to be checked by a human (that
being me in practice, it seems).
At the moment almost all reported issues are false positives, and going
through them takes significant amounts of time. The max_pixels parameter
should fix this, as all the cases I looked at were hitting out-of-memory
or timeout because of very high resolutions.

The 2nd issue this fixes is a CVE that was reported about a crafted file
with, I believe, 26k streams hitting OOM.
If you believe this CVE is invalid and not a security issue, I am totally
the wrong person to discuss that with. But I like to fix issues instead
of arguing about why they are or are not a security issue.

The 3rd is that, as a user, I like to be able to set limits on pixels
and streams to limit resource use and avoid crashes.

The 4th is that it seems to me everyone else on oss-fuzz can avoid this
issue, so why is something so basic so hard on ffmpeg-devel?

It is very likely there are more problems this fixes; maybe a browser
loading an image that it knows should be 256x256? I don't know, but why
shouldn't the user be able to limit the number of pixels?

> Third step: we discuss and implement a proper solution.

I already did that, and the previously applied solution is the result.
If you (singular or plural) dislike how it's done, no one stops you from
proposing and implementing a better solution.
As said, from my point of view the solution is what is needed, and I've
spent many times the time on this (mostly discussions) that I would have
wanted to.

[...]
Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> Well, we need these options for efficient fuzzing.

We need SOMETHING, maybe, but not specifically THESE.

> There are multiple independent things these options solve. I believe I
> explained all of them already, but:

You were vague. We only start having details now.

> Fuzzers search for and find issues like out-of-array accesses, but also
> hangs and OOM conditions.
> Fuzzers find hangs and OOM conditions by executing code until it runs
> out of memory or reaches a timeout.
> These cases are then reported and need to be checked by a human (that
> being me in practice, it seems).
> At the moment almost all reported issues are false positives, and going
> through them takes significant amounts of time. The max_pixels parameter
> should fix this, as all the cases I looked at were hitting out-of-memory
> or timeout because of very high resolutions.

Then run the fuzzers with a low address space limit. Problem solved.

> The 2nd issue this fixes is a CVE that was reported about a crafted file
> with, I believe, 26k streams hitting OOM.
> If you believe this CVE is invalid and not a security issue, I am totally
> the wrong person to discuss that with. But I like to fix issues instead
> of arguing about why they are or are not a security issue.

Open a ticket, attach the file, we can all discuss what the proper fix
is.

> The 4th is that it seems to me everyone else on oss-fuzz can avoid this
> issue, so why is something so basic so hard on ffmpeg-devel?

I do not know. I suppose this is just a misrepresentation of the
reality.

> The 3rd is that, as a user, I like to be able to set limits on pixels
> and streams to limit resource use and avoid crashes.
>
> It is very likely there are more problems this fixes; maybe a browser
> loading an image that it knows should be 256x256? I don't know, but why
> shouldn't the user be able to limit the number of pixels?

Both these paragraphs seem awfully ad-hoc. Nobody ever asked to limit
the number of pixels until you came up with this feature.

> > Third step: we discuss and implement a proper solution.
> I already did that, and the previously applied solution is the result.

It is not a PROPER solution if it brings us into a maintenance
nightmare.

> If you (singular or plural) dislike how it's done, no one stops you from
> proposing and implementing a better solution.

Yes, somebody is stopping us: you, with these badly designed features.

> As said, from my point of view the solution is what is needed, and I've
> spent many times the time on this (mostly discussions) that I would have
> wanted to.

Then please move on and spend your valuable time (without any kind of
sarcasm here, I really believe your time is very valuable to the
project) on something else and let the discussion carry on.

Please forgive me if I am out of line, but I think I can say it because
I consider you a much better hacker than me: it seems to me that seeing
the big picture is not your strongest suit. To give an example, I think
Anton, with his "evil plans", is good at seeing the big picture (and
also has the courage and stamina to make it work).

This is precisely a case where the big picture is needed. We can,
collectively, try to see it and find a solution to all you mentioned
above. But for that, these commits have to go, they are in the way.

Regards,
On 11.12.2016 17:39, Nicolas George wrote:
> This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.

I'm against reverting this.

> It was rushed and not properly designed.

It was tested and works well for its purpose.

Best regards,
Andreas
Le primidi 21 frimaire, an CCXXV, Andreas Cadhalpun a écrit :
> It was tested and works well for its purpose.
Unfortunately, the purpose itself is wrong.
On 11.12.2016 19:27, Nicolas George wrote:
> Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
>> Fuzzers search for and find issues like out-of-array accesses, but also
>> hangs and OOM conditions.
>> Fuzzers find hangs and OOM conditions by executing code until it runs
>> out of memory or reaches a timeout.
>> These cases are then reported and need to be checked by a human (that
>> being me in practice, it seems).
>> At the moment almost all reported issues are false positives, and going
>> through them takes significant amounts of time. The max_pixels parameter
>> should fix this, as all the cases I looked at were hitting out-of-memory
>> or timeout because of very high resolutions.
>
> Then run the fuzzers with a low address space limit. Problem solved.

No, that doesn't solve the problem. It takes much more time until the
address space limit is reached than checking the resolution before
starting to allocate the memory. And setting the memory limit too low
means that actually interesting cases can't be tested.

Also, without options to eliminate common and in general unavoidable
slowness, it becomes much harder to find real hangs among the many
false positives.

Best regards,
Andreas
Le primidi 21 frimaire, an CCXXV, Andreas Cadhalpun a écrit :
> No, that doesn't solve the problem. It takes much more time until the
> address space limit is reached than checking the resolution before
> starting to allocate the memory.

So basically, this option solves the problem of fuzzers running too
slowly by preventing them from doing their job.

Because that is exactly what is happening: by hitting the arbitrary
limit, they stop immediately, and therefore do not test at all the rest
of the code.

This change seems even wronger by the minute.
On 11.12.2016 20:19, Nicolas George wrote:
> Le primidi 21 frimaire, an CCXXV, Andreas Cadhalpun a écrit :
>> No, that doesn't solve the problem. It takes much more time until the
>> address space limit is reached than checking the resolution before
>> starting to allocate the memory.
>
> So basically, this option solves the problem of fuzzers running too
> slowly by preventing them from doing their job.

Most problems can be reproduced with smaller image sizes, like basically
all the issues I've found so far. And testing hundreds of such files
with smaller resolution instead of one with large resolution is just
that much more efficient.

Best regards,
Andreas
On Sun, Dec 11, 2016 at 07:27:26PM +0100, Nicolas George wrote:
> Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> > Well, we need these options for efficient fuzzing.
>
> We need SOMETHING, maybe, but not specifically THESE.
[...]
> > At the moment almost all reported issues are false positives, and going
> > through them takes significant amounts of time. The max_pixels parameter
> > should fix this, as all the cases I looked at were hitting out-of-memory
> > or timeout because of very high resolutions.
>
> Then run the fuzzers with a low address space limit. Problem solved.

You misunderstand.

I want to find code that allocates too much memory where it should not.
To give an example, there was long ago some code like

    len = read()
    for (i < len)
        x = alloc()
        x.whatever = read()
        ...

Straight OOM here with a tiny input file.
Add a simple if(eof) in there and no OOM anymore.
This is just one example; code can look very different.

I want to find these cases and I want to fix them.
But what I get from the fuzzer is files with resolutions like 65123x3210
which go OOM because of valid but silly resolution.
If I can limit the resolution, then I can find the other issues, which I
can fix.
If I cannot limit the resolution, then I cannot fix the other issues, as
they are in a sea of OOMs from large-resolution files.

Nothing you can do at the OS level will get you this effect.

[...]

> > The 2nd issue this fixes is a CVE that was reported about a crafted file
> > with, I believe, 26k streams hitting OOM.
> > If you believe this CVE is invalid and not a security issue, I am totally
> > the wrong person to discuss that with. But I like to fix issues instead
> > of arguing about why they are or are not a security issue.
>
> Open a ticket, attach the file, we can all discuss what the proper fix
> is.

It is exceptionally unprofessional to publish testcases for CVEs before
they have been fixed.
Also, more generally, it is the researcher's choice/job to publish their
work. If you believe it should be put in a ticket, you should ask him,
not a 3rd party like me, to do that.

[...]

> > > Third step: we discuss and implement a proper solution.
> > I already did that, and the previously applied solution is the result.
>
> It is not a PROPER solution if it brings us into a maintenance
> nightmare.

Who is "us", who is affected by this?
I thought I would be maintaining this alone. Is there someone who will
help and work on this?

I certainly don't mind the patches being reverted and replaced by
another implementation. What I do mind is if it's reverted and either
nothing gets implemented or what is implemented doesn't solve the
problems this does.

[...]
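[Editor's note: a minimal, self-contained C sketch of the bug class described
above. The names (Reader, read_u32, Entry, parse) are hypothetical and this is
not FFmpeg code; it only illustrates a loop that trusts a count read from the
file and keeps allocating past end-of-file unless an explicit EOF check is
added.]

#include <stdint.h>
#include <stdlib.h>

typedef struct Reader {
    const uint8_t *buf;
    size_t size, pos;
    int eof;
} Reader;

/* Returns 0 at EOF instead of failing, like many simple byte readers. */
static uint32_t read_u32(Reader *r)
{
    uint32_t v = 0;
    for (int i = 0; i < 4; i++) {
        if (r->pos >= r->size) { r->eof = 1; return v; }
        v = (v << 8) | r->buf[r->pos++];
    }
    return v;
}

typedef struct Entry { uint32_t v; struct Entry *next; } Entry;

static Entry *parse(Reader *r)
{
    uint32_t len  = read_u32(r);  /* attacker-controlled element count */
    Entry   *head = NULL;
    for (uint32_t i = 0; i < len; i++) {
        if (r->eof)               /* the "if (eof)" fix: without it, a tiny   */
            break;                /* file claiming len = 0xffffffff keeps     */
                                  /* allocating entries until the process OOMs */
        Entry *x = malloc(sizeof(*x));
        if (!x)
            break;
        x->v    = read_u32(r);
        x->next = head;
        head    = x;
    }
    return head;
}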
On Sun, 11 Dec 2016 17:44:31 +0100
Michael Niedermayer <michael@niedermayer.cc> wrote:

> On Sun, Dec 11, 2016 at 05:39:58PM +0100, Nicolas George wrote:
> > This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.
> >
> > It was rushed and not properly designed.
>
> I strongly object to this patchset; we need these limit parameters for
> fuzzing to be practical.
>
> [...]

Sounds to me like their approach to fuzzing is in itself not practical
if they can't work around such easily triggered OOM conditions.
On Sun, 11 Dec 2016 17:39:58 +0100
Nicolas George <george@nsup.org> wrote:
> This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.
would you rather the people doing the fuzzing use this feature as a
separate patch so it does not contaminate master?
-compn
On Sun, 11 Dec 2016 22:38:44 -0500
compn <tempn@mi.rr.com> wrote:

> On Sun, 11 Dec 2016 17:39:58 +0100
> Nicolas George <george@nsup.org> wrote:
>
> > This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.
>
> would you rather the people doing the fuzzing use this feature as a
> separate patch so it does not contaminate master?

That would also be a very simple solution for the fuzzers.
Le primidi 21 frimaire, an CCXXV, compn a écrit :
> would you rather the people doing the fuzzing use this feature as a
> separate patch so it does not contaminate master?

I would rather have the people doing the fuzzing stop for a few minutes
to think about what they need exactly and implement it properly.
Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> You misunderstand.
>
> I want to find code that allocates too much memory where it should not.
> To give an example, there was long ago some code like
>
>     len = read()
>     for (i < len)
>         x = alloc()
>         x.whatever = read()
>         ...
>
> Straight OOM here with a tiny input file.
> Add a simple if(eof) in there and no OOM anymore.
> This is just one example; code can look very different.
>
> I want to find these cases and I want to fix them.
> But what I get from the fuzzer is files with resolutions like 65123x3210
> which go OOM because of valid but silly resolution.
> If I can limit the resolution, then I can find the other issues, which I
> can fix.
> If I cannot limit the resolution, then I cannot fix the other issues, as
> they are in a sea of OOMs from large-resolution files.
>
> Nothing you can do at the OS level will get you this effect.

Thanks for explaining.

If I read this correctly, this option does not fix any security issue at
all, it only helps you find other parts of the code that may contain
security issues. Am I right?

> It is exceptionally unprofessional to publish testcases for CVEs before
> they have been fixed.
> Also, more generally, it is the researcher's choice/job to publish their
> work. If you believe it should be put in a ticket, you should ask him,
> not a 3rd party like me, to do that.

This is Free software, secrecy is not a good policy. "I have this patch
that fixes a bug, but I can not show you the bug." Well, if the patch is
straightforward, we can accept it, but if the patch is not
straightforward, we need, collectively, to see the bug.

I can understand that if the bug is a critical 0-day exploit, some
leeway must be accepted. But "there is a file that triggers a crash" is
not enough by far.

> Who is "us", who is affected by this?
> I thought I would be maintaining this alone. Is there someone who will
> help and work on this?

Maintaining "this": it does not work that way. A change in the code puts
a burden on anybody who works on the code, not just the person who wants
the feature.
2016-12-11 19:27 GMT+01:00 Nicolas George <george@nsup.org>:
> To give an example, I think Anton, with his "evil plans", is
> good at seeing the big picture

To me, this seems like exactly the kind of offense that you are
constantly asking others to avoid.

Carl Eugen
On Mon, Dec 12, 2016 at 12:04:05PM +0100, Nicolas George wrote:
> Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> > You misunderstand.
> >
> > I want to find code that allocates too much memory where it should not.
> > To give an example, there was long ago some code like
> >
> >     len = read()
> >     for (i < len)
> >         x = alloc()
> >         x.whatever = read()
> >         ...
> >
> > Straight OOM here with a tiny input file.
> > Add a simple if(eof) in there and no OOM anymore.
> > This is just one example; code can look very different.
> >
> > I want to find these cases and I want to fix them.
> > But what I get from the fuzzer is files with resolutions like 65123x3210
> > which go OOM because of valid but silly resolution.
> > If I can limit the resolution, then I can find the other issues, which I
> > can fix.
> > If I cannot limit the resolution, then I cannot fix the other issues, as
> > they are in a sea of OOMs from large-resolution files.
> >
> > Nothing you can do at the OS level will get you this effect.
>
> Thanks for explaining.
>
> If I read this correctly, this option does not fix any security issue at
> all, it only helps you find other parts of the code that may contain
> security issues. Am I right?

It's more complex; there are really independent things this achieves.

OOM is a security issue under some(1) circumstances. One can hit OOM due
to too many pixels (or streams); this specific issue is fixed by the
options.

Completely independent of this, the option is needed to fuzz FFmpeg
efficiently, as Andreas explained.

And also completely independent of this, the option is needed to allow
finding some fixable OOM bugs that I would like to fix.

(1) For example a server process that dies due to it. Even if restarted,
this can put enough load on the machine to be an effective way to DOS it.

Also, it's certainly true that this does not fix every OOM issue, but no
bugfix that fixes an out-of-array read fixes every out-of-array read
either.

> > It is exceptionally unprofessional to publish testcases for CVEs before
> > they have been fixed.
> > Also, more generally, it is the researcher's choice/job to publish their
> > work. If you believe it should be put in a ticket, you should ask him,
> > not a 3rd party like me, to do that.
>
> This is Free software, secrecy is not a good policy. "I have this patch
> that fixes a bug, but I can not show you the bug." Well, if the patch is
> straightforward, we can accept it, but if the patch is not
> straightforward, we need, collectively, to see the bug.
>
> I can understand that if the bug is a critical 0-day exploit, some
> leeway must be accepted. But "there is a file that triggers a crash" is
> not enough by far.

If it is necessary to publish exploits, then I believe all security
support will end in FFmpeg and move elsewhere.
I cannot really speak for others, but I believe that companies doing
fuzzing and submitting testcases will not submit them if that implies
them being made public. Actually, they probably could not even legally
do that if they wanted; it would massively endanger their customers.

About this specific bug, as mentioned it has a CVE, and it is on the
public oss-security list; here's a link:
http://www.openwall.com/lists/oss-security/2016/12/08/1

That seems more than adequate to understand this one reported case.
I can privately share the sample with FFmpeg developers who work on
writing an alternative fix.

[...]
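[Editor's note: for readers unfamiliar with the mechanism being argued over,
this is a rough sketch of the kind of pre-allocation check a max_pixels-style
option enables. It is simplified and is not the actual av_image_check_size2()
implementation from the disputed commit, which handles more cases (overflow,
per-format limits, logging).]

#include <errno.h>
#include <stdint.h>

/* Reject oversized frames before any frame buffer is allocated.
 * max_pixels would come from a user-settable option, e.g. 8 Mpixel. */
static int check_dimensions(int w, int h, int64_t max_pixels)
{
    if (w <= 0 || h <= 0)
        return -EINVAL;
    if ((int64_t)w * h > max_pixels)
        return -EINVAL;              /* fail early, with no allocation */
    return 0;
}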
On Mon, 12 Dec 2016 21:00:19 +0100
Michael Niedermayer <michael@niedermayer.cc> wrote:

> On Mon, Dec 12, 2016 at 12:04:05PM +0100, Nicolas George wrote:
> > Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
> > > You misunderstand.
> > >
> > > I want to find code that allocates too much memory where it should not.
> > > To give an example, there was long ago some code like
> > >
> > >     len = read()
> > >     for (i < len)
> > >         x = alloc()
> > >         x.whatever = read()
> > >         ...
> > >
> > > Straight OOM here with a tiny input file.
> > > Add a simple if(eof) in there and no OOM anymore.
> > > This is just one example; code can look very different.
> > >
> > > I want to find these cases and I want to fix them.
> > > But what I get from the fuzzer is files with resolutions like 65123x3210
> > > which go OOM because of valid but silly resolution.
> > > If I can limit the resolution, then I can find the other issues, which I
> > > can fix.
> > > If I cannot limit the resolution, then I cannot fix the other issues, as
> > > they are in a sea of OOMs from large-resolution files.
> > >
> > > Nothing you can do at the OS level will get you this effect.
> >
> > Thanks for explaining.
> >
> > If I read this correctly, this option does not fix any security issue at
> > all, it only helps you find other parts of the code that may contain
> > security issues. Am I right?
>
> It's more complex; there are really independent things this achieves.
>
> OOM is a security issue under some(1) circumstances. One can hit OOM due
> to too many pixels (or streams); this specific issue is fixed by the
> options.

But the transcoding dies with the max_streams limit enforced too. On the
other hand, if your server doesn't run the ffmpeg transcoder process in
a sandbox, and that process dying due to OOM kills your server, and in
response makes your entire site unavailable AND causes some actual
security issue... uh, I don't know what to say. Just don't run a server
in this case.

> Completely independent of this, the option is needed to fuzz FFmpeg
> efficiently, as Andreas explained.
>
> And also completely independent of this, the option is needed to allow
> finding some fixable OOM bugs that I would like to fix.

You can do that with a wrapper that makes malloc randomly fail.

> (1) For example a server process that dies due to it. Even if restarted,
> this can put enough load on the machine to be an effective way to DOS it.
>
> Also, it's certainly true that this does not fix every OOM issue, but no
> bugfix that fixes an out-of-array read fixes every out-of-array read
> either.

An array out-of-bounds read is a real issue that programmers try to
avoid. OOM on the other hand is a "normal" failure, just as much as an
invalid file that can't be read.

> > > It is exceptionally unprofessional to publish testcases for CVEs before
> > > they have been fixed.
> > > Also, more generally, it is the researcher's choice/job to publish their
> > > work. If you believe it should be put in a ticket, you should ask him,
> > > not a 3rd party like me, to do that.
> >
> > This is Free software, secrecy is not a good policy. "I have this patch
> > that fixes a bug, but I can not show you the bug." Well, if the patch is
> > straightforward, we can accept it, but if the patch is not
> > straightforward, we need, collectively, to see the bug.
> >
> > I can understand that if the bug is a critical 0-day exploit, some
> > leeway must be accepted. But "there is a file that triggers a crash" is
> > not enough by far.
>
> If it is necessary to publish exploits, then I believe all security
> support will end in FFmpeg and move elsewhere.
> I cannot really speak for others, but I believe that companies doing
> fuzzing and submitting testcases will not submit them if that implies
> them being made public. Actually, they probably could not even legally
> do that if they wanted; it would massively endanger their customers.
>
> About this specific bug, as mentioned it has a CVE, and it is on the
> public oss-security list; here's a link:
> http://www.openwall.com/lists/oss-security/2016/12/08/1

Calling this a vulnerability was already derided as nonsense. Assuming
it's a real OOM kill, and not a buggy OOM error handling path. I can't
really see this from the report.

> That seems more than adequate to understand this one reported case.
> I can privately share the sample with FFmpeg developers who work on
> writing an alternative fix.
>
> [...]
On Mon, Dec 12, 2016 at 09:16:54PM +0100, wm4 wrote:
> On Mon, 12 Dec 2016 21:00:19 +0100
> Michael Niedermayer <michael@niedermayer.cc> wrote:
>
> > On Mon, Dec 12, 2016 at 12:04:05PM +0100, Nicolas George wrote:
> > > Le primidi 21 frimaire, an CCXXV, Michael Niedermayer a écrit :
[...]
> > Completely independent of this, the option is needed to fuzz FFmpeg
> > efficiently, as Andreas explained.
> >
> > And also completely independent of this, the option is needed to allow
> > finding some fixable OOM bugs that I would like to fix.
>
> You can do that with a wrapper that makes malloc randomly fail.

No, that will find a different class of issues.

[...]
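[Editor's note: the "wrapper that makes malloc randomly fail" wm4 mentions is
commonly built as an LD_PRELOAD shim. The sketch below is one possible
implementation, not something used in this thread; the file name, FAIL_RATE
variable, and deterministic every-Nth failure policy are illustrative
assumptions, and calloc/realloc are deliberately left untouched for brevity.]

/* fail_malloc.c: sketch of a malloc fault-injection shim.
 * Build: gcc -shared -fPIC -o fail_malloc.so fail_malloc.c -ldl
 * Use:   FAIL_RATE=2000 LD_PRELOAD=./fail_malloc.so ffmpeg -i in.mkv -f null -
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdlib.h>

static void *(*real_malloc)(size_t);
static void  (*real_free)(void *);
static unsigned long count, rate;

/* dlsym() may itself allocate; serve those very early calls from a tiny
 * static arena so the shim does not recurse into itself. */
static __attribute__((aligned(16))) char bootstrap[4096];
static size_t bootstrap_used;

void *malloc(size_t size)
{
    if (!real_malloc) {
        if (bootstrap_used + size <= sizeof(bootstrap)) {
            void *p = bootstrap + bootstrap_used;
            bootstrap_used += (size + 15) & ~(size_t)15;
            return p;
        }
        return NULL;
    }
    if (rate && ++count % rate == 0)   /* fail every Nth allocation,     */
        return NULL;                   /* deterministic so reproducible  */
    return real_malloc(size);
}

void free(void *ptr)
{
    /* Leak the few bootstrap chunks instead of passing them to libc. */
    if ((char *)ptr >= bootstrap && (char *)ptr < bootstrap + sizeof(bootstrap))
        return;
    if (real_free)
        real_free(ptr);
}

__attribute__((constructor))
static void init(void)
{
    const char *r = getenv("FAIL_RATE");
    rate        = r ? strtoul(r, NULL, 10) : 1000;
    real_malloc = dlsym(RTLD_NEXT, "malloc");
    real_free   = dlsym(RTLD_NEXT, "free");
}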
On Sun, Dec 11, 2016 at 10:38:44PM -0500, compn wrote:
> On Sun, 11 Dec 2016 17:39:58 +0100
> Nicolas George <george@nsup.org> wrote:
>
> > This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.
>
> would you rather the people doing the fuzzing use this feature as a
> separate patch so it does not contaminate master?

If people want fuzzing (and consequently the bug fixes I do) to be done
on a different branch than master, this may be possible.
It sounds a bit strange to me, but it should at least technically be
possible.

[...]
On 13.12.2016 16:41, Michael Niedermayer wrote:
> On Sun, Dec 11, 2016 at 10:38:44PM -0500, compn wrote:
>> On Sun, 11 Dec 2016 17:39:58 +0100
>> Nicolas George <george@nsup.org> wrote:
>>
>>> This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.
>>
>> would you rather the people doing the fuzzing use this feature as a
>> separate patch so it does not contaminate master?
>
> If people want fuzzing (and consequently the bug fixes I do) to be done
> on a different branch than master, this may be possible.
> It sounds a bit strange to me, but it should at least technically be
> possible.

I don't think that's a good idea, as it would make fuzzing ffmpeg
efficiently harder, while it should be made easier, so that more people
do it.

Best regards,
Andreas
Le duodi 22 frimaire, an CCXXV, Michael Niedermayer a écrit :
> OOM is a security issue under some(1) circumstances. One can hit OOM due
> to too many pixels (or streams); this specific issue is fixed by the
> options.
> (1) For example a server process that dies due to it. Even if restarted,
> this can put enough load on the machine to be an effective way to DOS it.

If memory is limited by explicit limits at the OS level, then huge
memory consumption in FFmpeg would not result in bursts of the
OOM-killer but just return ENOMEM like a normal error.

> Completely independent of this, the option is needed to fuzz FFmpeg
> efficiently, as Andreas explained.
> And also completely independent of this, the option is needed to allow
> finding some fixable OOM bugs that I would like to fix.

For that case, I think that the idea of an independent branch is
interesting. Or possibly: the option is implemented in a separate
branch, and whenever it is needed, the commit is cherry-picked on top of
head.

But more importantly: since this is for convenience rather than
security, it does not need to be backported to older releases, and
therefore the compatibility issue is moot. The fixes for the bugs found
by fuzzing, of course, can be backported.

> Also, it's certainly true that this does not fix every OOM issue, but no
> bugfix that fixes an out-of-array read fixes every out-of-array read
> either.

I would never oppose a change on the grounds that it only does part of a
task impossible to do completely. I think I denounced that kind of
argument recently about DCE :)

Regards,
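[Editor's note: a minimal sketch of the OS-level limit Nicolas refers to,
using the standard setrlimit(2) interface; the wrapper name and the 512 MiB
figure are illustrative assumptions, not anything prescribed in the thread.]

/* Cap the process address space before calling into libavformat/libavcodec,
 * so oversized allocations fail with ENOMEM instead of invoking the kernel's
 * OOM killer. Roughly equivalent to `ulimit -v` in a shell. */
#include <stdio.h>
#include <sys/resource.h>

static int cap_address_space(rlim_t bytes)
{
    struct rlimit rl = { .rlim_cur = bytes, .rlim_max = bytes };
    if (setrlimit(RLIMIT_AS, &rl) < 0) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}

/* e.g. cap_address_space(512 * 1024 * 1024) early in main(); past that cap,
 * malloc() returns NULL and the library propagates an ENOMEM-style error. */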
diff --git a/doc/codecs.texi b/doc/codecs.texi
index 9a3a56d..ca7c523 100644
--- a/doc/codecs.texi
+++ b/doc/codecs.texi
@@ -1272,10 +1272,6 @@ ffprobe -dump_separator "
 " -i ~/videos/matrixbench_mpeg2.mpg
 @end example
 
-@item max_pixels @var{integer} (@emph{decoding/encoding,video})
-Maximum number of pixels per image. This value can be used to avoid out of
-memory failures due to large images.
-
 @end table
 
 @c man end CODEC OPTIONS
diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index 02234ae..7ac2ada 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -3570,14 +3570,6 @@ typedef struct AVCodecContext {
      */
     int trailing_padding;
 
-    /**
-     * The number of pixels per image to maximally accept.
-     *
-     * - decoding: set by user
-     * - encoding: set by user
-     */
-    int64_t max_pixels;
-
 } AVCodecContext;
 
 AVRational av_codec_get_pkt_timebase (const AVCodecContext *avctx);
diff --git a/libavcodec/options_table.h b/libavcodec/options_table.h
index 3fe7925..ee79859 100644
--- a/libavcodec/options_table.h
+++ b/libavcodec/options_table.h
@@ -570,7 +570,6 @@ static const AVOption avcodec_options[] = {
 {"codec_whitelist", "List of decoders that are allowed to be used", OFFSET(codec_whitelist), AV_OPT_TYPE_STRING, { .str = NULL }, CHAR_MIN, CHAR_MAX, A|V|S|D },
 {"pixel_format", "set pixel format", OFFSET(pix_fmt), AV_OPT_TYPE_PIXEL_FMT, {.i64=AV_PIX_FMT_NONE}, -1, INT_MAX, 0 },
 {"video_size", "set video size", OFFSET(width), AV_OPT_TYPE_IMAGE_SIZE, {.str=NULL}, 0, INT_MAX, 0 },
-{"max_pixels", "Maximum number of pixels", OFFSET(max_pixels), AV_OPT_TYPE_INT64, {.i64 = INT_MAX }, 0, INT_MAX, A|V|S|D|E },
 {NULL},
 };
 
diff --git a/libavcodec/utils.c b/libavcodec/utils.c
index 44ecc09..89a12c6 100644
--- a/libavcodec/utils.c
+++ b/libavcodec/utils.c
@@ -209,7 +209,7 @@ void avcodec_set_dimensions(AVCodecContext *s, int width, int height)
 
 int ff_set_dimensions(AVCodecContext *s, int width, int height)
 {
-    int ret = av_image_check_size2(width, height, s->max_pixels, AV_PIX_FMT_NONE, 0, s);
+    int ret = av_image_check_size(width, height, 0, s);
 
     if (ret < 0)
         width = height = 0;
@@ -904,7 +904,7 @@ static int get_buffer_internal(AVCodecContext *avctx, AVFrame *frame, int flags)
     int ret;
 
     if (avctx->codec_type == AVMEDIA_TYPE_VIDEO) {
-        if ((ret = av_image_check_size2(avctx->width, avctx->height, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx)) < 0 || avctx->pix_fmt<0) {
+        if ((ret = av_image_check_size(avctx->width, avctx->height, 0, avctx)) < 0 || avctx->pix_fmt<0) {
             av_log(avctx, AV_LOG_ERROR, "video_get_buffer: image parameters invalid\n");
             return AVERROR(EINVAL);
         }
@@ -1338,8 +1338,8 @@ int attribute_align_arg avcodec_open2(AVCodecContext *avctx, const AVCodec *code
     }
 
     if ((avctx->coded_width || avctx->coded_height || avctx->width || avctx->height)
-        && (  av_image_check_size2(avctx->coded_width, avctx->coded_height, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx) < 0
-           || av_image_check_size2(avctx->width,       avctx->height,       avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx) < 0)) {
+        && (  av_image_check_size(avctx->coded_width, avctx->coded_height, 0, avctx) < 0
+           || av_image_check_size(avctx->width,       avctx->height,       0, avctx) < 0)) {
         av_log(avctx, AV_LOG_WARNING, "Ignoring invalid width/height values\n");
         ff_set_dimensions(avctx, 0, 0);
     }
@@ -1982,7 +1982,7 @@ int attribute_align_arg avcodec_encode_video2(AVCodecContext *avctx,
         return 0;
     }
 
-    if (av_image_check_size2(avctx->width, avctx->height, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx))
+    if (av_image_check_size(avctx->width, avctx->height, 0, avctx))
         return AVERROR(EINVAL);
 
     if (frame && frame->format == AV_PIX_FMT_NONE)
@@ -2233,7 +2233,7 @@ int attribute_align_arg avcodec_decode_video2(AVCodecContext *avctx, AVFrame *pi
     }
 
     *got_picture_ptr = 0;
-    if ((avctx->coded_width || avctx->coded_height) && av_image_check_size2(avctx->coded_width, avctx->coded_height, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx))
+    if ((avctx->coded_width || avctx->coded_height) && av_image_check_size(avctx->coded_width, avctx->coded_height, 0, avctx))
         return AVERROR(EINVAL);
 
     avctx->internal->pkt = avpkt;
diff --git a/tests/ref/fate/api-mjpeg-codec-param b/tests/ref/fate/api-mjpeg-codec-param
index 08313ad..c67d1b1 100644
--- a/tests/ref/fate/api-mjpeg-codec-param
+++ b/tests/ref/fate/api-mjpeg-codec-param
@@ -155,7 +155,6 @@ stream=0, decode=0
     codec_whitelist=
     pixel_format=yuvj422p
     video_size=400x225
-    max_pixels=2147483647
 stream=0, decode=1
     b=0
     ab=0
@@ -313,4 +312,3 @@ stream=0, decode=1
     codec_whitelist=
     pixel_format=yuvj422p
     video_size=400x225
-    max_pixels=2147483647
diff --git a/tests/ref/fate/api-png-codec-param b/tests/ref/fate/api-png-codec-param
index 7a9a921..bd53441 100644
--- a/tests/ref/fate/api-png-codec-param
+++ b/tests/ref/fate/api-png-codec-param
@@ -155,7 +155,6 @@ stream=0, decode=0
     codec_whitelist=
     pixel_format=rgba
     video_size=128x128
-    max_pixels=2147483647
 stream=0, decode=1
     b=0
     ab=0
@@ -313,4 +312,3 @@ stream=0, decode=1
     codec_whitelist=
     pixel_format=rgba
     video_size=128x128
-    max_pixels=2147483647
This reverts commit 2f07830e69bd14eaba348eb739b9503e7eb7cd4b.

It was rushed and not properly designed.

Signed-off-by: Nicolas George <george@nsup.org>
---
 doc/codecs.texi                      |  4 ----
 libavcodec/avcodec.h                 |  8 --------
 libavcodec/options_table.h           |  1 -
 libavcodec/utils.c                   | 12 ++++++------
 tests/ref/fate/api-mjpeg-codec-param |  2 --
 tests/ref/fate/api-png-codec-param   |  2 --
 6 files changed, 6 insertions(+), 23 deletions(-)

Just to show that it is not just an idle proposal. FFmpeg has lived for
years with this so-called issue; many libraries still do. We can take a
few days properly designing things.

I think these changes were introduced recently enough to allow just
reverting them without introducing compatibility bloat.
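[Editor's note: for reference, the option removed by this revert is exposed
through the normal AVOptions mechanism on the codec context. The sketch below
assumes a libavcodec build in which the option still exists; the function name
and the 8-Mpixel value are illustrative, not part of the patch.]

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Cap decoded image size on a decoder context before opening it, so that
 * oversized dimensions are rejected before any frame buffer is allocated. */
static int open_decoder_capped(AVCodecContext *avctx, const AVCodec *codec)
{
    av_opt_set_int(avctx, "max_pixels", 8 * 1024 * 1024, 0);
    return avcodec_open2(avctx, codec, NULL);
}

The same setting would presumably be reachable from the ffmpeg command line
as a per-stream codec AVOption (e.g. -max_pixels), since unrecognized codec
options are forwarded to the codec context.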