
[FFmpeg-devel] dnn-layer-mathbinary-test: Fix tests for cases with extra intermediate precision

Message ID 20200423063402.9591-1-martin@martin.st
State Accepted
Commit f4d8fad802b59d3099eb453dcafb08219ecfa22c
Series [FFmpeg-devel] dnn-layer-mathbinary-test: Fix tests for cases with extra intermediate precision

Checks

Context           Check     Description
andriy/default    pending
andriy/make       success   Make finished
andriy/make_fate  success   Make fate finished

Commit Message

Martin Storsjö April 23, 2020, 6:34 a.m. UTC
This fixes tests on 32 bit x86 mingw with clang, which uses x87
fpu by default.

In this setup, while the get_expected function is declared to
return float, the compiler is (especially given the optimization
flags set) free to keep the intermediate values (in this case,
the return value from the inlined function) in higher precision.

This results in the situation where 7.28 (which actually, as
a float, ends up as 7.2800002098), multiplied by 100, is
728.000000 when really forced into a 32 bit float, but 728.000021
when kept with higher intermediate precision.

For the multiplication case, a more suitable epsilon would be e.g.
2*FLT_EPSILON*fabs(expected_output); for now, just increase the
current hardcoded threshold.
---
 tests/dnn/dnn-layer-mathbinary-test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Martin Storsjö April 24, 2020, 10:43 a.m. UTC | #1
On Thu, 23 Apr 2020, Guo, Yejun wrote:

>> -----Original Message-----
>> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces@ffmpeg.org] On Behalf Of
>> Martin Storsjö
>> Sent: Thursday, April 23, 2020 2:34 PM
>> To: ffmpeg-devel@ffmpeg.org
>> Subject: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests for cases
>> with extra intermediate precision
>>
>> This fixes tests on 32 bit x86 mingw with clang, which uses x87
>> fpu by default.
>>
>> In this setup, while the get_expected function is declared to
>> return float, the compiler is (especially given the optimization
>> flags set) free to keep the intermediate values (in this case,
>> the return value from the inlined function) in higher precision.
>>
>> This results in the situation where 7.28 (which actually, as
>> a float, ends up as 7.2800002098), multiplied by 100, is
>> 728.000000 when really forced into a 32 bit float, but 728.000021
>> when kept with higher intermediate precision.
>>
>> For the multiplication case, a more suitable epsilon would e.g.
>> be 2*FLT_EPSILON*fabs(expected_output),
>
> thanks for the fix. LGTM.
>
> Just wanted to ask about 2*FLT_EPSILON*fabs(expected_output):
> is there an explanation for it? A ULP (units of least precision) based
> method looks like a good choice, see https://bitbashing.io/comparing-floats.html.
> Anyway, let's use the hardcoded threshold for simplicity.

FLT_EPSILON corresponds to 1 ULP when the exponent is zero, i.e. in the 
range [1,2] or [-2,-1]. So by doing FLT_EPSILON*fabs(expected_output) you 
get the magnitude of 1 ULP for the value expected_output. By allowing a 
difference of 2 ULP it would be a bit more lenient - not sure if that 
aspect really is relevant or not.

This would work fine for this particular test, as you have two input 
values that should be represented the same in both implementations, and 
you do one single operation on them - so the only difference _should_ be 
how much the end result is rounded. If testing for likeness on a more 
complex function that does a series of operations, you would have to 
account for a ~1 ULP rounding error in each of the steps, and calculate 
how that rounding error could be magnified by later operations.

And especially if you have two potentially inexact numbers that are close 
to each other and perform a subtraction, you'll have loss of significance, 
and the error in that result is way larger than 1 ULP for that particular 
number.

// Martin
Guo, Yejun April 24, 2020, 11:54 a.m. UTC | #2
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Martin
> Storsjö
> Sent: Friday, April 24, 2020 6:44 PM
> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests for
> cases with extra intermediate precision
> 
> On Thu, 23 Apr 2020, Guo, Yejun wrote:
> 
> >> -----Original Message-----
> >> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces@ffmpeg.org] On Behalf
> >> Of Martin Storsjö
> >> Sent: Thursday, April 23, 2020 2:34 PM
> >> To: ffmpeg-devel@ffmpeg.org
> >> Subject: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests
> >> for cases with extra intermediate precision
> >>
> >> This fixes tests on 32 bit x86 mingw with clang, which uses x87 fpu
> >> by default.
> >>
> >> In this setup, while the get_expected function is declared to return
> >> float, the compiler is (especially given the optimization flags set)
> >> free to keep the intermediate values (in this case, the return value
> >> from the inlined function) in higher precision.
> >>
> >> This results in the situation where 7.28 (which actually, as a float,
> >> ends up as 7.2800002098), multiplied by 100, is
> >> 728.000000 when really forced into a 32 bit float, but 728.000021
> >> when kept with higher intermediate precision.
> >>
> >> For the multiplication case, a more suitable epsilon would e.g.
> >> be 2*FLT_EPSILON*fabs(expected_output),
> >
> > thanks for the fix. LGTM.
> >
> > Just wanted to ask about 2*FLT_EPSILON*fabs(expected_output):
> > is there an explanation for it? A ULP (units of least precision) based
> > method looks like a good choice, see https://bitbashing.io/comparing-floats.html.
> > Anyway, let's use the hardcoded threshold for simplicity.
> 
> FLT_EPSILON corresponds to 1 ULP when the exponent is zero, i.e. in the range
> [1,2] or [-2,-1]. So by doing FLT_EPSILON*fabs(expected_output) you get the
> magnitude of 1 ULP for the value expected_output. By allowing a difference of 2
> ULP it would be a bit more lenient - not sure if that aspect really is relevant or
> not.
> 
> This would work fine for this particular test, as you have two input values that
> should be represented the same in both implementations, and you do one
> single operation on them - so the only difference _should_ be how much the
> end result is rounded. If testing for likeness on a more complex function that
> does a series of operations, you would have to account for a ~1 ULP rounding
> error in each of the steps, and calculate how that rounding error could be
> magnified by later operations.
> 
> And especially if you have two potentially inexact numbers that are close to
> each other and perform a subtraction, you'll have loss of significance, and the error
> in that result is way larger than 1 ULP for that particular number.

Thanks a lot Martin, will push now.

> 
> // Martin
Martin Storsjö April 24, 2020, 12:39 p.m. UTC | #3
On Fri, 24 Apr 2020, Guo, Yejun wrote:

>
>
>> -----Original Message-----
>> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Martin
>> Storsjö
>> Sent: Friday, April 24, 2020 6:44 PM
>> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
>> Subject: Re: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests for
>> cases with extra intermediate precision
>>
>> On Thu, 23 Apr 2020, Guo, Yejun wrote:
>>
>>>> -----Original Message-----
>>>> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces@ffmpeg.org] On Behalf
>>>> Of Martin Storsjö
>>>> Sent: Thursday, April 23, 2020 2:34 PM
>>>> To: ffmpeg-devel@ffmpeg.org
>>>> Subject: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests
>>>> for cases with extra intermediate precision
>>>>
>>>> This fixes tests on 32 bit x86 mingw with clang, which uses x87 fpu
>>>> by default.
>>>>
>>>> In this setup, while the get_expected function is declared to return
>>>> float, the compiler is (especially given the optimization flags set)
>>>> free to keep the intermediate values (in this case, the return value
>>>> from the inlined function) in higher precision.
>>>>
>>>> This results in the situation where 7.28 (which actually, as a float,
>>>> ends up as 7.2800002098), multiplied by 100, is
>>>> 728.000000 when really forced into a 32 bit float, but 728.000021
>>>> when kept with higher intermediate precision.
>>>>
>>>> For the multiplication case, a more suitable epsilon would e.g.
>>>> be 2*FLT_EPSILON*fabs(expected_output),
>>>
>>> thanks for the fix. LGTM.
>>>
>>> Just wanted to ask about 2*FLT_EPSILON*fabs(expected_output):
>>> is there an explanation for it? A ULP (units of least precision) based
>>> method looks like a good choice, see https://bitbashing.io/comparing-floats.html.
>>> Anyway, let's use the hardcoded threshold for simplicity.
>>
>> FLT_EPSILON corresponds to 1 ULP when the exponent is zero, i.e. in the range
>> [1,2] or [-2,-1]. So by doing FLT_EPSILON*fabs(expected_output) you get the
>> magnitude of 1 ULP for the value expected_output. By allowing a difference of 2
>> ULP it would be a bit more lenient - not sure if that aspect really is relevant or
>> not.
>>
>> This would work fine for this particular test, as you have two input values that
>> should be represented the same in both implementations, and you do one
>> single operation on them - so the only difference _should_ be how much the
>> end result is rounded. If testing for likeness on a more complex function that
>> does a series of operations, you would have to account for a ~1 ULP rounding
>> error in each of the steps, and calculate how that rounding error could be
>> magnified by later operations.
>>
>> And especially if you have two potentially inexact numbers that are close to
>> each other and perform a subtraction, you'll have loss of significance, and the error
>> in that result is way larger than 1 ULP for that particular number.
>
> Thanks a lot Martin, will push now.

I actually already pushed it, but thanks anyway!

// Martin

Patch

diff --git a/tests/dnn/dnn-layer-mathbinary-test.c b/tests/dnn/dnn-layer-mathbinary-test.c
index f82d15b14c..f67c0f213b 100644
--- a/tests/dnn/dnn-layer-mathbinary-test.c
+++ b/tests/dnn/dnn-layer-mathbinary-test.c
@@ -24,7 +24,7 @@ 
 #include "libavfilter/dnn/dnn_backend_native_layer_mathbinary.h"
 #include "libavutil/avassert.h"
 
-#define EPSON 0.00001
+#define EPSON 0.00005
 
 static float get_expected(float f1, float f2, DNNMathBinaryOperation op)
 {