diff mbox series

[FFmpeg-devel] Fix broken build on Android due to broken asm-generic/termbits.h include

Message ID b69e5dea-5dd8-4c98-aeae-6729c0ebdb40@edison
State New
Headers show
Series [FFmpeg-devel] Fix broken build on Android due to broken asm-generic/termbits.h include | expand

Checks

Context Check Description
yinshiyou/configure_loongarch64 warning Failed to apply patch
andriy/commit_msg_x86 warning The first line of the commit message must start with a context terminated by a colon and a space, for example "lavu/opt: " or "doc: ".
andriy/make_x86 success Make finished
andriy/make_fate_x86 success Make fate finished

Commit Message

copypaste Feb. 22, 2023, 9:12 a.m. UTC
Best,
 
Fred Brennan (copypaste.wtf)

Comments

Anton Khirnov Feb. 23, 2023, 1:45 p.m. UTC | #1
Quoting copypaste (2023-02-22 10:12:08)
> Applied to libavcodec/aaccoder.c:
> :%s/\([*]\)\?\([ \t]\{-}\)\?\(\<[BD-Z]\([0-9]*\)\)\>/\1\2_\3/g
> 
> This stops B0 and B1 from colliding with a define in
> /usr/include/asm-generic/termbits.h:

Why is that file even being included?

Also
> All identifiers that begin with an underscore and either an uppercase
> letter or another underscore are always reserved for any use.
copypaste Feb. 23, 2023, 2:37 p.m. UTC | #2
It gets included because this platform does indeed have Video4Linux, so some of the aarch64 code is relevant. Furthermore, I think the HEVC code includes it.

I have attached the file. Whether or not it is allowed seems immaterial, because Android does it whether we like it or not.

copypaste Feb. 23, 2023, 2:46 p.m. UTC | #3
Sorry Anton, I stupidly misread your message. You are telling me not to start my identifiers with underscores, not talking about Android. Well, I can certainly change it, but just making them lowercase will not fix the issue, because those collide with FFmpeg's variables.

Perhaps what I should do is more of the #undef thing I did in libavcodec/hevcdec.h.
Hendrik Leppkes Feb. 23, 2023, 3:01 p.m. UTC | #4
On Thu, Feb 23, 2023 at 3:37 PM copypaste <copypaste@kittens.ph> wrote:
> It gets included because this platform does indeed have Video4Linux so some of the aarch64 stuff is relevant. Furthermore I think that the HEVC stuff includes it.
>

If it's relevant for V4L or other hardware integration, that's fine and all
... but the real question is: how does it end up in aaccoder.c and
other files entirely unrelated to video or hardware? And can we just
get it out of those?

- Hendrik
copypaste Feb. 24, 2023, 2:55 p.m. UTC | #5
Here's my attempt to do just that. :-)

Compiled fine with:

configuration: --prefix=/data/data/com.termux/files/usr/local --enable-indev=alsa --enable-indev=xcbgrab --enable-libfreetype --enable-libfontconfig --cc=clang --cxx=clang++ --disable-libxcb-shm --enable-vulkan --enable-opencl --enable-opengl --enable-libass --enable-libopus --enable-libfribidi --enable-librsvg --disable-stripping --enable-logging --enable-debug --logfile=/dev/tty --enable-nonfree --enable-jni --enable-gpl --enable-version3 --host-os=android --target-os=android --enable-mediacodec
Zhao Zhili Feb. 27, 2023, 1:44 p.m. UTC | #6
> On Feb 23, 2023, at 23:01, Hendrik Leppkes <h.leppkes@gmail.com> wrote:
> 
> On Thu, Feb 23, 2023 at 3:37 PM copypaste <copypaste@kittens.ph> wrote:
>> It gets included because this platform does indeed have Video4Linux so some of the aarch64 stuff is relevant. Furthermore I think that the HEVC stuff includes it.
>> 
> 
> If its relevant for V4L or other hardware integration is fine and all
> ... but the real question is, how does it end up in aaccoder.c and
> other files entirely unrelated to video or hardware? And can we just
> get it out of those?

I know this issue has existed for a long time, but I can't reproduce it
these days.

https://github.com/android/ndk/issues/630
https://trac.ffmpeg.org/ticket/7130

The ‘B0’ macro is defined by asm-generic/termbits.h and leaks in via
<sys/ioctl.h>, which is included by libavutil/timer.h.
Thanks to the cleanup work done by Andreas Rheinhardt, the conflicts
have been reduced a lot; I think that's why I didn't get build errors.

Google suggests that FFmpeg fix the code on its side, while the NDK has
broken more than one project; see

https://github.com/jupyter-xeus/cpp-terminal/issues/35 

You can reproduce the build error with:

$ cat foo.c 
#include <sys/ioctl.h>

int main()
{
    int B0 = 123;
}

$ make foo
cc     foo.c   -o foo
foo.c:5:9: error: expected identifier or '('
    int B0 = 123;
        ^
/data/data/com.termux/files/usr/include/asm-generic/termbits.h:118:12: note: expanded from macro 'B0'
#define B0 0000000
           ^
1 error generated.
make: *** [<builtin>: foo] Error 1

> 
> - Hendrik

Patch

From ccd0e884e045ed854658fd5e956bbddcd953dcf6 Mon Sep 17 00:00:00 2001
From: Fredrick Brennan <copypaste@kittens.ph>
Date: Wed, 22 Feb 2023 04:07:14 -0500
Subject: [PATCH 2/2] Fixed rest of the build on Android

see last commit
---
 libavcodec/faandct.c  |  62 +++---
 libavcodec/hevcdec.h  |   7 +
 libavcodec/opus_pvq.c | 446 +++++++++++++++++++++---------------------
 3 files changed, 261 insertions(+), 254 deletions(-)

diff --git a/libavcodec/faandct.c b/libavcodec/faandct.c
index 38c392bbae..52136ec9b1 100644
--- a/libavcodec/faandct.c
+++ b/libavcodec/faandct.c
@@ -37,29 +37,29 @@  for this approach). Unfortunately, long double is not always available correctly
 e.g ppc has issues.
 TODO: add L suffixes when ppc and toolchains sort out their stuff.
 */
-#define B0 1.000000000000000000000000000000000000
-#define B1 0.720959822006947913789091890943021267 // (cos(pi*1/16)sqrt(2))^-1
-#define B2 0.765366864730179543456919968060797734 // (cos(pi*2/16)sqrt(2))^-1
-#define B3 0.850430094767256448766702844371412325 // (cos(pi*3/16)sqrt(2))^-1
-#define B4 1.000000000000000000000000000000000000 // (cos(pi*4/16)sqrt(2))^-1
-#define B5 1.272758580572833938461007018281767032 // (cos(pi*5/16)sqrt(2))^-1
-#define B6 1.847759065022573512256366378793576574 // (cos(pi*6/16)sqrt(2))^-1
-#define B7 3.624509785411551372409941227504289587 // (cos(pi*7/16)sqrt(2))^-1
-
-#define A1 M_SQRT1_2              // cos(pi*4/16)
-#define A2 0.54119610014619698435 // cos(pi*6/16)sqrt(2)
-#define A5 0.38268343236508977170 // cos(pi*6/16)
-#define A4 1.30656296487637652774 // cos(pi*2/16)sqrt(2)
+#define _B0 1.000000000000000000000000000000000000
+#define _B1 0.720959822006947913789091890943021267 // (cos(pi*1/16)sqrt(2))^-1
+#define _B2 0.765366864730179543456919968060797734 // (cos(pi*2/16)sqrt(2))^-1
+#define _B3 0.850430094767256448766702844371412325 // (cos(pi*3/16)sqrt(2))^-1
+#define _B4 1.000000000000000000000000000000000000 // (cos(pi*4/16)sqrt(2))^-1
+#define _B5 1.272758580572833938461007018281767032 // (cos(pi*5/16)sqrt(2))^-1
+#define _B6 1.847759065022573512256366378793576574 // (cos(pi*6/16)sqrt(2))^-1
+#define _B7 3.624509785411551372409941227504289587 // (cos(pi*7/16)sqrt(2))^-1
+
+#define _A1 M_SQRT1_2              // cos(pi*4/16)
+#define _A2 0.54119610014619698435 // cos(pi*6/16)sqrt(2)
+#define _A5 0.38268343236508977170 // cos(pi*6/16)
+#define _A4 1.30656296487637652774 // cos(pi*2/16)sqrt(2)
 
 static const FLOAT postscale[64]={
-B0*B0, B0*B1, B0*B2, B0*B3, B0*B4, B0*B5, B0*B6, B0*B7,
-B1*B0, B1*B1, B1*B2, B1*B3, B1*B4, B1*B5, B1*B6, B1*B7,
-B2*B0, B2*B1, B2*B2, B2*B3, B2*B4, B2*B5, B2*B6, B2*B7,
-B3*B0, B3*B1, B3*B2, B3*B3, B3*B4, B3*B5, B3*B6, B3*B7,
-B4*B0, B4*B1, B4*B2, B4*B3, B4*B4, B4*B5, B4*B6, B4*B7,
-B5*B0, B5*B1, B5*B2, B5*B3, B5*B4, B5*B5, B5*B6, B5*B7,
-B6*B0, B6*B1, B6*B2, B6*B3, B6*B4, B6*B5, B6*B6, B6*B7,
-B7*B0, B7*B1, B7*B2, B7*B3, B7*B4, B7*B5, B7*B6, B7*B7,
+_B0*_B0, _B0*_B1, _B0*_B2, _B0*_B3, _B0*_B4, _B0*_B5, _B0*_B6, _B0*_B7,
+_B1*_B0, _B1*_B1, _B1*_B2, _B1*_B3, _B1*_B4, _B1*_B5, _B1*_B6, _B1*_B7,
+_B2*_B0, _B2*_B1, _B2*_B2, _B2*_B3, _B2*_B4, _B2*_B5, _B2*_B6, _B2*_B7,
+_B3*_B0, _B3*_B1, _B3*_B2, _B3*_B3, _B3*_B4, _B3*_B5, _B3*_B6, _B3*_B7,
+_B4*_B0, _B4*_B1, _B4*_B2, _B4*_B3, _B4*_B4, _B4*_B5, _B4*_B6, _B4*_B7,
+_B5*_B0, _B5*_B1, _B5*_B2, _B5*_B3, _B5*_B4, _B5*_B5, _B5*_B6, _B5*_B7,
+_B6*_B0, _B6*_B1, _B6*_B2, _B6*_B3, _B6*_B4, _B6*_B5, _B6*_B6, _B6*_B7,
+_B7*_B0, _B7*_B1, _B7*_B2, _B7*_B3, _B7*_B4, _B7*_B5, _B7*_B6, _B7*_B7,
 };
 
 static av_always_inline void row_fdct(FLOAT temp[64], int16_t *data)
@@ -88,7 +88,7 @@  static av_always_inline void row_fdct(FLOAT temp[64], int16_t *data)
         temp[4 + i]= tmp10 - tmp11;
 
         tmp12 += tmp13;
-        tmp12 *= A1;
+        tmp12 *= _A1;
         temp[2 + i]= tmp13 + tmp12;
         temp[6 + i]= tmp13 - tmp12;
 
@@ -96,10 +96,10 @@  static av_always_inline void row_fdct(FLOAT temp[64], int16_t *data)
         tmp5 += tmp6;
         tmp6 += tmp7;
 
-        z2= tmp4*(A2+A5) - tmp6*A5;
-        z4= tmp6*(A4-A5) + tmp4*A5;
+        z2= tmp4*(_A2+_A5) - tmp6*_A5;
+        z4= tmp6*(_A4-_A5) + tmp4*_A5;
 
-        tmp5*=A1;
+        tmp5*=_A1;
 
         z11= tmp7 + tmp5;
         z13= tmp7 - tmp5;
@@ -142,7 +142,7 @@  void ff_faandct(int16_t *data)
         data[8*4 + i]= lrintf(postscale[8*4 + i] * (tmp10 - tmp11));
 
         tmp12 += tmp13;
-        tmp12 *= A1;
+        tmp12 *= _A1;
         data[8*2 + i]= lrintf(postscale[8*2 + i] * (tmp13 + tmp12));
         data[8*6 + i]= lrintf(postscale[8*6 + i] * (tmp13 - tmp12));
 
@@ -150,10 +150,10 @@  void ff_faandct(int16_t *data)
         tmp5 += tmp6;
         tmp6 += tmp7;
 
-        z2= tmp4*(A2+A5) - tmp6*A5;
-        z4= tmp6*(A4-A5) + tmp4*A5;
+        z2= tmp4*(_A2+_A5) - tmp6*_A5;
+        z4= tmp6*(_A4-_A5) + tmp4*_A5;
 
-        tmp5*=A1;
+        tmp5*=_A1;
 
         z11= tmp7 + tmp5;
         z13= tmp7 - tmp5;
@@ -195,7 +195,7 @@  void ff_faandct248(int16_t *data)
         data[8*4 + i] = lrintf(postscale[8*4 + i] * (tmp10 - tmp11));
 
         tmp12 += tmp13;
-        tmp12 *= A1;
+        tmp12 *= _A1;
         data[8*2 + i] = lrintf(postscale[8*2 + i] * (tmp13 + tmp12));
         data[8*6 + i] = lrintf(postscale[8*6 + i] * (tmp13 - tmp12));
 
@@ -208,7 +208,7 @@  void ff_faandct248(int16_t *data)
         data[8*5 + i] = lrintf(postscale[8*4 + i] * (tmp10 - tmp11));
 
         tmp12 += tmp13;
-        tmp12 *= A1;
+        tmp12 *= _A1;
         data[8*3 + i] = lrintf(postscale[8*2 + i] * (tmp13 + tmp12));
         data[8*7 + i] = lrintf(postscale[8*6 + i] * (tmp13 - tmp12));
     }
diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h
index 9d3f4adbb3..bfb94155e1 100644
--- a/libavcodec/hevcdec.h
+++ b/libavcodec/hevcdec.h
@@ -56,6 +56,13 @@ 
 
 #define L0 0
 #define L1 1
+// Required on Android. Set by asm include!
+#undef A0
+#undef A1
+#undef A2
+#undef B0
+#undef B1
+#undef B2
 
 #define EPEL_EXTRA_BEFORE 1
 #define EPEL_EXTRA_AFTER  2
diff --git a/libavcodec/opus_pvq.c b/libavcodec/opus_pvq.c
index d08dcd7413..9deab4c615 100644
--- a/libavcodec/opus_pvq.c
+++ b/libavcodec/opus_pvq.c
@@ -80,21 +80,21 @@  static inline int celt_pulses2bits(const uint8_t *cache, int pulses)
    return (pulses == 0) ? 0 : cache[pulses] + 1;
 }
 
-static inline void celt_normalize_residual(const int * av_restrict iy, float * av_restrict X,
-                                           int N, float g)
+static inline void celt_normalize_residual(const int * av_restrict iy, float * av_restrict _X,
+                                           int _N, float g)
 {
     int i;
-    for (i = 0; i < N; i++)
-        X[i] = g * iy[i];
+    for (i = 0; i < _N; i++)
+        _X[i] = g * iy[i];
 }
 
-static void celt_exp_rotation_impl(float *X, uint32_t len, uint32_t stride,
+static void celt_exp_rotation_impl(float *_X, uint32_t len, uint32_t stride,
                                    float c, float s)
 {
     float *Xptr;
     int i;
 
-    Xptr = X;
+    Xptr = _X;
     for (i = 0; i < len - stride; i++) {
         float x1     = Xptr[0];
         float x2     = Xptr[stride];
@@ -102,7 +102,7 @@  static void celt_exp_rotation_impl(float *X, uint32_t len, uint32_t stride,
         *Xptr++      = c * x1 - s * x2;
     }
 
-    Xptr = &X[len - 2 * stride - 1];
+    Xptr = &_X[len - 2 * stride - 1];
     for (i = len - 2 * stride - 1; i >= 0; i--) {
         float x1     = Xptr[0];
         float x2     = Xptr[stride];
@@ -111,8 +111,8 @@  static void celt_exp_rotation_impl(float *X, uint32_t len, uint32_t stride,
     }
 }
 
-static inline void celt_exp_rotation(float *X, uint32_t len,
-                                     uint32_t stride, uint32_t K,
+static inline void celt_exp_rotation(float *_X, uint32_t len,
+                                     uint32_t stride, uint32_t _K,
                                      enum CeltSpread spread, const int encode)
 {
     uint32_t stride2 = 0;
@@ -120,10 +120,10 @@  static inline void celt_exp_rotation(float *X, uint32_t len,
     float gain, theta;
     int i;
 
-    if (2*K >= len || spread == CELT_SPREAD_NONE)
+    if (2*_K >= len || spread == CELT_SPREAD_NONE)
         return;
 
-    gain = (float)len / (len + (20 - 5*spread) * K);
+    gain = (float)len / (len + (20 - 5*spread) * _K);
     theta = M_PI * gain * gain / 4;
 
     c = cosf(theta);
@@ -140,175 +140,175 @@  static inline void celt_exp_rotation(float *X, uint32_t len,
     len /= stride;
     for (i = 0; i < stride; i++) {
         if (encode) {
-            celt_exp_rotation_impl(X + i * len, len, 1, c, -s);
+            celt_exp_rotation_impl(_X + i * len, len, 1, c, -s);
             if (stride2)
-                celt_exp_rotation_impl(X + i * len, len, stride2, s, -c);
+                celt_exp_rotation_impl(_X + i * len, len, stride2, s, -c);
         } else {
             if (stride2)
-                celt_exp_rotation_impl(X + i * len, len, stride2, s, c);
-            celt_exp_rotation_impl(X + i * len, len, 1, c, s);
+                celt_exp_rotation_impl(_X + i * len, len, stride2, s, c);
+            celt_exp_rotation_impl(_X + i * len, len, 1, c, s);
         }
     }
 }
 
-static inline uint32_t celt_extract_collapse_mask(const int *iy, uint32_t N, uint32_t B)
+static inline uint32_t celt_extract_collapse_mask(const int *iy, uint32_t _N, uint32_t _B)
 {
-    int i, j, N0 = N / B;
+    int i, j, _N0 = _N / _B;
     uint32_t collapse_mask = 0;
 
-    if (B <= 1)
+    if (_B <= 1)
         return 1;
 
-    for (i = 0; i < B; i++)
-        for (j = 0; j < N0; j++)
-            collapse_mask |= (!!iy[i*N0+j]) << i;
+    for (i = 0; i < _B; i++)
+        for (j = 0; j < _N0; j++)
+            collapse_mask |= (!!iy[i*_N0+j]) << i;
     return collapse_mask;
 }
 
-static inline void celt_stereo_merge(float *X, float *Y, float mid, int N)
+static inline void celt_stereo_merge(float *_X, float *_Y, float mid, int _N)
 {
     int i;
     float xp = 0, side = 0;
-    float E[2];
+    float _E[2];
     float mid2;
     float gain[2];
 
-    /* Compute the norm of X+Y and X-Y as |X|^2 + |Y|^2 +/- sum(xy) */
-    for (i = 0; i < N; i++) {
-        xp   += X[i] * Y[i];
-        side += Y[i] * Y[i];
+    /* Compute the norm of _X+_Y and _X-_Y as |_X|^2 + |_Y|^2 +/- sum(xy) */
+    for (i = 0; i < _N; i++) {
+        xp   += _X[i] * _Y[i];
+        side += _Y[i] * _Y[i];
     }
 
     /* Compensating for the mid normalization */
     xp *= mid;
     mid2 = mid;
-    E[0] = mid2 * mid2 + side - 2 * xp;
-    E[1] = mid2 * mid2 + side + 2 * xp;
-    if (E[0] < 6e-4f || E[1] < 6e-4f) {
-        for (i = 0; i < N; i++)
-            Y[i] = X[i];
+    _E[0] = mid2 * mid2 + side - 2 * xp;
+    _E[1] = mid2 * mid2 + side + 2 * xp;
+    if (_E[0] < 6e-4f || _E[1] < 6e-4f) {
+        for (i = 0; i < _N; i++)
+            _Y[i] = _X[i];
         return;
     }
 
-    gain[0] = 1.0f / sqrtf(E[0]);
-    gain[1] = 1.0f / sqrtf(E[1]);
+    gain[0] = 1.0f / sqrtf(_E[0]);
+    gain[1] = 1.0f / sqrtf(_E[1]);
 
-    for (i = 0; i < N; i++) {
+    for (i = 0; i < _N; i++) {
         float value[2];
         /* Apply mid scaling (side is already scaled) */
-        value[0] = mid * X[i];
-        value[1] = Y[i];
-        X[i] = gain[0] * (value[0] - value[1]);
-        Y[i] = gain[1] * (value[0] + value[1]);
+        value[0] = mid * _X[i];
+        value[1] = _Y[i];
+        _X[i] = gain[0] * (value[0] - value[1]);
+        _Y[i] = gain[1] * (value[0] + value[1]);
     }
 }
 
-static void celt_interleave_hadamard(float *tmp, float *X, int N0,
+static void celt_interleave_hadamard(float *tmp, float *_X, int _N0,
                                      int stride, int hadamard)
 {
-    int i, j, N = N0*stride;
+    int i, j, _N = _N0*stride;
     const uint8_t *order = &ff_celt_hadamard_order[hadamard ? stride - 2 : 30];
 
     for (i = 0; i < stride; i++)
-        for (j = 0; j < N0; j++)
-            tmp[j*stride+i] = X[order[i]*N0+j];
+        for (j = 0; j < _N0; j++)
+            tmp[j*stride+i] = _X[order[i]*_N0+j];
 
-    memcpy(X, tmp, N*sizeof(float));
+    memcpy(_X, tmp, _N*sizeof(float));
 }
 
-static void celt_deinterleave_hadamard(float *tmp, float *X, int N0,
+static void celt_deinterleave_hadamard(float *tmp, float *_X, int _N0,
                                        int stride, int hadamard)
 {
-    int i, j, N = N0*stride;
+    int i, j, _N = _N0*stride;
     const uint8_t *order = &ff_celt_hadamard_order[hadamard ? stride - 2 : 30];
 
     for (i = 0; i < stride; i++)
-        for (j = 0; j < N0; j++)
-            tmp[order[i]*N0+j] = X[j*stride+i];
+        for (j = 0; j < _N0; j++)
+            tmp[order[i]*_N0+j] = _X[j*stride+i];
 
-    memcpy(X, tmp, N*sizeof(float));
+    memcpy(_X, tmp, _N*sizeof(float));
 }
 
-static void celt_haar1(float *X, int N0, int stride)
+static void celt_haar1(float *_X, int _N0, int stride)
 {
     int i, j;
-    N0 >>= 1;
+    _N0 >>= 1;
     for (i = 0; i < stride; i++) {
-        for (j = 0; j < N0; j++) {
-            float x0 = X[stride * (2 * j + 0) + i];
-            float x1 = X[stride * (2 * j + 1) + i];
-            X[stride * (2 * j + 0) + i] = (x0 + x1) * M_SQRT1_2;
-            X[stride * (2 * j + 1) + i] = (x0 - x1) * M_SQRT1_2;
+        for (j = 0; j < _N0; j++) {
+            float x0 = _X[stride * (2 * j + 0) + i];
+            float x1 = _X[stride * (2 * j + 1) + i];
+            _X[stride * (2 * j + 0) + i] = (x0 + x1) * M_SQRT1_2;
+            _X[stride * (2 * j + 1) + i] = (x0 - x1) * M_SQRT1_2;
         }
     }
 }
 
-static inline int celt_compute_qn(int N, int b, int offset, int pulse_cap,
+static inline int celt_compute_qn(int _N, int b, int offset, int pulse_cap,
                                   int stereo)
 {
     int qn, qb;
-    int N2 = 2 * N - 1;
-    if (stereo && N == 2)
-        N2--;
+    int _N2 = 2 * _N - 1;
+    if (stereo && _N == 2)
+        _N2--;
 
     /* The upper limit ensures that in a stereo split with itheta==16384, we'll
      * always have enough bits left over to code at least one pulse in the
      * side; otherwise it would collapse, since it doesn't get folded. */
-    qb = FFMIN3(b - pulse_cap - (4 << 3), (b + N2 * offset) / N2, 8 << 3);
+    qb = FFMIN3(b - pulse_cap - (4 << 3), (b + _N2 * offset) / _N2, 8 << 3);
     qn = (qb < (1 << 3 >> 1)) ? 1 : ((ff_celt_qn_exp2[qb & 0x7] >> (14 - (qb >> 3))) + 1) >> 1 << 1;
     return qn;
 }
 
 /* Convert the quantized vector to an index */
-static inline uint32_t celt_icwrsi(uint32_t N, uint32_t K, const int *y)
+static inline uint32_t celt_icwrsi(uint32_t _N, uint32_t _K, const int *y)
 {
     int i, idx = 0, sum = 0;
-    for (i = N - 1; i >= 0; i--) {
-        const uint32_t i_s = CELT_PVQ_U(N - i, sum + FFABS(y[i]) + 1);
-        idx += CELT_PVQ_U(N - i, sum) + (y[i] < 0)*i_s;
+    for (i = _N - 1; i >= 0; i--) {
+        const uint32_t i_s = CELT_PVQ_U(_N - i, sum + FFABS(y[i]) + 1);
+        idx += CELT_PVQ_U(_N - i, sum) + (y[i] < 0)*i_s;
         sum += FFABS(y[i]);
     }
     return idx;
 }
 
 // this code was adapted from libopus
-static inline uint64_t celt_cwrsi(uint32_t N, uint32_t K, uint32_t i, int *y)
+static inline uint64_t celt_cwrsi(uint32_t _N, uint32_t _K, uint32_t i, int *y)
 {
     uint64_t norm = 0;
     uint32_t q, p;
     int s, val;
     int k0;
 
-    while (N > 2) {
+    while (_N > 2) {
         /*Lots of pulses case:*/
-        if (K >= N) {
-            const uint32_t *row = ff_celt_pvq_u_row[N];
+        if (_K >= _N) {
+            const uint32_t *row = ff_celt_pvq_u_row[_N];
 
             /* Are the pulses in this dimension negative? */
-            p  = row[K + 1];
+            p  = row[_K + 1];
             s  = -(i >= p);
             i -= p & s;
 
             /*Count how many pulses were placed in this dimension.*/
-            k0 = K;
-            q = row[N];
+            k0 = _K;
+            q = row[_N];
             if (q > i) {
-                K = N;
+                _K = _N;
                 do {
-                    p = ff_celt_pvq_u_row[--K][N];
+                    p = ff_celt_pvq_u_row[--_K][_N];
                 } while (p > i);
             } else
-                for (p = row[K]; p > i; p = row[K])
-                    K--;
+                for (p = row[_K]; p > i; p = row[_K])
+                    _K--;
 
             i    -= p;
-            val   = (k0 - K + s) ^ s;
+            val   = (k0 - _K + s) ^ s;
             norm += val * val;
             *y++  = val;
         } else { /*Lots of dimensions case:*/
             /*Are there any pulses in this dimension at all?*/
-            p = ff_celt_pvq_u_row[K    ][N];
-            q = ff_celt_pvq_u_row[K + 1][N];
+            p = ff_celt_pvq_u_row[_K    ][_N];
+            q = ff_celt_pvq_u_row[_K + 1][_N];
 
             if (p <= i && i < q) {
                 i -= p;
@@ -319,88 +319,88 @@  static inline uint64_t celt_cwrsi(uint32_t N, uint32_t K, uint32_t i, int *y)
                 i -= q & s;
 
                 /*Count how many pulses were placed in this dimension.*/
-                k0 = K;
-                do p = ff_celt_pvq_u_row[--K][N];
+                k0 = _K;
+                do p = ff_celt_pvq_u_row[--_K][_N];
                 while (p > i);
 
                 i    -= p;
-                val   = (k0 - K + s) ^ s;
+                val   = (k0 - _K + s) ^ s;
                 norm += val * val;
                 *y++  = val;
             }
         }
-        N--;
+        _N--;
     }
 
-    /* N == 2 */
-    p  = 2 * K + 1;
+    /* _N == 2 */
+    p  = 2 * _K + 1;
     s  = -(i >= p);
     i -= p & s;
-    k0 = K;
-    K  = (i + 1) / 2;
+    k0 = _K;
+    _K  = (i + 1) / 2;
 
-    if (K)
-        i -= 2 * K - 1;
+    if (_K)
+        i -= 2 * _K - 1;
 
-    val   = (k0 - K + s) ^ s;
+    val   = (k0 - _K + s) ^ s;
     norm += val * val;
     *y++  = val;
 
-    /* N==1 */
+    /* _N==1 */
     s     = -i;
-    val   = (K + s) ^ s;
+    val   = (_K + s) ^ s;
     norm += val * val;
     *y    = val;
 
     return norm;
 }
 
-static inline void celt_encode_pulses(OpusRangeCoder *rc, int *y, uint32_t N, uint32_t K)
+static inline void celt_encode_pulses(OpusRangeCoder *rc, int *y, uint32_t _N, uint32_t _K)
 {
-    ff_opus_rc_enc_uint(rc, celt_icwrsi(N, K, y), CELT_PVQ_V(N, K));
+    ff_opus_rc_enc_uint(rc, celt_icwrsi(_N, _K, y), CELT_PVQ_V(_N, _K));
 }
 
-static inline float celt_decode_pulses(OpusRangeCoder *rc, int *y, uint32_t N, uint32_t K)
+static inline float celt_decode_pulses(OpusRangeCoder *rc, int *y, uint32_t _N, uint32_t _K)
 {
-    const uint32_t idx = ff_opus_rc_dec_uint(rc, CELT_PVQ_V(N, K));
-    return celt_cwrsi(N, K, idx, y);
+    const uint32_t idx = ff_opus_rc_dec_uint(rc, CELT_PVQ_V(_N, _K));
+    return celt_cwrsi(_N, _K, idx, y);
 }
 
 #if CONFIG_OPUS_ENCODER
 /*
  * Faster than libopus's search, operates entirely in the signed domain.
- * Slightly worse/better depending on N, K and the input vector.
+ * Slightly worse/better depending on _N, _K and the input vector.
  */
-static float ppp_pvq_search_c(float *X, int *y, int K, int N)
+static float ppp_pvq_search_c(float *_X, int *y, int _K, int _N)
 {
     int i, y_norm = 0;
     float res = 0.0f, xy_norm = 0.0f;
 
-    for (i = 0; i < N; i++)
-        res += FFABS(X[i]);
+    for (i = 0; i < _N; i++)
+        res += FFABS(_X[i]);
 
-    res = K/(res + FLT_EPSILON);
+    res = _K/(res + FLT_EPSILON);
 
-    for (i = 0; i < N; i++) {
-        y[i] = lrintf(res*X[i]);
+    for (i = 0; i < _N; i++) {
+        y[i] = lrintf(res*_X[i]);
         y_norm  += y[i]*y[i];
-        xy_norm += y[i]*X[i];
-        K -= FFABS(y[i]);
+        xy_norm += y[i]*_X[i];
+        _K -= FFABS(y[i]);
     }
 
-    while (K) {
-        int max_idx = 0, phase = FFSIGN(K);
+    while (_K) {
+        int max_idx = 0, phase = FFSIGN(_K);
         float max_num = 0.0f;
         float max_den = 1.0f;
         y_norm += 1.0f;
 
-        for (i = 0; i < N; i++) {
+        for (i = 0; i < _N; i++) {
             /* If the sum has been overshot and the best place has 0 pulses allocated
              * to it, attempting to decrease it further will actually increase the
              * sum. Prevent this by disregarding any 0 positions when decrementing. */
             const int ca = 1 ^ ((y[i] == 0) & (phase < 0));
             const int y_new = y_norm  + 2*phase*FFABS(y[i]);
-            float xy_new = xy_norm + 1*phase*FFABS(X[i]);
+            float xy_new = xy_norm + 1*phase*FFABS(_X[i]);
             xy_new = xy_new * xy_new;
             if (ca && (max_den*xy_new) > (y_new*max_num)) {
                 max_den = y_new;
@@ -409,10 +409,10 @@  static float ppp_pvq_search_c(float *X, int *y, int K, int N)
             }
         }
 
-        K -= phase;
+        _K -= phase;
 
-        phase *= FFSIGN(X[max_idx]);
-        xy_norm += 1*phase*X[max_idx];
+        phase *= FFSIGN(_X[max_idx]);
+        xy_norm += 1*phase*_X[max_idx];
         y_norm  += 2*phase*y[max_idx];
         y[max_idx] += phase;
     }
@@ -421,76 +421,76 @@  static float ppp_pvq_search_c(float *X, int *y, int K, int N)
 }
 #endif
 
-static uint32_t celt_alg_quant(OpusRangeCoder *rc, float *X, uint32_t N, uint32_t K,
+static uint32_t celt_alg_quant(OpusRangeCoder *rc, float *_X, uint32_t _N, uint32_t _K,
                                enum CeltSpread spread, uint32_t blocks, float gain,
                                CeltPVQ *pvq)
 {
     int *y = pvq->qcoeff;
 
-    celt_exp_rotation(X, N, blocks, K, spread, 1);
-    gain /= sqrtf(pvq->pvq_search(X, y, K, N));
-    celt_encode_pulses(rc, y,  N, K);
-    celt_normalize_residual(y, X, N, gain);
-    celt_exp_rotation(X, N, blocks, K, spread, 0);
-    return celt_extract_collapse_mask(y, N, blocks);
+    celt_exp_rotation(_X, _N, blocks, _K, spread, 1);
+    gain /= sqrtf(pvq->pvq_search(_X, y, _K, _N));
+    celt_encode_pulses(rc, y,  _N, _K);
+    celt_normalize_residual(y, _X, _N, gain);
+    celt_exp_rotation(_X, _N, blocks, _K, spread, 0);
+    return celt_extract_collapse_mask(y, _N, blocks);
 }
 
 /** Decode pulse vector and combine the result with the pitch vector to produce
     the final normalised signal in the current band. */
-static uint32_t celt_alg_unquant(OpusRangeCoder *rc, float *X, uint32_t N, uint32_t K,
+static uint32_t celt_alg_unquant(OpusRangeCoder *rc, float *_X, uint32_t _N, uint32_t _K,
                                  enum CeltSpread spread, uint32_t blocks, float gain,
                                  CeltPVQ *pvq)
 {
     int *y = pvq->qcoeff;
 
-    gain /= sqrtf(celt_decode_pulses(rc, y, N, K));
-    celt_normalize_residual(y, X, N, gain);
-    celt_exp_rotation(X, N, blocks, K, spread, 0);
-    return celt_extract_collapse_mask(y, N, blocks);
+    gain /= sqrtf(celt_decode_pulses(rc, y, _N, _K));
+    celt_normalize_residual(y, _X, _N, gain);
+    celt_exp_rotation(_X, _N, blocks, _K, spread, 0);
+    return celt_extract_collapse_mask(y, _N, blocks);
 }
 
-static int celt_calc_theta(const float *X, const float *Y, int coupling, int N)
+static int celt_calc_theta(const float *_X, const float *_Y, int coupling, int _N)
 {
     int i;
     float e[2] = { 0.0f, 0.0f };
     if (coupling) { /* Coupling case */
-        for (i = 0; i < N; i++) {
-            e[0] += (X[i] + Y[i])*(X[i] + Y[i]);
-            e[1] += (X[i] - Y[i])*(X[i] - Y[i]);
+        for (i = 0; i < _N; i++) {
+            e[0] += (_X[i] + _Y[i])*(_X[i] + _Y[i]);
+            e[1] += (_X[i] - _Y[i])*(_X[i] - _Y[i]);
         }
     } else {
-        for (i = 0; i < N; i++) {
-            e[0] += X[i]*X[i];
-            e[1] += Y[i]*Y[i];
+        for (i = 0; i < _N; i++) {
+            e[0] += _X[i]*_X[i];
+            e[1] += _Y[i]*_Y[i];
         }
     }
     return lrintf(32768.0f*atan2f(sqrtf(e[1]), sqrtf(e[0]))/M_PI);
 }
 
-static void celt_stereo_is_decouple(float *X, float *Y, float e_l, float e_r, int N)
+static void celt_stereo_is_decouple(float *_X, float *_Y, float e_l, float e_r, int _N)
 {
     int i;
     const float energy_n = 1.0f/(sqrtf(e_l*e_l + e_r*e_r) + FLT_EPSILON);
     e_l *= energy_n;
     e_r *= energy_n;
-    for (i = 0; i < N; i++)
-        X[i] = e_l*X[i] + e_r*Y[i];
+    for (i = 0; i < _N; i++)
+        _X[i] = e_l*_X[i] + e_r*_Y[i];
 }
 
-static void celt_stereo_ms_decouple(float *X, float *Y, int N)
+static void celt_stereo_ms_decouple(float *_X, float *_Y, int _N)
 {
     int i;
-    for (i = 0; i < N; i++) {
-        const float Xret = X[i];
-        X[i] = (X[i] + Y[i])*M_SQRT1_2;
-        Y[i] = (Y[i] - Xret)*M_SQRT1_2;
+    for (i = 0; i < _N; i++) {
+        const float Xret = _X[i];
+        _X[i] = (_X[i] + _Y[i])*M_SQRT1_2;
+        _Y[i] = (_Y[i] - Xret)*M_SQRT1_2;
     }
 }
 
 static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
                                                      OpusRangeCoder *rc,
-                                                     const int band, float *X,
-                                                     float *Y, int N, int b,
+                                                     const int band, float *_X,
+                                                     float *_Y, int _N, int b,
                                                      uint32_t blocks, float *lowband,
                                                      int duration, float *lowband_out,
                                                      int level, float gain,
@@ -499,21 +499,21 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
 {
     int i;
     const uint8_t *cache;
-    int stereo = !!Y, split = stereo;
+    int stereo = !!_Y, split = stereo;
     int imid = 0, iside = 0;
-    uint32_t N0 = N;
-    int N_B = N / blocks;
+    uint32_t _N0 = _N;
+    int N_B = _N / blocks;
     int N_B0 = N_B;
-    int B0 = blocks;
+    int _B0 = blocks;
     int time_divide = 0;
     int recombine = 0;
     int inv = 0;
     float mid = 0, side = 0;
-    int longblocks = (B0 == 1);
+    int longblocks = (_B0 == 1);
     uint32_t cm = 0;
 
-    if (N == 1) {
-        float *x = X;
+    if (_N == 1) {
+        float *x = _X;
         for (i = 0; i <= stereo; i++) {
             int sign = 0;
             if (f->remaining2 >= 1 << 3) {
@@ -526,10 +526,10 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
                 f->remaining2 -= 1 << 3;
             }
             x[0] = 1.0f - 2.0f*sign;
-            x = Y;
+            x = _Y;
         }
         if (lowband_out)
-            lowband_out[0] = X[0];
+            lowband_out[0] = _X[0];
         return 1;
     }
 
@@ -541,15 +541,15 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
         /* Band recombining to increase frequency resolution */
 
         if (lowband &&
-            (recombine || ((N_B & 1) == 0 && tf_change < 0) || B0 > 1)) {
-            for (i = 0; i < N; i++)
+            (recombine || ((N_B & 1) == 0 && tf_change < 0) || _B0 > 1)) {
+            for (i = 0; i < _N; i++)
                 lowband_scratch[i] = lowband[i];
             lowband = lowband_scratch;
         }
 
         for (k = 0; k < recombine; k++) {
             if (quant || lowband)
-                celt_haar1(quant ? X : lowband, N >> k, 1 << k);
+                celt_haar1(quant ? _X : lowband, _N >> k, 1 << k);
             fill = ff_celt_bit_interleave[fill & 0xF] | ff_celt_bit_interleave[fill >> 4] << 2;
         }
         blocks >>= recombine;
@@ -558,29 +558,29 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
         /* Increasing the time resolution */
         while ((N_B & 1) == 0 && tf_change < 0) {
             if (quant || lowband)
-                celt_haar1(quant ? X : lowband, N_B, blocks);
+                celt_haar1(quant ? _X : lowband, N_B, blocks);
             fill |= fill << blocks;
             blocks <<= 1;
             N_B >>= 1;
             time_divide++;
             tf_change++;
         }
-        B0 = blocks;
+        _B0 = blocks;
         N_B0 = N_B;
 
         /* Reorganize the samples in time order instead of frequency order */
-        if (B0 > 1 && (quant || lowband))
-            celt_deinterleave_hadamard(pvq->hadamard_tmp, quant ? X : lowband,
-                                       N_B >> recombine, B0 << recombine,
+        if (_B0 > 1 && (quant || lowband))
+            celt_deinterleave_hadamard(pvq->hadamard_tmp, quant ? _X : lowband,
+                                       N_B >> recombine, _B0 << recombine,
                                        longblocks);
     }
 
     /* If we need 1.5 more bit than we can produce, split the band in two. */
     cache = ff_celt_cache_bits +
             ff_celt_cache_index[(duration + 1) * CELT_MAX_BANDS + band];
-    if (!stereo && duration >= 0 && b > cache[cache[0]] + 12 && N > 2) {
-        N >>= 1;
-        Y = X + N;
+    if (!stereo && duration >= 0 && b > cache[cache[0]] + 12 && _N > 2) {
+        _N >>= 1;
+        _Y = _X + _N;
         split = 1;
         duration -= 1;
         if (blocks == 1)
@@ -590,7 +590,7 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
 
     if (split) {
         int qn;
-        int itheta = quant ? celt_calc_theta(X, Y, stereo, N) : 0;
+        int itheta = quant ? celt_calc_theta(_X, _Y, stereo, _N) : 0;
         int mbits, sbits, delta;
         int qalloc;
         int pulse_cap;
@@ -600,10 +600,10 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
 
         /* Decide on the resolution to give to the split parameter theta */
         pulse_cap = ff_celt_log_freq_range[band] + duration * 8;
-        offset = (pulse_cap >> 1) - (stereo && N == 2 ? CELT_QTHETA_OFFSET_TWOPHASE :
+        offset = (pulse_cap >> 1) - (stereo && _N == 2 ? CELT_QTHETA_OFFSET_TWOPHASE :
                                                           CELT_QTHETA_OFFSET);
         qn = (stereo && band >= f->intensity_stereo) ? 1 :
-             celt_compute_qn(N, b, offset, pulse_cap, stereo);
+             celt_compute_qn(_N, b, offset, pulse_cap, stereo);
         tell = opus_rc_tell_frac(rc);
         if (qn != 1) {
             if (quant)
@@ -611,24 +611,24 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             /* Entropy coding of the angle. We use a uniform pdf for the
              * time split, a step for stereo, and a triangular one for the rest. */
             if (quant) {
-                if (stereo && N > 2)
+                if (stereo && _N > 2)
                     ff_opus_rc_enc_uint_step(rc, itheta, qn / 2);
-                else if (stereo || B0 > 1)
+                else if (stereo || _B0 > 1)
                     ff_opus_rc_enc_uint(rc, itheta, qn + 1);
                 else
                     ff_opus_rc_enc_uint_tri(rc, itheta, qn);
                 itheta = itheta * 16384 / qn;
                 if (stereo) {
                     if (itheta == 0)
-                        celt_stereo_is_decouple(X, Y, f->block[0].lin_energy[band],
-                                                f->block[1].lin_energy[band], N);
+                        celt_stereo_is_decouple(_X, _Y, f->block[0].lin_energy[band],
+                                                f->block[1].lin_energy[band], _N);
                     else
-                        celt_stereo_ms_decouple(X, Y, N);
+                        celt_stereo_ms_decouple(_X, _Y, _N);
                 }
             } else {
-                if (stereo && N > 2)
+                if (stereo && _N > 2)
                     itheta = ff_opus_rc_dec_uint_step(rc, qn / 2);
-                else if (stereo || B0 > 1)
+                else if (stereo || _B0 > 1)
                     itheta = ff_opus_rc_dec_uint(rc, qn+1);
                 else
                     itheta = ff_opus_rc_dec_uint_tri(rc, qn);
@@ -638,11 +638,11 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             if (quant) {
                 inv = f->apply_phase_inv ? itheta > 8192 : 0;
                  if (inv) {
-                    for (i = 0; i < N; i++)
-                       Y[i] *= -1;
+                    for (i = 0; i < _N; i++)
+                       _Y[i] *= -1;
                  }
-                 celt_stereo_is_decouple(X, Y, f->block[0].lin_energy[band],
-                                         f->block[1].lin_energy[band], N);
+                 celt_stereo_is_decouple(_X, _Y, f->block[0].lin_energy[band],
+                                         f->block[1].lin_energy[band], _N);
 
                 if (b > 2 << 3 && f->remaining2 > 2 << 3) {
                     ff_opus_rc_enc_log(rc, inv, 2);
@@ -674,16 +674,16 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             iside = celt_cos(16384-itheta);
             /* This is the mid vs side allocation that minimizes squared error
             in that band. */
-            delta = ROUND_MUL16((N - 1) << 7, celt_log2tan(iside, imid));
+            delta = ROUND_MUL16((_N - 1) << 7, celt_log2tan(iside, imid));
         }
 
         mid  = imid  / 32768.0f;
         side = iside / 32768.0f;
 
-        /* This is a special case for N=2 that only works for stereo and takes
+        /* This is a special case for _N=2 that only works for stereo and takes
         advantage of the fact that mid and side are orthogonal to encode
         the side with just one bit. */
-        if (N == 2 && stereo) {
+        if (_N == 2 && stereo) {
             int c;
             int sign = 0;
             float tmp;
@@ -695,8 +695,8 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             c = (itheta > 8192);
             f->remaining2 -= qalloc+sbits;
 
-            x2 = c ? Y : X;
-            y2 = c ? X : Y;
+            x2 = c ? _Y : _X;
+            y2 = c ? _X : _Y;
             if (sbits) {
                 if (quant) {
                     sign = x2[0]*y2[1] - x2[1]*y2[0] < 0;
@@ -708,22 +708,22 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             sign = 1 - 2 * sign;
             /* We use orig_fill here because we want to fold the side, but if
             itheta==16384, we'll have cleared the low bits of fill. */
-            cm = pvq->quant_band(pvq, f, rc, band, x2, NULL, N, mbits, blocks, lowband, duration,
+            cm = pvq->quant_band(pvq, f, rc, band, x2, NULL, _N, mbits, blocks, lowband, duration,
                                  lowband_out, level, gain, lowband_scratch, orig_fill);
-            /* We don't split N=2 bands, so cm is either 1 or 0 (for a fold-collapse),
+            /* We don't split _N=2 bands, so cm is either 1 or 0 (for a fold-collapse),
             and there's no need to worry about mixing with the other channel. */
             y2[0] = -sign * x2[1];
             y2[1] =  sign * x2[0];
-            X[0] *= mid;
-            X[1] *= mid;
-            Y[0] *= side;
-            Y[1] *= side;
-            tmp = X[0];
-            X[0] = tmp - Y[0];
-            Y[0] = tmp + Y[0];
-            tmp = X[1];
-            X[1] = tmp - Y[1];
-            Y[1] = tmp + Y[1];
+            _X[0] *= mid;
+            _X[1] *= mid;
+            _Y[0] *= side;
+            _Y[1] *= side;
+            tmp = _X[0];
+            _X[0] = tmp - _Y[0];
+            _Y[0] = tmp + _Y[0];
+            tmp = _X[1];
+            _X[1] = tmp - _Y[1];
+            _Y[1] = tmp + _Y[1];
         } else {
             /* "Normal" split code */
             float *next_lowband2     = NULL;
@@ -734,21 +734,21 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
 
             /* Give more bits to low-energy MDCTs than they would
              * otherwise deserve */
-            if (B0 > 1 && !stereo && (itheta & 0x3fff)) {
+            if (_B0 > 1 && !stereo && (itheta & 0x3fff)) {
                 if (itheta > 8192)
                     /* Rough approximation for pre-echo masking */
                     delta -= delta >> (4 - duration);
                 else
                     /* Corresponds to a forward-masking slope of
                      * 1.5 dB per 10 ms */
-                    delta = FFMIN(0, delta + (N << 3 >> (5 - duration)));
+                    delta = FFMIN(0, delta + (_N << 3 >> (5 - duration)));
             }
             mbits = av_clip((b - delta) / 2, 0, b);
             sbits = b - mbits;
             f->remaining2 -= qalloc;
 
             if (lowband && !stereo)
-                next_lowband2 = lowband + N; /* >32-bit split case */
+                next_lowband2 = lowband + _N; /* >32-bit split case */
 
             /* Only stereo needs to pass on lowband_out.
              * Otherwise, it's handled at the end */
@@ -761,7 +761,7 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             if (mbits >= sbits) {
                 /* In stereo mode, we do not apply a scaling to the mid
                  * because we need the normalized mid for folding later */
-                cm = pvq->quant_band(pvq, f, rc, band, X, NULL, N, mbits, blocks,
+                cm = pvq->quant_band(pvq, f, rc, band, _X, NULL, _N, mbits, blocks,
                                      lowband, duration, next_lowband_out1, next_level,
                                      stereo ? 1.0f : (gain * mid), lowband_scratch, fill);
                 rebalance = mbits - (rebalance - f->remaining2);
@@ -770,24 +770,24 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
 
                 /* For a stereo split, the high bits of fill are always zero,
                  * so no folding will be done to the side. */
-                cmt = pvq->quant_band(pvq, f, rc, band, Y, NULL, N, sbits, blocks,
+                cmt = pvq->quant_band(pvq, f, rc, band, _Y, NULL, _N, sbits, blocks,
                                       next_lowband2, duration, NULL, next_level,
                                       gain * side, NULL, fill >> blocks);
-                cm |= cmt << ((B0 >> 1) & (stereo - 1));
+                cm |= cmt << ((_B0 >> 1) & (stereo - 1));
             } else {
                 /* For a stereo split, the high bits of fill are always zero,
                  * so no folding will be done to the side. */
-                cm = pvq->quant_band(pvq, f, rc, band, Y, NULL, N, sbits, blocks,
+                cm = pvq->quant_band(pvq, f, rc, band, _Y, NULL, _N, sbits, blocks,
                                      next_lowband2, duration, NULL, next_level,
                                      gain * side, NULL, fill >> blocks);
-                cm <<= ((B0 >> 1) & (stereo - 1));
+                cm <<= ((_B0 >> 1) & (stereo - 1));
                 rebalance = sbits - (rebalance - f->remaining2);
                 if (rebalance > 3 << 3 && itheta != 16384)
                     mbits += rebalance - (3 << 3);
 
                 /* In stereo mode, we do not apply a scaling to the mid because
                  * we need the normalized mid for folding later */
-                cm |= pvq->quant_band(pvq, f, rc, band, X, NULL, N, mbits, blocks,
+                cm |= pvq->quant_band(pvq, f, rc, band, _X, NULL, _N, mbits, blocks,
                                       lowband, duration, next_lowband_out1, next_level,
                                       stereo ? 1.0f : (gain * mid), lowband_scratch, fill);
             }
@@ -808,10 +808,10 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
         if (q != 0) {
             /* Finally do the actual (de)quantization */
             if (quant) {
-                cm = celt_alg_quant(rc, X, N, (q < 8) ? q : (8 + (q & 7)) << ((q >> 3) - 1),
+                cm = celt_alg_quant(rc, _X, _N, (q < 8) ? q : (8 + (q & 7)) << ((q >> 3) - 1),
                                     f->spread, blocks, gain, pvq);
             } else {
-                cm = celt_alg_unquant(rc, X, N, (q < 8) ? q : (8 + (q & 7)) << ((q >> 3) - 1),
+                cm = celt_alg_unquant(rc, _X, _N, (q < 8) ? q : (8 + (q & 7)) << ((q >> 3) - 1),
                                       f->spread, blocks, gain, pvq);
             }
         } else {
@@ -821,61 +821,61 @@  static av_always_inline uint32_t quant_band_template(CeltPVQ *pvq, CeltFrame *f,
             if (fill) {
                 if (!lowband) {
                     /* Noise */
-                    for (i = 0; i < N; i++)
-                        X[i] = (((int32_t)celt_rng(f)) >> 20);
+                    for (i = 0; i < _N; i++)
+                        _X[i] = (((int32_t)celt_rng(f)) >> 20);
                     cm = cm_mask;
                 } else {
                     /* Folded spectrum */
-                    for (i = 0; i < N; i++) {
+                    for (i = 0; i < _N; i++) {
                         /* About 48 dB below the "normal" folding level */
-                        X[i] = lowband[i] + (((celt_rng(f)) & 0x8000) ? 1.0f / 256 : -1.0f / 256);
+                        _X[i] = lowband[i] + (((celt_rng(f)) & 0x8000) ? 1.0f / 256 : -1.0f / 256);
                     }
                     cm = fill;
                 }
-                celt_renormalize_vector(X, N, gain);
+                celt_renormalize_vector(_X, _N, gain);
             } else {
-                memset(X, 0, N*sizeof(float));
+                memset(_X, 0, _N*sizeof(float));
             }
         }
     }
 
     /* This code is used by the decoder and by the resynthesis-enabled encoder */
     if (stereo) {
-        if (N > 2)
-            celt_stereo_merge(X, Y, mid, N);
+        if (_N > 2)
+            celt_stereo_merge(_X, _Y, mid, _N);
         if (inv) {
-            for (i = 0; i < N; i++)
-                Y[i] *= -1;
+            for (i = 0; i < _N; i++)
+                _Y[i] *= -1;
         }
     } else if (level == 0) {
         int k;
 
         /* Undo the sample reorganization going from time order to frequency order */
-        if (B0 > 1)
-            celt_interleave_hadamard(pvq->hadamard_tmp, X, N_B >> recombine,
-                                     B0 << recombine, longblocks);
+        if (_B0 > 1)
+            celt_interleave_hadamard(pvq->hadamard_tmp, _X, N_B >> recombine,
+                                     _B0 << recombine, longblocks);
 
         /* Undo time-freq changes that we did earlier */
         N_B = N_B0;
-        blocks = B0;
+        blocks = _B0;
         for (k = 0; k < time_divide; k++) {
             blocks >>= 1;
             N_B <<= 1;
             cm |= cm >> blocks;
-            celt_haar1(X, N_B, blocks);
+            celt_haar1(_X, N_B, blocks);
         }
 
         for (k = 0; k < recombine; k++) {
             cm = ff_celt_bit_deinterleave[cm];
-            celt_haar1(X, N0>>k, 1<<k);
+            celt_haar1(_X, _N0>>k, 1<<k);
         }
         blocks <<= recombine;
 
         /* Scale output for later folding */
         if (lowband_out) {
-            float n = sqrtf(N0);
-            for (i = 0; i < N0; i++)
-                lowband_out[i] = n * X[i];
+            float n = sqrtf(_N0);
+            for (i = 0; i < _N0; i++)
+                lowband_out[i] = n * _X[i];
         }
         cm = av_mod_uintp2(cm, blocks);
     }
-- 
2.39.2