| Message ID | 20230618222805.4054410-1-michael@niedermayer.cc |
|---|---|
| State | New |
| Series | [FFmpeg-devel,v2] avformat: add Software Defined Radio support |

| Context | Check | Description |
|---|---|---|
| yinshiyou/make_loongarch64 | fail | Make failed |
| andriy/make_x86 | fail | Make failed |
On Mon, Jun 19, 2023 at 12:28:05AM +0200, Michael Niedermayer wrote:
> Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
> ---
>  configure                |    4 +
>  doc/demuxers.texi        |   71 ++
>  libavformat/Makefile     |    2 +
>  libavformat/allformats.c |    2 +
>  libavformat/sdrdemux.c   | 1739 ++++++++++++++++++++++++++++++++++++++
>  5 files changed, 1818 insertions(+)
>  create mode 100644 libavformat/sdrdemux.c

I'll post a v3 later today or tomorrow that makes this work with the RTL-SDR Blog V3.

[...]
On 6/22/2023 10:43 AM, Michael Niedermayer wrote:
> On Mon, Jun 19, 2023 at 12:28:05AM +0200, Michael Niedermayer wrote:
>> [...]
>>  5 files changed, 1818 insertions(+)
>>  create mode 100644 libavformat/sdrdemux.c
>
> I'll post a v3 later today or tomorrow that makes this work with the RTL-SDR Blog V3.

Shouldn't the SDR "demuxer" be in libavdevice? Being AVFMT_NOFILE and pretty
much a capture device, it seems to me that's the proper place.
I guess the problem arises with the sdrfile demuxer, which shares code with
the other one.
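For context on the flag James refers to: a component marked AVFMT_NOFILE tells the generic libavformat open code not to create an AVIOContext, because the component produces its own data (typically by talking to hardware), which is why such components conventionally live in libavdevice. Below is a minimal, simplified sketch of what such a declaration looks like against the 2023-era libavformat internals; the names (`ff_sdr_demuxer`, `sdr_read_header`, `sdr_read_packet`) are placeholders, not functions from the actual patch, and the example is not meant to build out of tree.

```c
#include "avformat.h"   /* inside libavformat/: AVInputFormat, AVFMT_NOFILE */

/* Hypothetical SDR capture component: with AVFMT_NOFILE set, no AVIOContext
 * is opened for it, so read_header/read_packet must fetch data themselves
 * (from the tuner hardware, or from a recorded IQ dump in the sdrfile case). */
static int sdr_read_header(AVFormatContext *s)
{
    /* open and tune the device, create the audio/data streams */
    return 0;
}

static int sdr_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    /* read IQ samples, demodulate, return one packet */
    return AVERROR_EOF;
}

const AVInputFormat ff_sdr_demuxer = {
    .name        = "sdr",
    .long_name   = "Software Defined Radio capture",
    .flags       = AVFMT_NOFILE,   /* no underlying file or protocol */
    .read_header = sdr_read_header,
    .read_packet = sdr_read_packet,
};
```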
On Thu, Jun 22, 2023 at 10:55:44AM -0300, James Almer wrote:
> Shouldn't the SDR "demuxer" be in libavdevice? Being AVFMT_NOFILE and pretty
> much a capture device, it seems to me that's the proper place.
> I guess the problem arises with the sdrfile demuxer, which shares code with
> the other one.

I have no opinion on this. I can move it to libavdevice if people prefer.
Do people prefer libavdevice for this?

Personally I think libavdevice should be merged with libavformat.
Their APIs and ABIs are the same; they are not truly separate libs.
One could say libavdevice is a plugin for libavformat. And with that view,
having each device from libavdevice be its own plugin would be "better" than
having them all together as one plugin.

thx

[...]
On 6/22/2023 12:05 PM, Michael Niedermayer wrote:
> On Thu, Jun 22, 2023 at 10:55:44AM -0300, James Almer wrote:
>> Shouldn't the SDR "demuxer" be in libavdevice? Being AVFMT_NOFILE and pretty
>> much a capture device, it seems to me that's the proper place.
>
> I have no opinion on this. I can move it to libavdevice if people prefer.
> Do people prefer libavdevice for this?

I do. It's a capture device and depends on an external library to interface
with hardware. And like I said, you can keep the file demuxer in lavf, where
it does fit.

> Personally I think libavdevice should be merged with libavformat.

It's been suggested plenty of times. Anton attempted to at least kickstart it
once, but his plan was shot down.
No one has tried anything ever since.

> Their APIs and ABIs are the same; they are not truly separate libs.
> [...]
On Thu, Jun 22, 2023 at 12:10:06PM -0300, James Almer wrote:
> On 6/22/2023 12:05 PM, Michael Niedermayer wrote:
>> I have no opinion on this. I can move it to libavdevice if people prefer.
>> Do people prefer libavdevice for this?
>
> I do. It's a capture device and depends on an external library to interface
> with hardware. And like I said, you can keep the file demuxer in lavf, where
> it does fit.

Anyone object if I then move sdrfile into libavdevice too? Because it's
basically the same code. Otherwise it would have to be some sort of #include
of a .c file across libs.

>> Personally I think libavdevice should be merged with libavformat.
>
> It's been suggested plenty of times. Anton attempted to at least kickstart it
> once, but his plan was shot down.
> No one has tried anything ever since.

How was it shot down?
1. send patch
2. apply it
I think it should be done with the next major ABI bump.

thx

[...]
On 6/22/2023 1:26 PM, Michael Niedermayer wrote:
> On Thu, Jun 22, 2023 at 12:10:06PM -0300, James Almer wrote:
>> I do. It's a capture device and depends on an external library to interface
>> with hardware. And like I said, you can keep the file demuxer in lavf, where
>> it does fit.
>
> Anyone object if I then move sdrfile into libavdevice too? Because it's
> basically the same code. Otherwise it would have to be some sort of #include
> of a .c file across libs.

It does not belong in lavd. The doxy in avdevice.h states "the (de)muxers in
libavdevice are of the AVFMT_NOFILE type". And you can make the common code
avpriv_ since both libraries are basically version locked. There's no need to
duplicate it, but if you prefer that, you can indeed do an #include of the .c
file using the SHLIBOBJS list in the Makefile, which will trigger on shared
builds.

> How was it shot down?
> 1. send patch
> 2. apply it
> I think it should be done with the next major ABI bump.
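Both code-sharing mechanisms James lists here exist in the tree: internal symbols prefixed avpriv_ are exported for the other FFmpeg libraries to use, and the SHLIBOBJS Makefile list compiles a thin wrapper file into the shared build of a library so that it carries its own copy of another library's source rather than reaching into its internals. A hedged sketch of the wrapper approach, with hypothetical file names (this is not what the patch actually does):

```c
/* libavdevice/sdr_shared.c — hypothetical wrapper file.
 *
 * In libavdevice/Makefile the object would be listed in SHLIBOBJS
 * (for example: SHLIBOBJS += sdr_shared.o), so that shared builds of
 * libavdevice compile their own copy of the common SDR code, while
 * static builds simply link the single libavformat object. */
#include "libavformat/sdrdemux.c"
```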
On Thu, Jun 22, 2023 at 01:42:39PM -0300, James Almer wrote:
> It does not belong in lavd. The doxy in avdevice.h states "the (de)muxers in
> libavdevice are of the AVFMT_NOFILE type". And you can make the common code
> avpriv_ since both libraries are basically version locked. There's no need to
> duplicate it, but if you prefer that, you can indeed do an #include of the .c
> file using the SHLIBOBJS list in the Makefile, which will trigger on shared
> builds.

It seemed to me wiser not to put effort into separating the sdrfile and sdr
cases when device and format will really be merged at some future point.
It's work now and then work to undo it.
But I'll look into it.

thx

[...]
FFmpeg is not the place for SDR. SDR is as large and complex as the entirety
of multimedia.

What next, is FFmpeg going to implement TCP in userspace, Wifi, Ethernet, an
entire 4G and 5G stack? All without any separation of layers (the problem you
currently have)?

Kieran
On Fri, Jun 23, 2023 at 10:34:10AM +0800, Kieran Kunhya wrote:
> FFmpeg is not the place for SDR. SDR is as large and complex as the entirety
> of multimedia.
>
> What next, is FFmpeg going to implement TCP in userspace, Wifi, Ethernet, an
> entire 4G and 5G stack?

https://en.wikipedia.org/wiki/Straw_man

What my patch is doing is adding support for AM demodulation; the AM-specific
code is like 2 pages. The future plan for FM demodulation will not add a lot
of code either. DAB/DVB should also not be anything big (if that is
implemented at all by anyone).
If the code grows beyond that it could be split out into a separate library
outside FFmpeg.

The size of all of SDR really has as much bearing on FFmpeg as the size of all
of mathematics has on the use of mathematics in FFmpeg.

> All without any separation of layers (the problem you currently have)?

Let's see where the review process leads to.
It is possible I am missing some things; it's possible others are missing some
factors. Ultimately SDR is more similar than it is different from existing
input devices and demuxers. The review process may identify possible solutions
that benefit other input devices too. It might identify shortcomings in FFmpeg
that could lead to improvements.
I don't really enjoy the review process ATM, no ;) but let's see where it
leads to.

Thx

[...]
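To give a sense of why the AM-specific part can stay this small: once the receiver delivers baseband IQ samples centred on the carrier, AM envelope detection is essentially a per-sample magnitude computation plus DC removal. A minimal sketch under that assumption (illustrative only, not the code from the patch; tuning, channel filtering, resampling and station probing are where the bulk of a real implementation goes):

```c
#include <math.h>
#include <stddef.h>

/* Minimal AM envelope demodulator for baseband IQ samples centred on the
 * carrier: the message is the magnitude of the complex sample, minus the
 * carrier's DC offset.  'iq' holds n interleaved I/Q float pairs; 'out'
 * receives n audio samples.  A slow single-pole IIR tracks the DC level
 * across calls via *dc. */
static void am_demod(const float *iq, float *out, size_t n, float *dc)
{
    for (size_t i = 0; i < n; i++) {
        float mag = hypotf(iq[2 * i], iq[2 * i + 1]); /* envelope */
        *dc += 0.0005f * (mag - *dc);                 /* track carrier level */
        out[i] = mag - *dc;                           /* remove DC -> audio  */
    }
}
```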
On Fri, 23 Jun 2023, 19:17 Michael Niedermayer, <michael@niedermayer.cc> wrote:
> On Fri, Jun 23, 2023 at 10:34:10AM +0800, Kieran Kunhya wrote:
>> FFmpeg is not the place for SDR. SDR is as large and complex as the entirety
>> of multimedia.
>>
>> What next, is FFmpeg going to implement TCP in userspace, Wifi, Ethernet, an
>> entire 4G and 5G stack?
>
> https://en.wikipedia.org/wiki/Straw_man

It's not a straw man, I'm asking why you want to add layers far below
multimedia to ffmpeg. All of those examples are similar examples of physical
layers.

> What my patch is doing is adding support for AM demodulation; the AM-specific
> code is like 2 pages. The future plan for FM demodulation will not add a lot
> of code either. DAB/DVB should also not be anything big (if that is
> implemented at all by anyone).
> If the code grows beyond that it could be split out into a separate library
> outside FFmpeg.

Then just put it in a library outside of FFmpeg to begin with. DVB-T2 and S2
are very complex to implement well. AM and FM are toys in comparison.

> The size of all of SDR really has as much bearing on FFmpeg as the size of
> all of mathematics has on the use of mathematics in FFmpeg.

I have no idea what this means; mathematics is needed in multimedia, SDR is
not.

>> All without any separation of layers (the problem you currently have)?
>
> Let's see where the review process leads to.
> [...]
> I don't really enjoy the review process ATM, no ;) but let's see where it
> leads to.

No input device needs to do active mathematical processing on the data at an
analogue layer. SDR isn't similar at all.

Kieran
Hi,

On 23 June 2023 13:17:28 GMT+02:00, Michael Niedermayer <michael@niedermayer.cc> wrote:
> What my patch is doing is adding support for AM demodulation; the AM-specific
> code is like 2 pages. The future plan for FM demodulation will not add a lot
> of code either. DAB/DVB should also not be anything big (if that is
> implemented at all by anyone).

Literally every one of those layer-2 protocols has a lower-level API already
on Linux, and typically they are, or would be, backends to libavdevice.

(Specifically AM and FM are supported by V4L radio + ALSA; DAB and DVB by
Linux-DVB. 4G and 5G are network devices.)

So I can only agree with Kieran that these are *lower* layers, that don't
really look like they belong in FFmpeg.

> If the code grows beyond that it could be split out into a separate library
> outside FFmpeg.

I think that the point is that that code should be up-front in a separate
FFmpeg-independent library. And it's not just a technical argument about
layering. It's also that it's too far outside what FFmpeg typically works
with, so it really should not be put under the purview of FFmpeg-devel. In
other words, it's also a social problem.

The flip side of that argument is that this may be of interest to other
higher-level projects than FFmpeg, including projects that (rightfully) don't
depend on FFmpeg, and that this may interest people who wouldn't contribute or
participate in FFmpeg.

> The size of all of SDR really has as much bearing on FFmpeg as the size of
> all of mathematics has on the use of mathematics in FFmpeg.

On an empirical basis, I'd argue that FFmpeg mathematics are so fine-tuned to
specific algorithmic use cases that you will anyway end up writing custom
algorithms and optimisations here. And thus you won't be sharing much code
with (the rest of) FFmpeg down the line.

> [...]
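The "V4L radio + ALSA" path Rémi refers to means tuning a hardware tuner through the kernel's V4L2 radio interface and then capturing the resulting audio through ALSA. A hedged sketch of just the tuning step; the device node and station frequency are example values, and error handling is trimmed:

```c
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Tune a V4L2 hardware radio tuner to ~100.0 MHz.  V4L2 radio frequencies
 * are in units of 62.5 kHz unless the tuner advertises V4L2_TUNER_CAP_LOW
 * (then 62.5 Hz); this sketch assumes the 62.5 kHz case. */
int main(void)
{
    int fd = open("/dev/radio0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_frequency f;
    memset(&f, 0, sizeof(f));
    f.tuner     = 0;
    f.type      = V4L2_TUNER_RADIO;
    f.frequency = (unsigned)(100.0e6 / 62500.0);   /* 100.0 MHz */

    if (ioctl(fd, VIDIOC_S_FREQUENCY, &f) < 0)
        perror("VIDIOC_S_FREQUENCY");

    close(fd);   /* the audio itself is captured separately, e.g. via ALSA */
    return 0;
}
```

Whether a given SDR device (such as the RSP1A mentioned below) actually exposes this kernel interface is a separate question, and is part of Michael's objection in the next message.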
Hi

On Fri, Jun 23, 2023 at 06:37:18PM +0200, Rémi Denis-Courmont wrote:
> Literally every one of those layer-2 protocols has a lower-level API already
> on Linux, and typically they are, or would be, backends to libavdevice.
>
> (Specifically AM and FM are supported by V4L radio + ALSA; DAB and DVB by
> Linux-DVB. 4G and 5G are network devices.)

4 problems:
* FFmpeg is not "Linux only".
* No software I tried or was suggested to me used V4L or Linux-DVB.
* I am not sure the RSP1A I have has Linux drivers for these interfaces.
* What I am interested in is working with the signals at a low level. Why?
  Because I find it interesting and fun. Accessing AM/FM through some
  high-level API is not something I am interested in. This is also because any
  issues are likely unsolvable at that level.
  If probing didn't find a station, or demodulation doesn't work, a high-level
  API likely won't allow doing anything about that.

> So I can only agree with Kieran that these are *lower* layers, that don't
> really look like they belong in FFmpeg.

FFmpeg has always been very low level. We stopped where the OS provides
support that works, not at some academic "level". If every OS provides a great
SDR API then I missed that, which is possible because that was never something
I was interested in.

> I think that the point is that that code should be up-front in a separate
> FFmpeg-independent library. And it's not just a technical argument about
> layering. It's also that it's too far outside what FFmpeg typically works
> with, so it really should not be put under the purview of FFmpeg-devel. In
> other words, it's also a social problem.
>
> The flip side of that argument is that this may be of interest to other
> higher-level projects than FFmpeg, including projects that (rightfully) don't
> depend on FFmpeg, and that this may interest people who wouldn't contribute
> or participate in FFmpeg.

The issue I have with this view is that it comes from people who want nothing
to do with this SDR work.
I would see this argument very differently if it came from people who want to
work on that external SDR library.

I mean this is more a "go away" than a "let's work together on SDR (for
FFmpeg)".

> On an empirical basis, I'd argue that FFmpeg mathematics are so fine-tuned to
> specific algorithmic use cases that you will anyway end up writing custom
> algorithms and optimisations here. And thus you won't be sharing much code
> with (the rest of) FFmpeg down the line.

I am not sure, I am drifting off topic maybe, but I frequently use code from
libavutil outside multimedia.

thx

[...]
On Fri, Jun 23, 2023 at 8:12 PM Michael Niedermayer <michael@niedermayer.cc> wrote:
> The issue I have with this view is that it comes from people who want nothing
> to do with this SDR work.
> I would see this argument very differently if it came from people who want to
> work on that external SDR library.
>
> I mean this is more a "go away" than a "let's work together on SDR (for
> FFmpeg)".
>
> [...]

If you are not willing to listen to reviews then you cannot collaborate on the
FFmpeg project.

I tried to tell you that using this code inside libavformat is a mistake. But
you keep ignoring that.

[...]
On Fri, Jun 23, 2023 at 08:17:16PM +0200, Paul B Mahol wrote:
> If you are not willing to listen to reviews then you cannot collaborate on
> the FFmpeg project.
>
> I tried to tell you that using this code inside libavformat is a mistake. But
> you keep ignoring that.

You called me a liar. Which is unacceptable.

libavformat is open source code; anyone can add any feature that he wants.
I am in no way insisting on implementing this in libavformat, but so far it is
really the only place (even if the active code were moved outside, it would
still be interfacing with libavformat).

Also I am not ignoring anyone. I just move forward and try to implement all
suggestions that I can and that I agree with. Once that is done, we will look
at whether we have a consensus on what to do. That could be git master, it
could be another place, it could include an external lib IF others want to
work on this lib too. Maybe someone has an entirely different suggestion that
we all prefer, I don't know. We will see.
But ATM I find this SDR stuff a bit interesting and so I will continue to work
on it. If me working on free software offends anyone, I am not sure what to
say.

thx

[...]
On Fri, Jun 23, 2023 at 8:56 PM Michael Niedermayer <michael@niedermayer.cc> wrote:
> Also I am not ignoring anyone. I just move forward and try to implement all
> suggestions that I can and that I agree with.
> [...]
> But ATM I find this SDR stuff a bit interesting and so I will continue to
> work on it. If me working on free software offends anyone, I am not sure what
> to say.

Yes, your patches, not just this one, offend all of us:

1. they break existing functionality and introduce subtle bugs
2. they ignore fixing various bugs and regressions reported on trac
3. they introduce new changes and broken features that very quickly become
   unmaintainable

Feel free to work on your own fork/project as you want.
On 6/23/2023 4:10 PM, Paul B Mahol wrote:
> Yes, your patches, not just this one, offend all of us:
>
> 1. they break existing functionality and introduce subtle bugs
> 2. they ignore fixing various bugs and regressions reported on trac
> 3. they introduce new changes and broken features that very quickly become
>    unmaintainable
>
> Feel free to work on your own fork/project as you want.

Or you could stop being so inflammatory all the time. There are much more
productive ways to prevent or fix all of the things you complain about above
than throwing accusations around.
No. Absolutely not.

Radio stuff belongs in radio projects such as GNU Radio. This is extreme scope
creep.

I can throw an AM detector together in 15 minutes. That doesn't mean it
belongs in FFmpeg. You are also treading into modem territory, among other
things. Please contribute to actually existing free radio projects instead.

/Tomas
Hi Tomas

On Fri, Jun 23, 2023 at 10:10:29PM +0200, Tomas Härdin wrote:
> No. Absolutely not.
>
> Radio stuff belongs in radio projects such as GNU Radio. This is extreme
> scope creep.
>
> I can throw an AM detector together in 15 minutes. That doesn't mean it
> belongs in FFmpeg. You are also treading into modem territory, among other
> things. Please contribute to actually existing free radio projects instead.

And DSP belongs in DSP projects, MPEG decoding belongs in libmpeg2,
PNG, Ogg, Vorbis belong in their respective projects;
in fact FFmpeg shouldn't exist, we should all have been contributing to
GStreamer.

Wouldn't that be the same line of thought?

But seriously, there are 2 things.
There's myself and what is fun for me to work on (you suggest I shouldn't have
fun?).
And there is FFmpeg, which is missing any and all SDR support.

Are you planning to add SDR support through some library like GNU Radio to
FFmpeg?

I think GNU Radio is a poor choice; even just the base package has
"Installed-Size: 407 MB". That would be a huge dependency to avoid what is ATM
2 pages of modem code. Also GNU Radio is not LGPL, and I think FFmpeg
generally prefers LGPL over GPL. Not that I personally have anything against
GPL, I like GPL, but that's not the preferred license in FFmpeg.

Do you suggest we should create a libavradio? Or can you suggest an existing
library that would fit the C + clean ASM LGPL style that FFmpeg tends to
prefer?

thx

[...]
On Fri 2023-06-23 at 23:18 +0200, Michael Niedermayer wrote:
> And DSP belongs in DSP projects, MPEG decoding belongs in libmpeg2,
> PNG, Ogg, Vorbis belong in their respective projects;
> in fact FFmpeg shouldn't exist, we should all have been contributing to
> GStreamer.
>
> Wouldn't that be the same line of thought?
>
> But seriously, there are 2 things.
> There's myself and what is fun for me to work on (you suggest I shouldn't
> have fun?).
> And there is FFmpeg, which is missing any and all SDR support.
>
> Are you planning to add SDR support through some library like GNU Radio to
> FFmpeg?

This is begging the question. I don't care what you think is fun, this is
outside the scope of the project. Not everything needs to be shoveled into
FFmpeg master. The UNIX pipe was invented for a reason. Use radio tools to do
radio stuff, then pipe the resulting bitstream or audio stream into FFmpeg if
you need to, say, transcode DAB to Opus for streaming on the Web or something.

> I think GNU Radio is a poor choice; even just the base package has
> "Installed-Size: 407 MB". That would be a huge dependency to avoid what is
> ATM 2 pages of modem code.

Ridiculous justification for increasing technical debt in the project. Modern
computers have more than enough disk. GNU Radio supports offloading processing
to on-board FPGAs among other very useful features for radio work. There are
more light-weight options if you just want AM/FM, that can then be piped to
FFmpeg by various means (jack or named pipes). gqrx for example, which in
Debian (gqrx-sdr) comes to 25 megs of downloads (152 megs on disk) including
all dependencies, which includes gnuradio, gnuradio-dev, gr-osmosdr etc.

> Also GNU Radio is not LGPL, and I think FFmpeg generally prefers LGPL over
> GPL.

This is a non-issue when using pipes.

> Not that I personally have anything against GPL, I like GPL, but that's not
> the preferred license in FFmpeg.
>
> Do you suggest we should create a libavradio? Or can you suggest an existing
> library that would fit the C + clean ASM LGPL style that FFmpeg tends to
> prefer?

I am suggesting you follow the UNIX philosophy of having programs that do one
thing well.

/Tomas
On 23 June 2023 20:12:41 GMT+02:00, Michael Niedermayer <michael@niedermayer.cc> wrote:
> On Fri, Jun 23, 2023 at 06:37:18PM +0200, Rémi Denis-Courmont wrote:
>> Literally every one of those layer-2 protocols has a lower-level API already
>> on Linux, and typically they are, or would be, backends to libavdevice.
>>
>> (Specifically AM and FM are supported by V4L radio + ALSA; DAB and DVB by
>> Linux-DVB. 4G and 5G are network devices.)
>
> 4 problems:
> * FFmpeg is not "Linux only".

And then what? Whether you like it or not, radio signal processing sits on top
of OS-specific APIs to access whatever bus or hardware. You can't make this
OS-independent whether it's in FFmpeg or elsewhere. At best you can write or
reuse platform abstraction layers (such as libusb). Maybe.

In other words, whether this ends up in FFmpeg or not has absolutely no
bearing on this "problem" as you call it.

But it doesn't end here. Audio input on Linux is normally exposed with ALSA
modules (hw/plughw if the driver is in kernel, but it doesn't have to be), and
other OSes have equivalent APIs. A sound (pun unintended) implementation of AM
or FM would actually be an ALSA module, and *maybe* also PA and PW modules.
(They don't have to be kernel mode drivers.)

...Not an FFmpeg device or demux.

> * No software I tried or was suggested to me used V4L or Linux-DVB.

That would be because audio input is done with ALSA (in combination with V4L
for *hardware* radio tuning). The point was that this is lower layer stuff
that belongs in a lower level library or module, rather than FFmpeg. I never
*literally* said that you should (or even could) use V4L or Linux-DVB here.
Those APIs are rather for the *hardware* equivalent of what you're doing in
*software*. So again, no "problem" here.

> * I am not sure the RSP1A I have has Linux drivers for these interfaces.

Unless you're connecting to the radio receiver via IP (which would be a kludge
IMO), you're going to have to have a kernel driver to expose the bus to the
physical hardware. This is the same fallacious argument as the first one: you
*can't* be OS-independent here, even if it's understandably and even agreeably
desirable in an ideal (and purely imaginary) world.

> * What I am interested in is working with the signals at a low level. Why?
>   Because I find it interesting and fun.

Nothing wrong with that, but that doesn't make it fit in FFmpeg, which, for
all the amazing work you've done on it, isn't your personal playground.

>   Accessing AM/FM through some high-level API is not something I am
>   interested in. This is also because any issues are likely unsolvable at
>   that level.
>   If probing didn't find a station, or demodulation doesn't work, a
>   high-level API likely won't allow doing anything about that.

I'm not sure what you even call a high-level API here.

AM and FM are analogue audio sources, and ALSA modules and equivalent APIs on
other OSes are as low as it practically gets for *exposing* digitised analogue
audio. They're *lower* than libavformat/libavdevice even, I'd argue.

> [...]
On Sat, Jun 24, 2023 at 11:51:57AM +0200, Tomas Härdin wrote: > fre 2023-06-23 klockan 23:18 +0200 skrev Michael Niedermayer: > > Hi Tomas > > > > On Fri, Jun 23, 2023 at 10:10:29PM +0200, Tomas Härdin wrote: > > > No. Absolutely not. > > > > > > Radio stuff belongs in radio projects such as GNU Radio. This is > > > extreme scope creep. > > > > > > I can throw an AM detector together in 15 minutes. That doesn't > > > mean it > > > belongs in FFmpeg. You are also treading into modem territory, > > > among > > > other things. Please contribute to actually existing free radio > > > projects instead. > > > > And DSP belongs in DSP projects, mpeg decoding belongs in libmpeg2 > > png, ogg, vorbis belong in their respective projects > > in fact FFmpeg shouldnt exist we should all have been contributing to > > gstreamer > > > > Wouldnt that be the same line of thought ? > > > > But seriously there are 2 things. > > Theres myself and what is fun for me to work on (you suggest i > > shouldnt have fun?) > > And there is FFmpeg that is missing any and all SDR support > > > > Are you planing to add SDR support through some library like GNU > > radio > > to FFmpeg ? > > This is begging the question. I don't care what you think is fun, this > is outside the scope of the project. Not everything needs to be > shoveled into FFmpeg master. The UNIX pipe was invented for a reason. > Use radio tools to do radio stuff, then pipe the resulting bitstream or > audio stream into FFmpeg if you need to say transcode DAB to Opus for > streaming on the Web or something. We will have to agree to disagree here. all other sources of audio and video work conveniently in FFmpeg I can also play a video and switch subtitle and audio streams with a button not have to use a external demultiplxer and choose the streams before starting the player. > > > I think GNU radio is a poor choice, even just the base package has > > "Installed-Size: 407 MB" that would be huge dependancy to avoid ATM 2 > > pages > > of modem code > > Ridiculous justification for increasing technical debt in the project. > Modern computers have more than enough disk. GNU Radio supports > offloading processing to on-board FPGAs among other very useful > features for radio work. somehow you swing between extreems here first its pipes or nothing now we need FPGAs where ATM its just 2 pages of modem related code > There are more light-weight options if you > just want AM/FM, that can then be piped to FFmpeg by various means > (jack or named pipes). gqrx for example, which in Debian (gqrx-sdr) > comes to 25 megs of downloads (152 megs on disk) including all > dependencies, which includes gnuradio, gnuradio-dev, gr-osmosdr etc I think we somehow are not talking on the same frequency ;) if i want just AM/FM i need 10 pages of code max and writing these pages is a fun thing to do for me. No external dependancy belongs in FFmpeg to avoid 10 pages of code That dependancy causes us MUCH more pain than these 10 pages of code in fact interfacing to that dependancy might need more code A dependancy makes sense only if it provides an _advanatge_ for us > > > also GNU Radio is not LGPL, i think FFmpeg generally prefers > > LGPL over GPL. > > This is a non-issue when using pipes. How many users do use multimedia players with pipes ? you sure can have decoder demuxer protocol and video display and connect all by pipes. Why is noone doing that and just calls vlc ? 
> > Not that i personally have anything against GPL, I like GPL
> > but thats not the preferred license in FFmpeg
> >
> > do you suggest we should create a libavradio ?
> > or can you suggest an existing library that would fit the C + clean
> > ASM
> > LGPL style that FFmpeg tends to prefer ?
>
> I am suggesting you follow the UNIX philosophy of having programs that
> do one thing well.

Sure, but again it doesn't do it.

The 3 cases I am thinking of ATM are

1. A libavcodec / libavformat / libavfilter based player which allows going
   through the radio spectrum, visually displaying what is there,
   autodetecting modulations and demodulating the selected station.

2. FFmpeg being able to simply be given a piece of radio spectrum, detect and
   demodulate everything in that part independent of what modulations are
   there, and store all that in a multimedia file like matroska or nut.

3. A fun exercise about SDR, modulations and related math.

I don't know how to do any of these even with pipes and unmodified ffmpeg.

thx

[...]
lör 2023-06-24 klockan 22:27 +0200 skrev Rémi Denis-Courmont: > Unless you're connecting to the radio receiver via IP (which would be > a kludge IMO) Just a minor point: piping IQ data over TCP or UDP is actually quite common in the radio scene. I also wouldn't be surprised if Linux gains an API for dealing with IQ data in the future. That would allow writing more driver-esque radio stuff. Because radio is such a wide field, there's no place where adding it in FFmpeg is likely to be appropriate. A demodulator could output data packets, or bitstreams, or audio, or video, or fax images, or IQ data etc etc. Should we have an APRS implementation in lavf? AX.25? Maybe we should implement WiFi? This reminds me of the IPFS debacle. /Tomas
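(For readers who have not seen this in practice, "piping IQ data over TCP" usually just means a byte stream of interleaved I/Q samples. A minimal receiving-side sketch, assuming an rtl_tcp-style server on 127.0.0.1:1234 that streams interleaved unsigned 8-bit I/Q pairs; the host, port, sample format and the server's initial protocol header, which is simply not handled here, are all assumptions.)

/* Read raw interleaved unsigned 8-bit I/Q from a TCP server and recenter
 * it as floats in [-1, 1]. Sketch only; error handling kept minimal. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(1234) };
    unsigned char raw[4096];
    float iq[4096];     /* interleaved I,Q,I,Q,... centered around 0 */
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
    if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        return 1;

    for (;;) {
        ssize_t n = read(fd, raw, sizeof(raw));
        if (n <= 0)
            break;
        for (ssize_t i = 0; i < n; i++)
            iq[i] = (raw[i] - 127.5f) / 127.5f;
        /* hand iq[0..n) to whatever does the demodulation */
    }
    close(fd);
    return 0;
}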
lör 2023-06-24 klockan 23:01 +0200 skrev Michael Niedermayer: > On Sat, Jun 24, 2023 at 11:51:57AM +0200, Tomas Härdin wrote: > > fre 2023-06-23 klockan 23:18 +0200 skrev Michael Niedermayer: > > > > > > Are you planing to add SDR support through some library like GNU > > > radio > > > to FFmpeg ? > > > > This is begging the question. I don't care what you think is fun, > > this > > is outside the scope of the project. Not everything needs to be > > shoveled into FFmpeg master. The UNIX pipe was invented for a > > reason. > > Use radio tools to do radio stuff, then pipe the resulting > > bitstream or > > audio stream into FFmpeg if you need to say transcode DAB to Opus > > for > > streaming on the Web or something. > > We will have to agree to disagree here. > all other sources of audio and video work conveniently in FFmpeg > I can also play a video and switch subtitle and audio streams with a > button not have to use a external demultiplxer and choose the streams > before starting the player. If you want to listen to AM broadcasts then there are already plenty of programs to do that. If you want to skim the bands for interesting broadcasts there are tools for that as well. Have you even done basic research to see what is out there? > > > I think GNU radio is a poor choice, even just the base package > > > has > > > "Installed-Size: 407 MB" that would be huge dependancy to avoid > > > ATM 2 > > > pages > > > of modem code > > > > Ridiculous justification for increasing technical debt in the > > project. > > Modern computers have more than enough disk. GNU Radio supports > > offloading processing to on-board FPGAs among other very useful > > features for radio work. > > somehow you swing between extreems here > first its pipes or nothing now we need FPGAs Yeah because different applications call for different tools. You cannot do RADAR or WiFi on the CPU. Hence why there's a sophisticated set of tools for doing radio stuff already. Tools that could use more developer effort. > where ATM its just 2 pages of modem related code All the better reason to nip it in the bud. > > There are more light-weight options if you > > just want AM/FM, that can then be piped to FFmpeg by various means > > (jack or named pipes). gqrx for example, which in Debian (gqrx-sdr) > > comes to 25 megs of downloads (152 megs on disk) including all > > dependencies, which includes gnuradio, gnuradio-dev, gr-osmosdr etc > > I think we somehow are not talking on the same frequency ;) > if i want just AM/FM i need 10 pages of code max and writing these > pages is a fun thing to do for me. Fun for you is technical debt for everyone else. > No external dependancy belongs in FFmpeg to avoid 10 pages of code > That dependancy causes us MUCH more pain than these 10 pages of code > in fact interfacing to that dependancy might need more code More technical debt is worse, not better. > > > > > also GNU Radio is not LGPL, i think FFmpeg generally prefers > > > LGPL over GPL. > > > > This is a non-issue when using pipes. > > How many users do use multimedia players with pipes ? > you sure can have decoder demuxer protocol and video display > and connect all by pipes. Why is noone doing that and just > calls vlc ? Why would anyone use ffmpeg to do radio stuff when there already are superior tools for doing it? Tools that play better with ALSA etc. > > > Not that i personally have anything against GPL, I like GPL > > > but thats not the preferred license in FFmpeg > > > > > > do you suggest we should create a libavradio ? 
> > > or can you suggest an existing library that would fit the C + > > > clean > > > ASM > > > LGPL style that FFmpeg tends to prefer ? > > > > I am suggesting you follow the UNIX philosophy of having programs > > that > > do one thing well. > > Sure, but again it doesnt do it > > The 3 cases iam thinking of ATM are > > 1. A libavcodec / libavformat / libavfilter based player which allows > going > through the radio spectrum, vissually displaying what is there This already exists. Ask your local radio amateurs and they will point out oodles of programs for you. or apt search amateur radio > autodetecting modulations and demodulating the selected station. lol > 2. FFmpeg being able to simply be given a piece of radio spectrum and > detect and > demodulate everything in that part independant of what modulations > are there and storing > that all in a multimedia file like matroska or nut. You are suggesting writing a skimmer that somehow figures out what modulation is used. Meanwhile what people actually do is rely on the band plan, because shit is difficult enough as it is. Broadcast FM channels can be mono, or stereo, or forego stereo for subcarriers in different languages. Plus you have digital data in there, and mechanisms for emergency broadcast switchover. > 3. as a fun excercise about SDR, modulations and related math. There is no reason why your own personal experiments should contaminate master. /Tomas
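(Background for the point made above about broadcast FM being more than a single audio channel: after FM detection, the baseband conventionally stacks several components. A toy sketch of how such a multiplex is composed; the injection levels are only the customary approximate values, and RDS is reduced to a single placeholder "bit" here.)

/* Conventional broadcast FM multiplex: (L+R) mono at baseband, a 19 kHz
 * stereo pilot, (L-R) on a suppressed 38 kHz subcarrier, RDS around 57 kHz. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double fm_mpx_sample(double L, double R, double rds_bit, double t)
{
    double mono   = (L + R) * 0.5;
    double pilot  = 0.09 * cos(2 * M_PI * 19000 * t);
    double stereo = (L - R) * 0.5 * cos(2 * M_PI * 38000 * t);
    double rds    = 0.02 * rds_bit * cos(2 * M_PI * 57000 * t);
    return mono + pilot + stereo + rds;
}

int main(void)
{
    /* one arbitrary sample at t = 1 ms with an idle RDS carrier */
    printf("%f\n", fm_mpx_sample(0.3, -0.1, 0.0, 0.001));
    return 0;
}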
Michael Niedermayer (12023-06-23): > * What iam interrested in was working with the signals at a low level, why > because i find it interresting and fun. Then this is what you should be spending your time on, and to hell with anybody who says otherwise. Unless you have commitments that we are not privy of, nobody can tell you how you are supposed to be spending your time and skills. Nobody can force you to manage releases and fix fuzzing bugs and do all the things you do that are necessary to the project. Necessary to a conception of the good of the project that is not even your own I think. Nobody can prevent you from hacking the things that motivate you. At worst, they can prevent you from committing the resulting code into official FFmpeg. That would be the project's loss, and you can still publish it on a private branch. But they do not have the power to block you from pushing. It is not even clear they have the authority do block you, more so if the code is really good and fits FFmpeg well. The only thing that can block you is your desire to play nice and not harm the project. I would like to emphasize that some of the frequent naysayers do not have an excellent track record when it comes to playing nice and especially not harming the project. I am especially annoyed by the “it's too hard” naysayers — the same I have been getting when I say that I want to write a XML suited to our needs to replace our use of libxml. They do not realize they reveal more about their own skills than anybody else. You know if you can write a radio, so ignore them and go ahead. And if they were right, if it is too hard and you fail… well, you would just have wasted your own time; and you would have had fun and learned things on the occasion, which means the time was not even wasted. But the whole attitude who wants FFmpeg to be a Serious OpenSource TM Project, who needs to make releases and worry above all about ABI stability, is really the attitude who is killing all the fun in working on FFmpeg. Hey, people, realize FFmpeg does not exist to be a Serious OpenSource TM Project, FFmpeg does not exist to serve other projects, to serve companies who benefit from it give the bare minimum back. FFmpeg exists because some day a dude thought it would be fun to write a MPEG decoder. And everybody else told him it would be too hard, everybody else told him to use an existing library and to leave it to the professionals. He did not believe them and proved them utterly wrong, and the rest, as the saying goes, is history. So I will say it explicitly. We — me, and everybody who agrees with me — do not want to just maintain a bunch of wrappers for the convenience of others, we want to have fun writing interesting code, trying new ways of doing things, inventing original optimizations. We can find a balance and work on useful things too. But if you want us to work only on the boring useful things, if you want to bar us from working on fun things, then just fork you. P.S.: I do not think I am skilled enough in that area to review your code. But if you think I am, maybe on some parts, then I will happily. Regards,
On Sun, Jun 25, 2023 at 12:01:07AM +0200, Tomas Härdin wrote: > lör 2023-06-24 klockan 23:01 +0200 skrev Michael Niedermayer: > > On Sat, Jun 24, 2023 at 11:51:57AM +0200, Tomas Härdin wrote: > > > fre 2023-06-23 klockan 23:18 +0200 skrev Michael Niedermayer: > > > > > > > > Are you planing to add SDR support through some library like GNU > > > > radio > > > > to FFmpeg ? > > > > > > This is begging the question. I don't care what you think is fun, > > > this > > > is outside the scope of the project. Not everything needs to be > > > shoveled into FFmpeg master. The UNIX pipe was invented for a > > > reason. > > > Use radio tools to do radio stuff, then pipe the resulting > > > bitstream or > > > audio stream into FFmpeg if you need to say transcode DAB to Opus > > > for > > > streaming on the Web or something. > > > > We will have to agree to disagree here. > > all other sources of audio and video work conveniently in FFmpeg > > I can also play a video and switch subtitle and audio streams with a > > button not have to use a external demultiplxer and choose the streams > > before starting the player. > > If you want to listen to AM broadcasts then there are already plenty of > programs to do that. If you want to skim the bands for interesting > broadcasts there are tools for that as well. Have you even done basic > research to see what is out there? Ive looked a bit, i found CubicSDR and SDR++ i like these tools they are cool. But they dont seem to do many demodulations, just analog also they are very manual which is cool sure but what iam missing is a "iphone" like experience. Something with 2 buttons as in next station, previous station, hold one to record, hold both to record everything, double click to switch between audio and viddeo ;) not exacty like that of course but the idea of simple convenient and 99.99% working, no tool i saw or heared of does this. you know, not i have to select the frequency i have to select the demodulation i have to figure out what the song name is. Something when i hear a song i like i press record and it figures out what song this is (RDS has the info) figures out when the song started and goes back in some cache and stores that from the past Thats one use case like i said. Theres surely a lot more. like dumping everything from the whole DAB or FM band into a properly sorted "mp3/aac" library or maybe record all the air traffic control stuff (not that i care) but i suspect some people would care and for that i wouldnt be surpised if software exists already > > > > > I think GNU radio is a poor choice, even just the base package > > > > has > > > > "Installed-Size: 407 MB" that would be huge dependancy to avoid > > > > ATM 2 > > > > pages > > > > of modem code > > > > > > Ridiculous justification for increasing technical debt in the > > > project. > > > Modern computers have more than enough disk. GNU Radio supports > > > offloading processing to on-board FPGAs among other very useful > > > features for radio work. > > > > somehow you swing between extreems here > > first its pipes or nothing now we need FPGAs > > Yeah because different applications call for different tools. You > cannot do RADAR or WiFi on the CPU. we are drifting off topic. But "You cannot do" triggers me a bit reminds be of "you cannot break videocrypt without FFTs on FPGAs" I did that on the CPU of my average desktop in 1997 https://guru.multimedia.cx/index.php?s=videocrypt so now we cannot do RADAR on the CPU ? why exactly would that be so ? 
RADAR is not one thing btw, theres alot of different RADAR systems some i do belive worked with vaccuum tubes i do guess you dont mean these with wifi theres also different iterations. I guess as bandwidth increases we need more processing > Hence why there's a sophisticated > set of tools for doing radio stuff already. Tools that could use more > developer effort. > > > where ATM its just 2 pages of modem related code > > All the better reason to nip it in the bud. > > > > There are more light-weight options if you > > > just want AM/FM, that can then be piped to FFmpeg by various means > > > (jack or named pipes). gqrx for example, which in Debian (gqrx-sdr) > > > comes to 25 megs of downloads (152 megs on disk) including all > > > dependencies, which includes gnuradio, gnuradio-dev, gr-osmosdr etc > > > > I think we somehow are not talking on the same frequency ;) > > if i want just AM/FM i need 10 pages of code max and writing these > > pages is a fun thing to do for me. > > Fun for you is technical debt for everyone else. how is a self contained module affecting anyone excpet who works on it? [...] > > > > Not that i personally have anything against GPL, I like GPL > > > > but thats not the preferred license in FFmpeg > > > > > > > > do you suggest we should create a libavradio ? > > > > or can you suggest an existing library that would fit the C + > > > > clean > > > > ASM > > > > LGPL style that FFmpeg tends to prefer ? > > > > > > I am suggesting you follow the UNIX philosophy of having programs > > > that > > > do one thing well. > > > > Sure, but again it doesnt do it > > > > The 3 cases iam thinking of ATM are > > > > 1. A libavcodec / libavformat / libavfilter based player which allows > > going > > through the radio spectrum, vissually displaying what is there > > This already exists. Ask your local radio amateurs and they will point > out oodles of programs for you. or apt search amateur radio > > > autodetecting modulations and demodulating the selected station. > > lol whats funny ? > > > 2. FFmpeg being able to simply be given a piece of radio spectrum and > > detect and > > demodulate everything in that part independant of what modulations > > are there and storing > > that all in a multimedia file like matroska or nut. > > You are suggesting writing a skimmer that somehow figures out what > modulation is used. Meanwhile what people actually do is rely on the > band plan, because shit is difficult enough as it is. Broadcast FM > channels can be mono, or stereo, or forego stereo for subcarriers in > different languages. Plus you have digital data in there, and > mechanisms for emergency broadcast switchover. yes, and ? even my car radio (which is what the manufactor of the car put in) has working detection for all this thx [...]
> > > 3. as a fun excercise about SDR, modulations and related math.
> >
> There is no reason why your own personal experiments should contaminate
> master.
>
Agreed, there's nothing wrong with writing an SDR library around FFmpeg
programming paradigms (C, handwritten asm and not intrinsics etc).
But this should be in a separate library related to SDR and not FFmpeg.

Kieran
On Sat, Jun 24, 2023 at 10:27:13PM +0200, Rémi Denis-Courmont wrote: > > > Le 23 juin 2023 20:12:41 GMT+02:00, Michael Niedermayer <michael@niedermayer.cc> a écrit : > >Hi > > > >On Fri, Jun 23, 2023 at 06:37:18PM +0200, Rémi Denis-Courmont wrote: > >> Hi, > >> > >> Le 23 juin 2023 13:17:28 GMT+02:00, Michael Niedermayer <michael@niedermayer.cc> a écrit : > >> >On Fri, Jun 23, 2023 at 10:34:10AM +0800, Kieran Kunhya wrote: > >> >> FFmpeg is not the place for SDR. SDR is as large and complex as the > >> >> entirety of multimedia. > >> >> > >> >> What next, is FFmpeg going to implement TCP in userspace, Wifi, Ethernet, > >> >> an entire 4G and 5G stack? > >> > > >> >https://en.wikipedia.org/wiki/Straw_man > >> > > >> >What my patch is doing is adding support for AM demodulation, the AM > >> >specific code is like 2 pages. The future plan for FM demodulation will > >> >not add alot of code either. DAB/DVB should also not be anything big > >> >(if that is implemented at all by anyone) > >> > >> Literally every one of those layer-2 protocols has a lower-level API already on Linux, and typically they are, or would be, backends to libavdevice. > >> > >> (Specifically AM and FM are supported by V4L radio+ALSA; DAB and DVB by Linux-DVB. 4G and 5G are network devices.) > > > >4 problems > >* FFmpeg is not "linux only". > > And then what? Whether you like it or not, radio signal processing sits on top of OS-specific APIs to access whatever bus or hardware. You can't make this OS-independent whether it's in FFmpeg or elsewhere. > > At best you can write or reuse platform abstraction layers (such as libusb). Maybe. > > In other words, whether this ends up in FFmpeg or not has absolutely no bearing on this "problem" as you call it. > > But it doesn't end here. Audio input on Linux is normally exposed with ALSA modules (hw/plughw if the driver is in kernel, but it doesn't have to be), and other OSes have equivalent APIs. A sound (pun unintended) implementation of AM or FM would actually be an ALSA module, and *maybe* also PA and PW modules. (They don't have to be kernel mode drivers.) > > ...Not an FFmpeg device or demux. This is interresting. I never tried but there should be a way to pipe ffmpeg output back into alsa to make it available as an alsa module This could actually be usefull as is with no sdr. but that tought i have here is 100% off topic > > >* No software i tried or was suggested to me used V4L or Linux-DVB. > > That would be because audio input is done with ALSA (in combination with V4L for *hardware* radio tuning). > > The point was that this is lower layer stuff that belongs in a lower level library or module, rather than FFmpeg. I never *literally* said that you should (or even could) use V4L or Linux-DVB here. Those APIs are rather for the *hardware* equivalent of what you're doing in *software*. > > So again, no "problem" here. ok, i misunderstood you somewhat here thx [...]
On Sun, Jun 25, 2023 at 12:19:04AM +0200, Nicolas George wrote: > Michael Niedermayer (12023-06-23): > > * What iam interrested in was working with the signals at a low level, why > > because i find it interresting and fun. > > Then this is what you should be spending your time on, and to hell with > anybody who says otherwise. > > Unless you have commitments that we are not privy of, nobody can tell > you how you are supposed to be spending your time and skills. Nobody can > force you to manage releases and fix fuzzing bugs and do all the things > you do that are necessary to the project. Necessary to a conception of > the good of the project that is not even your own I think. > > Nobody can prevent you from hacking the things that motivate you. At > worst, they can prevent you from committing the resulting code into > official FFmpeg. That would be the project's loss, and you can still > publish it on a private branch. But they do not have the power to block > you from pushing. It is not even clear they have the authority do block > you, more so if the code is really good and fits FFmpeg well. The only > thing that can block you is your desire to play nice and not harm the > project. I would like to emphasize that some of the frequent naysayers > do not have an excellent track record when it comes to playing nice and > especially not harming the project. > > I am especially annoyed by the “it's too hard” naysayers — the same I > have been getting when I say that I want to write a XML suited to our > needs to replace our use of libxml. They do not realize they reveal more > about their own skills than anybody else. You know if you can write a > radio, so ignore them and go ahead. And if they were right, if it is too > hard and you fail… well, you would just have wasted your own time; and > you would have had fun and learned things on the occasion, which means > the time was not even wasted. you are very spot on with all this. anyway i will not push anything to master without the community agreeing, that would do no good. > > But the whole attitude who wants FFmpeg to be a Serious OpenSource TM > Project, who needs to make releases and worry above all about ABI > stability, is really the attitude who is killing all the fun in working > on FFmpeg. > > Hey, people, realize FFmpeg does not exist to be a Serious OpenSource TM > Project, FFmpeg does not exist to serve other projects, to serve > companies who benefit from it give the bare minimum back. > > FFmpeg exists because some day a dude thought it would be fun to write a > MPEG decoder. And everybody else told him it would be too hard, > everybody else told him to use an existing library and to leave it to > the professionals. He did not believe them and proved them utterly > wrong, and the rest, as the saying goes, is history. yes, also in the early days FFmpeg was inovative and pushing the boundaries i mean i could have pushed a cpu emulator and compiler IF there was need for that, noone would have had an issue with that. But nowadays FFmpeg is established and theres tradition. Any breach of that tradition leads to opposition. > > So I will say it explicitly. We — me, and everybody who agrees with me — > do not want to just maintain a bunch of wrappers for the convenience of > others, we want to have fun writing interesting code, trying new ways of > doing things, inventing original optimizations. We can find a balance > and work on useful things too. 
> But if you want us to work only on the
> boring useful things, if you want to bar us from working on fun things,
> then just fork you.

Well, the first question really is whether we can move forward in the
established system. It seems this is at least a bit cumbersome. But we will
see where it ends with that SDR feature.
It doesn't need to be in a demuxer (IF there was another way, it just seems
there is not), it can be in a separate library (IF there is more than 1
developer interested in working on it), there are a lot of things that can
be adjusted to reach a consensus.
Then the question is whether we have a consensus to push the SDR code, in
whatever form that would be, to git master.
If not, the code goes elsewhere, and that elsewhere should really then be
open to everyone else who wants to do innovative work.

I think every possible outcome here really has its upsides. It's just a more
cumbersome path than what I had envisioned.

thx

[...]
On Sat, Jun 24, 2023 at 10:27:13PM +0200, Rémi Denis-Courmont wrote: > > > Le 23 juin 2023 20:12:41 GMT+02:00, Michael Niedermayer <michael@niedermayer.cc> a écrit : > >Hi > > > >On Fri, Jun 23, 2023 at 06:37:18PM +0200, Rémi Denis-Courmont wrote: > >> Hi, > >> > >> Le 23 juin 2023 13:17:28 GMT+02:00, Michael Niedermayer <michael@niedermayer.cc> a écrit : > >> >On Fri, Jun 23, 2023 at 10:34:10AM +0800, Kieran Kunhya wrote: > >> >> FFmpeg is not the place for SDR. SDR is as large and complex as the > >> >> entirety of multimedia. > >> >> > >> >> What next, is FFmpeg going to implement TCP in userspace, Wifi, Ethernet, > >> >> an entire 4G and 5G stack? > >> > > >> >https://en.wikipedia.org/wiki/Straw_man > >> > > >> >What my patch is doing is adding support for AM demodulation, the AM > >> >specific code is like 2 pages. The future plan for FM demodulation will > >> >not add alot of code either. DAB/DVB should also not be anything big > >> >(if that is implemented at all by anyone) > >> > >> Literally every one of those layer-2 protocols has a lower-level API already on Linux, and typically they are, or would be, backends to libavdevice. > >> > >> (Specifically AM and FM are supported by V4L radio+ALSA; DAB and DVB by Linux-DVB. 4G and 5G are network devices.) > > > >4 problems > >* FFmpeg is not "linux only". > > And then what? Whether you like it or not, radio signal processing sits on top of OS-specific APIs to access whatever bus or hardware. You can't make this OS-independent whether it's in FFmpeg or elsewhere. > > At best you can write or reuse platform abstraction layers (such as libusb). Maybe. > > In other words, whether this ends up in FFmpeg or not has absolutely no bearing on this "problem" as you call it. > > But it doesn't end here. Audio input on Linux is normally exposed with ALSA modules (hw/plughw if the driver is in kernel, but it doesn't have to be), and other OSes have equivalent APIs. A sound (pun unintended) implementation of AM or FM would actually be an ALSA module, and *maybe* also PA and PW modules. (They don't have to be kernel mode drivers.) > > ...Not an FFmpeg device or demux. > speaking of layers. IMHO the kernel and driver layer should end with the IQ data for SDR. Thats OS specific. After that the whole chain should be largely OS and HW independent turning IQ data to a decoded video and audio stream for DVB is the same on every OS as complex as it may be. We could feed this back into ALSA on linux but i doubt that would be used alot more likely players would link to the library directly that turns IQ -> video+audio even more so because for the player it would be the same on every platform otherwise the player would have to support platform specific interfaces and not just one per platform either i think thx [...]
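(To make the proposed layering concrete, here is a sketch of the boundary being described. Every name in it is hypothetical; nothing like this exists in the patch or in FFmpeg. The point is only that the platform-specific part ends once I/Q samples exist.)

/* Hypothetical layering sketch: below this boundary lives SoapySDR / ALSA /
 * V4L / vendor code that produces I/Q; above it, portable DSP that turns
 * I/Q into audio or video, identical on every OS. */
#include <stddef.h>
#include <stdint.h>

typedef struct IQBlock {
    int64_t center_frequency;   /* Hz, hardware tuning at capture time */
    int     sample_rate;        /* complex samples per second */
    size_t  nb_samples;         /* number of I/Q pairs in data[] */
    float  *data;               /* interleaved I,Q,I,Q,... */
} IQBlock;

/* platform-specific part: fills an IQBlock from the hardware */
typedef int (*capture_fn)(void *ctx, IQBlock *out);

/* portable part: consumes I/Q and emits PCM (or bitstreams, or video) */
typedef int (*demodulate_fn)(void *ctx, const IQBlock *in,
                             float *pcm_out, size_t max_samples);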
sön 2023-06-25 klockan 11:54 +0200 skrev Michael Niedermayer: > On Sun, Jun 25, 2023 at 12:01:07AM +0200, Tomas Härdin wrote: > > lör 2023-06-24 klockan 23:01 +0200 skrev Michael Niedermayer: > > > On Sat, Jun 24, 2023 at 11:51:57AM +0200, Tomas Härdin wrote: > > > > fre 2023-06-23 klockan 23:18 +0200 skrev Michael Niedermayer: > > > > > > > > > > Are you planing to add SDR support through some library like > > > > > GNU > > > > > radio > > > > > to FFmpeg ? > > > > > > > > This is begging the question. I don't care what you think is > > > > fun, > > > > this > > > > is outside the scope of the project. Not everything needs to be > > > > shoveled into FFmpeg master. The UNIX pipe was invented for a > > > > reason. > > > > Use radio tools to do radio stuff, then pipe the resulting > > > > bitstream or > > > > audio stream into FFmpeg if you need to say transcode DAB to > > > > Opus > > > > for > > > > streaming on the Web or something. > > > > > > We will have to agree to disagree here. > > > all other sources of audio and video work conveniently in FFmpeg > > > I can also play a video and switch subtitle and audio streams > > > with a > > > button not have to use a external demultiplxer and choose the > > > streams > > > before starting the player. > > > > If you want to listen to AM broadcasts then there are already > > plenty of > > programs to do that. If you want to skim the bands for interesting > > broadcasts there are tools for that as well. Have you even done > > basic > > research to see what is out there? > > Ive looked a bit, i found CubicSDR and SDR++ i like these tools they > are cool. But they dont seem to do many demodulations, just analog > also they are very manual which is cool sure but what iam missing > is a "iphone" like experience. Something with 2 buttons > > as in next station, previous station, hold one to record, hold both > to record everything, double click to switch between audio and viddeo > ;) > not exacty like that of course but the idea of simple convenient and > 99.99% working, no tool i saw or heared of does this. > > you know, not i have to select the frequency i have to select the > demodulation > i have to figure out what the song name is. Something when i hear a > song i like > i press record and it figures out what song this is (RDS has the > info) > figures out when the song started and goes back in some cache and > stores > that from the past > Thats one use case like i said. Theres surely a lot more. like > dumping everything > from the whole DAB or FM band into a properly sorted "mp3/aac" > library Yes those are all fine use cases. But that doesn't explain why they belong in FFmpeg rather than say a driver that turns generic SDRs into V4L FM devices, or a dedicated GUI application. The latter would be a fine project on its own, and having it exec() ffmpeg for recording would no doubt also be useful. But it seems that you've decided beforehand that ffmpeg should do this. Why I do not know. gqrx can do some of what you describe, including displaying your SDR's input bandwidth in a waterfall. I wouldn't be surprised if there are more novice friendly programs. > > > > > somehow you swing between extreems here > > > first its pipes or nothing now we need FPGAs > > > > Yeah because different applications call for different tools. You > > cannot do RADAR or WiFi on the CPU. > > we are drifting off topic. 
But "You cannot do" triggers me a bit > reminds be of "you cannot break videocrypt without FFTs on FPGAs" > I did that on the CPU of my average desktop in 1997 > https://guru.multimedia.cx/index.php?s=videocrypt > > so now we cannot do RADAR on the CPU ? > why exactly would that be so ? > RADAR is not one thing btw, theres alot of different RADAR systems > some i do belive worked with vaccuum tubes i do guess you dont > mean these > > with wifi theres also different iterations. I guess as bandwidth > increases we need more processing Processing isn't the issue, latency is. But yes it is a bit off-topic, sorry. > > > I think we somehow are not talking on the same frequency ;) > > > if i want just AM/FM i need 10 pages of code max and writing > > > these > > > pages is a fun thing to do for me. > > > > Fun for you is technical debt for everyone else. > > how is a self contained module affecting anyone excpet who works on > it? Because radio begets its own requirements as far as DSP and other tooling is concerned, which will inevitably spill over into other parts of the project. Just one example is that sample rates are no longer within the range of human hearing, but could range in the GHz, thus causing all kinds of fun problems as sample_rate approaches INT_MAX. > > > autodetecting modulations and demodulating the selected station. > > > > lol > > whats funny ? I would suggest reading https://www.sigidwiki.com/wiki/Signal_Identification_Guide and then come back to me what the prospects of automatically identifying transmissions are. > > > 2. FFmpeg being able to simply be given a piece of radio spectrum > > > and > > > detect and > > > demodulate everything in that part independant of what > > > modulations > > > are there and storing > > > that all in a multimedia file like matroska or nut. > > > > You are suggesting writing a skimmer that somehow figures out what > > modulation is used. Meanwhile what people actually do is rely on > > the > > band plan, because shit is difficult enough as it is. Broadcast FM > > channels can be mono, or stereo, or forego stereo for subcarriers > > in > > different languages. Plus you have digital data in there, and > > mechanisms for emergency broadcast switchover. > > yes, and ? > even my car radio (which is what the manufactor of the car put in) > has working detection for all this Yes but your car radio doesn't automatically figure out whether a station is AM or FM say. It relies on expert knowledge (the band plan). All this said, if you want to write performant C and SIMD code for radio stuff there is a plethora of radio projects out there. Dablin looks like it could use AM and FM demodulation, and calling on ffmpeg for recording if it doesn't already. /Tomas
Tomas Härdin (12023-06-27): > Yes those are all fine use cases. But that doesn't explain why they > belong in FFmpeg rather than say a driver that turns generic SDRs into > V4L FM devices, or a dedicated GUI application. They can be implemented in any of these pieces of software, including FFmpeg, and each solution will have its own set of benefits and drawbacks in terms of performance, features, user interaction, etc. In other words, a simple point of logic: X can Y does not imply X cannot Z for all Z not equal to Y.
tis 2023-06-27 klockan 12:57 +0200 skrev Nicolas George: > Tomas Härdin (12023-06-27): > > Yes those are all fine use cases. But that doesn't explain why they > > belong in FFmpeg rather than say a driver that turns generic SDRs > > into > > V4L FM devices, or a dedicated GUI application. > > They can be implemented in any of these pieces of software, including > FFmpeg, and each solution will have its own set of benefits and > drawbacks in terms of performance, features, user interaction, etc. > > In other words, a simple point of logic: X can Y does not imply X > cannot > Z for all Z not equal to Y. I propose we add the ability for ffmpeg to send email. After all some users might find that useful. And because many users use webmail we should also implement HTML parsing. /Tomas
Tomas Härdin (12023-06-27): > I propose we add the ability for ffmpeg to send email. After all some > users might find that useful. And because many users use webmail we > should also implement HTML parsing. If you like to work on it and manage to produce something that integrates elegantly in the framework, then by all means, go ahead. Please, do it.
Le sunnuntaina 25. kesäkuuta 2023, 1.19.04 EEST Nicolas George a écrit : > Michael Niedermayer (12023-06-23): > > * What iam interrested in was working with the signals at a low level, why > > because i find it interresting and fun. > > Then this is what you should be spending your time on, and to hell with > anybody who says otherwise. Straw man much? The argument is that SDR does not belong in FFmpeg master, not that Michael cannot do SDR in his free time. > Unless you have commitments that we are not privy of, nobody can tell > you how you are supposed to be spending your time and skills. FYI https://fflabs.eu/about/ > Nobody can force you to manage releases and fix fuzzing bugs and do all the > things you do that are necessary to the project. Necessary to a conception > of the good of the project that is not even your own I think. Same straw man argument. > Nobody can prevent you from hacking the things that motivate you. At > worst, they can prevent you from committing the resulting code into > official FFmpeg. That would be the project's loss, Not cluttering an open-source project is generally considered to be a win, not a loss. Less maintainance work and smaller code size. > and you can still > publish it on a private branch. But they do not have the power to block > you from pushing. > It is not even clear they have the authority do block > you, more so if the code is really good and fits FFmpeg well. ...which it doesn't. "A complete, cross-platform solution to record, convert and stream audio and video" is how the official website defines FFmpeg... Radio waves (outside the visibible spectrum) are neither audio nor video. [Gratuitous slander removed] > I am especially annoyed by the “it's too hard” naysayers Nobody said that it was too hard. The complexity argument was that DAB & DVB is much more complicated than AM and FM, which they indeed are by any reasonable objective metric, not that Michael wouldn't be able to implement them. > They do not realize they reveal more about their own skills than anybody > else. It only tells that they have (at least) a rudimentary understanding of radio waves to figure out that DAB or DVB are much more complex signals than FM or AM. Maybe among the general public that would be telling some. Among ffmpeg-devel, I think it really tells nothing because just about everybody here would know or be able to guess that much. > But the whole attitude who wants FFmpeg to be a Serious OpenSource TM > Project, who needs to make releases and worry above all about ABI > stability, is really the attitude who is killing all the fun in working > on FFmpeg. Michael, you or really anyone is more than welcome to fork and have their fun coding on FFmpeg *outside* the community project, if they cannot have it within the framework that is that community. Meanwhile, they are dozens of people, ostensibly including Michael himself, that depend on FFmpeg being "Serious OpenSource TM" in some way, for their livelihood, and millions for their computer use. You don't get to ruin that, and if you try, you will first be blocked by the TC, and if you try harder, you will be kicked by the CC. It looks to me that some people need to learn that not every piece of code they write or intend to write belongs in upstream FFmpeg. > Hey, people, realize FFmpeg does not exist to be a Serious OpenSource TM > Project, FFmpeg does not exist to serve other projects, to serve > companies who benefit from it give the bare minimum back. 
It also does not exist to be your playground, especially not at the expense of other people. And it has certainly not evolved that way for the past 20 or so years. > FFmpeg exists because some day a dude thought it would be fun to write a > MPEG decoder. > And everybody else told him it would be too hard, > everybody else told him to use an existing library and to leave it to > the professionals. He did not believe them and proved them utterly > wrong, and the rest, as the saying goes, is history. And? That is completely irrelevant to the question whether (and if so how) SDR should be integrated in FFmpeg. > So I will say it explicitly. We — me, and everybody who agrees with me — > do not want to just maintain a bunch of wrappers for the convenience of > others, we want to have fun writing interesting code, trying new ways of > doing things, inventing original optimizations. We can find a balance > and work on useful things too. But if you want us to work only on the > boring useful things, if you want to bar us from working on fun things, > then just fork you. To paraphrase you, the fact that you can throw such an argument with a straight face tells a lot. So what if, for the sake of the argument, people's subjective notion of discretionary fun is writing FFmpeg code in a project that excludes you? This is tiring, I'll leave it at that.
sön 2023-06-25 klockan 00:19 +0200 skrev Nicolas George: > I am especially annoyed by the “it's too hard” naysayers — the same I > have been getting when I say that I want to write a XML suited to our > needs to replace our use of libxml. This is very revealing. The reason why XML parsing shouldn't be implemented in FFmpeg isn't just because it is difficult but because: * it is out of scope * it is technical debt * shitty parsing is the cause of the vast majority of CVEs It speaks of an inability to make sane architectural decisions, of not seperating concerns and of preferring cowboy coding over sane decisions with respect to the entire free software ecosystem. > But the whole attitude who wants FFmpeg to be a Serious OpenSource TM > Project, who needs to make releases and worry above all about ABI > stability Are you implying that API and ABI stability are not important? /Tomas
Tomas Härdin (12023-06-29):
> This is very revealing.

Indeed.

> The reason why XML parsing shouldn't be
> implemented in FFmpeg isn't just because it is difficult but because:
>
> * it is out of scope

It is necessary.

> * it is technical debt
> * shitty parsing is the cause of the vast majority of CVEs
>
> It speaks of an inability to make sane architectural decisions, of not
> seperating concerns and of preferring cowboy coding over sane decisions
> with respect to the entire free software ecosystem.

Thank you for illustrating my words with your inability to consider that
(1) we do not need a complete parser, (2) the things that make parsing
XML hard are precisely the things we do not need, and (3) if you consider
writing such an XML parser way too hard, it speaks more of your skills
in that area than anything else.

> Are you implying that API and ABI stability are not important?

Precisely. Nothing is “important” in the absolute. Things are only
important with regard to a goal.

My goal is to have fun, and for that goal, API and ABI stability are not
important. To have fun, I still need the project to exist, so I will
spend time on these issues, that is all.

API and ABI stability are important if you want to be of service. I do
not want to be of service. Maybe you want FFmpeg to be of service, you
seem to. Well, let me tell you this: if you want to force me to not have
fun and spend effort on things you think are important instead, go fork
yourself.
Hello,

On Thu, 29 Jun 2023, at 09:14, Nicolas George wrote:
>> The reason why XML parsing shouldn't be
>> implemented in FFmpeg isn't just because it is difficult but because:
>>
>> * it is out of scope
>
> It is necessary.

Says you. This does not seem to be the majority opinion.

> Precisely. Nothing is "important" in the absolute. Things are only
> important with regard to a goal.
>
> My goal is to have fun, and for that goal, API and ABI stability are not
> important. To have fun, I still need the project to exist, so I will
> spend time on these issues, that is all.

Again, YOUR goal. It might not be that of the majority of the community.
We can do a GA vote, if you want.

> go fork yourself.

This is out of line, IMHO.

jb
tor 2023-06-29 klockan 09:14 +0200 skrev Nicolas George: > Tomas Härdin (12023-06-29): > > > The reason why XML parsing shouldn't be > > implemented in FFmpeg isn't just because it is difficult but > > because: > > > > * it is out of scope > > It is necessary. Factually incorrect. libxml2 provides what we need. libexpat would also be acceptable. In fact providing that option may be useful to packagers. > > * it is technical debt > > * shitty parsing is the cause of the vast majority of CVEs > > > > It speaks of an inability to make sane architectural decisions, of > > not > > seperating concerns and of preferring cowboy coding over sane > > decisions > > with respect to the entire free software ecosystem. > > Thank you for illustrating my words with your inability to consider > that > (1) we do not need a complete parser (2) the things that make parsing > XML hard are precisely the things we do not need and (3) if you > consider > writing such a XML parser is way too hard, it speaks more of your > skills > in that area than anything else. I am perfectly capable of writing an XML parser, potentially a formally correct one. But more importantly I am capable of understanding that doing so would be a waste of time and (if not formally verified) a source of CVEs. The same goes for AM demodulation. > > Are you implying that API and ABI stability are not important? > > Precisely. Nothing is “important” in absolute? ABI and API stability are Pretty Fucking Important. This attitude alone is reason to withdraw push access. > API and ABI stability are important if you want to be of service. What do you think the point of this project is? > Well, let me tell you this: if you want to force me to not have fun > and > spend effort on things you think are important instead, go fork > yourself. Childish insults do not make you right, but rather the opposite. Discipline is important, especially in large projects. /Tomas
On Fri, Jun 30, 2023 at 11:51:21PM +0200, Tomas Härdin wrote:
> tor 2023-06-29 klockan 09:14 +0200 skrev Nicolas George:
> > Tomas Härdin (12023-06-29):
[...]
> I am perfectly capable of writing an XML parser, potentially a formally
> correct one. But more importantly I am capable of understanding that
> doing so would be a waste of time and (if not formally verified) a
> source of CVEs. The same goes for AM demodulation.

I didn't want to reply to this thread, as I don't want to participate in this
XML discussion, but "AM demodulation" is out of place in this paragraph.

XML (if you consider the full syntax) is very complex, easy to get wrong and
has produced many security issues. AM demodulation is none of that; I
implemented 3 different ways to do AM demodulation in about 2 pages of code
all together.

thx

[...]
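(For context, the three variants the patch documents are am_inphase, am_midside and am_envelope. Below is a conceptual per-sample sketch of those three ideas, assuming the signal has already been mixed to baseband and aligned with the carrier phase; the patch itself works on FFT blocks and extracts the carrier there, so treat this as an illustration of the idea rather than of the implementation. The L/R reconstruction from mid/side is the usual convention, not something the patch specifies.)

/* Three classic AM demodulation variants on a carrier-aligned complex
 * baseband sample. A real demodulator would also remove the DC offset
 * that the carrier leaves behind. */
#include <math.h>
#include <stdio.h>

typedef struct { float re, im; } cfloat;

/* mono: keep only the component in phase with the carrier */
static float am_inphase(cfloat s)  { return s.re; }

/* envelope: magnitude of the complex sample, ignores carrier phase */
static float am_envelope(cfloat s) { return hypotf(s.re, s.im); }

/* mid/side: in-phase part becomes mid, quadrature part becomes side */
static void am_midside(cfloat s, float *left, float *right)
{
    float mid = s.re, side = s.im;
    *left  = mid + side;
    *right = mid - side;
}

int main(void)
{
    cfloat s = { 0.8f, 0.2f };
    float l, r;
    am_midside(s, &l, &r);
    printf("inphase=%f envelope=%f L=%f R=%f\n",
           am_inphase(s), am_envelope(s), l, r);
    return 0;
}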
Tomas Härdin (12023-06-30):
> Factually incorrect. libxml2 provides what we need. libexpat would also
> be acceptable. In fact providing that option may be useful to
> packagers.

Do you consider these to be system libraries?

> ABI and API stability are Pretty Fucking Important. This attitude alone
> is reason to withdraw push access.

Good thing you do not have this kind of authority. It could be argued
that this kind of authoritarian attitude should be a reason to kick
*you* out.

> What do you think the point of this project is?

The point of FFmpeg is for hackers to have fun and produce great code.
Little hands can come later and polish the result so that it can be used
even by lusers.
sön 2023-07-02 klockan 11:34 +0200 skrev Nicolas George:
> Tomas Härdin (12023-06-30):
> > Factually incorrect. libxml2 provides what we need. libexpat would
> > also
> > be acceptable. In fact providing that option may be useful to
> > packagers.
>
> Do you consider these to be system libraries?

On Debian these are provided by apt, the same as all other libraries
including libc, so yes.

> > ABI and API stability are Pretty Fucking Important. This attitude
> > alone
> > is reason to withdraw push access.
>
> Good thing you do not have this kind of authority. It could be argued
> that this kind of authoritarian attitude should be a reason to kick
> *you* out.

"authoritarian" lol

/Tomas
Tomas Härdin (12023-07-02): > On Debian these are provided by apt same as all other libraries > including libc, so yes. By this reasoning, xbill would be considered a system library. Try again.
diff --git a/configure b/configure index 4ac7cc6c0b..8896348a30 100755 --- a/configure +++ b/configure @@ -269,6 +269,7 @@ External library support: --enable-libshine enable fixed-point MP3 encoding via libshine [no] --enable-libsmbclient enable Samba protocol via libsmbclient [no] --enable-libsnappy enable Snappy compression, needed for hap encoding [no] + --enable-libsoapysdr enable SoapySDR, needed for connecting to SDR HW [no] --enable-libsoxr enable Include libsoxr resampling [no] --enable-libspeex enable Speex de/encoding via libspeex [no] --enable-libsrt enable Haivision SRT protocol via libsrt [no] @@ -1888,6 +1889,7 @@ EXTERNAL_LIBRARY_LIST=" libshine libsmbclient libsnappy + libsoapysdr libsoxr libspeex libsrt @@ -3540,6 +3542,7 @@ rtsp_muxer_select="rtp_muxer http_protocol rtp_protocol rtpenc_chain" sap_demuxer_select="sdp_demuxer" sap_muxer_select="rtp_muxer rtp_protocol rtpenc_chain" sdp_demuxer_select="rtpdec" +sdr_demuxer_deps="libsoapysdr" smoothstreaming_muxer_select="ismv_muxer" spdif_demuxer_select="adts_header" spdif_muxer_select="adts_header" @@ -6776,6 +6779,7 @@ enabled libshine && require_pkg_config libshine shine shine/layer3.h sh enabled libsmbclient && { check_pkg_config libsmbclient smbclient libsmbclient.h smbc_init || require libsmbclient libsmbclient.h smbc_init -lsmbclient; } enabled libsnappy && require libsnappy snappy-c.h snappy_compress -lsnappy -lstdc++ +enabled libsoapysdr && require libsoapysdr SoapySDR/Device.h SoapySDRDevice_enumerate -lSoapySDR enabled libsoxr && require libsoxr soxr.h soxr_create -lsoxr enabled libssh && require_pkg_config libssh libssh libssh/sftp.h sftp_init enabled libspeex && require_pkg_config libspeex speex speex/speex.h speex_decoder_init diff --git a/doc/demuxers.texi b/doc/demuxers.texi index 2d33b47a56..fa8da99911 100644 --- a/doc/demuxers.texi +++ b/doc/demuxers.texi @@ -899,6 +899,77 @@ the script is directly played, the actual times will match the absolute timestamps up to the sound controller's clock accuracy, but if the user somehow pauses the playback or seeks, all times will be shifted accordingly. +@section sdr / sdrfile + +Software Defined Radio Demuxer. The sdr demuxer will demux radio streams +through the use of a libsoapy compatible software defined radio. sdrfile +will do the same from a file. +The short seek forward and backward commands can be used to select the next and +previous radio stations. Seeking to specific radio stations and similar will be +supported in the future. + +@table @option + +@item block_size +size of FFT to use, leave at 0 to choose automatically + +@item mode + +@table @samp + +@item single_mode +Demodulate 1 station + +@item all_mode +Demodulate all stations in view + +@end table + +@item station_freq +Initial station to pick, if multiple are probed. + +@item driver +libsoapy driver to use. Leave empty to attempt autodetection. + +@item sdr_sr +SDR sample rate, this must be a value supported by the hardware. Leave 0 to attempt +autodetection + +@item sdr_freq +initial SDR center frequency, this must be a value supported by the hardware + +@item min_freq +Minimum frequency, leave at 0 for autodetection + +@item max_freq +Maximum frequency, leave at 0 for autodetection + +@item dumpurl +URL to dump RAW SDR stream to, this can be played back later with sdrfile. Useful +for debuging. + +@item kbd_alpha +Kaiser Bessel derived window parameter + +@item am_mode +AM Demodulation method. Several different methods are supported. 
+@table @samp +@item am_inphase +Demodulate mono signal that is in phase with the carrier. This works well if there is one +transmitter at the frequency and there is nothing disturbing the signal. + +@item am_midside +Demodulate AM into a stereo signal so that the mid is from the in phase and side is from the +quadrature signal. This can be used if there are Multiple transmitters that transmit at the same +frequency. Or just for fun. + +@item am_envelope +Demodulate mono signal that is basically the energy of the spectrum. + +@end table + +@end table + @section tedcaptions JSON captions used for @url{http://www.ted.com/, TED Talks}. diff --git a/libavformat/Makefile b/libavformat/Makefile index 05434a0f82..1efb51c613 100644 --- a/libavformat/Makefile +++ b/libavformat/Makefile @@ -528,6 +528,8 @@ OBJS-$(CONFIG_SCC_MUXER) += sccenc.o OBJS-$(CONFIG_SCD_DEMUXER) += scd.o OBJS-$(CONFIG_SDNS_DEMUXER) += sdns.o OBJS-$(CONFIG_SDP_DEMUXER) += rtsp.o +OBJS-$(CONFIG_SDR_DEMUXER) += sdrdemux.o +OBJS-$(CONFIG_SDRFILE_DEMUXER) += sdrdemux.o OBJS-$(CONFIG_SDR2_DEMUXER) += sdr2.o OBJS-$(CONFIG_SDS_DEMUXER) += sdsdec.o OBJS-$(CONFIG_SDX_DEMUXER) += sdxdec.o pcm.o diff --git a/libavformat/allformats.c b/libavformat/allformats.c index 96443a7272..a0393e6523 100644 --- a/libavformat/allformats.c +++ b/libavformat/allformats.c @@ -410,6 +410,8 @@ extern const FFOutputFormat ff_scc_muxer; extern const AVInputFormat ff_scd_demuxer; extern const AVInputFormat ff_sdns_demuxer; extern const AVInputFormat ff_sdp_demuxer; +extern const AVInputFormat ff_sdr_demuxer; +extern const AVInputFormat ff_sdrfile_demuxer; extern const AVInputFormat ff_sdr2_demuxer; extern const AVInputFormat ff_sds_demuxer; extern const AVInputFormat ff_sdx_demuxer; diff --git a/libavformat/sdrdemux.c b/libavformat/sdrdemux.c new file mode 100644 index 0000000000..28f6f3d706 --- /dev/null +++ b/libavformat/sdrdemux.c @@ -0,0 +1,1739 @@ +/* + * SDR Demuxer / Demodulator + * Copyright (c) 2023 Michael Niedermayer + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/** + * @file + * + * + */ + +/** + * TODO + * * FM + * * DAB + * * DVB + * * Improve probing using multiple detections and differential detection + * + */ + +#include "config_components.h" + +#include <pthread.h> +#if CONFIG_SDR_DEMUXER +#include <SoapySDR/Version.h> +#include <SoapySDR/Device.h> +#include <SoapySDR/Formats.h> +#endif +#include <stdatomic.h> +#include <float.h> +#include "libavutil/avassert.h" +#include "libavutil/channel_layout.h" +#include "libavutil/fifo.h" +#include "libavutil/intreadwrite.h" +#include "libavutil/opt.h" +#include "libavutil/time.h" +#include "libavutil/thread.h" +#include "libavutil/tx.h" +#include "libavcodec/kbdwin.h" +#include "avformat.h" +#include "demux.h" +#include "internal.h" + +#ifdef SYN_TEST +#include "libavutil/lfg.h" +#endif + +#define FREQ_BITS 22 +#define TIMEBASE ((48000ll / 128) << FREQ_BITS) +#define MAX_CHANNELS 4 + +#define STATION_TIMEOUT 100 ///< The number of frames after which a station is removed if it was not detected + +/* + * 100 detects nothing + * 50 detects a good bit but not all + */ +#define AM_THRESHOLD 40 + +#define AM_MAX23 0.03 //smaller causes failure on synthetic signals +#define AM_MAX4 0.01 + +#define FM_THRESHOLD .8 //TODO adjust + +#define INDEX2F(INDEX) (((INDEX) - sdr->block_size + 0.5) * 0.5 * sdr->sdr_sample_rate / sdr->block_size + sdr->block_center_freq) +#define F2INDEX(F) ((( F) - sdr->block_center_freq) * 2 * sdr->block_size / sdr->sdr_sample_rate + sdr->block_size - 0.5) + +typedef enum Mode { + SingleStationMode, //< demodulate 1 stations for Radio like usage + AllStationMode, //< demodulate all stations in current input + ModeNB, +} Mode; + +typedef enum AMMode { + AMMidSide, + AMLeftRight, + AMInPhase, + AMEnvelope, + AMModeNB, +} AMMode; + +typedef enum Modulation { + AM, + FM, + OFDM_DQPSK, //DAB + //QAM, PSK, ... 
+} Modulation; + +typedef struct Station { + char *name; + enum Modulation modulation; + double frequency; + int64_t bandwidth; + float score; + int timeout; //since how many blocks was this detectable but not detected + int multiplex_index; //DAB can have multiple stations on one frequency + + struct SDRStream *stream; +} Station; + +typedef struct SDRStream { + AVTXContext *ifft_ctx; + av_tx_fn ifft; + int block_size; + int processing_index; + float *out_buf; + AVComplexFloat *block; + AVComplexFloat *iblock; + AVComplexFloat *icarrier; + float *window; + Station *station; + + int frame_size; + int frame_buffer_line; + uint8_t *frame_buffer; +} SDRStream; + +typedef struct SDRContext { + const AVClass *class; + AVFormatContext *avfmt; +#if CONFIG_SDR_DEMUXER + SoapySDRDevice *soapy; + SoapySDRStream *soapyRxStream; +#endif + AVTXContext *fft_ctx; + av_tx_fn fft; + Mode mode; + AVRational fps; + char *driver_name; + char *dump_url; + int fileheader_size; + AVIOContext *dump_avio; + Station **station; + int nb_stations; + int width, height; + int single_ch_audio_st_index; + int64_t freq; + int64_t min_freq; + int64_t max_freq; + int64_t min_center_freq; + int64_t max_center_freq; + int sdr_sample_rate; + int sdr_agc; + int64_t bandwidth; + int64_t last_pts; + int64_t pts; + int block_size; + int kbd_alpha; + AVComplexFloat *windowed_block; + int64_t block_center_freq; ///< center frequency the current block contains + int64_t station_freq; + + int am_mode; ///< AMMode but using int for generic option access + + pthread_t soapy_thread; + int thread_started; + pthread_mutex_t mutex; ///< Mutex to protect common variable between mainthread and soapy_thread, and also to protect soapy from concurrent calls + AVFifo *empty_block_fifo; + AVFifo *full_block_fifo; + atomic_int close_requested; + int64_t wanted_freq; ///< center frequency we want the hw to provide next + int seek_direction; ///< if a seek is requested this is -1 or 1 otherwise 0 + int skip_probe; + + AVComplexFloat *block; + float *len2block; + float *window; + + int missing_streams; +} SDRContext; + +typedef struct ModulationDescriptor { + const char *name; + enum Modulation modulation; + enum AVMediaType media_type; + + /** + * Scan all of the current sdr->block and call create_station() for each found station + */ + int (*probe)(SDRContext *sdr); + + /** + * Demodulate given station into packet + */ + int (*demodulate)(SDRContext *sdr, int stream_index, AVPacket *pkt); +} ModulationDescriptor; + +typedef struct FIFOElement { + int64_t center_frequency; + AVComplexFloat *halfblock; +} FIFOElement; + +static float len2(AVComplexFloat c) +{ + return c.re*c.re + c.im*c.im; +} + +static int create_station(SDRContext *sdr, enum Modulation modulation, double freq, int64_t bandwidth, float score) { + Station *station; + void *tmp; + int i; + double best_distance = INT64_MAX; + Station *best_station = NULL; + int best_station_index = -1; + float drift = modulation == AM ? 
sdr->sdr_sample_rate / (float)sdr->block_size + 1 : (bandwidth/4.0); + + for (i=0; i<sdr->nb_stations; i++) { + double delta = fabs(sdr->station[i]->frequency - freq); + // Station already added, or we have 2 rather close stations + //FIXME we want to make sure that the stronger station is not skiped but we also dont want to add a station twice + if (modulation == sdr->station[i]->modulation && delta < best_distance) { + best_distance = delta; + best_station = sdr->station[i]; + best_station_index = i; + } + } + if (best_station) { + if (best_distance <= drift) + return best_station_index; + } + + tmp = av_realloc_array(sdr->station, sdr->nb_stations+1, sizeof(*sdr->station)); + if (!tmp) + return AVERROR(ENOMEM); + sdr->station = tmp; + + station = av_mallocz(sizeof(*station)); + if (!station) + return AVERROR(ENOMEM); + + sdr->station[sdr->nb_stations++] = station; + + station->modulation = modulation; + station->frequency = freq; + station->bandwidth = bandwidth; + station->score = score; + + // if we just found a new station lets also probe the next frame + sdr->skip_probe = 0; + + av_log(sdr, AV_LOG_INFO, "create_station %d f:%f bw:%"PRId64" score: %f\n", modulation, freq, bandwidth, score); + + return sdr->nb_stations - 1; +} + +static void free_station(Station *station) +{ + av_freep(&station->name); + if (station->stream) + station->stream->station = NULL; + av_free(station); +} + +/** + * remove stations which we no longer receive well + * Especially with AM and weather conditions stations disapear, this keeps things a bit more tidy + */ +static void decay_stations(SDRContext *sdr) +{ + for (int i=0; i<sdr->nb_stations; i++) { + Station *station = sdr->station[i]; + + if (station->frequency - station->bandwidth/2 < sdr->block_center_freq - sdr->bandwidth/2 || + station->frequency + station->bandwidth/2 > sdr->block_center_freq + sdr->bandwidth/2) + continue; + if (station->stream) + continue; + + if (station->timeout++ > STATION_TIMEOUT) { + free_station(station); + sdr->station[i] = sdr->station[--sdr->nb_stations]; + } + } +} + +static void probe_common(SDRContext *sdr) +{ + for(int i = 0; i < 2*sdr->block_size; i++) { + sdr->len2block[i] = len2(sdr->block[i]); + } +} + +// simple and dumb implementation for max we could do it faster but it doesnzt matter ATM +static float max_in_range(SDRContext *sdr, unsigned start, unsigned end) +{ + float max = sdr->len2block[start]; + for (; start <= end; start++) + max = fmax(max, sdr->len2block[start]); + return max; +} + +static double find_peak(SDRContext *sdr, const float *data, int index, int len) +{ + double y[3], b, a; + + if (index == 0) { + index = 1; + } else if (index >= len - 1) + index = len - 2; + + //We use a simple quadratic to find the maximum at higher precission than the available samples + //ax^2 + bx + c + //dy/dx = 2ax + b = 0 + y[0] = data[index-1]; + y[1] = data[index]; + y[2] = data[index+1]; + + b = (y[2] - y[0]) * 0.5; + a = (y[0] + y[2]) * 0.5 - y[1]; + + //This should not happen + if (a >= 0.0) + return INT_MIN; + + return index -0.5 * b / a; + //TODO theres some simplification possible above but the fm_probe() using this needs to be tuned first so we dont optimize the wrong algorithm +} + +static double find_peak_macleod(SDRContext *sdr, const AVComplexFloat *data, int index, int len, float *phase) { + AVComplexFloat ref; + double r0, r1, r2, rd, g; + double ab[16][2] = { + {1.525084, 3.376388}, {1.773260, 3.129280}, {1.970200, 2.952300}, {2.122700, 2.825800}, + {2.243600, 2.731620}, {2.341880, 2.658740}, 
{2.423310, 2.600750}, {2.492110, 2.553360}, + {2.551000, 2.513930}, {2.602300, 2.480400}, {2.646880, 2.451880}, {2.686200, 2.427200}, + {2.721880, 2.405120}, {2.753600, 2.385800}, {2.781800, 2.368900}, {2.808580, 2.352920}, + }; + + double a = ab[sdr->kbd_alpha-1][0]; + double b = ab[sdr->kbd_alpha-1][1]; + + if (index == 0) { + index = 1; + } else if (index >= len - 1) + index = len - 2; + + /* Baed on Macleod, M.D., "Fast Nearly ML Estimation of the Parameters + * of Real or Complex Single Tones or Resolved Multiple Tones," + * IEEE Trans. Sig. Proc. Vol 46 No 1, + * January 1998, pp141-148. + * + * We use the 3 point estimator without corrections, the corrections + * provide insignificant performance increase. We add compensation due to + * the window used, this performs substantially better than the original + * with a rectangular window. + */ + ref = data[index]; + r0 = data[index-1].re * ref.re + data[index-1].im * ref.im; + r1 = ref.re * ref.re + ref.im * ref.im; + r2 = data[index+1].re * ref.re + data[index+1].im * ref.im; + rd = 2*r1 + r0 + r2; + if (r2 > r1 || r0 > r1 || rd <=0) // rounding and float numeric issues + return -1; + g = (r0-r2) / rd + DBL_MIN; + g = (sqrt(a*a + 8*g*g)-a)/(b*g); + + if (phase) { + AVComplexFloat t; + if (g < 0){ + t.re = ref.re * (1+g) + data[index-1].re * g; + t.im = ref.im * (1+g) + data[index-1].im * g; + } else { + t.re = ref.re * (1-g) - data[index+1].re * g; + t.im = ref.im * (1-g) - data[index+1].im * g; + } + *phase = atan2(t.im, t.re) - M_PI*g; + } + + return index + g; +} + +static int probe_am(SDRContext *sdr) +{ + int i; + int bandwidth_f = 6000; + int half_bw_i = bandwidth_f * (int64_t)sdr->block_size / sdr->sdr_sample_rate; + double avg = 0; + + if (2*half_bw_i > 2*sdr->block_size) + return 0; + + for (i = 0; i<2*half_bw_i; i++) + avg += sdr->len2block[i]; + + for (i = half_bw_i; i<2*sdr->block_size - half_bw_i; i++) { + float mid = sdr->len2block[i]; + double score; + avg += sdr->len2block[i + half_bw_i]; + score = half_bw_i * mid / (avg - mid); + avg -= sdr->len2block[i - half_bw_i]; + //TODO also check for symmetry in the spectrum + if (mid > 0 && score > AM_THRESHOLD && + sdr->len2block[i - 1] < mid && sdr->len2block[i + 1] <= mid && + sdr->len2block[i - 2] < mid*AM_MAX23 && sdr->len2block[i + 2] < mid*AM_MAX23 && + sdr->len2block[i - 3] < mid*AM_MAX23 && sdr->len2block[i + 3] < mid*AM_MAX23 + ){ + if (max_in_range(sdr, i-half_bw_i, i-4) < mid*AM_MAX4 && + max_in_range(sdr, i+4, i+half_bw_i) < mid*AM_MAX4) { + int station_index; + double peak_i = find_peak_macleod(sdr, sdr->block, i, 2*sdr->block_size, NULL); + if (peak_i < 0) + continue; + if (fabs(peak_i-i) > 1.0) { + av_log(sdr->avfmt, AV_LOG_WARNING, "peak detection failure\n"); + continue; + } + + station_index = create_station(sdr, AM, INDEX2F(peak_i), bandwidth_f, score); + + if (station_index >= 0) { + Station *station = sdr->station[station_index]; + station->timeout = 0; + } + } + } + } + + return 0; +} + +static int demodulate_am(SDRContext *sdr, int stream_index, AVPacket *pkt) +{ + AVStream *st = sdr->avfmt->streams[stream_index]; + SDRStream *sst = st->priv_data; + double freq = sst->station->frequency; + int64_t bandwidth = sst->station->bandwidth; + int index = lrint(F2INDEX(freq)); + int len = (bandwidth * 2ll * sdr->block_size + sdr->sdr_sample_rate/2) / sdr->sdr_sample_rate; + float *newbuf; + float scale; + int sample_rate = sdr->sdr_sample_rate * (int64_t)sst->block_size / sdr->block_size; + int ret, i; + int i_max; + double current_station_i; + double avg = 0; 
+ float score; + float mid=0; + float limits[2] = {-0.0, 0.0}; + float clip = 1.0; + enum AMMode am_mode = sdr->am_mode; + +#define CARRIER_SEARCH 2 + if (index + len + CARRIER_SEARCH>= 2*sdr->block_size || + index - len - CARRIER_SEARCH < 0 || + 2*len + 1 > 2*sst->block_size) + return AVERROR(ERANGE); + + for(int i = index-CARRIER_SEARCH; i<index+CARRIER_SEARCH; i++) { + sdr->len2block[i] = len2(sdr->block[i]); // we only update the array when probing so we need to compute this here + if (sdr->len2block[i] > sdr->len2block[i_max]) + i_max = i; + } + mid = sdr->len2block[i_max]; + + for (i = -len; i < len+1; i++) + avg += sdr->len2block[i + index]; + score = len * mid / (avg - mid); + //find optimal frequency for this block if we have a carrier + if (score > AM_THRESHOLD / 4) { + i_max = index; + + current_station_i = find_peak_macleod(sdr, sdr->block, i_max, 2*sdr->block_size, NULL); + if (current_station_i >= 0 && fabs(current_station_i - i_max) < 1.0) { + av_log(sdr->avfmt, AV_LOG_DEBUG, "adjusting frequency %f, max index: %ld\n", INDEX2F(current_station_i) - freq, lrint(F2INDEX(freq)) - index); + freq = INDEX2F(current_station_i); + index = lrint(F2INDEX(freq)); + } + } else { + // We have no carrier so we cannot decode Synchronously to a carrier + am_mode = AMEnvelope; + } + + newbuf = av_malloc(sizeof(*sst->out_buf) * 2 * sst->block_size); + if (!newbuf) + return AVERROR(ENOMEM); +#define SEPC 4 + + i = 2*len+1; + memcpy(sst->block, sdr->block + index - len, sizeof(*sst->block) * i); + memset(sst->block + i, 0, sizeof(*sst->block) * (2 * sst->block_size - i)); + + sst->ifft(sst->ifft_ctx, sst->iblock , sst->block, sizeof(AVComplexFloat)); + + if (am_mode == AMEnvelope) { + double vdotw = 0; + double wdot = 0; // could be precalculated + for (i = 0; i<2*sst->block_size; i++) { + float w = sst->window[i]; + float v = sqrt(len2(sst->iblock[i])); + sst->iblock[i].re = v; + sst->iblock[i].im = 0; + + vdotw += w*v; + wdot += w*w; + } + + vdotw /= wdot ; + for (i = 0; i<2*sst->block_size; i++) { + float w = sst->window[i]; + sst->iblock[i].re -= w*vdotw; + } + + scale = 0.9/vdotw; + } else { + // Synchronous demodulation + memset(sst->block, 0, sizeof(*sst->block) * i); + for (i = len-SEPC+1; i<len+SEPC; i++) + sst->block[i] = sdr->block[index + i - len]; + sst->ifft(sst->ifft_ctx, sst->icarrier, sst->block, sizeof(AVComplexFloat)); + + for (i = 0; i<2*sst->block_size; i++) { + AVComplexFloat c = sst->icarrier[i]; + AVComplexFloat s = sst->iblock[i]; + float w = sst->window[i]; + float den = w / (c.re*c.re + c.im*c.im); + av_assert0(c.re*c.re + c.im*c.im > 0); + + sst->iblock[i].re = (s.im*c.im + s.re*c.re) * den; + sst->iblock[i].im = (s.im*c.re - s.re*c.im) * den; + sst->iblock[i].re -= w; + } + scale = 0.9; + } + + for(int i = 0; i<2*sst->block_size; i++) { + av_assert0(isfinite(sst->iblock[i].re)); + av_assert0(isfinite(sst->iblock[i].im)); + limits[0] = FFMIN(limits[0], FFMIN(sst->iblock[i].re - sst->iblock[i].im, sst->iblock[i].re + sst->iblock[i].im)); + limits[1] = FFMAX(limits[1], FFMAX(sst->iblock[i].re - sst->iblock[i].im, sst->iblock[i].re + sst->iblock[i].im)); + } + av_assert1(FFMAX(limits[1], -limits[0]) >= 0); + scale = FFMIN(scale, 0.98 / FFMAX(limits[1], -limits[0])); + + for(i = 0; i<sst->block_size; i++) { + float m, q; + + m = sst->out_buf[2*i+0] + (sst->iblock[i ].re) * sst->window[i ] * scale; + newbuf[2*i+0] = (sst->iblock[i + sst->block_size].re) * sst->window[i + sst->block_size] * scale; + + switch(am_mode) { + case AMMidSide: + case AMLeftRight: + q = 
sst->out_buf[2*i+1] + sst->iblock[i ].im * sst->window[i ] * scale; + newbuf[2*i+1] = sst->iblock[i + sst->block_size].im * sst->window[i + sst->block_size] * scale; + switch(am_mode) { + case AMMidSide: + q *= 0.5; + sst->out_buf[2*i+0] = m + q; + sst->out_buf[2*i+1] = m - q; + break; + case AMLeftRight: + sst->out_buf[2*i+0] = m; + sst->out_buf[2*i+1] = q; + break; + } + break; + + case AMEnvelope: + case AMInPhase: + sst->out_buf[2*i+0] = + sst->out_buf[2*i+1] = m; + break; + } + + if (fabs(sst->out_buf[i]) > clip) { + av_log(sdr->avfmt, AV_LOG_WARNING, "CLIP %f\n", sst->out_buf[i]); + clip = fabs(sst->out_buf[i]) * 1.1; + } + } + + ret = av_packet_from_data(pkt, (void*)sst->out_buf, sizeof(*sst->out_buf) * 2 * sst->block_size); + if (ret < 0) + av_free(sst->out_buf); + sst->out_buf = newbuf; + + if (st->codecpar->ch_layout.nb_channels != 2 || + st->codecpar->sample_rate != sample_rate + ) { + av_log(sdr->avfmt, AV_LOG_INFO, "set channel parameters %d %d %d\n", st->codecpar->ch_layout.nb_channels, st->codecpar->sample_rate, sample_rate); + if (st->codecpar->sample_rate == 0) + sdr->missing_streams--; + ff_add_param_change(pkt, 2, 0, sample_rate, 0, 0); + st->codecpar->sample_rate = sample_rate; + } + + return ret; +} + +static int probe_fm(SDRContext *sdr) +{ + int i; + int bandwidth_f = 200*1000; + int half_bw_i = bandwidth_f * (int64_t)sdr->block_size / sdr->sdr_sample_rate; + double avg[2] = {0}, tri = 0; + float last_score[3] = {FLT_MAX, FLT_MAX, FLT_MAX}; + + if (2*half_bw_i > 2*sdr->block_size) + return 0; + + for (i = 0; i<half_bw_i; i++) { + avg[0] += sdr->len2block[i]; + tri += i*sdr->len2block[i]; + } + for (; i<2*half_bw_i; i++) { + avg[1] += sdr->len2block[i]; + tri += (2*half_bw_i-i)*sdr->len2block[i]; + } + + for(i = half_bw_i; i<2*sdr->block_size - half_bw_i; i++) { + double b = avg[0] + sdr->len2block[i]; + avg[0] += sdr->len2block[i] - sdr->len2block[i - half_bw_i]; + avg[1] -= sdr->len2block[i] - sdr->len2block[i + half_bw_i]; + b += avg[1]; + tri += avg[1] - avg[0]; + + last_score[2] = last_score[1]; + last_score[1] = last_score[0]; + last_score[0] = tri / (b * half_bw_i); + + if (last_score[1] >= last_score[0] && + last_score[1] > last_score[2] && + last_score[1] > FM_THRESHOLD) { + + // as secondary check, we could check that without the center 3 samples we are still having a strong signal FIXME + + double peak_i = find_peak(sdr, last_score, 1, 3) + i - 1; + if (peak_i < 0) + continue; + av_assert0(fabs(peak_i-i) < 2); +//Disabled as we dont have demodulate yet and its wrong sometimes +// create_station(sdr, FM, peak_i * 0.5 * sdr->sdr_sample_rate / sdr->block_size + sdr->block_center_freq - sdr->sdr_sample_rate/2, bandwidth_f, last_score[1]); + + } + } + + return 0; +} + +ModulationDescriptor modulation_descs[] = { + {"Amplitude Modulation", AM, AVMEDIA_TYPE_AUDIO, probe_am, demodulate_am}, + {"Frequency Modulation", FM, AVMEDIA_TYPE_AUDIO, probe_fm, NULL}, +}; + +static int set_sdr_freq(SDRContext *sdr, int64_t freq) +{ + freq = av_clip64(freq, sdr->min_center_freq, sdr->max_center_freq); + +#if CONFIG_SDR_DEMUXER + if (sdr->soapy && SoapySDRDevice_setFrequency(sdr->soapy, SOAPY_SDR_RX, 0, freq, NULL) != 0) { + av_log(sdr->avfmt, AV_LOG_ERROR, "setFrequency fail: %s\n", SoapySDRDevice_lastError()); + return AVERROR_EXTERNAL; + } +#endif // CONFIG_SDR_DEMUXER + + sdr->freq = freq; + + return 0; +} + +static void free_stream(SDRContext *sdr, int stream_index) +{ + AVFormatContext *s = sdr->avfmt; + AVStream *st = s->streams[stream_index]; + SDRStream *sst = 
st->priv_data; + + av_tx_uninit(&sst->ifft_ctx); + sst->ifft = NULL; + sst->block_size = 0; + + av_freep(&sst->out_buf); + av_freep(&sst->block); + av_freep(&sst->iblock); + av_freep(&sst->icarrier); + av_freep(&sst->window); + +} + +static int setup_stream(SDRContext *sdr, int stream_index, Station *station) +{ + AVFormatContext *s = sdr->avfmt; + AVStream *st = s->streams[stream_index]; + SDRStream *sst = st->priv_data; + int ret; + + //For now we expect each station to be only demodulated once, nothing should break though if its done more often + av_assert0(station->stream == NULL || station->stream == sst); + + if (sst->station) + sst->station->stream = NULL; + + sst->station = station; + station->stream = sst; + + if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) { + free_stream(sdr, stream_index); + + for (sst->block_size = 4; 4ll *sst->station->bandwidth * sdr->block_size / sdr->sdr_sample_rate > sst->block_size; sst->block_size <<= 1) + ; + sst->block_size = FFMIN(sdr->block_size, sst->block_size); + + ret = av_tx_init(&sst->ifft_ctx, &sst->ifft, AV_TX_FLOAT_FFT, 1, 2*sst->block_size, NULL, 0); + if (ret < 0) + return ret; + + sst->out_buf = av_malloc(sizeof(*sst->out_buf) * 2 * sst->block_size); + sst->block = av_malloc(sizeof(*sst-> block) * 2 * sst->block_size); + sst->iblock = av_malloc(sizeof(*sst->iblock) * 2 * sst->block_size); + sst->icarrier = av_malloc(sizeof(*sst->icarrier) * 2 * sst->block_size); + sst->window = av_malloc(sizeof(*sst->window) * 2 * sst->block_size); + if (!sst->out_buf || !sst->block || !sst->iblock || !sst->icarrier || !sst->window) + return AVERROR(ENOMEM); + + avpriv_kbd_window_init(sst->window, 8, sst->block_size); + for(int i = sst->block_size; i < 2 * sst->block_size; i++) { + sst->window[i] = sst->window[2*sst->block_size - i - 1]; + } + } + + return 0; +} + +static void inject_block_into_fifo(SDRContext *sdr, AVFifo *fifo, FIFOElement *fifo_element, const char *error_message) +{ + int ret; + + pthread_mutex_lock(&sdr->mutex); + ret = av_fifo_write(fifo, fifo_element, 1); + pthread_mutex_unlock(&sdr->mutex); + if (ret < 0) { + av_log(sdr->avfmt, AV_LOG_DEBUG, "%s", error_message); + av_freep(&fifo_element->halfblock); + } +} + +static void flush_fifo(SDRContext *sdr, AVFifo *fifo) +{ + FIFOElement fifo_element; + + pthread_mutex_lock(&sdr->mutex); + while(fifo) { + int ret = av_fifo_read(fifo, &fifo_element, 1); + if (ret < 0) + break; + av_freep(&fifo_element.halfblock); + } + pthread_mutex_unlock(&sdr->mutex); +} + +/** + * Grab data from soapy and put it in a bigger buffer. + * This thread would not be needed if libsoapy internal buffering was not restricted + * to a few milli seconds. 
libavformat cannot guarntee that it will get called from the user + * application every 10-20ms or something like that so we need a thread to pull data from soapy + * and put it in a larger buffer that we can then read from at the rate the code is called + * by the user + */ +static void *soapy_needs_bigger_buffers_worker(SDRContext *sdr) +{ + AVFormatContext *avfmt = sdr->avfmt; + unsigned block_counter = 0; + int remaining_file_block_size = 0; + ff_thread_setname("sdrdemux - soapy rx"); + + while(!atomic_load(&sdr->close_requested)) { + FIFOElement fifo_element; + int remaining, ret; + int empty_blocks, full_blocks; + + //i wish av_fifo was thread safe + pthread_mutex_lock(&sdr->mutex); + ret = av_fifo_read(sdr->empty_block_fifo, &fifo_element, 1); + empty_blocks = av_fifo_can_read(sdr->empty_block_fifo); + full_blocks = av_fifo_can_read(sdr->full_block_fifo); + pthread_mutex_unlock(&sdr->mutex); + + if (ret < 0) { + av_log(avfmt, AV_LOG_WARNING, "Allocating new block due to lack of space full:%d empty:%d\n", full_blocks, empty_blocks); + fifo_element.halfblock = av_malloc(sdr->block_size * sizeof(fifo_element.halfblock)); + if (!fifo_element.halfblock) { + av_log(avfmt, AV_LOG_ERROR, "Allocation failed, waiting for free space\n"); + // we wait 10ms here, tests have shown soapy to loose data when it is not serviced for 40-50ms + av_usleep(10*1000); + continue; + } + } + + block_counter ++; + pthread_mutex_lock(&sdr->mutex); + // we try to get 2 clean blocks after windowing, to improve chances scanning doesnt miss too much + // First block after parameter change is not reliable, we do not assign it any frequency + // 2 blocks are needed with windowing to get a clean FFT output + // Thus > 3 is the minimum for the next frequency update if we want to do something reliable with the data + if (sdr->seek_direction && block_counter > 3) { + int64_t new_freq = av_clip64(sdr->wanted_freq + sdr->seek_direction*sdr->bandwidth, sdr->min_center_freq, sdr->max_center_freq); + + // did we hit an end ? + if (new_freq == sdr->wanted_freq) { + //simply wrap around + new_freq = new_freq == sdr->min_center_freq ? sdr->max_center_freq : sdr->min_center_freq; + } + sdr->wanted_freq = new_freq; + } + if (sdr->wanted_freq != sdr->freq) { + //We could use a seperate MUTEX for the FIFO and for soapy + set_sdr_freq(sdr, sdr->wanted_freq); + //This shouldnt really cause any problem if we just continue on error except that we continue returning data with the previous target frequency range + //And theres not much else we can do, an error message was already printed by set_sdr_freq() in that case + block_counter = 0; // we just changed the frequency, do not trust the next blocks content + } + pthread_mutex_unlock(&sdr->mutex); + + fifo_element.center_frequency = block_counter > 0 ? 
sdr->freq : 0; + + remaining = sdr->block_size; + while (remaining && !atomic_load(&sdr->close_requested)) { + void *soapy_buffers[MAX_CHANNELS] = {NULL}; + long long soapy_ts; + int soapy_flags; + + soapy_buffers[0] = fifo_element.halfblock + sdr->block_size - remaining; +#if CONFIG_SDR_DEMUXER + if (sdr->soapy) { + ret = SoapySDRDevice_readStream(sdr->soapy, sdr->soapyRxStream, + soapy_buffers, + remaining, + &soapy_flags, + &soapy_ts, + 10*1000); + + if (ret == SOAPY_SDR_TIMEOUT) { + continue; + } else if (ret == SOAPY_SDR_OVERFLOW) { + av_log(avfmt, AV_LOG_WARNING, "SOAPY OVERFLOW\n"); + continue; + } else if (ret < 0) { + av_log(avfmt, AV_LOG_ERROR, "SoapySDRDevice_readStream() Failed with (%d) %s\n", ret, SoapySDRDevice_lastError()); + av_usleep(10*1000); + continue; + } + } else +#endif // CONFIG_SDR_DEMUXER + { + float scale = 1 / 32768.0; + int64_t size; + float *outp; + uint8_t *inp; + if (!remaining_file_block_size) { + int block_size; + avio_skip(avfmt->pb, 16 + 4); //FFSDR00Xint16BE + block_size = avio_rb32(avfmt->pb); + fifo_element.center_frequency = av_int2double(avio_rb64(avfmt->pb)); + avio_rb64(avfmt->pb); //pts + avio_skip(avfmt->pb, sdr->fileheader_size - 40); + remaining_file_block_size = block_size; + } + pthread_mutex_lock(&sdr->mutex); + + //We have no FIFO API to check how much we can write + size = sdr->sdr_sample_rate / sdr->block_size - av_fifo_can_read(sdr->full_block_fifo); + pthread_mutex_unlock(&sdr->mutex); + if (size <= 0) { + av_usleep(10*1000); + continue; + } + size = FFMIN(remaining, remaining_file_block_size) * sizeof(int16_t) * 2; + outp = soapy_buffers[0]; + inp = (uint8_t*)soapy_buffers[0] + size; // used as temporary buffer + ret = avio_read(avfmt->pb, inp, size); + if (ret == AVERROR_EOF || (ret > 0 && ret % (sizeof(int16_t) * 2))) { + avio_seek(avfmt->pb, SEEK_SET, 0); + av_log(avfmt, AV_LOG_INFO, "EOF, will wraparound\n"); + continue; + } else if (ret == AVERROR(EAGAIN)) { + av_log(avfmt, AV_LOG_DEBUG, "read EAGAIN\n"); + continue; + } else if (ret < 0) { + av_log(avfmt, AV_LOG_ERROR, "read Failed with (%d)\n", ret); + continue; + } + + ret /= sizeof(int16_t) * 2; + for(int i= 0; i<ret; i++) { + outp[2*i + 0] = (int16_t)AV_RB16(inp + 4*i + 0) * scale; + outp[2*i + 1] = (int16_t)AV_RB16(inp + 4*i + 2) * scale; + } + remaining_file_block_size -= ret; + } + av_assert0(ret <= remaining); + remaining -= ret; + } + + inject_block_into_fifo(sdr, sdr->full_block_fifo, &fifo_element, "block fifo overflow, discarding block\n"); + } + av_assert0(atomic_load(&sdr->close_requested) == 1); + + return NULL; +} + +static int sdr_read_header(AVFormatContext *s) +{ + SDRContext *sdr = s->priv_data; + AVStream *st; + SDRStream *sst; + int ret, i; + + size_t length; + char** names; + int64_t max_sample_rate; + + sdr->avfmt = s; + + if (sdr->width>1 && sdr->height>1) { + /* video stream */ + st = avformat_new_stream(s, NULL); + if (!st) + return AVERROR(ENOMEM); + sst = av_mallocz(sizeof(SDRStream)); + if (!sst) + return AVERROR(ENOMEM); + st->priv_data = sst; + st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO; + st->codecpar->codec_id = AV_CODEC_ID_RAWVIDEO; + st->codecpar->codec_tag = 0; /* no fourcc */ + st->codecpar->format = AV_PIX_FMT_BGRA; + st->codecpar->width = sdr->width; + st->codecpar->height = sdr->height; + avpriv_set_pts_info(st, 64, 1, (48000/128) << FREQ_BITS); + } + + if (sdr->mode == SingleStationMode) { + /* audio stream */ + st = avformat_new_stream(s, NULL); + if (!st) + return AVERROR(ENOMEM); + sst = av_mallocz(sizeof(SDRStream)); + if (!sst) 
+ return AVERROR(ENOMEM); + st->priv_data = sst; + st->codecpar->codec_type = AVMEDIA_TYPE_AUDIO; + st->codecpar->codec_tag = 0; /* no fourcc */ + st->codecpar->codec_id = HAVE_BIGENDIAN ? AV_CODEC_ID_PCM_F32BE : AV_CODEC_ID_PCM_F32LE; + st->codecpar->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; + st->codecpar->sample_rate = 0; //will be set later + avpriv_set_pts_info(st, 64, 1, (48000/128) << FREQ_BITS); + sdr->single_ch_audio_st_index = st->index; + sdr->missing_streams++; + } + +#if CONFIG_SDR_DEMUXER + if (!strcmp(s->iformat->name, "sdr")) { + SoapySDRRange *ranges, range; + SoapySDRKwargs *results = SoapySDRDevice_enumerate(NULL, &length); + SoapySDRKwargs args = {}; + int has_agc; + + for (i = 0; i < length; i++) { + int usable = 1; + for (int j = 0; j < results[i].size; j++) { + if (!strcmp("driver", results[i].keys[j])) { + if (!strcmp("audio", results[i].vals[j])) { + usable = 0; + } else if (!sdr->driver_name) { + sdr->driver_name = av_strdup(results[i].vals[j]); + if (!sdr->driver_name) + return AVERROR(ENOMEM); + } + } + } + if (!usable) + continue; + av_log(s, AV_LOG_INFO, "Soapy enumeration %d\n", i); + for (int j = 0; j < results[i].size; j++) { + av_log(s, AV_LOG_INFO, " results %s = %s\n", results[i].keys[j], results[i].vals[j]); + } + } + SoapySDRKwargsList_clear(results, length); + + av_log(s, AV_LOG_INFO, "Opening %s\n", sdr->driver_name); + if (!sdr->driver_name) + return AVERROR(EINVAL); //No driver specified and none found + SoapySDRKwargs_set(&args, "driver", sdr->driver_name); + sdr->soapy = SoapySDRDevice_make(&args); + SoapySDRKwargs_clear(&args); + + if (sdr->soapy == NULL) { + av_log(s, AV_LOG_ERROR, "SoapySDRDevice_make fail: %s\n", SoapySDRDevice_lastError()); + return AVERROR_EXTERNAL; + } + + names = SoapySDRDevice_listAntennas(sdr->soapy, SOAPY_SDR_RX, 0, &length); + av_log(s, AV_LOG_INFO, "Antennas: "); + for (i = 0; i < length; i++) + av_log(s, AV_LOG_INFO, "%s, ", names[i]); + av_log(s, AV_LOG_INFO, "\n"); + SoapySDRStrings_clear(&names, length); + + names = SoapySDRDevice_listGains(sdr->soapy, SOAPY_SDR_RX, 0, &length); + av_log(s, AV_LOG_INFO, "Rx Gain Elements: "); + for (i = 0; i < length; i++) + av_log(s, AV_LOG_INFO, "%s, ", names[i]); + av_log(s, AV_LOG_INFO, "\n"); + SoapySDRStrings_clear(&names, length); + + has_agc = SoapySDRDevice_hasGainMode(sdr->soapy, SOAPY_SDR_RX, 0); + av_log(s, AV_LOG_INFO, "RX AGC Supported: %d\n", has_agc); + if (has_agc) + SoapySDRDevice_setGainMode(sdr->soapy, SOAPY_SDR_RX, 0, 1); + + range = SoapySDRDevice_getGainRange(sdr->soapy, SOAPY_SDR_RX, 0); + av_log(s, AV_LOG_INFO, "Rx Gain range: %f dB - %f dB\n", range.minimum, range.maximum); + + ranges = SoapySDRDevice_getFrequencyRange(sdr->soapy, SOAPY_SDR_RX, 0, &length); + av_log(s, AV_LOG_INFO, "Rx freq ranges: "); + for (i = 0; i < length; i++) { + av_log(s, AV_LOG_INFO, "[%g Hz -> %g Hz], ", ranges[i].minimum, ranges[i].maximum); + if (sdr->max_freq > ranges[i].maximum) + continue; + if (sdr->min_freq && sdr->min_freq < ranges[i].minimum) + continue; + break; + } + av_log(s, AV_LOG_INFO, "\n"); + if (i == length) { + av_log(s, AV_LOG_ERROR, "Invalid frequency range\n"); + return AVERROR(EINVAL); + } + if (!sdr->max_freq) + sdr->max_freq = ranges[i].maximum; + if (!sdr->min_freq) + sdr->min_freq = ranges[i].minimum; + free(ranges); ranges = NULL; + av_log(s, AV_LOG_INFO, "frequency range: %"PRId64" - %"PRId64"\n", sdr->min_freq, sdr->max_freq); + //TODO do any drivers support multiple distinct frequency ranges ? 
if so we pick just one, thats not ideal + + ranges = SoapySDRDevice_getSampleRateRange(sdr->soapy, SOAPY_SDR_RX, 0, &length); + max_sample_rate = 0; + av_log(s, AV_LOG_INFO, "SampleRate ranges: "); + for (i = 0; i < length; i++) { + av_log(s, AV_LOG_INFO, "[%g Hz -> %g Hz], ", ranges[i].minimum, ranges[i].maximum); + if (sdr->sdr_sample_rate && + (sdr->sdr_sample_rate < ranges[i].minimum || sdr->sdr_sample_rate > ranges[i].maximum)) + continue; + max_sample_rate = FFMAX(max_sample_rate, ranges[i].maximum); + } + av_log(s, AV_LOG_INFO, "\n"); + free(ranges); ranges = NULL; + if (!sdr->sdr_sample_rate) { + sdr->sdr_sample_rate = max_sample_rate; + } + + // We disallow odd sample rates as they result in either center or endpoint frequencies to be non integer. No big deal but simpler is better + if (!max_sample_rate || sdr->sdr_sample_rate%2) { + av_log(s, AV_LOG_ERROR, "Invalid sdr sample rate\n"); + return AVERROR(EINVAL); + } + + ranges = SoapySDRDevice_getBandwidthRange(sdr->soapy, SOAPY_SDR_RX, 0, &length); + av_log(s, AV_LOG_INFO, "Bandwidth ranges: "); + for (i = 0; i < length; i++) { + av_log(s, AV_LOG_INFO, "[%g Hz -> %g Hz], ", ranges[i].minimum, ranges[i].maximum); + } + av_log(s, AV_LOG_INFO, "\n"); + free(ranges); ranges = NULL; + + //apply settings + if (SoapySDRDevice_setSampleRate(sdr->soapy, SOAPY_SDR_RX, 0, sdr->sdr_sample_rate) != 0) { + av_log(s, AV_LOG_ERROR, "setSampleRate fail: %s\n", SoapySDRDevice_lastError()); + return AVERROR_EXTERNAL; + } + ret = set_sdr_freq(sdr, sdr->wanted_freq); + if (ret < 0) + return ret; + + //setup a stream (complex floats) +#if SOAPY_SDR_API_VERSION < 0x00080000 // The old version is still widely used so we must support it + if (SoapySDRDevice_setupStream(sdr->soapy, &sdr->soapyRxStream, SOAPY_SDR_RX, SOAPY_SDR_CF32, NULL, 0, NULL)) + sdr->soapyRxStream = NULL; +#else + sdr->soapyRxStream = SoapySDRDevice_setupStream(sdr->soapy, SOAPY_SDR_RX, SOAPY_SDR_CF32, NULL, 0, NULL); +#endif + if (!sdr->soapyRxStream) { + av_log(s, AV_LOG_ERROR, "setupStream fail: %s\n", SoapySDRDevice_lastError()); + return AVERROR_EXTERNAL; + } + + sdr->bandwidth = SoapySDRDevice_getBandwidth(sdr->soapy, SOAPY_SDR_RX, 0); + av_log(s, AV_LOG_INFO, "bandwidth %"PRId64"\n", sdr->bandwidth); + + SoapySDRDevice_activateStream(sdr->soapy, sdr->soapyRxStream, 0, 0, 0); + } else +#endif // CONFIG_SDR_DEMUXER + { + int version; + avio_skip(s->pb, 5); //FFSDR + version = avio_rb24(s->pb); //000 + avio_skip(s->pb, 8); //int16BE + sdr->sdr_sample_rate = avio_rb32(s->pb); + avio_rb32(s->pb); //block_size + sdr->wanted_freq = av_int2double(avio_rb64(s->pb)); + sdr->pts = avio_rb64(s->pb); + if (version > AV_RB24("000")) { + sdr->bandwidth = avio_rb32(s->pb); + sdr->fileheader_size = avio_rb32(s->pb); + } else { + sdr->bandwidth = sdr->sdr_sample_rate; + sdr->fileheader_size = 40; + } + + avio_seek(s->pb, 0, SEEK_SET); + + ret = set_sdr_freq(sdr, sdr->wanted_freq); + if (ret < 0) + return ret; + } + + sdr->min_center_freq = sdr->min_freq + sdr->sdr_sample_rate / 2; + sdr->max_center_freq = sdr->max_freq + sdr->sdr_sample_rate / 2; + + + if(!sdr->block_size) { + for(sdr->block_size = 1; 25*sdr->block_size < sdr->sdr_sample_rate; sdr->block_size <<=1) + ; + } else if(sdr->block_size & (sdr->block_size - 1)) { + av_log(s, AV_LOG_ERROR, "Block size must be a power of 2\n"); + return AVERROR(EINVAL); + } + av_log(s, AV_LOG_INFO, "Block size %d\n", sdr->block_size); + + + sdr->windowed_block = av_malloc(sizeof(*sdr->windowed_block) * 2 * sdr->block_size); + sdr->block = 
av_malloc(sizeof(*sdr->block ) * 2 * sdr->block_size); + sdr->len2block = av_malloc(sizeof(*sdr->len2block) * 2 * sdr->block_size); + sdr->window = av_malloc(sizeof(*sdr->window ) * 2 * sdr->block_size); + if (!sdr->windowed_block || !sdr->len2block || !sdr->block || !sdr->window) + return AVERROR(ENOMEM); + + ret = av_tx_init(&sdr->fft_ctx, &sdr->fft, AV_TX_FLOAT_FFT, 0, 2*sdr->block_size, NULL, 0); + if (ret < 0) + return ret; + + avpriv_kbd_window_init(sdr->window, 8, sdr->block_size); + + for(int i = sdr->block_size; i < 2 * sdr->block_size; i++) { + sdr->window[i] = sdr->window[2*sdr->block_size - i - 1]; + } + for (int i = 0; i < 2 * sdr->block_size; i++) + sdr->window[i] *= ((i&1) ? 1:-1); + + for (int stream_index = 0; stream_index < s->nb_streams; stream_index++) { + AVStream *st = s->streams[stream_index]; + SDRStream *sst = st->priv_data; + // Setup streams which are independant of stations and modulations + // Others cannot be setup yet + if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { + sst->frame_size = sdr->width * sdr->height * 4; + sst->frame_buffer = av_malloc(sst->frame_size * 2); + if (!sst->frame_buffer) + return AVERROR(ENOMEM); + } + } + + sdr->pts = 0; + + sdr->empty_block_fifo = av_fifo_alloc2(1, sizeof(FIFOElement), AV_FIFO_FLAG_AUTO_GROW); + sdr-> full_block_fifo = av_fifo_alloc2(1, sizeof(FIFOElement), AV_FIFO_FLAG_AUTO_GROW); + if (!sdr->empty_block_fifo || !sdr-> full_block_fifo) + return AVERROR(ENOMEM); + //Limit fifo to 1second to avoid OOM + av_fifo_auto_grow_limit(sdr->empty_block_fifo, sdr->sdr_sample_rate / sdr->block_size); + av_fifo_auto_grow_limit(sdr-> full_block_fifo, sdr->sdr_sample_rate / sdr->block_size); + + atomic_init(&sdr->close_requested, 0); + ret = pthread_mutex_init(&sdr->mutex, NULL); + if (ret) { + av_log(s, AV_LOG_ERROR, "pthread_mutex_init failed: %s\n", strerror(ret)); + return AVERROR(ret); + } + ret = pthread_create(&sdr->soapy_thread, NULL, (void*)soapy_needs_bigger_buffers_worker, sdr); + if (ret != 0) { + av_log(s, AV_LOG_ERROR, "pthread_create failed : %s\n", strerror(ret)); + pthread_mutex_destroy(&sdr->mutex); + return AVERROR(ret); + } + sdr->thread_started = 1; + + if(sdr->dump_url) { + ret = avio_open2(&sdr->dump_avio, sdr->dump_url, AVIO_FLAG_WRITE, NULL, NULL); + if (ret < 0) { + fprintf(stderr, "Unable to open %s\n", sdr->dump_url); + return ret; + } + } + + return 0; +} + +static inline void draw_point_component(uint8_t *frame_buffer, ptrdiff_t stride, int x, int y, int r, int g, int b, int w, int h) +{ + uint8_t *p; + + if (x<0 || y<0 || x>=w || y>=h) + return; + p = frame_buffer + 4*x + stride*y; + + p[0] = av_clip_uint8(p[0] + (b>>16)); + p[1] = av_clip_uint8(p[1] + (g>>16)); + p[2] = av_clip_uint8(p[2] + (r>>16)); +} + +// Draw a point with subpixel precission, (it looked bad otherwise) +static void draw_point(uint8_t *frame_buffer, ptrdiff_t stride, int x, int y, int r, int g, int b, int w, int h) +{ + int px = x>>8; + int py = y>>8; + int sx = x&255; + int sy = y&255; + int s; + + s = (256 - sx) * (256 - sy); + draw_point_component(frame_buffer, stride, px , py , r*s, g*s, b*s, w, h); + s = sx * (256 - sy); + draw_point_component(frame_buffer, stride, px+1, py , r*s, g*s, b*s, w, h); + s = (256 - sx) * sy; + draw_point_component(frame_buffer, stride, px , py+1, r*s, g*s, b*s, w, h); + s = sx * sy; + draw_point_component(frame_buffer, stride, px+1, py+1, r*s, g*s, b*s, w, h); +} + +static int sdr_read_packet(AVFormatContext *s, + AVPacket *pkt) +{ + SDRContext *sdr = s->priv_data; + int ret, i, full_blocks, 
seek_direction; + FIFOElement fifo_element[2]; + +process_next_block: + + for (int stream_index = 0; stream_index < s->nb_streams; stream_index++) { + AVStream *st = s->streams[stream_index]; + SDRStream *sst = st->priv_data; + + if (sst->processing_index) { + int skip = 1; + if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { + int w = st->codecpar->width; + int h = st->codecpar->height; + int h2 = FFMIN(64, h / 4); + int frame_index = av_rescale(sdr->pts, sdr->fps.num, sdr->fps.den * TIMEBASE); + int last_index = av_rescale(sdr->last_pts, sdr->fps.num, sdr->fps.den * TIMEBASE); + skip = frame_index == last_index || sdr->missing_streams; + av_assert0(sdr->missing_streams >= 0); + + for(int x= 0; x<w; x++) { + int color; + int idx = 4*(x + sst->frame_buffer_line*w); + int bindex = x * 2ll * sdr->block_size / w; + int bindex2 = (x+1) * 2ll * sdr->block_size / w; + float a = 0; + av_assert0(bindex2 <= 2 * sdr->block_size); + for (int i = bindex; i < bindex2; i++) { + AVComplexFloat sample = sdr->block[i]; + a += len2(sample); + } + color = lrintf(log(a)*8 + 32); + + sst->frame_buffer[idx + 0] = color; + sst->frame_buffer[idx + 1] = color; + sst->frame_buffer[idx + 2] = color; + sst->frame_buffer[idx + 3] = 255; + } + + // Display locations of all vissible stations + for(int station_index = 0; station_index<sdr->nb_stations; station_index++) { + Station *s = sdr->station[station_index]; + double f = s->frequency; +// int bw = s->bandwidth; +// int xleft = 256*((f-bw) - sdr->block_center_freq + sdr->sdr_sample_rate/2) * w / sdr->sdr_sample_rate; +// int xright= 256*((f+bw) - sdr->block_center_freq + sdr->sdr_sample_rate/2) * w / sdr->sdr_sample_rate; + int xmid = 256*( f - sdr->block_center_freq + sdr->sdr_sample_rate/2) * w / sdr->sdr_sample_rate; + int g = s->modulation == AM ? 50 : 0; + int b = s->modulation == AM ? 0 : 70; + int r = s->stream ? 
50 : 0; + + draw_point(sst->frame_buffer, 4*w, xmid, 256*(sst->frame_buffer_line+1), r, g, b, w, h); + } + + if (!skip) { + ret = av_new_packet(pkt, sst->frame_size); + if (ret < 0) + return ret; + + for(int y= 0; y<h2; y++) { + for(int x= 0; x<w; x++) { + int color; + int idx = x + y*w; + int idx_t = (idx / h2) + (idx % h2)*w; + int bindex = idx * 2ll * sdr->block_size / (w * h2); + int bindex2 = (idx+1) * 2ll * sdr->block_size / (w * h2); + float a = 0; + av_assert0(bindex2 <= 2 * sdr->block_size); + for (int i = bindex; i < bindex2; i++) { + AVComplexFloat sample = sdr->block[i]; + a += len2(sample); + } + color = lrintf(log(a)*9 + 64); + + idx_t *= 4; + + pkt->data[idx_t+0] = color; + pkt->data[idx_t+1] = color; + pkt->data[idx_t+2] = color; + pkt->data[idx_t+3] = 255; + } + } + for (int y= h2; y<h; y++) + memcpy(pkt->data + 4*y*w, sst->frame_buffer + 4*(y + sst->frame_buffer_line - h2)*w, 4*w); + } + + if (!sst->frame_buffer_line) { + memcpy(sst->frame_buffer + sst->frame_size, sst->frame_buffer, sst->frame_size); + sst->frame_buffer_line = h-1; + } else + sst->frame_buffer_line--; + +//TODO +// draw RDS* +// draw frequencies + } else if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) { + if (sst->station) { + skip = 0; + ret = modulation_descs[ sst->station->modulation ].demodulate(sdr, stream_index, pkt); + if (ret < 0) { + av_log(s, AV_LOG_ERROR, "demodulation failed ret = %d\n", ret); + } + } + } else + av_assert0(0); + sst->processing_index = 0; + if (pkt && !skip) { + pkt->stream_index = stream_index; + pkt->dts = (sdr->pts & (-1<<FREQ_BITS)); + if (sst->station) + pkt->dts += lrint(sst->station->frequency/1000); + pkt->pts = pkt->dts; + + return 0; + } + } + } + + pthread_mutex_lock(&sdr->mutex); + full_blocks = av_fifo_can_read(sdr->full_block_fifo) - 1; + ret = av_fifo_peek(sdr->full_block_fifo, &fifo_element, 2, 0); + if (ret >= 0) + av_fifo_drain2(sdr->full_block_fifo, 1); + seek_direction = sdr->seek_direction; //This doesnt need a mutex here at all but tools might complain + pthread_mutex_unlock(&sdr->mutex); + + if (ret < 0) { + av_log(s, AV_LOG_DEBUG, "EAGAIN on not enough data\n"); + return AVERROR(EAGAIN); + } + if (fifo_element[0].center_frequency != fifo_element[1].center_frequency) { + av_log(s, AV_LOG_DEBUG, "Mismatching frequency blocks\n"); +// fifo_element[0].center_frequency = 0; +// inject_block_into_fifo(sdr, sdr->empty_block_fifo, &fifo_element, "Cannot pass next buffer, freeing it\n"); + //Its simpler to just continue than to discard this, but we must make sure that on seeking we have matching blocks for each block we want to scan + sdr->block_center_freq = 0; + sdr->skip_probe = 1; // we want to probe the next block as it will have new frequencies + } else + sdr->block_center_freq = fifo_element[0].center_frequency; + + if (sdr->dump_avio) { + uint8_t header[48] = "FFSDR001int16BE"; + uint8_t *tmp = (void*)sdr->windowed_block; //We use an unused array as temporary here + + AV_WB32(header+16, sdr->sdr_sample_rate); + AV_WB32(header+20, sdr->block_size); + AV_WB64(header+24, av_double2int(fifo_element[0].center_frequency)); + AV_WB64(header+32, sdr->pts); + AV_WB32(header+40, sdr->bandwidth); + AV_WB32(header+44, sizeof(header)); + + avio_write(sdr->dump_avio, header, sizeof(header)); + + //We do ask Soapy for 32bit complex floats as they are easier to work with but really the hardware generally produces integers and no 32bits + //For storage int16 is supperior as it needs less space + for(int i=0; i<sdr->block_size; i++) { + AV_WB16(tmp + 4*i + 0, 
lrint(fifo_element[0].halfblock[i].re * 32768)); + AV_WB16(tmp + 4*i + 2, lrint(fifo_element[0].halfblock[i].im * 32768)); + } + avio_write(sdr->dump_avio, tmp, 4*sdr->block_size); + } + + for (i = 0; i<sdr->block_size; i++) { + sdr->windowed_block[i].re = fifo_element[0].halfblock[i].re * sdr->window[i]; + sdr->windowed_block[i].im = fifo_element[0].halfblock[i].im * sdr->window[i]; + } + for (i = sdr->block_size; i<2*sdr->block_size; i++) { + sdr->windowed_block[i].re = fifo_element[1].halfblock[i - sdr->block_size].re * sdr->window[i]; + sdr->windowed_block[i].im = fifo_element[1].halfblock[i - sdr->block_size].im * sdr->window[i]; + } + + inject_block_into_fifo(sdr, sdr->empty_block_fifo, &fifo_element[0], "Cannot pass next buffer, freeing it\n"); +#ifdef SYN_TEST //synthetic test signal + static int64_t synp=0; + AVLFG avlfg; + if(!synp) + av_lfg_init(&avlfg, 0); + for(int i = 0; i<2*sdr->block_size; i++) { + double f = F2INDEX(7123456.78901234567); + int64_t synp2 = synp % (40*sdr->block_size); + double fsig0 = 0.00002; + double fsig2= 123; + + if (!i) + av_log(0,0, "i= %f %f\n", f, INDEX2F(f)); + double noise[2] = {0,0}; + double sig0 = 1.0 + 0.25*sin(synp2*fsig0*synp2*M_PI / sdr->block_size); + double sig2 = 0.1 + 0.03*cos(synp*fsig2*M_PI / sdr->block_size); + if (i & 256) + av_bmg_get(&avlfg, noise); + sdr->windowed_block[i].re = (noise[0]*0.0000001 + 0.00001*sig0*sin(synp*M_PI*(f)/sdr->block_size) + + 0.00001*sig2*cos(synp*M_PI*(f)/sdr->block_size) + ) * fabs(sdr->window[i]); + sdr->windowed_block[i].im = noise[1]*0.0000001 * sdr->window[i]; + synp++; + } + synp -= sdr->block_size; +#endif + + + sdr->fft(sdr->fft_ctx, sdr->block, sdr->windowed_block, sizeof(AVComplexFloat)); + // windowed_block is unused now, we can fill it with the next blocks data + + if (sdr->block_center_freq) { + if (sdr->skip_probe-- <= 0) { + //Probing takes a bit of time, lets not do it every time + sdr->skip_probe = 5; + probe_common(sdr); + + for(int i = 0; i < FF_ARRAY_ELEMS(modulation_descs); i++) { + ModulationDescriptor *md = &modulation_descs[i]; + md->probe(sdr); + av_assert0(i == md->modulation); + } + + decay_stations(sdr); + } + + if (sdr->mode == SingleStationMode) { + AVStream *st = s->streams[sdr->single_ch_audio_st_index]; + SDRStream *sst = st->priv_data; + + if (!sst->station || seek_direction) { + double current_freq; + double best_distance = INT64_MAX; + Station *best_station = NULL; + + if (sst->station) { + current_freq = sst->station->frequency; + } else if (sdr->station_freq) { + current_freq = sdr->station_freq; + } else + current_freq = sdr->block_center_freq; + + for(int i = 0; i<sdr->nb_stations; i++) { + Station *station = sdr->station[i]; + double distance = station->frequency - current_freq; + + if (distance * seek_direction < 0 || station == sst->station) + continue; + distance = fabs(distance); + if (distance < best_distance) { + best_distance = distance; + best_station = station; + } + } + av_assert0(!best_station || best_station != sst->station); + if (best_station) { + ret = setup_stream(sdr, sdr->single_ch_audio_st_index, best_station); + if (ret < 0) { + av_log(s, AV_LOG_DEBUG, "setup_stream failed\n"); + return ret; + } + + pthread_mutex_lock(&sdr->mutex); + sdr->seek_direction = + seek_direction = 0; + sdr->wanted_freq = lrint(best_station->frequency + 213*1000); // We target a bit off teh exact frequency to avoid artifacts + //200*1000 had artifacts + + av_log(s, AV_LOG_DEBUG, "request f = %"PRId64"\n", sdr->wanted_freq); + pthread_mutex_unlock(&sdr->mutex); + } + } 
+ } else { + av_assert0(sdr->mode == AllStationMode); + for(int i = 0; i<sdr->nb_stations; i++) { + Station *station = sdr->station[i]; + if (!station->stream) { + /* audio stream */ + AVStream *st = avformat_new_stream(s, NULL); + SDRStream *sst; + if (!st) + return AVERROR(ENOMEM); + sst = av_mallocz(sizeof(*sst)); + if (!sst) + return AVERROR(ENOMEM); + st->priv_data = sst; + st->codecpar->codec_type = AVMEDIA_TYPE_AUDIO; + st->codecpar->codec_tag = 0; /* no fourcc */ + st->codecpar->codec_id = HAVE_BIGENDIAN ? AV_CODEC_ID_PCM_F32BE : AV_CODEC_ID_PCM_F32LE; + st->codecpar->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; + st->codecpar->sample_rate = 0; // will be set later + avpriv_set_pts_info(st, 64, 1, (48000/128) << FREQ_BITS); + ret = setup_stream(sdr, st->index, station); + if (ret < 0) + return ret; + sdr->missing_streams++; + } + } + } + + // new data is available for all streams, lets tell them + //TODO with mixed blocks during scanning theres more than one block_center_freq, we may want to try to support them as they likely have demodulatable data + for (int stream_index = 0; stream_index < s->nb_streams; stream_index++) { + AVStream *st = s->streams[stream_index]; + SDRStream *sst = st->priv_data; + sst->processing_index += sdr->block_size; + } + } + + sdr->last_pts = sdr->pts; + sdr->pts += av_rescale(sdr->block_size, TIMEBASE, sdr->sdr_sample_rate); + + //some user apps force a delay on EAGAIN so we cannot always return EAGAIN or we overflow the fifos + if (full_blocks >= 2) + goto process_next_block; + + return AVERROR(EAGAIN); +} + +static int sdr_read_seek(AVFormatContext *s, int stream_index, + int64_t target, int flags) +{ + SDRContext *sdr = s->priv_data; + int64_t freq, step; + int dir = (flags & AVSEEK_FLAG_BACKWARD) ? 
-1 : 1; + AVStream *st = s->streams[stream_index]; + SDRStream *sst = st->priv_data; + + if (sdr->mode != SingleStationMode) { + return AVERROR(ENOTSUP); + } + + av_assert0(stream_index >= 0); + + step = FFABS(target - sdr->pts); + if (step < 35 * TIMEBASE) { + target = (sdr->pts & (-1<<FREQ_BITS)) + dir; + if (sst->station) + target += lrint(sst->station->frequency/1000); + }else if (step < 330 * TIMEBASE) + return AVERROR(ENOTSUP); // Seek to next/prev band + else + return AVERROR(ENOTSUP); // Reserved for future ideas + + freq = (target & ((1<<FREQ_BITS) - 1)) * 1000LL; + + pthread_mutex_lock(&sdr->mutex); + sdr->seek_direction = dir; + pthread_mutex_unlock(&sdr->mutex); + flush_fifo(sdr, sdr->full_block_fifo); + return 0; +} + +static int sdr_read_close(AVFormatContext *s) +{ + SDRContext *sdr = s->priv_data; + int i; + + atomic_store(&sdr->close_requested, 1); + + if(sdr->thread_started) + pthread_join(sdr->soapy_thread, NULL); + + flush_fifo(sdr, sdr->empty_block_fifo); //needs mutex to be still around + flush_fifo(sdr, sdr-> full_block_fifo); //needs mutex to be still around + + if(sdr->thread_started) + pthread_mutex_destroy(&sdr->mutex); + + av_fifo_freep2(&sdr->empty_block_fifo); + av_fifo_freep2(&sdr->full_block_fifo); + + for (i = 0; i < s->nb_streams; i++) { + AVStream *st = s->streams[i]; + SDRStream *sst = st->priv_data; + + free_stream(sdr, i); + + av_freep(&sst->frame_buffer); + sst->frame_size = 0; + } + + for (i = 0; i < sdr->nb_stations; i++) { + free_station(sdr->station[i]); + } + sdr->nb_stations = 0; + av_freep(&sdr->station); + +#if CONFIG_SDR_DEMUXER + if (sdr->soapy) { + if (sdr->soapyRxStream) { + SoapySDRDevice_deactivateStream(sdr->soapy, sdr->soapyRxStream, 0, 0); + SoapySDRDevice_closeStream(sdr->soapy, sdr->soapyRxStream); + sdr->soapyRxStream = NULL; + } + + SoapySDRDevice_unmake(sdr->soapy); + sdr->soapy = NULL; + } +#endif //CONFIG_SDR_DEMUXER + + av_freep(&sdr->windowed_block); + av_freep(&sdr->block); + av_freep(&sdr->len2block); + av_freep(&sdr->window); + + av_tx_uninit(&sdr->fft_ctx); + sdr->fft = NULL; + + avio_close(sdr->dump_avio); + + return 0; +} + +static int sdrfile_probe(const AVProbeData *p) +{ + if (!memcmp(p->buf , "FFSDR00", 7) && + !memcmp(p->buf+8, "int16BE", 7)) { + return AVPROBE_SCORE_MAX; + } else + return 0; +} + +#define OFFSET(x) offsetof(SDRContext, x) +#define DEC AV_OPT_FLAG_DECODING_PARAM + +static const AVOption options[] = { + { "video_size", "set frame size", OFFSET(width), AV_OPT_TYPE_IMAGE_SIZE, {.str = "800x600"}, 0, 0, DEC }, + { "framerate" , "set frame rate", OFFSET(fps), AV_OPT_TYPE_VIDEO_RATE, {.str = "25"}, 0, INT_MAX,DEC }, + { "block_size", "FFT block size", OFFSET(block_size), AV_OPT_TYPE_INT, {.i64 = 0}, 0, INT_MAX, DEC}, + { "mode", "" , OFFSET(mode), AV_OPT_TYPE_INT, {.i64 = SingleStationMode}, 0, ModeNB-1, DEC, "mode"}, + { "single_mode", "Demodulate 1 station", 0, AV_OPT_TYPE_CONST, {.i64 = SingleStationMode}, 0, 0, DEC, "mode"}, + { "all_mode" , "Demodulate all station", 0, AV_OPT_TYPE_CONST, {.i64 = AllStationMode}, 0, 0, DEC, "mode"}, + + { "station_freq", "current station/channel/stream frequency", OFFSET(station_freq), AV_OPT_TYPE_INT64, {.i64 = 88000000}, 0, INT64_MAX, DEC}, + + { "driver" , "sdr driver name" , OFFSET(driver_name), AV_OPT_TYPE_STRING, {.str = NULL}, 0, 0, DEC}, + { "sdr_sr" , "sdr sample rate" , OFFSET(sdr_sample_rate ), AV_OPT_TYPE_INT , {.i64 = 0}, 0, INT_MAX, DEC}, + { "sdr_freq", "sdr frequency" , OFFSET(wanted_freq), AV_OPT_TYPE_INT64 , {.i64 = 9000000}, 0, INT64_MAX, DEC}, + { 
"sdr_agc" , "sdr automatic gain control", OFFSET(sdr_agc), AV_OPT_TYPE_BOOL , {.i64 = 1}, 0, 1, DEC}, + { "min_freq", "minimum frequency", OFFSET(min_freq ), AV_OPT_TYPE_INT64 , {.i64 = 0}, 0, INT64_MAX, DEC}, + { "max_freq", "maximum frequency", OFFSET(max_freq ), AV_OPT_TYPE_INT64 , {.i64 = 0}, 0, INT64_MAX, DEC}, + + { "dumpurl", "url to dump soapy output to" , OFFSET(dump_url), AV_OPT_TYPE_STRING , {.str = NULL}, 0, 0, DEC}, + { "kbd_alpha", "Kaiser Bessel window parameter" , OFFSET(kbd_alpha), AV_OPT_TYPE_INT , {.i64 = 8}, 1, 16, DEC}, + + + { "am_mode", "AM Demodulation Mode", OFFSET(am_mode ), AV_OPT_TYPE_INT , {.i64 = AMMidSide}, 0, AMModeNB-1, DEC, "am_mode"}, + { "am_leftright", "AM Demodulation Left Right", 0, AV_OPT_TYPE_CONST, {.i64 = AMLeftRight}, 0, 0, DEC, "am_mode"}, + { "am_midside", "AM Demodulation Mid Side", 0, AV_OPT_TYPE_CONST, {.i64 = AMMidSide}, 0, 0, DEC, "am_mode"}, + { "am_inphase", "AM Demodulation In Phase", 0, AV_OPT_TYPE_CONST, {.i64 = AMInPhase}, 0, 0, DEC, "am_mode"}, + { "am_envelope","AM Demodulation EnvelopeDC", 0, AV_OPT_TYPE_CONST, {.i64 = AMEnvelope}, 0, 0, DEC, "am_mode"}, + + { NULL }, +}; + +#if CONFIG_SDR_DEMUXER +static const AVClass sdr_demuxer_class = { + .class_name = "sdr", + .item_name = av_default_item_name, + .option = options, + .version = LIBAVUTIL_VERSION_INT, + .category = AV_CLASS_CATEGORY_DEMUXER, +}; + +const AVInputFormat ff_sdr_demuxer = { + .name = "sdr", + .long_name = NULL_IF_CONFIG_SMALL("Software Defined Radio Demodulator"), + .priv_data_size = sizeof(SDRContext), + .read_header = sdr_read_header, + .read_packet = sdr_read_packet, + .read_close = sdr_read_close, + .read_seek = sdr_read_seek, + .flags = AVFMT_NOFILE, + .flags_internal = FF_FMT_INIT_CLEANUP, + .priv_class = &sdr_demuxer_class, +}; +#endif + +static const AVClass sdrfile_demuxer_class = { + .class_name = "sdrfile", + .item_name = av_default_item_name, + .option = options, + .version = LIBAVUTIL_VERSION_INT, + .category = AV_CLASS_CATEGORY_DEMUXER, +}; + +const AVInputFormat ff_sdrfile_demuxer = { + .name = "sdrfile", + .long_name = NULL_IF_CONFIG_SMALL("Software Defined Radio Demodulator (Using a file for testing)"), + .priv_data_size = sizeof(SDRContext), + .read_probe = sdrfile_probe, + .read_header = sdr_read_header, + .read_packet = sdr_read_packet, + .read_close = sdr_read_close, + .read_seek = sdr_read_seek, + .flags_internal = FF_FMT_INIT_CLEANUP, + .priv_class = &sdrfile_demuxer_class, +};
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
---
 configure                |    4 +
 doc/demuxers.texi        |   71 ++
 libavformat/Makefile     |    2 +
 libavformat/allformats.c |    2 +
 libavformat/sdrdemux.c   | 1739 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 1818 insertions(+)
 create mode 100644 libavformat/sdrdemux.c
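
For reference, the INDEX2F()/F2INDEX() macros near the top of the file convert between FFT bin index and absolute frequency: the 2*block_size-point FFT spans sdr_sample_rate Hz centred on block_center_freq, so each bin covers sdr_sample_rate / (2*block_size) Hz. A standalone restatement of the same arithmetic, with the context fields passed explicitly (helper names are illustrative, not part of the patch):

/* Bin 'block_size - 0.5' corresponds to the tuned centre frequency;
 * each of the 2*block_size bins covers sample_rate / (2*block_size) Hz. */
static double bin_to_freq(double index, int block_size, int sample_rate,
                          double center_freq)
{
    return (index - block_size + 0.5) * 0.5 * sample_rate / block_size + center_freq;
}

static double freq_to_bin(double freq, int block_size, int sample_rate,
                          double center_freq)
{
    return (freq - center_freq) * 2.0 * block_size / sample_rate + block_size - 0.5;
}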
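
The analysis window built in sdr_read_header() is a Kaiser-Bessel derived half-window mirrored to 2*block_size samples, with an alternating sign folded into it. Multiplying the input by an alternating ±1 sequence before the FFT shifts the spectrum by half the sample rate, so baseband DC (the tuned centre frequency) lands in the middle of the output bins, which is what INDEX2F()/F2INDEX() assume. A sketch of that construction, assuming the first block_size entries already hold the KBD half-window as avpriv_kbd_window_init() leaves them (helper name is illustrative, not part of the patch):

static void mirror_and_shift_window(float *window, int block_size)
{
    /* mirror the half-window into the second half */
    for (int i = block_size; i < 2 * block_size; i++)
        window[i] = window[2 * block_size - i - 1];

    /* fold an alternating sign into the window; applying it to the window
     * is equivalent to multiplying the input samples, and rotates the
     * spectrum by half the sample rate so DC ends up near bin block_size
     * instead of bin 0 */
    for (int i = 0; i < 2 * block_size; i++)
        if (!(i & 1))
            window[i] = -window[i];
}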
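
The sub-bin maximum in find_peak() is ordinary three-point parabolic interpolation. A minimal standalone version of the same computation (the patch signals the degenerate case with INT_MIN; this sketch simply returns the middle sample):

/* Fit y = a*x^2 + b*x + c through (-1, y0), (0, y1), (1, y2);
 * the maximum is where dy/dx = 2*a*x + b = 0, i.e. x = -b / (2*a),
 * expressed in bins relative to the middle sample. */
static double parabolic_peak_offset(double y0, double y1, double y2)
{
    double b = (y2 - y0) * 0.5;
    double a = (y0 + y2) * 0.5 - y1;

    if (a >= 0.0)  /* flat or a minimum: no usable maximum */
        return 0.0;
    return -0.5 * b / a;
}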
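
The raw-capture format written via the dumpurl option and consumed by the sdrfile demuxer carries one fixed 48-byte big-endian header per block, followed by that block's int16 I/Q samples. The layout below is reconstructed from sdr_read_header(), the reader thread and the dump writer in sdr_read_packet(); the struct and its field names are purely descriptive, since the code reads the fields with avio_rb32()/avio_rb64() rather than through a struct:

#include <stdint.h>

typedef struct FFSDRBlockHeader {
    char     magic[16];        /* "FFSDR001int16BE", version digits at bytes 5..7 */
    uint32_t sample_rate;      /* sdr_sample_rate */
    uint32_t block_size;       /* number of complex samples following the header */
    uint64_t center_frequency; /* IEEE-754 double, stored via av_double2int() */
    uint64_t pts;
    /* the two fields below are read only for versions above "000" */
    uint32_t bandwidth;
    uint32_t header_size;      /* 48 for version "001", 40 assumed for "000" */
    /* followed by block_size interleaved big-endian int16 I/Q pairs */
} FFSDRBlockHeader;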
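
Packet timestamps double as a tuning channel: demodulated packets keep time in the high bits of dts/pts and the station frequency in kHz in the low FREQ_BITS (22) bits, and sdr_read_seek() decodes a frequency from the seek target the same way (in this version it then mostly acts on the seek direction). Illustrative helpers restating that packing, not part of the patch:

#include <math.h>
#include <stdint.h>

#define FREQ_BITS 22   /* as in the demuxer; the time base is 1 / ((48000/128) << FREQ_BITS) */

/* time in the high bits, station frequency in kHz in the low FREQ_BITS bits */
static int64_t pack_ts(int64_t pts, double station_freq_hz)
{
    return (pts & ~((1LL << FREQ_BITS) - 1)) + llrint(station_freq_hz / 1000.0);
}

/* recover the frequency in Hz from a timestamp or seek target */
static int64_t freq_from_ts(int64_t ts)
{
    return (ts & ((1LL << FREQ_BITS) - 1)) * 1000LL;
}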