[FFmpeg-devel,3/3] libavutil/hwcontext_opencl: fix a bug for mapping qsv frame to opencl

Message ID 20211116081623.3081766-3-wenbin.chen@intel.com
State New
Series [FFmpeg-devel,1/3] libavcodec/vaapi_decode: fix the problem that init_pool_size < nb_surface

Checks

Context Check Description
andriy/make_x86 success Make finished
andriy/make_fate_x86 success Make fate finished
andriy/make_ppc success Make finished
andriy/make_fate_ppc success Make fate finished

Commit Message

Wenbin Chen Nov. 16, 2021, 8:16 a.m. UTC
From: nyanmisaka <nst799610810@gmail.com>

mfxHDLPair was added to the qsv hwcontext, so update the qsv->opencl mapping function as well.
Now the following command line works:

ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
-init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
-hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
-i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
-c:v h264_qsv output.264

Signed-off-by: nyanmisaka <nst799610810@gmail.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
---
 libavutil/hwcontext_opencl.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Wenbin Chen Nov. 23, 2021, 1:56 a.m. UTC | #1
> From: nyanmisaka <nst799610810@gmail.com>
> 
> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> Now the following commandline works:
> 
> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -
> filter_hw_device ocl \
> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv
> \
> -i input.264 -vf
> "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> -c:v h264_qsv output.264
> 
> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> ---
>  libavutil/hwcontext_opencl.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> index 26a3a24593..4b6e74ff6f 100644
> --- a/libavutil/hwcontext_opencl.c
> +++ b/libavutil/hwcontext_opencl.c
> @@ -2249,7 +2249,8 @@ static int
> opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
>  #if CONFIG_LIBMFX
>      if (src->format == AV_PIX_FMT_QSV) {
>          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> +        va_surface = *(VASurfaceID*)pair->first;
>      } else
>  #endif
>          if (src->format == AV_PIX_FMT_VAAPI) {
> --
> 2.25.1
> 

ping

> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
Anton Khirnov Nov. 29, 2021, 9:48 a.m. UTC | #2
Quoting Wenbin Chen (2021-11-16 09:16:23)
> From: nyanmisaka <nst799610810@gmail.com>
> 
> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> Now the following commandline works:
> 
> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> -c:v h264_qsv output.264
> 
> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> ---
>  libavutil/hwcontext_opencl.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> index 26a3a24593..4b6e74ff6f 100644
> --- a/libavutil/hwcontext_opencl.c
> +++ b/libavutil/hwcontext_opencl.c
> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
>  #if CONFIG_LIBMFX
>      if (src->format == AV_PIX_FMT_QSV) {
>          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> +        va_surface = *(VASurfaceID*)pair->first;
>      } else
>  #endif

The casts here are starting to look like overly arcane black magic.
Who is responsible for setting MemId here? I assume it's something in
hwcontext_qsv, but is that guaranteed?

It would be better for hwcontext_qsv to abstract reading/writing MemId
contents into a function/macro, so other code can just call it and not
hardcode internal implementation details.
Wenbin Chen Nov. 30, 2021, 1:55 a.m. UTC | #3
> Quoting Wenbin Chen (2021-11-16 09:16:23)
> > From: nyanmisaka <nst799610810@gmail.com>
> >
> > mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> > Now the following commandline works:
> >
> > ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> > -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -
> filter_hw_device ocl \
> > -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v
> h264_qsv \
> > -i input.264 -vf
> "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> > hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> > -c:v h264_qsv output.264
> >
> > Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> > Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> > ---
> >  libavutil/hwcontext_opencl.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> > index 26a3a24593..4b6e74ff6f 100644
> > --- a/libavutil/hwcontext_opencl.c
> > +++ b/libavutil/hwcontext_opencl.c
> > @@ -2249,7 +2249,8 @@ static int
> opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
> >  #if CONFIG_LIBMFX
> >      if (src->format == AV_PIX_FMT_QSV) {
> >          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> > -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> > +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> > +        va_surface = *(VASurfaceID*)pair->first;
> >      } else
> >  #endif
> 
> The casts here are starting to look like overly arcane black magic.
> Who is responsible for setting MemId here? I assume it's something in
> hwcontext_qsv, but is that guaranteed?
> 
> It would be better for hwcontext_qsv to abstract reading/writing MemId
> contents into a function/macro, so other code can just call it and not
> hardcode internal implementation details.
> 
> --
> Anton Khirnov

Thanks for the review. I will resubmit the patchset.

Mark Thompson Dec. 27, 2021, 6:51 p.m. UTC | #4
On 16/11/2021 08:16, Wenbin Chen wrote:
> From: nyanmisaka <nst799610810@gmail.com>
> 
> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> Now the following commandline works:
> 
> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> -c:v h264_qsv output.264
> 
> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> ---
>   libavutil/hwcontext_opencl.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> index 26a3a24593..4b6e74ff6f 100644
> --- a/libavutil/hwcontext_opencl.c
> +++ b/libavutil/hwcontext_opencl.c
> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
>   #if CONFIG_LIBMFX
>       if (src->format == AV_PIX_FMT_QSV) {
>           mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> +        va_surface = *(VASurfaceID*)pair->first;
>       } else
>   #endif
>           if (src->format == AV_PIX_FMT_VAAPI) {

Since these frames can be user-supplied, this implies that the user-facing API/ABI for AV_PIX_FMT_QSV has changed.

It looks like this was broken by using HDLPairs when D3D11 was introduced, which silently changed the existing API for DXVA2 and VAAPI as well.

Could someone related to that please document it properly (clearly not all possible valid mfxFrameSurface1s are allowed), and note in APIchanges when the API change happened?

- Mark
Soft Works Dec. 27, 2021, 8:31 p.m. UTC | #5
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Monday, December 27, 2021 7:51 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
> 
> On 16/11/2021 08:16, Wenbin Chen wrote:
> > From: nyanmisaka <nst799610810@gmail.com>
> >
> > mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> > Now the following commandline works:
> >
> > ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> > -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device
> ocl \
> > -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> > -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl,
> \
> > hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> > -c:v h264_qsv output.264
> >
> > Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> > Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> > ---
> >   libavutil/hwcontext_opencl.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> > index 26a3a24593..4b6e74ff6f 100644
> > --- a/libavutil/hwcontext_opencl.c
> > +++ b/libavutil/hwcontext_opencl.c
> > @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext
> *dst_fc, AVFrame *dst,
> >   #if CONFIG_LIBMFX
> >       if (src->format == AV_PIX_FMT_QSV) {
> >           mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> > -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> > +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> > +        va_surface = *(VASurfaceID*)pair->first;
> >       } else
> >   #endif
> >           if (src->format == AV_PIX_FMT_VAAPI) {
> 
> Since these frames can be user-supplied, this implies that the user-facing
> API/ABI for AV_PIX_FMT_QSV has changed.
> 
> It looks like this was broken by using HDLPairs when D3D11 was introduced,
> which silently changed the existing API for DXVA2 and VAAPI as well.
> 
> Could someone related to that please document it properly (clearly not all
> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
> the API change happened?

Hi Mark,

QSV contexts always need to be backed by a child context, which can be DXVA2,
D3D11VA or VAAPI. You can create a QSV context either by deriving it from one of
those contexts, or by creating a new QSV context, in which case an appropriate
child context is created automatically - either implicitly (auto mode) or
explicitly, as the ffmpeg implementation does in most cases.

When working with "user-supplied" frames on Linux, you need to create a VAAPI
context with those frames and derive a QSV context from that context.

There is no way to create or supply QSV frames directly. Looking at the code:

> *mfx_surface = (mfxFrameSurface1*)src->data[3];

A QSV frames context uses the mfxFrameSurface1 structure to describe the
individual frames, and an mfxFrameSurface1 can only come from the MSDK runtime;
it cannot be user-supplied.

I don't think there is anything that needs to be documented, because for
whatever user-side manipulation an API consumer wants to perform, it always
needs to derive the context, either from QSV to D3D/VAAPI or from D3D to
VAAPI, in order to access and manipulate individual frames.

Kind regards,
softworkz
Mark Thompson Dec. 27, 2021, 11:45 p.m. UTC | #6
On 27/12/2021 20:31, Soft Works wrote:
>> -----Original Message-----
>> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
>> Thompson
>> Sent: Monday, December 27, 2021 7:51 PM
>> To: ffmpeg-devel@ffmpeg.org
>> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
>> for mapping qsv frame to opencl
>>
>> On 16/11/2021 08:16, Wenbin Chen wrote:
>>> From: nyanmisaka <nst799610810@gmail.com>
>>>
>>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
>>> Now the following commandline works:
>>>
>>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
>>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device
>> ocl \
>>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
>>> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl,
>> \
>>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
>>> -c:v h264_qsv output.264
>>>
>>> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
>>> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
>>> ---
>>>    libavutil/hwcontext_opencl.c | 3 ++-
>>>    1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
>>> index 26a3a24593..4b6e74ff6f 100644
>>> --- a/libavutil/hwcontext_opencl.c
>>> +++ b/libavutil/hwcontext_opencl.c
>>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext
>> *dst_fc, AVFrame *dst,
>>>    #if CONFIG_LIBMFX
>>>        if (src->format == AV_PIX_FMT_QSV) {
>>>            mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
>>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
>>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
>>> +        va_surface = *(VASurfaceID*)pair->first;
>>>        } else
>>>    #endif
>>>            if (src->format == AV_PIX_FMT_VAAPI) {
>>
>> Since these frames can be user-supplied, this implies that the user-facing
>> API/ABI for AV_PIX_FMT_QSV has changed.
>>
>> It looks like this was broken by using HDLPairs when D3D11 was introduced,
>> which silently changed the existing API for DXVA2 and VAAPI as well.
>>
>> Could someone related to that please document it properly (clearly not all
>> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
>> the API change happened?
> 
> Hi Mark,
> 
> QSV contexts always need to be backed by a child context, which can be DXVA2,
> D3D11VA or VAAPI. You can create a QSV context either by deriving from one of
> those contexts or when create a new QSV context, it automatically creates an
> appropriate child context - either implicitly (auto mode) or explicitly, like
> the ffmpeg implementation does in most cases.

... or by using the one the user supplies when they create it.

> When working with "user-supplied" frames on Linux, you need to create a VAAPI
> context with those frames and derive a QSV context from that context.
> 
> There is no way to create or supply QSV frames directly.

???  The ability for the user to set up their own version of these things is literally the whole point of the split alloc/init API.


// Some user stuff involving libmfx - has a D3D or VAAPI backing, but this code doesn't need to care about it.

// It has a session and creates some surfaces to use with MemId filled compatible with ffmpeg.
user_session = ...;
user_surfaces = ...;

// No ffmpeg involved before this, now we want to pass these surfaces we've got into ffmpeg.

// Create a device context using the existing session.

mfx_ctx = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_QSV);

dc = (AVHWDeviceContext*)mfx_ctx->data;
mfx_dc = dc->hwctx;
mfx_dc->session = user_session;

av_hwdevice_ctx_init(mfx_ctx);

// Create a frames context out of the surfaces we've got.

mfx_frames = av_hwframe_ctx_alloc(mfx_ctx);

fc = (AVHWFramesContext*)mfx_frames->data;
fc->pool = user_surfaces.allocator;
fc->width = user_surfaces.width;
// etc.

mfx_fc = fc->hwctx;
mfx_fc->surfaces = user_surfaces.array;
mfx_fc->nb_surfaces = user_surfaces.count;
mfx_fc->frame_type = user_surfaces.memtype;

av_hwframe_ctx_init(mfx_frames);

// Do stuff with frames.

> Looking at the code:
> 
>> *mfx_surface = (mfxFrameSurface1*)src->data[3];
> 
> A QSV frames context is using the mfxFrameSurface1 structure for describing
> the individual frames and mfxFrameSurface1 can only come from the MSDK runtime,
> it cannot be user-supplied.
> 
> I don't think that there's something that needs to be documented because
> whatever user-side manipulation an API consumer would want to perform, it would
> always need to derive the context either from QSV to D3D/VAAPI or from D3D to
> VAAPI in order to access and manipulate individual frames.

- Mark
Soft Works Dec. 28, 2021, 1:17 a.m. UTC | #7
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Tuesday, December 28, 2021 12:46 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
> 
> On 27/12/2021 20:31, Soft Works wrote:
> >> -----Original Message-----
> >> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> >> Thompson
> >> Sent: Monday, December 27, 2021 7:51 PM
> >> To: ffmpeg-devel@ffmpeg.org
> >> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a
> bug
> >> for mapping qsv frame to opencl
> >>
> >> On 16/11/2021 08:16, Wenbin Chen wrote:
> >>> From: nyanmisaka <nst799610810@gmail.com>
> >>>
> >>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> >>> Now the following commandline works:
> >>>
> >>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> >>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device
> >> ocl \
> >>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv
> \
> >>> -i input.264 -vf
> "hwmap=derive_device=opencl,format=opencl,avgblur_opencl,
> >> \
> >>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> >>> -c:v h264_qsv output.264
> >>>
> >>> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> >>> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> >>> ---
> >>>    libavutil/hwcontext_opencl.c | 3 ++-
> >>>    1 file changed, 2 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> >>> index 26a3a24593..4b6e74ff6f 100644
> >>> --- a/libavutil/hwcontext_opencl.c
> >>> +++ b/libavutil/hwcontext_opencl.c
> >>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext
> >> *dst_fc, AVFrame *dst,
> >>>    #if CONFIG_LIBMFX
> >>>        if (src->format == AV_PIX_FMT_QSV) {
> >>>            mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src-
> >data[3];
> >>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> >>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> >>> +        va_surface = *(VASurfaceID*)pair->first;
> >>>        } else
> >>>    #endif
> >>>            if (src->format == AV_PIX_FMT_VAAPI) {
> >>
> >> Since these frames can be user-supplied, this implies that the user-facing
> >> API/ABI for AV_PIX_FMT_QSV has changed.
> >>
> >> It looks like this was broken by using HDLPairs when D3D11 was introduced,
> >> which silently changed the existing API for DXVA2 and VAAPI as well.
> >>
> >> Could someone related to that please document it properly (clearly not all
> >> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
> >> the API change happened?
> >
> > Hi Mark,
> >
> > QSV contexts always need to be backed by a child context, which can be
> DXVA2,
> > D3D11VA or VAAPI. You can create a QSV context either by deriving from one
> of
> > those contexts or when create a new QSV context, it automatically creates
> an
> > appropriate child context - either implicitly (auto mode) or explicitly,
> like
> > the ffmpeg implementation does in most cases.
> 
> ... or by using the one the user supplies when they create it.
> 
> > When working with "user-supplied" frames on Linux, you need to create a
> VAAPI
> > context with those frames and derive a QSV context from that context.
> >
> > There is no way to create or supply QSV frames directly.
> 
> ???  The ability for the user to set up their own version of these things is
> literally the whole point of the split alloc/init API.
> 
> 
> // Some user stuff involving libmfx - has a D3D or VAAPI backing, but this
> code doesn't need to care about it.
> 
> // It has a session and creates some surfaces to use with MemId filled
> compatible with ffmpeg.
> user_session = ...;
> user_surfaces = ...;
> 
> // No ffmpeg involved before this, now we want to pass these surfaces we've
> got into ffmpeg.
> 
> // Create a device context using the existing session.
> 
> mfx_ctx = av_hwdevice_ctx_alloc(MFX);
> 
> dc = mfx_ctx->data;
> mfx_dc = dc->hwctx;
> mfx_dc->session = user_session;
> 
> av_hwdevice_ctx_init(mfx_ctx);
> 
> // Create a frames context out of the surfaces we've got.
> 
> mfx_frames = av_hwframes_ctx_alloc(mfx_ctx);
> 
> fc = mfx_frames->data;
> fc.pool = user_surfaces.allocator;
> fc.width = user_surfaces.width;
> // etc.
> 
> mfx_fc = fc->hwctx;
> mfx_fc.surfaces = user_surfaces.array;
> mfx_fc.nb_surfaces = user_surfaces.count;
> mfx_fc.frame_type = user_surfaces.memtype;
> 
> av_hwframe_ctx_init(frames);
> 
> // Do stuff with frames.

I wouldn't consider an mfxSession an entity that could or should be
shared between implementations. IMO, this is not a valid use case.
A consumer of the mfx API needs to make certain choices regarding
the usage of the API, one of which is the way frames are allocated
and managed.
This is not something that is meant to be shared between implementations.
Even inside ffmpeg, we don't use a single mfx session. We use separate
sessions for decoding, encoding and filtering that are joined together
via MFXJoinSession.
When an (ffmpeg-)API consumer creates its own MFX session and its
own frame allocator implementation, it shouldn't be an allowed
scenario to create an ffmpeg hw context using this session.
This shouldn't be considered a public API of ffmpeg, because it doesn't
make sense to share an mfx session like this.

Besides that, is there any ffmpeg documentation indicating that
the MemId field of mfxFrameSurface1 can be cast to VASurfaceID?

Kind regards,
softworkz
Mark Thompson Dec. 28, 2021, 12:53 p.m. UTC | #8
On 28/12/2021 01:17, Soft Works wrote:
>> -----Original Message-----
>> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
>> Thompson
>> Sent: Tuesday, December 28, 2021 12:46 AM
>> To: ffmpeg-devel@ffmpeg.org
>> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
>> for mapping qsv frame to opencl
>>
>> On 27/12/2021 20:31, Soft Works wrote:
>> >> -----Original Message-----
>>>> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
>>>> Thompson
>>>> Sent: Monday, December 27, 2021 7:51 PM
>>>> To: ffmpeg-devel@ffmpeg.org
>>>> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a
>> bug
>>>> for mapping qsv frame to opencl
>>>>
>>>> On 16/11/2021 08:16, Wenbin Chen wrote:
>>>>> From: nyanmisaka <nst799610810@gmail.com>
>>>>>
>>>>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
>>>>> Now the following commandline works:
>>>>>
>>>>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
>>>>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device
>>>> ocl \
>>>>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv
>> \
>>>>> -i input.264 -vf
>> "hwmap=derive_device=opencl,format=opencl,avgblur_opencl,
>>>> \
>>>>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
>>>>> -c:v h264_qsv output.264
>>>>>
>>>>> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
>>>>> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
>>>>> ---
>>>>>     libavutil/hwcontext_opencl.c | 3 ++-
>>>>>     1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
>>>>> index 26a3a24593..4b6e74ff6f 100644
>>>>> --- a/libavutil/hwcontext_opencl.c
>>>>> +++ b/libavutil/hwcontext_opencl.c
>>>>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext
>>>> *dst_fc, AVFrame *dst,
>>>>>     #if CONFIG_LIBMFX
>>>>>         if (src->format == AV_PIX_FMT_QSV) {
>>>>>             mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src-
>>> data[3];
>>>>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
>>>>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
>>>>> +        va_surface = *(VASurfaceID*)pair->first;
>>>>>         } else
>>>>>     #endif
>>>>>             if (src->format == AV_PIX_FMT_VAAPI) {
>>>>
>>>> Since these frames can be user-supplied, this implies that the user-facing
>>>> API/ABI for AV_PIX_FMT_QSV has changed.
>>>>
>>>> It looks like this was broken by using HDLPairs when D3D11 was introduced,
>>>> which silently changed the existing API for DXVA2 and VAAPI as well.
>>>>
>>>> Could someone related to that please document it properly (clearly not all
>>>> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
>>>> the API change happened?
>>>
>>> Hi Mark,
>>>
>>> QSV contexts always need to be backed by a child context, which can be
>> DXVA2,
>>> D3D11VA or VAAPI. You can create a QSV context either by deriving from one
>> of
>>> those contexts or when create a new QSV context, it automatically creates
>> an
>>> appropriate child context - either implicitly (auto mode) or explicitly,
>> like
>>> the ffmpeg implementation does in most cases.
>>
>> ... or by using the one the user supplies when they create it.
>>
>>> When working with "user-supplied" frames on Linux, you need to create a
>> VAAPI
>>> context with those frames and derive a QSV context from that context.
>>>
>>> There is no way to create or supply QSV frames directly.
>>
>> ???  The ability for the user to set up their own version of these things is
>> literally the whole point of the split alloc/init API.
>>
>>
>> // Some user stuff involving libmfx - has a D3D or VAAPI backing, but this
>> code doesn't need to care about it.
>>
>> // It has a session and creates some surfaces to use with MemId filled
>> compatible with ffmpeg.
>> user_session = ...;
>> user_surfaces = ...;
>>
>> // No ffmpeg involved before this, now we want to pass these surfaces we've
>> got into ffmpeg.
>>
>> // Create a device context using the existing session.
>>
>> mfx_ctx = av_hwdevice_ctx_alloc(MFX);
>>
>> dc = mfx_ctx->data;
>> mfx_dc = dc->hwctx;
>> mfx_dc->session = user_session;
>>
>> av_hwdevice_ctx_init(mfx_ctx);
>>
>> // Create a frames context out of the surfaces we've got.
>>
>> mfx_frames = av_hwframes_ctx_alloc(mfx_ctx);
>>
>> fc = mfx_frames->data;
>> fc.pool = user_surfaces.allocator;
>> fc.width = user_surfaces.width;
>> // etc.
>>
>> mfx_fc = fc->hwctx;
>> mfx_fc.surfaces = user_surfaces.array;
>> mfx_fc.nb_surfaces = user_surfaces.count;
>> mfx_fc.frame_type = user_surfaces.memtype;
>>
>> av_hwframe_ctx_init(frames);
>>
>> // Do stuff with frames.
> 
> I wouldn't consider an mfxSession as an entity that could or should be
> shared between implementations. IMO, this is not a valid use case.
> A consumer of the mfx API needs to make certain choices regarding
> the usage of the API, one of which is the way how frames are allocated
> and managed.
> This is not something that is meant to be shared between implementations.
> Even inside ffmpeg, we don't use a single mfx session. We use separate
> sessions for decoding, encoding and filtering that are joined together
> via MFXJoinSession.
> When an (ffmpeg-)API consumer is creating its own MFX session and its
> own frame allocator implementation, it shouldn't be and allowed
> scenario to create an ffmpeg hw context using this session.

The user is also aware of the thread-safety rules.  I was giving the example above entirely serially to avoid that, but if they were using libmfx elsewhere in parallel with the above, then they would indeed need a bit more care (creating another session, some sort of locking to ensure serialisation) when passing it to ffmpeg.

> This shouldn't be considered a public API of ffmpeg because it doesn't
> make sense to share an mfx session like this.

So, to clarify, your opinion is that none of hwcontext_qsv.h should be public API?  If it were actually removed (not visible in installed headers) then I agree that would fix the problem.

> Besides that, is there any ffmpeg documentation indicating that
> the memId field of mfxFrameSurface1 could be casted to VASurfaceId?

It is a user-visible API, and it matched the behaviour of the libmfx examples for D3D9 (an identical pointer to IDirect3DSurface9) and VAAPI (a compatible pointer to a structure with the VASurfaceID as the first member), so doing the obvious thing worked.

- Mark
Soft Works Dec. 28, 2021, 7:04 p.m. UTC | #9
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Tuesday, December 28, 2021 1:54 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
> 
> On 28/12/2021 01:17, Soft Works wrote:
> >> -----Original Message-----
> >> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> >> Thompson
> >> Sent: Tuesday, December 28, 2021 12:46 AM
> >> To: ffmpeg-devel@ffmpeg.org
> >> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a
> bug
> >> for mapping qsv frame to opencl
> >>
> >> On 27/12/2021 20:31, Soft Works wrote:>> -----Original Message-----
> >>>> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> >>>> Thompson
> >>>> Sent: Monday, December 27, 2021 7:51 PM
> >>>> To: ffmpeg-devel@ffmpeg.org
> >>>> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix
> a
> >> bug
> >>>> for mapping qsv frame to opencl
> >>>>
> >>>> On 16/11/2021 08:16, Wenbin Chen wrote:
> >>>>> From: nyanmisaka <nst799610810@gmail.com>
> >>>>>
> >>>>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as
> well.
> >>>>> Now the following commandline works:
> >>>>>
> >>>>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> >>>>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -
> filter_hw_device
> >>>> ocl \
> >>>>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v
> h264_qsv
> >> \
> >>>>> -i input.264 -vf
> >> "hwmap=derive_device=opencl,format=opencl,avgblur_opencl,
> >>>> \
> >>>>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> >>>>> -c:v h264_qsv output.264
> >>>>>
> >>>>> Signed-off-by: nyanmisaka <nst799610810@gmail.com>
> >>>>> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> >>>>> ---
> >>>>>     libavutil/hwcontext_opencl.c | 3 ++-
> >>>>>     1 file changed, 2 insertions(+), 1 deletion(-)
> >>>>>
> >>>>> diff --git a/libavutil/hwcontext_opencl.c
> b/libavutil/hwcontext_opencl.c
> >>>>> index 26a3a24593..4b6e74ff6f 100644
> >>>>> --- a/libavutil/hwcontext_opencl.c
> >>>>> +++ b/libavutil/hwcontext_opencl.c
> >>>>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext
> >>>> *dst_fc, AVFrame *dst,
> >>>>>     #if CONFIG_LIBMFX
> >>>>>         if (src->format == AV_PIX_FMT_QSV) {
> >>>>>             mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src-
> >>> data[3];
> >>>>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> >>>>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> >>>>> +        va_surface = *(VASurfaceID*)pair->first;
> >>>>>         } else
> >>>>>     #endif
> >>>>>             if (src->format == AV_PIX_FMT_VAAPI) {
> >>>>
> >>>> Since these frames can be user-supplied, this implies that the user-
> facing
> >>>> API/ABI for AV_PIX_FMT_QSV has changed.
> >>>>
> >>>> It looks like this was broken by using HDLPairs when D3D11 was
> introduced,
> >>>> which silently changed the existing API for DXVA2 and VAAPI as well.
> >>>>
> >>>> Could someone related to that please document it properly (clearly not
> all
> >>>> possible valid mfxFrameSurface1s are allowed), and note in APIchanges
> when
> >>>> the API change happened?
> >>>
> >>> Hi Mark,
> >>>
> >>> QSV contexts always need to be backed by a child context, which can be
> >> DXVA2,
> >>> D3D11VA or VAAPI. You can create a QSV context either by deriving from
> one
> >> of
> >>> those contexts or when create a new QSV context, it automatically creates
> >> an
> >>> appropriate child context - either implicitly (auto mode) or explicitly,
> >> like
> >>> the ffmpeg implementation does in most cases.
> >>
> >> ... or by using the one the user supplies when they create it.
> >>
> >>> When working with "user-supplied" frames on Linux, you need to create a
> >> VAAPI
> >>> context with those frames and derive a QSV context from that context.
> >>>
> >>> There is no way to create or supply QSV frames directly.
> >>
> >> ???  The ability for the user to set up their own version of these things
> is
> >> literally the whole point of the split alloc/init API.
> >>
> >>
> >> // Some user stuff involving libmfx - has a D3D or VAAPI backing, but this
> >> code doesn't need to care about it.
> >>
> >> // It has a session and creates some surfaces to use with MemId filled
> >> compatible with ffmpeg.
> >> user_session = ...;
> >> user_surfaces = ...;
> >>
> >> // No ffmpeg involved before this, now we want to pass these surfaces
> we've
> >> got into ffmpeg.
> >>
> >> // Create a device context using the existing session.
> >>
> >> mfx_ctx = av_hwdevice_ctx_alloc(MFX);
> >>
> >> dc = mfx_ctx->data;
> >> mfx_dc = dc->hwctx;
> >> mfx_dc->session = user_session;
> >>
> >> av_hwdevice_ctx_init(mfx_ctx);
> >>
> >> // Create a frames context out of the surfaces we've got.
> >>
> >> mfx_frames = av_hwframes_ctx_alloc(mfx_ctx);
> >>
> >> fc = mfx_frames->data;
> >> fc.pool = user_surfaces.allocator;
> >> fc.width = user_surfaces.width;
> >> // etc.
> >>
> >> mfx_fc = fc->hwctx;
> >> mfx_fc.surfaces = user_surfaces.array;
> >> mfx_fc.nb_surfaces = user_surfaces.count;
> >> mfx_fc.frame_type = user_surfaces.memtype;
> >>
> >> av_hwframe_ctx_init(frames);
> >>
> >> // Do stuff with frames.
> >
> > I wouldn't consider an mfxSession as an entity that could or should be
> > shared between implementations. IMO, this is not a valid use case.
> > A consumer of the mfx API needs to make certain choices regarding
> > the usage of the API, one of which is the way how frames are allocated
> > and managed.
> > This is not something that is meant to be shared between implementations.
> > Even inside ffmpeg, we don't use a single mfx session. We use separate
> > sessions for decoding, encoding and filtering that are joined together
> > via MFXJoinSession.
> > When an (ffmpeg-)API consumer is creating its own MFX session and its
> > own frame allocator implementation, it shouldn't be and allowed
> > scenario to create an ffmpeg hw context using this session.
> 
> The user is also aware of the thread safety rules.  I was giving the example
> above entirely serially to avoid that, but if they were using libmfx
> elsewhere in parallel with the above then indeed they would need a bit more
> care (create another session, some sort of locking to ensure serialisation)
> when passing it to ffmpeg.
> 
> > This shouldn't be considered a public API of ffmpeg because it doesn't
> > make sense to share an mfx session like this.
> 
> So, to clarify, your opinion is that none of hwcontext_qsv.h should be public
> API?  If it were actually removed (not visible in installed headers) then I
> agree that would fix the problem.

I think that exposing the mfx session handle is OK in order to allow an
API user to interop in the following way: you create (and manage) your own
session and your own surface allocation, and use the exposed mfx session
handle to join the ffmpeg-created (and managed) session.
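For illustration, that interop direction might look roughly like this (a loose sketch in the same style as the example quoted above; error handling omitted, and the exact sequence of libmfx calls is an assumption, not a tested implementation):

```
// ffmpeg creates and manages its own QSV device/session.
av_hwdevice_ctx_create(&mfx_ctx, AV_HWDEVICE_TYPE_QSV, ...);
qsv_dc = ((AVHWDeviceContext*)mfx_ctx->data)->hwctx;

// The user creates and manages a completely separate session...
MFXInit(MFX_IMPL_AUTO_ANY, &ver, &user_session);

// ...and only joins it to the exposed ffmpeg session, analogous to
// what ffmpeg itself does internally via MFXJoinSession.
MFXJoinSession(qsv_dc->session, user_session);

// Surface allocation stays with whichever side owns the respective session.
```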

> > Besides that, is there any ffmpeg documentation indicating that
> > the memId field of mfxFrameSurface1 could be casted to VASurfaceId?
> 
> It is user-visible API, and it matched the behaviour of the libmfx examples
> for D3D9 (identical pointer to IDirect3DSurface9) and VAAPI (a compatible
> pointer to a structure with the VASurfaceID as the first member) so doing the
> obvious thing worked.

Also I think it's OK to expose the array of mfxFrameSurface1 pointers,
allowing them to be used in an opaque way (without making use of, or any
assumptions about, the mfxMemId field).

This would allow an API user to interoperate with a joined session, e.g.
taking frames from a joined ffmpeg session as input to a user-created
session doing custom VPP operations, where the output frames come from (and
are managed by) the user-created session.

On the other side, when an API user wants to access and interop with frames
directly, then the proper way would be to use derive-from or derive-to
(VAAPI, D3D9, D3D11) to get an appropriate frames context to work with.
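A sketch of that derive route (assumed usage of the public hwcontext API, in the same loose style as the example quoted above; untested):

```
// Given qsv_dev (QSV device ref) and qsv_frames (AV_PIX_FMT_QSV frames ref):
av_hwdevice_ctx_create_derived(&va_dev, AV_HWDEVICE_TYPE_VAAPI, qsv_dev, 0);
av_hwframe_ctx_create_derived(&va_frames, AV_PIX_FMT_VAAPI,
                              va_dev, qsv_frames, AV_HWFRAME_MAP_DIRECT);

// ...then access individual frames via av_hwframe_map() on the derived
// VAAPI frames context instead of poking at mfxMemId directly.
```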

That's how I would see it. What do you think?

Thanks,
softworkz
Mark Thompson Dec. 29, 2021, 11:22 p.m. UTC | #10
On 28/12/2021 19:04, Soft Works wrote:
>> [...]
> 
> I think that exposing the mfx session handle is OK in order to allow an
> API user to interop in a way that you create (and manage) your own session
> and your own surface allocation, and use the exposes mfx session handle to
> join the ffmpeg-created (and managed) session.

I'm not sure what you gain by making this one direction only.  Setting it is currently supported, and hasn't had an API-breaking internal change like the MemIds.

>>> Besides that, is there any ffmpeg documentation indicating that
>>> the memId field of mfxFrameSurface1 could be casted to VASurfaceId?
>>
>> It is user-visible API, and it matched the behaviour of the libmfx examples
>> for D3D9 (identical pointer to IDirect3DSurface9) and VAAPI (a compatible
>> pointer to a structure with the VASurfaceID as the first member) so doing the
>> obvious thing worked.
> 
> Also I think it's OK to expose the array of mfxSurface1 pointers allowing
> to use them in a opaque way (without making use of and any assumptions
> about the mfxMemId field).
> 
> This would allow an API user to interoperate with a joined session, e.g.
> taking frames from a joined ffmpeg session as input to a user-created
> session doing custom VPP operations where the output frames are from (and
> managed by) the user-created session.
> 
> On the other side, when and API user wants to access and interop with frames
> directly, then the proper way would be to use derive-from or derive-to
> (VAAPI, D3D9, D3D11) to get an appropriate frames context to work with.
> 
> That's how I would see it. What do you think?

I don't much like the idea of a special-case for read-only structures here, unlike all other hwcontexts.

Any users will already have been broken by the API change.  Is there any reason to believe that it will change again?  If not, we could document the new behaviour and say that's the new API.

If we do expect it to change further, then I guess that what you say is probably the least-bad option.  (It would want some extra code to disallow external pools so that users can't get into the bad case again, too.)

- Mark
Soft Works Dec. 30, 2021, 12:17 a.m. UTC | #11
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Thursday, December 30, 2021 12:23 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
> 
> On 28/12/2021 19:04, Soft Works wrote:
> > [...]
> >
> > I think that exposing the mfx session handle is OK in order to allow an
> > API user to interop in a way that you create (and manage) your own session
> > and your own surface allocation, and use the exposes mfx session handle to
> > join the ffmpeg-created (and managed) session.
> 
> I'm not sure what you gain by making this one direction only.  Setting it is
> currently supported, and hasn't had an API-breaking internal change like the
> MemIds.

It's a logical consequence of the MSDK API that there can't be "two
owners" of an mfx session, and that an mfx session is not meant to be shared
between implementations where one implementation tries to "match" the frame
allocation logic of the other one.
Even decoding, processing and encoding inside ffmpeg have their own mfx
sessions.

But a documented way of how it could/should be used never existed either.

You said that one could have inferred such a way from the MSDK samples, but
looking at the MSDK samples for VAAPI, there are already two different ones
using different structures for mfxMemId:

https://github.com/Intel-Media-SDK/MediaSDK/blob/510d19dcace1d8c57567fdd40b557155ab11ab8e/tutorials/common/common_vaapi.cpp#L199-L207

https://github.com/Intel-Media-SDK/MediaSDK/blob/510d19dcace1d8c57567fdd40b557155ab11ab8e/_studio/shared/include/libmfx_allocator_vaapi.h#L33-L39

So, I wouldn't say that it's obvious or straightforward how it would be 
implemented "normally".


> >> [...]
> >
> > Also I think it's OK to expose the array of mfxSurface1 pointers allowing
> > to use them in a opaque way (without making use of and any assumptions
> > about the mfxMemId field).
> >
> > This would allow an API user to interoperate with a joined session, e.g.
> > taking frames from a joined ffmpeg session as input to a user-created
> > session doing custom VPP operations where the output frames are from (and
> > managed by) the user-created session.
> >
> > On the other side, when and API user wants to access and interop with
> frames
> > directly, then the proper way would be to use derive-from or derive-to
> > (VAAPI, D3D9, D3D11) to get an appropriate frames context to work with.
> >
> > That's how I would see it. What do you think?
> I don't much like the idea of a special-case for read-only structures here,
> unlike all other hwcontexts.
> 
> Any users will already have been broken by the API change.  Is there any
> reason to believe that it will change again?  If not, we could document the
> new behaviour and say that's the new API.

I don't know... Maybe oneVPL will start to support Vulkan as a child context,
and maybe mfxHDLPair won't suffice in that case..?

It just appears odd to me to "support" a case where the ownership
of an mfx session is shared and not clearly separated.

> If we do expect it to change further, then I guess that what you say is
> probably the least-bad option.  (It would want some extra code to disallow
> external pools so that users can't get into the bad case again, too.)

I don't think it's bad at all, but rather the most reasonable way, as it
reflects constraints that are implied by the libmfx API.
Though I don't want to destroy somebody's use case, I think that anybody who
uses the qsv context like that would already know what they're doing (and
that it's more a kind of hack than a regular implementation).

Maybe the easiest solution is to leave it as-is and put a note somewhere
about that change.
(While in reality, nobody will read that note in the first place; one would
notice that it doesn't work like before and start researching why..)

Kind regards,
softworkz


> - Mark

Patch

diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
index 26a3a24593..4b6e74ff6f 100644
--- a/libavutil/hwcontext_opencl.c
+++ b/libavutil/hwcontext_opencl.c
@@ -2249,7 +2249,8 @@  static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
 #if CONFIG_LIBMFX
     if (src->format == AV_PIX_FMT_QSV) {
         mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
-        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
+        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
+        va_surface = *(VASurfaceID*)pair->first;
     } else
 #endif
         if (src->format == AV_PIX_FMT_VAAPI) {