diff mbox

[FFmpeg-devel] Add support for RockChip Media Process Platform

Message ID CY4PR1701MB1782AF031C898D261111EAC1B4C20@CY4PR1701MB1782.namprd17.prod.outlook.com
State Superseded
Headers show

Commit Message

LongChair . June 13, 2017, 10:48 a.m. UTC

The use case for this is mostly embedded devices that can use DRM to display video with zero-copy.
Most devices I have played with that support such an implementation allow creating a DRM overlay from that drmprime information with drmModeAddFB2:

Here is a sample usage for it https://github.com/LongChair/mpv/blob/rockchip/video/out/opengl/hwdec_drmprime_drm.c#L103-L137
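To make the drmModeAddFB2 path concrete, here is a minimal sketch of the repacking step. The helper name `drmprime_to_fb2_args` is hypothetical (not part of the patch or of mpv); it only shows how the per-plane strides/offsets map onto the parallel arrays drmModeAddFB2() expects, with the actual libdrm call left as a comment since it needs a live DRM device and GEM handles imported via drmPrimeFDToHandle().

```c
#include <stdint.h>

#define AV_DRMPRIME_NUM_PLANES 4

/* Mirrors the av_drmprime struct proposed by the patch. */
typedef struct av_drmprime {
    int strides[AV_DRMPRIME_NUM_PLANES];
    int offsets[AV_DRMPRIME_NUM_PLANES];
    int fds[AV_DRMPRIME_NUM_PLANES];
    uint32_t format;
} av_drmprime;

/* Hypothetical helper: repack av_drmprime fields into the parallel
 * pitch/offset arrays that drmModeAddFB2() takes. The GEM handles
 * would have to be obtained separately from the fds with
 * drmPrimeFDToHandle() before calling drmModeAddFB2(). */
static void drmprime_to_fb2_args(const av_drmprime *dp,
                                 uint32_t pitches[AV_DRMPRIME_NUM_PLANES],
                                 uint32_t offsets[AV_DRMPRIME_NUM_PLANES])
{
    for (int i = 0; i < AV_DRMPRIME_NUM_PLANES; i++) {
        pitches[i] = (uint32_t)dp->strides[i];
        offsets[i] = (uint32_t)dp->offsets[i];
    }
    /* The real call would then look like:
     * drmModeAddFB2(drm_fd, width, height, dp->format,
     *               handles, pitches, offsets, &fb_id, 0);
     */
}
```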

That also allows some other EGL / dma-buf implementations:

The idea behind making this a non-Rockchip-specific hwdec was that on embedded devices, most manufacturers are heading towards V4L2 hardware decoding implementations. From what I have seen, such V4L2 decoders could make use of this format as well, which would allow a single renderer that is hwdec-agnostic for the rendering part.

Now of course, I am not 100% sure that this would fit all possible implementations. If you feel this should be Rockchip-specific, I'll go back to that, which was version 1 of this patch.


On 13/06/2017 at 12:17, Mark Thompson wrote:

On 13/06/17 07:21, LongChair . wrote:

From: LongChair <LongChair@hotmail.com>

This adds hardware decoding for H.264 / HEVC / VP8 using the Rockchip MPP API.
Will return frames holding an av_drmprime struct in buf[3] that allows DRM / dma-buf usage.
Was tested on RK3288 (TinkerBoard) and RK3328.

Additions from patch v1
 - Change AV_PIX_FMT_RKMPP to AV_PIX_FMT_DRMPRIME as any decoder able to output drmprime structures could use that pixel format.
 Changelog              |   1 +
 configure              |  12 ++
 libavcodec/Makefile    |   5 +
 libavcodec/allcodecs.c |   6 +
 libavcodec/drmprime.h  |  17 ++
 libavcodec/rkmppdec.c  | 523 +++++++++++++++++++++++++++++++++++++++++++++++++
 libavutil/pixdesc.c    |   4 +
 libavutil/pixfmt.h     |   5 +
 8 files changed, 573 insertions(+)
 create mode 100644 libavcodec/drmprime.h
 create mode 100644 libavcodec/rkmppdec.c

I'm not sure that a structure like this should be baked into the API/ABI now as a generic thing.  Are you confident that it is sufficient to represent any picture in a DRM object which might be used in future?  (Including things like tiling modes, field splitting, other craziness that GPU makers dream up?)

Can you explain the cases you are using this in?  Are you intending to make other components with input / output in this format?

With just this decoder in isolation, it would seem preferable to me to make a structure specific to the API for now (or use the existing one - is MppFrame sufficient?).  If later there are multiple implementations and need for a common structure like this then it will be clearer what the requirements are.  Compare VAAPI and QSV (possibly also CUDA, CUVID, VDPAU?) - they can expose DRM objects, but don't currently in the ffmpeg API because any consumers which want them in that form generally prefer the wrapping object including the relevant metadata anyway.

Also, zero is a valid file descriptor.
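Mark's point about file descriptor 0 refers to the `(0) if unused` convention in the patch's drmprime.h: stdin is fd 0, and dup'ing or closing it hands that number out as a perfectly valid dma-buf fd. A common convention (a suggestion here, not what the patch does) is to initialize unused fds to -1 and test with `>= 0`:

```c
/* Using 0 as the "unused" sentinel conflates a real plane fd with
 * stdin. Initializing unused entries to -1 avoids the ambiguity:
 * any non-negative value is then a genuine descriptor. */
static int plane_fd_is_set(int fd)
{
    return fd >= 0; /* 0 is a valid descriptor; only -1 means unused */
}
```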


- Mark



diff --git a/Changelog b/Changelog
index 3533bdc..498e433 100644
--- a/Changelog
+++ b/Changelog
@@ -52,6 +52,7 @@  version 3.3:
 - Removed asyncts filter (use af_aresample instead)
 - Intel QSV-accelerated VP8 video decoding
 - VAAPI-accelerated deinterlacing
+- Addition of Rockchip MPP hardware decoding

 version 3.2:
diff --git a/configure b/configure
index 4ec8f21..883fc84 100755
--- a/configure
+++ b/configure
@@ -304,6 +304,7 @@  External library support:
   --disable-nvenc          disable Nvidia video encoding code [autodetect]
   --enable-omx             enable OpenMAX IL code [no]
   --enable-omx-rpi         enable OpenMAX IL code for Raspberry Pi [no]
+  --enable-rkmpp           enable Rockchip Media Process Platform code [no]
   --disable-vaapi          disable Video Acceleration API (mainly Unix/Intel) code [autodetect]
   --disable-vda            disable Apple Video Decode Acceleration code [autodetect]
   --disable-vdpau          disable Nvidia Video Decode and Presentation API for Unix code [autodetect]
@@ -1607,6 +1608,7 @@  HWACCEL_LIBRARY_LIST="
+    rkmpp

@@ -2616,6 +2618,7 @@  h264_dxva2_hwaccel_select="h264_decoder"
@@ -2634,6 +2637,7 @@  hevc_mediacodec_hwaccel_deps="mediacodec"
 hevc_dxva2_hwaccel_deps="dxva2 DXVA_PicParams_HEVC"
 hevc_vaapi_hwaccel_deps="vaapi VAPictureParameterBufferHEVC"
 hevc_vdpau_hwaccel_deps="vdpau VdpPictureInfoHEVC"
@@ -2696,6 +2700,7 @@  vp9_cuvid_hwaccel_deps="cuda cuvid"
 vp9_d3d11va_hwaccel_deps="d3d11va DXVA_PicParams_VP9"
 vp9_dxva2_hwaccel_deps="dxva2 DXVA_PicParams_VP9"

Why do these hwaccels exist?  They don't appear to do anything, since you only have hardware surface output anyway.

diff --git a/libavcodec/drmprime.h b/libavcodec/drmprime.h
new file mode 100644
index 0000000..44816db
--- /dev/null
+++ b/libavcodec/drmprime.h
@@ -0,0 +1,17 @@ 
+#include <stdint.h>
+#define AV_DRMPRIME_NUM_PLANES  4   // maximum number of planes
+typedef struct av_drmprime {
+    int strides[AV_DRMPRIME_NUM_PLANES];    // stride in bytes of the corresponding plane
+    int offsets[AV_DRMPRIME_NUM_PLANES];    // offset in bytes of the corresponding plane from the start of the buffer
+    int fds[AV_DRMPRIME_NUM_PLANES];        // file descriptor for the plane, (0) if unused.
+    uint32_t format;                        // FourCC DRM format of the image
+} av_drmprime;
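For illustration, here is how a decoder might fill this struct for a 1920x1080 NV12 frame exported as a single dma-buf. All concrete values (the fd, the tight packing with the CbCr plane immediately after luma) are hypothetical; real MPP buffers may have padding and alignment constraints, which is exactly the kind of detail Mark's tiling/metadata question is about.

```c
#include <stdint.h>

#define AV_DRMPRIME_NUM_PLANES 4
#define DRM_FORMAT_NV12 0x3231564e /* fourcc 'N','V','1','2', little-endian */

typedef struct av_drmprime {
    int strides[AV_DRMPRIME_NUM_PLANES];
    int offsets[AV_DRMPRIME_NUM_PLANES];
    int fds[AV_DRMPRIME_NUM_PLANES];
    uint32_t format;
} av_drmprime;

/* Illustrative only: describe an NV12 buffer exported as one dma-buf fd,
 * assuming tight packing with the interleaved CbCr plane right after Y. */
static void fill_nv12_desc(av_drmprime *dp, int fd, int width, int height)
{
    dp->format     = DRM_FORMAT_NV12;
    dp->fds[0]     = dp->fds[1] = fd;  /* both planes share one buffer */
    dp->strides[0] = dp->strides[1] = width;
    dp->offsets[0] = 0;
    dp->offsets[1] = width * height;   /* CbCr plane starts after Y */
    for (int i = 2; i < AV_DRMPRIME_NUM_PLANES; i++) {
        dp->fds[i]     = 0;            /* unused, per the patch's convention */
        dp->strides[i] = 0;
        dp->offsets[i] = 0;
    }
}
```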