From patchwork Tue Apr 12 15:53:16 2022
X-Patchwork-Submitter: Elie Roudninski
X-Patchwork-Id: 35291
From: xademax@gmail.com
To: ffmpeg-devel@ffmpeg.org
Cc: Elie ROUDNINSKI, Jonas Karlman
Date: Tue, 12 Apr 2022 17:53:16 +0200
Message-Id: <20220412155316.22417-3-xademax@gmail.com>
In-Reply-To: <20220412155316.22417-1-xademax@gmail.com>
References: <20220412155316.22417-1-xademax@gmail.com>
X-Mailer: git-send-email 2.35.1
Subject: [FFmpeg-devel] [PATCH v2 2/2] avcodec: add common V4L2 request API code

From: Jonas Karlman
Co-authored-by: Elie ROUDNINSKI
---
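Build note: the V4L2 request code is disabled by default and, as the configure hunk below enforces, requires libdrm. As a rough illustration only (the exact option set depends on the target system), a build enabling it might be configured as:

    ./configure --enable-libdrm --enable-v4l2-request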
 configure                 |   7 +
 libavcodec/Makefile       |   3 +-
 libavcodec/hwconfig.h     |   2 +
 libavcodec/v4l2_buffers.c |   8 +-
 libavcodec/v4l2_context.c |  80 ++--
 libavcodec/v4l2_context.h |   5 +-
 libavcodec/v4l2_device.c  | 229 ++++++++++
 libavcodec/v4l2_device.h  |  60 +++
 libavcodec/v4l2_m2m.c     | 119 ++---
 libavcodec/v4l2_m2m.h     |   4 +-
 libavcodec/v4l2_m2m_dec.c |  10 +-
 libavcodec/v4l2_m2m_enc.c |   8 +-
 libavcodec/v4l2_request.c | 892 ++++++++++++++++++++++++++++++++++++++
 libavcodec/v4l2_request.h |  87 ++++
 14 files changed, 1382 insertions(+), 132 deletions(-)
 create mode 100644 libavcodec/v4l2_device.c
 create mode 100644 libavcodec/v4l2_device.h
 create mode 100644 libavcodec/v4l2_request.c
 create mode 100644 libavcodec/v4l2_request.h

diff --git a/configure b/configure index 9c8965852b..c74cad782b 100755 --- a/configure +++ b/configure @@ -347,6 +347,7 @@ External library support: --enable-omx-rpi enable OpenMAX IL code for Raspberry Pi [no] --enable-rkmpp enable Rockchip Media Process Platform code [no] --disable-v4l2-m2m disable V4L2 mem2mem code [autodetect] + --enable-v4l2-request enable V4L2 request API code (experimental) [no] --disable-vaapi disable Video Acceleration API (mainly Unix/Intel) code [autodetect] --disable-vdpau disable Nvidia Video Decode and Presentation API for Unix code [autodetect] --disable-videotoolbox disable VideoToolbox code [autodetect] @@ -1920,6 +1921,7 @@ HWACCEL_LIBRARY_LIST=" mmal omx opencl + v4l2_request " DOCUMENT_LIST=" @@ -3006,6 +3008,7 @@ d3d11va_deps="dxva_h ID3D11VideoDecoder ID3D11VideoContext" dxva2_deps="dxva2api_h DXVA2_ConfigPictureDecode ole32 user32" ffnvcodec_deps_any="libdl LoadLibrary" nvdec_deps="ffnvcodec" +v4l2_request_deps="linux_videodev2_h linux_media_h v4l2_timeval_to_ns libdrm" vaapi_x11_deps="xlib_x11" videotoolbox_hwaccel_deps="videotoolbox pthreads" videotoolbox_hwaccel_extralibs="-framework QuartzCore" @@ -6711,6 +6714,8 @@ enabled rkmpp && { require_pkg_config rkmpp rockchip_mpp rockchip/r { enabled libdrm || die "ERROR: rkmpp requires --enable-libdrm"; } } +enabled v4l2_request && { enabled libdrm || + die "ERROR: v4l2-request requires --enable-libdrm"; } enabled vapoursynth && require_pkg_config vapoursynth "vapoursynth-script >= 42" VSScript.h vsscript_init @@ -6793,6 +6798,8 @@ if enabled v4l2_m2m; then check_cc vp9_v4l2_m2m linux/videodev2.h "int i = V4L2_PIX_FMT_VP9;" fi +check_func_headers "linux/media.h linux/videodev2.h" v4l2_timeval_to_ns + check_headers sys/videoio.h test_code cc
sys/videoio.h "struct v4l2_frmsizeenum vfse; vfse.discrete.width = 0;" && enable_sanitized struct_v4l2_frmivalenum_discrete diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 90f46035d9..378b1cdcb0 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -159,7 +159,8 @@ OBJS-$(CONFIG_VIDEODSP) += videodsp.o OBJS-$(CONFIG_VP3DSP) += vp3dsp.o OBJS-$(CONFIG_VP56DSP) += vp56dsp.o OBJS-$(CONFIG_VP8DSP) += vp8dsp.o -OBJS-$(CONFIG_V4L2_M2M) += v4l2_m2m.o v4l2_context.o v4l2_buffers.o v4l2_fmt.o +OBJS-$(CONFIG_V4L2_M2M) += v4l2_device.o v4l2_m2m.o v4l2_context.o v4l2_buffers.o v4l2_fmt.o +OBJS-$(CONFIG_V4L2_REQUEST) += v4l2_device.o v4l2_request.o OBJS-$(CONFIG_WMA_FREQS) += wma_freqs.o OBJS-$(CONFIG_WMV2DSP) += wmv2dsp.o diff --git a/libavcodec/hwconfig.h b/libavcodec/hwconfig.h index 721424912c..00864efc27 100644 --- a/libavcodec/hwconfig.h +++ b/libavcodec/hwconfig.h @@ -78,6 +78,8 @@ typedef struct AVCodecHWConfigInternal { HW_CONFIG_HWACCEL(1, 1, 1, VIDEOTOOLBOX, VIDEOTOOLBOX, ff_ ## codec ## _videotoolbox_hwaccel) #define HWACCEL_D3D11VA(codec) \ HW_CONFIG_HWACCEL(0, 0, 1, D3D11VA_VLD, NONE, ff_ ## codec ## _d3d11va_hwaccel) +#define HWACCEL_V4L2REQUEST(codec) \ + HW_CONFIG_HWACCEL(1, 0, 0, DRM_PRIME, DRM, ff_ ## codec ## _v4l2request_hwaccel) #define HW_CONFIG_ENCODER(device, frames, ad_hoc, format, device_type_) \ &(const AVCodecHWConfigInternal) { \ diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c index 3f5471067a..49ebd46e76 100644 --- a/libavcodec/v4l2_buffers.c +++ b/libavcodec/v4l2_buffers.c @@ -504,7 +504,7 @@ int ff_v4l2_buffer_initialize(V4L2Buffer* avbuf, int index) avbuf->buf.m.planes = avbuf->planes; } - ret = ioctl(buf_to_m2mctx(avbuf)->fd, VIDIOC_QUERYBUF, &avbuf->buf); + ret = ioctl(buf_to_m2mctx(avbuf)->device.fd, VIDIOC_QUERYBUF, &avbuf->buf); if (ret < 0) return AVERROR(errno); @@ -528,12 +528,12 @@ int ff_v4l2_buffer_initialize(V4L2Buffer* avbuf, int index) avbuf->plane_info[i].length = avbuf->buf.m.planes[i].length; avbuf->plane_info[i].mm_addr = mmap(NULL, avbuf->buf.m.planes[i].length, PROT_READ | PROT_WRITE, MAP_SHARED, - buf_to_m2mctx(avbuf)->fd, avbuf->buf.m.planes[i].m.mem_offset); + buf_to_m2mctx(avbuf)->device.fd, avbuf->buf.m.planes[i].m.mem_offset); } else { avbuf->plane_info[i].length = avbuf->buf.length; avbuf->plane_info[i].mm_addr = mmap(NULL, avbuf->buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, - buf_to_m2mctx(avbuf)->fd, avbuf->buf.m.offset); + buf_to_m2mctx(avbuf)->device.fd, avbuf->buf.m.offset); } if (avbuf->plane_info[i].mm_addr == MAP_FAILED) @@ -563,7 +563,7 @@ int ff_v4l2_buffer_enqueue(V4L2Buffer* avbuf) avbuf->buf.flags = avbuf->flags; - ret = ioctl(buf_to_m2mctx(avbuf)->fd, VIDIOC_QBUF, &avbuf->buf); + ret = ioctl(buf_to_m2mctx(avbuf)->device.fd, VIDIOC_QBUF, &avbuf->buf); if (ret < 0) return AVERROR(errno); diff --git a/libavcodec/v4l2_context.c b/libavcodec/v4l2_context.c index e891649f92..22d38ce324 100644 --- a/libavcodec/v4l2_context.c +++ b/libavcodec/v4l2_context.c @@ -72,7 +72,7 @@ static AVRational v4l2_get_sar(V4L2Context *ctx) memset(&cropcap, 0, sizeof(cropcap)); cropcap.type = ctx->type; - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_CROPCAP, &cropcap); + ret = ioctl(ctx_to_m2mctx(ctx)->device.fd, VIDIOC_CROPCAP, &cropcap); if (ret) return sar; @@ -161,7 +161,7 @@ static int v4l2_start_decode(V4L2Context *ctx) }; int ret; - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_DECODER_CMD, &cmd); + ret = ioctl(ctx_to_m2mctx(ctx)->device.fd, VIDIOC_DECODER_CMD, &cmd); if (ret) return AVERROR(errno); @@ -180,7 
+180,7 @@ static int v4l2_handle_event(V4L2Context *ctx) struct v4l2_event evt = { 0 }; int ret; - ret = ioctl(s->fd, VIDIOC_DQEVENT, &evt); + ret = ioctl(s->device.fd, VIDIOC_DQEVENT, &evt); if (ret < 0) { av_log(logger(ctx), AV_LOG_ERROR, "%s VIDIOC_DQEVENT\n", ctx->name); return 0; @@ -194,7 +194,7 @@ static int v4l2_handle_event(V4L2Context *ctx) if (evt.type != V4L2_EVENT_SOURCE_CHANGE) return 0; - ret = ioctl(s->fd, VIDIOC_G_FMT, &cap_fmt); + ret = ioctl(s->device.fd, VIDIOC_G_FMT, &cap_fmt); if (ret) { av_log(logger(ctx), AV_LOG_ERROR, "%s VIDIOC_G_FMT\n", s->capture.name); return 0; @@ -234,7 +234,7 @@ static int v4l2_stop_decode(V4L2Context *ctx) }; int ret; - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_DECODER_CMD, &cmd); + ret = ioctl(ctx_to_m2mctx(ctx)->device.fd, VIDIOC_DECODER_CMD, &cmd); if (ret) { /* DECODER_CMD is optional */ if (errno == ENOTTY) @@ -254,7 +254,7 @@ static int v4l2_stop_encode(V4L2Context *ctx) }; int ret; - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_ENCODER_CMD, &cmd); + ret = ioctl(ctx_to_m2mctx(ctx)->device.fd, VIDIOC_ENCODER_CMD, &cmd); if (ret) { /* ENCODER_CMD is optional */ if (errno == ENOTTY) @@ -273,7 +273,7 @@ static V4L2Buffer* v4l2_dequeue_v4l2buf(V4L2Context *ctx, int timeout) V4L2Buffer *avbuf; struct pollfd pfd = { .events = POLLIN | POLLRDNORM | POLLPRI | POLLOUT | POLLWRNORM, /* default blocking capture */ - .fd = ctx_to_m2mctx(ctx)->fd, + .fd = ctx_to_m2mctx(ctx)->device.fd, }; int i, ret; @@ -380,7 +380,7 @@ dequeue: buf.m.planes = planes; } - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_DQBUF, &buf); + ret = ioctl(ctx_to_m2mctx(ctx)->device.fd, VIDIOC_DQBUF, &buf); if (ret) { if (errno != EAGAIN) { ctx->done = 1; @@ -456,10 +456,10 @@ static int v4l2_release_buffers(V4L2Context* ctx) } } - return ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_REQBUFS, &req); + return ioctl(ctx_to_m2mctx(ctx)->device.fd, VIDIOC_REQBUFS, &req); } -static inline int v4l2_try_raw_format(V4L2Context* ctx, enum AVPixelFormat pixfmt) +static inline int v4l2_try_raw_format(V4L2DeviceVideo *dev, V4L2Context* ctx, enum AVPixelFormat pixfmt) { struct v4l2_format *fmt = &ctx->format; uint32_t v4l2_fmt; @@ -476,14 +476,14 @@ static inline int v4l2_try_raw_format(V4L2Context* ctx, enum AVPixelFormat pixfm fmt->type = ctx->type; - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_TRY_FMT, fmt); + ret = ioctl(dev->fd, VIDIOC_TRY_FMT, fmt); if (ret) return AVERROR(EINVAL); return 0; } -static int v4l2_get_raw_format(V4L2Context* ctx, enum AVPixelFormat *p) +static int v4l2_get_raw_format(V4L2DeviceVideo *dev, V4L2Context* ctx, enum AVPixelFormat *p) { enum AVPixelFormat pixfmt = ctx->av_pix_fmt; struct v4l2_fmtdesc fdesc; @@ -493,18 +493,18 @@ static int v4l2_get_raw_format(V4L2Context* ctx, enum AVPixelFormat *p) fdesc.type = ctx->type; if (pixfmt != AV_PIX_FMT_NONE) { - ret = v4l2_try_raw_format(ctx, pixfmt); + ret = v4l2_try_raw_format(dev, ctx, pixfmt); if (!ret) return 0; } for (;;) { - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_ENUM_FMT, &fdesc); + ret = ioctl(dev->fd, VIDIOC_ENUM_FMT, &fdesc); if (ret) return AVERROR(EINVAL); pixfmt = ff_v4l2_format_v4l2_to_avfmt(fdesc.pixelformat, AV_CODEC_ID_RAWVIDEO); - ret = v4l2_try_raw_format(ctx, pixfmt); + ret = v4l2_try_raw_format(dev, ctx, pixfmt); if (ret){ fdesc.index++; continue; @@ -518,7 +518,7 @@ static int v4l2_get_raw_format(V4L2Context* ctx, enum AVPixelFormat *p) return AVERROR(EINVAL); } -static int v4l2_get_coded_format(V4L2Context* ctx, uint32_t *p) +static int v4l2_get_coded_format(V4L2DeviceVideo *dev, V4L2Context* ctx, 
uint32_t *p) { struct v4l2_fmtdesc fdesc; uint32_t v4l2_fmt; @@ -526,17 +526,27 @@ static int v4l2_get_coded_format(V4L2Context* ctx, uint32_t *p) /* translate to a valid v4l2 format */ v4l2_fmt = ff_v4l2_format_avcodec_to_v4l2(ctx->av_codec_id); - if (!v4l2_fmt) + if (!v4l2_fmt) { + av_log(logger(ctx), AV_LOG_ERROR, "Could not get v4l2 format from avcodec %d\n", ctx->av_codec_id); return AVERROR(EINVAL); + } else { + av_log(logger(ctx), AV_LOG_TRACE, "Requested v4l2 format %d\n", v4l2_fmt); + } /* check if the driver supports this format */ memset(&fdesc, 0, sizeof(fdesc)); fdesc.type = ctx->type; for (;;) { - ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_ENUM_FMT, &fdesc); - if (ret) - return AVERROR(EINVAL); + av_log(logger(ctx), AV_LOG_TRACE, "Trying to retrieve format for index %d with type %d\n", fdesc.index, ctx->type); + ret = ioctl(dev->fd, VIDIOC_ENUM_FMT, &fdesc); + if (ret) { + if (errno != EINVAL) { + av_log(logger(ctx), AV_LOG_ERROR, "VIDIOC_ENUM_FMT for index %d failed: %s (%d)\n", fdesc.index, av_err2str(AVERROR(errno)), errno); + } + return AVERROR(errno); + } + av_log(logger(ctx), AV_LOG_TRACE, "Retrieved v4l2 format %s (%d)\n", fdesc.description, fdesc.pixelformat); if (fdesc.pixelformat == v4l2_fmt) break; @@ -560,7 +570,7 @@ int ff_v4l2_context_set_status(V4L2Context* ctx, uint32_t cmd) int type = ctx->type; int ret; - ret = ioctl(ctx_to_m2mctx(ctx)->fd, cmd, &type); + ret = ioctl(ctx_to_m2mctx(ctx)->device.fd, cmd, &type); if (ret < 0) return AVERROR(errno); @@ -659,15 +669,19 @@ int ff_v4l2_context_dequeue_packet(V4L2Context* ctx, AVPacket* pkt) return ff_v4l2_buffer_buf_to_avpkt(pkt, avbuf); } -int ff_v4l2_context_get_format(V4L2Context* ctx, int probe) +int ff_v4l2_context_get_format(V4L2DeviceVideo *dev, V4L2Context* ctx, int probe) { struct v4l2_format_update fmt = { 0 }; int ret; if (ctx->av_codec_id == AV_CODEC_ID_RAWVIDEO) { - ret = v4l2_get_raw_format(ctx, &fmt.av_fmt); - if (ret) + ret = v4l2_get_raw_format(dev, ctx, &fmt.av_fmt); + if (ret) { + av_log(logger(ctx), AV_LOG_ERROR, "Could not get raw format\n"); return ret; + } else { + av_log(logger(ctx), AV_LOG_DEBUG, "Retrieved raw format %d\n", fmt.av_fmt); + } fmt.update_avfmt = !probe; v4l2_save_to_context(ctx, &fmt); @@ -676,19 +690,23 @@ int ff_v4l2_context_get_format(V4L2Context* ctx, int probe) return ret; } - ret = v4l2_get_coded_format(ctx, &fmt.v4l2_fmt); - if (ret) + ret = v4l2_get_coded_format(dev, ctx, &fmt.v4l2_fmt); + if (ret) { + av_log(logger(ctx), AV_LOG_ERROR, "Could not get v4l2 format\n"); return ret; + } else { + av_log(logger(ctx), AV_LOG_DEBUG, "Retrieved v4l2 format %d\n", fmt.v4l2_fmt); + } fmt.update_v4l2 = 1; v4l2_save_to_context(ctx, &fmt); - return ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_TRY_FMT, &ctx->format); + return ioctl(dev->fd, VIDIOC_TRY_FMT, &ctx->format); } -int ff_v4l2_context_set_format(V4L2Context* ctx) +int ff_v4l2_context_set_format(V4L2DeviceVideo *dev, V4L2Context* ctx) { - return ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_S_FMT, &ctx->format); + return ioctl(dev->fd, VIDIOC_S_FMT, &ctx->format); } void ff_v4l2_context_release(V4L2Context* ctx) @@ -716,7 +734,7 @@ int ff_v4l2_context_init(V4L2Context* ctx) return AVERROR_PATCHWELCOME; } - ret = ioctl(s->fd, VIDIOC_G_FMT, &ctx->format); + ret = ioctl(s->device.fd, VIDIOC_G_FMT, &ctx->format); if (ret) av_log(logger(ctx), AV_LOG_ERROR, "%s VIDIOC_G_FMT failed\n", ctx->name); @@ -724,7 +742,7 @@ int ff_v4l2_context_init(V4L2Context* ctx) req.count = ctx->num_buffers; req.memory = V4L2_MEMORY_MMAP; req.type = ctx->type; - ret = 
ioctl(s->fd, VIDIOC_REQBUFS, &req); + ret = ioctl(s->device.fd, VIDIOC_REQBUFS, &req); if (ret < 0) { av_log(logger(ctx), AV_LOG_ERROR, "%s VIDIOC_REQBUFS failed: %s\n", ctx->name, strerror(errno)); return AVERROR(errno); diff --git a/libavcodec/v4l2_context.h b/libavcodec/v4l2_context.h index 6f7460c89a..0acf3b5ff2 100644 --- a/libavcodec/v4l2_context.h +++ b/libavcodec/v4l2_context.h @@ -33,6 +33,7 @@ #include "codec_id.h" #include "packet.h" #include "v4l2_buffers.h" +#include "v4l2_device.h" typedef struct V4L2Context { /** @@ -109,7 +110,7 @@ int ff_v4l2_context_init(V4L2Context* ctx); * @param[in] ctx A pointer to a V4L2Context. See V4L2Context description for required variables. * @return 0 in case of success, a negative value representing the error otherwise. */ -int ff_v4l2_context_set_format(V4L2Context* ctx); +int ff_v4l2_context_set_format(V4L2DeviceVideo *dev, V4L2Context* ctx); /** * Queries the driver for a valid v4l2 format and copies it to the context. @@ -118,7 +119,7 @@ int ff_v4l2_context_set_format(V4L2Context* ctx); * @param[in] probe Probe only and ignore changes to the format. * @return 0 in case of success, a negative value representing the error otherwise. */ -int ff_v4l2_context_get_format(V4L2Context* ctx, int probe); +int ff_v4l2_context_get_format(V4L2DeviceVideo* dev, V4L2Context* ctx, int probe); /** * Releases a V4L2Context. diff --git a/libavcodec/v4l2_device.c b/libavcodec/v4l2_device.c new file mode 100644 index 0000000000..4d55ae2c3d --- /dev/null +++ b/libavcodec/v4l2_device.c @@ -0,0 +1,229 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "libavutil/error.h" + +#include "v4l2_device.h" + +static int v4l2_device_open_video(const char *devname, struct v4l2_capability *cap) +{ + int fd, ret; + + fd = open(devname, O_RDWR | O_NONBLOCK, 0); + if (fd < 0) + return AVERROR(errno); + + memset(cap, 0, sizeof(*cap)); + ret = ioctl(fd, VIDIOC_QUERYCAP, cap); + if (ret < 0) { + ret = errno; + close(fd); + return AVERROR(ret); + } + + return fd; +} + +int ff_v4l2_device_probe_video(AVCodecContext *avctx, + struct V4L2DeviceVideo *device, + void *opaque, + bool (*checkdev)(struct V4L2DeviceVideo *device, void *opaque)) +{ + int ret = AVERROR(EINVAL); + struct dirent *entry; + DIR *dirp; + struct V4L2DeviceVideo tmpdev = { + .devname = { 0 }, + }; + + dirp = opendir("/dev"); + if (!dirp) + return AVERROR(errno); + + for (entry = readdir(dirp); entry; entry = readdir(dirp)) { + + if (strncmp(entry->d_name, "video", 5)) + continue; + + snprintf(tmpdev.devname, sizeof(tmpdev.devname), "/dev/%s", entry->d_name); + av_log(avctx, AV_LOG_INFO, "Probing device %s\n", tmpdev.devname); + ret = v4l2_device_open_video(tmpdev.devname, &tmpdev.capability); + if (ret < 0) + continue; + tmpdev.fd = ret; + + if (checkdev(&tmpdev, opaque)) { + break; + } else { + close(tmpdev.fd); + ret = AVERROR(EINVAL); + } + } + + closedir(dirp); + + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "Could not find a valid video device\n"); + return ret; + } + + *device = tmpdev; + av_log(avctx, AV_LOG_INFO, "Using video device %s\n", device->devname); + + return 0; +} + +static int v4l2_device_open_media(const char *devname, struct media_device_info *info) +{ + int fd, ret; + + fd = open(devname, O_RDWR, 0); + if (fd < 0) + return AVERROR(errno); + + memset(info, 0, sizeof(*info)); + ret = ioctl(fd, MEDIA_IOC_DEVICE_INFO, info); + if (ret < 0) { + ret = errno; + close(fd); + return AVERROR(ret); + } + + return fd; +} + +static int v4l2_device_check_media(AVCodecContext *avctx, int media_fd, + uint32_t video_major, uint32_t video_minor) +{ + int ret; + struct media_v2_topology topology = {0}; + struct media_v2_interface *interfaces = NULL; + + ret = ioctl(media_fd, MEDIA_IOC_G_TOPOLOGY, &topology); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: get media topology failed, %s (%d)\n", + __func__, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + if (topology.num_interfaces <= 0) { + av_log(avctx, AV_LOG_ERROR, "%s: media device has no interfaces\n", __func__); + return AVERROR(EINVAL); + } + + interfaces = av_mallocz(topology.num_interfaces * sizeof(struct media_v2_interface)); + if (!interfaces) { + av_log(avctx, AV_LOG_ERROR, "%s: allocating media interface struct failed\n", __func__); + return AVERROR(ENOMEM); + } + + topology.ptr_interfaces = (__u64)(uintptr_t)interfaces; + ret = ioctl(media_fd, MEDIA_IOC_G_TOPOLOGY, &topology); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: get media topology failed, %s (%d)\n", + __func__, av_err2str(AVERROR(errno)), errno); + av_freep(&interfaces); + return AVERROR(errno); + } + + ret = AVERROR(EINVAL); + for (int i = 0; i < topology.num_interfaces; i++) { + if (interfaces[i].intf_type != MEDIA_INTF_T_V4L_VIDEO) + continue; + + av_log(avctx, AV_LOG_INFO, "%s: media device number 
%d:%d\n", __func__, interfaces[i].devnode.major, interfaces[i].devnode.minor); + if (interfaces[i].devnode.major == video_major && interfaces[i].devnode.minor == video_minor) { + ret = 0; + break; + } + } + + av_freep(&interfaces); + return ret; +} + +int ff_v4l2_device_retrieve_media(AVCodecContext *avctx, + const struct V4L2DeviceVideo *video_device, + struct V4L2DeviceMedia *media_device) +{ + int fd, ret = AVERROR(EINVAL); + struct dirent *entry; + DIR *dirp; + char devname[PATH_MAX] = { 0 }; + struct media_device_info info; + + struct stat statbuf; + memset(&statbuf, 0, sizeof(statbuf)); + ret = fstat(video_device->fd, &statbuf); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: get video device stats failed, %s (%d)\n", + __func__, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + av_log(avctx, AV_LOG_INFO, "%s: video device number %d:%d\n", __func__, major(statbuf.st_dev), minor(statbuf.st_dev)); + + dirp = opendir("/dev"); + if (!dirp) + return AVERROR(errno); + + ret = AVERROR(EINVAL); + for (entry = readdir(dirp); entry; entry = readdir(dirp)) { + + if (strncmp(entry->d_name, "media", 5)) + continue; + + snprintf(devname, sizeof(devname), "/dev/%s", entry->d_name); + av_log(avctx, AV_LOG_INFO, "Probing device %s\n", devname); + ret = v4l2_device_open_media(devname, &info); + if (ret < 0) + continue; + + fd = ret; + ret = v4l2_device_check_media(avctx, fd, major(statbuf.st_rdev), minor(statbuf.st_rdev)); + if (ret == 0) + break; + + close(ret); + } + + closedir(dirp); + + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "Could not find a valid media device\n"); + return ret; + } + + av_log(avctx, AV_LOG_INFO, "Using media device %s\n", devname); + strcpy(media_device->devname, devname); + media_device->fd = fd; + media_device->info = info; + + return 0; +} diff --git a/libavcodec/v4l2_device.h b/libavcodec/v4l2_device.h new file mode 100644 index 0000000000..958d7b5f3c --- /dev/null +++ b/libavcodec/v4l2_device.h @@ -0,0 +1,60 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ +#ifndef AVCODEC_V4L2_DEVICE_H +#define AVCODEC_V4L2_DEVICE_H + +#include + +#include +#include + +#include "libavcodec/avcodec.h" + +typedef struct V4L2DeviceVideo { + char devname[PATH_MAX]; + int fd; + struct v4l2_capability capability; +} V4L2DeviceVideo; + +typedef struct V4L2DeviceMedia { + char devname[PATH_MAX]; + int fd; + struct media_device_info info; +} V4L2DeviceMedia; + +/* + * Probes for a valid video device. + * + * @return 0 in case of success, a negative value representing the error otherwise. + */ +int ff_v4l2_device_probe_video(AVCodecContext *avctx, + struct V4L2DeviceVideo *device, + void *opaque, + bool (*checkdev)(struct V4L2DeviceVideo *device, void *opaque)); + +/* + * Retrieves the media device associated with a video device. 
+ * + * @return 0 in case of success, a negative value representing the error otherwise. + */ + +int ff_v4l2_device_retrieve_media(AVCodecContext *avctx, + const struct V4L2DeviceVideo *video_device, + struct V4L2DeviceMedia *media_device); + +#endif /* AVCODEC_V4L2_DEVICE_H */ diff --git a/libavcodec/v4l2_m2m.c b/libavcodec/v4l2_m2m.c index 3178ef06b8..b7ec49ec41 100644 --- a/libavcodec/v4l2_m2m.c +++ b/libavcodec/v4l2_m2m.c @@ -35,7 +35,7 @@ #include "v4l2_fmt.h" #include "v4l2_m2m.h" -static inline int v4l2_splane_video(struct v4l2_capability *cap) +static inline int v4l2_splane_video(const struct v4l2_capability *cap) { if (cap->capabilities & (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT) && cap->capabilities & V4L2_CAP_STREAMING) @@ -47,7 +47,7 @@ static inline int v4l2_splane_video(struct v4l2_capability *cap) return 0; } -static inline int v4l2_mplane_video(struct v4l2_capability *cap) +static inline int v4l2_mplane_video(const struct v4l2_capability *cap) { if (cap->capabilities & (V4L2_CAP_VIDEO_CAPTURE_MPLANE | V4L2_CAP_VIDEO_OUTPUT_MPLANE) && cap->capabilities & V4L2_CAP_STREAMING) @@ -59,11 +59,9 @@ static inline int v4l2_mplane_video(struct v4l2_capability *cap) return 0; } -static int v4l2_prepare_contexts(V4L2m2mContext *s, int probe) +static int v4l2_prepare_contexts(V4L2m2mContext *s, const struct v4l2_capability *cap, int probe) { - struct v4l2_capability cap; void *log_ctx = s->avctx; - int ret; s->capture.done = s->output.done = 0; s->capture.name = "capture"; @@ -71,23 +69,18 @@ static int v4l2_prepare_contexts(V4L2m2mContext *s, int probe) atomic_init(&s->refcount, 0); sem_init(&s->refsync, 0, 0); - memset(&cap, 0, sizeof(cap)); - ret = ioctl(s->fd, VIDIOC_QUERYCAP, &cap); - if (ret < 0) - return ret; - av_log(log_ctx, probe ? AV_LOG_DEBUG : AV_LOG_INFO, - "driver '%s' on card '%s' in %s mode\n", cap.driver, cap.card, - v4l2_mplane_video(&cap) ? "mplane" : - v4l2_splane_video(&cap) ? "splane" : "unknown"); + "driver '%s' on card '%s' in %s mode\n", cap->driver, cap->card, + v4l2_mplane_video(cap) ? "mplane" : + v4l2_splane_video(cap) ? 
"splane" : "unknown"); - if (v4l2_mplane_video(&cap)) { + if (v4l2_mplane_video(cap)) { s->capture.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE; s->output.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE; return 0; } - if (v4l2_splane_video(&cap)) { + if (v4l2_splane_video(cap)) { s->capture.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; s->output.type = V4L2_BUF_TYPE_VIDEO_OUTPUT; return 0; @@ -96,40 +89,29 @@ static int v4l2_prepare_contexts(V4L2m2mContext *s, int probe) return AVERROR(EINVAL); } -static int v4l2_probe_driver(V4L2m2mContext *s) +static bool v4l2_check_video_device(struct V4L2DeviceVideo *device, void *opaque) { + V4L2m2mContext *s = opaque; void *log_ctx = s->avctx; int ret; - s->fd = open(s->devname, O_RDWR | O_NONBLOCK, 0); - if (s->fd < 0) - return AVERROR(errno); - - ret = v4l2_prepare_contexts(s, 1); + ret = v4l2_prepare_contexts(s, &device->capability, 1); if (ret < 0) - goto done; + return false; - ret = ff_v4l2_context_get_format(&s->output, 1); + ret = ff_v4l2_context_get_format(device, &s->output, 1); if (ret) { - av_log(log_ctx, AV_LOG_DEBUG, "v4l2 output format not supported\n"); - goto done; + av_log(log_ctx, AV_LOG_DEBUG, "v4l2 output format not supported: %s (%d)\n", av_err2str(AVERROR(errno)), errno); + return false; } - ret = ff_v4l2_context_get_format(&s->capture, 1); + ret = ff_v4l2_context_get_format(device, &s->capture, 1); if (ret) { av_log(log_ctx, AV_LOG_DEBUG, "v4l2 capture format not supported\n"); - goto done; + return false; } -done: - if (close(s->fd) < 0) { - ret = AVERROR(errno); - av_log(log_ctx, AV_LOG_ERROR, "failure closing %s (%s)\n", s->devname, av_err2str(AVERROR(errno))); - } - - s->fd = -1; - - return ret; + return true; } static int v4l2_configure_contexts(V4L2m2mContext *s) @@ -138,11 +120,7 @@ static int v4l2_configure_contexts(V4L2m2mContext *s) int ret; struct v4l2_format ofmt, cfmt; - s->fd = open(s->devname, O_RDWR | O_NONBLOCK, 0); - if (s->fd < 0) - return AVERROR(errno); - - ret = v4l2_prepare_contexts(s, 0); + ret = v4l2_prepare_contexts(s, &s->device.capability, 0); if (ret < 0) goto error; @@ -156,13 +134,13 @@ static int v4l2_configure_contexts(V4L2m2mContext *s) cfmt.fmt.pix_mp.pixelformat : cfmt.fmt.pix.pixelformat)); - ret = ff_v4l2_context_set_format(&s->output); + ret = ff_v4l2_context_set_format(&s->device, &s->output); if (ret) { av_log(log_ctx, AV_LOG_ERROR, "can't set v4l2 output format\n"); goto error; } - ret = ff_v4l2_context_set_format(&s->capture); + ret = ff_v4l2_context_set_format(&s->device, &s->capture); if (ret) { av_log(log_ctx, AV_LOG_ERROR, "can't to set v4l2 capture format\n"); goto error; @@ -186,12 +164,12 @@ static int v4l2_configure_contexts(V4L2m2mContext *s) return 0; error: - if (close(s->fd) < 0) { + if (close(s->device.fd) < 0) { ret = AVERROR(errno); av_log(log_ctx, AV_LOG_ERROR, "error closing %s (%s)\n", - s->devname, av_err2str(AVERROR(errno))); + s->device.devname, av_err2str(AVERROR(errno))); } - s->fd = -1; + s->device.fd = -1; return ret; } @@ -224,14 +202,14 @@ int ff_v4l2_m2m_codec_reinit(V4L2m2mContext *s) ff_v4l2_context_release(&s->capture); /* 3. get the new capture format */ - ret = ff_v4l2_context_get_format(&s->capture, 0); + ret = ff_v4l2_context_get_format(&s->device, &s->capture, 0); if (ret) { av_log(log_ctx, AV_LOG_ERROR, "query the new capture format\n"); return ret; } /* 4. 
set the capture format */ - ret = ff_v4l2_context_set_format(&s->capture); + ret = ff_v4l2_context_set_format(&s->device, &s->capture); if (ret) { av_log(log_ctx, AV_LOG_ERROR, "setting capture format\n"); return ret; @@ -249,7 +227,7 @@ int ff_v4l2_m2m_codec_full_reinit(V4L2m2mContext *s) void *log_ctx = s->avctx; int ret; - av_log(log_ctx, AV_LOG_DEBUG, "%s full reinit\n", s->devname); + av_log(log_ctx, AV_LOG_DEBUG, "%s full reinit\n", s->device.devname); /* wait for pending buffer references */ if (atomic_load(&s->refcount)) @@ -275,25 +253,25 @@ int ff_v4l2_m2m_codec_full_reinit(V4L2m2mContext *s) s->draining = 0; s->reinit = 0; - ret = ff_v4l2_context_get_format(&s->output, 0); + ret = ff_v4l2_context_get_format(&s->device, &s->output, 0); if (ret) { av_log(log_ctx, AV_LOG_DEBUG, "v4l2 output format not supported\n"); goto error; } - ret = ff_v4l2_context_get_format(&s->capture, 0); + ret = ff_v4l2_context_get_format(&s->device, &s->capture, 0); if (ret) { av_log(log_ctx, AV_LOG_DEBUG, "v4l2 capture format not supported\n"); goto error; } - ret = ff_v4l2_context_set_format(&s->output); + ret = ff_v4l2_context_set_format(&s->device, &s->output); if (ret) { av_log(log_ctx, AV_LOG_ERROR, "can't set v4l2 output format\n"); goto error; } - ret = ff_v4l2_context_set_format(&s->capture); + ret = ff_v4l2_context_set_format(&s->device, &s->capture); if (ret) { av_log(log_ctx, AV_LOG_ERROR, "can't to set v4l2 capture format\n"); goto error; @@ -327,7 +305,7 @@ static void v4l2_m2m_destroy_context(void *opaque, uint8_t *context) ff_v4l2_context_release(&s->capture); sem_destroy(&s->refsync); - close(s->fd); + close(s->device.fd); av_frame_unref(s->frame); av_frame_free(&s->frame); av_packet_unref(&s->buf_pkt); @@ -343,7 +321,7 @@ int ff_v4l2_m2m_codec_end(V4L2m2mPriv *priv) if (!s) return 0; - if (s->fd >= 0) { + if (s->device.fd >= 0) { ret = ff_v4l2_context_set_status(&s->output, VIDIOC_STREAMOFF); if (ret) av_log(s->avctx, AV_LOG_ERROR, "VIDIOC_STREAMOFF %s\n", s->output.name); @@ -364,37 +342,12 @@ int ff_v4l2_m2m_codec_end(V4L2m2mPriv *priv) int ff_v4l2_m2m_codec_init(V4L2m2mPriv *priv) { int ret = AVERROR(EINVAL); - struct dirent *entry; - DIR *dirp; V4L2m2mContext *s = priv->context; - dirp = opendir("/dev"); - if (!dirp) - return AVERROR(errno); - - for (entry = readdir(dirp); entry; entry = readdir(dirp)) { - - if (strncmp(entry->d_name, "video", 5)) - continue; - - snprintf(s->devname, sizeof(s->devname), "/dev/%s", entry->d_name); - av_log(s->avctx, AV_LOG_DEBUG, "probing device %s\n", s->devname); - ret = v4l2_probe_driver(s); - if (!ret) - break; - } - - closedir(dirp); - - if (ret) { - av_log(s->avctx, AV_LOG_ERROR, "Could not find a valid device\n"); - memset(s->devname, 0, sizeof(s->devname)); - + ret = ff_v4l2_device_probe_video(s->avctx, &priv->context->device, s, v4l2_check_video_device); + if (ret < 0) return ret; - } - - av_log(s->avctx, AV_LOG_INFO, "Using device %s\n", s->devname); return v4l2_configure_contexts(s); } @@ -420,7 +373,7 @@ int ff_v4l2_m2m_create_context(V4L2m2mPriv *priv, V4L2m2mContext **s) priv->context->capture.num_buffers = priv->num_capture_buffers; priv->context->output.num_buffers = priv->num_output_buffers; priv->context->self_ref = priv->context_ref; - priv->context->fd = -1; + priv->context->device.fd = -1; priv->context->frame = av_frame_alloc(); if (!priv->context->frame) { diff --git a/libavcodec/v4l2_m2m.h b/libavcodec/v4l2_m2m.h index b67b216331..a8a7dc2c29 100644 --- a/libavcodec/v4l2_m2m.h +++ b/libavcodec/v4l2_m2m.h @@ -31,6 +31,7 @@ 
#include "libavcodec/avcodec.h" #include "v4l2_context.h" +#include "v4l2_device.h" #define container_of(ptr, type, member) ({ \ const __typeof__(((type *)0)->member ) *__mptr = (ptr); \ @@ -41,8 +42,7 @@ OFFSET(num_output_buffers), AV_OPT_TYPE_INT, { .i64 = 16 }, 6, INT_MAX, FLAGS } typedef struct V4L2m2mContext { - char devname[PATH_MAX]; - int fd; + V4L2DeviceVideo device; /* the codec context queues */ V4L2Context capture; diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c index 8a51dec3fa..4478474aaf 100644 --- a/libavcodec/v4l2_m2m_dec.c +++ b/libavcodec/v4l2_m2m_dec.c @@ -56,7 +56,7 @@ static int v4l2_try_start(AVCodecContext *avctx) /* 2. get the capture format */ capture->format.type = capture->type; - ret = ioctl(s->fd, VIDIOC_G_FMT, &capture->format); + ret = ioctl(s->device.fd, VIDIOC_G_FMT, &capture->format); if (ret) { av_log(avctx, AV_LOG_WARNING, "VIDIOC_G_FMT ioctl\n"); return ret; @@ -70,9 +70,9 @@ static int v4l2_try_start(AVCodecContext *avctx) selection.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; selection.r.height = avctx->coded_height; selection.r.width = avctx->coded_width; - ret = ioctl(s->fd, VIDIOC_S_SELECTION, &selection); + ret = ioctl(s->device.fd, VIDIOC_S_SELECTION, &selection); if (!ret) { - ret = ioctl(s->fd, VIDIOC_G_SELECTION, &selection); + ret = ioctl(s->device.fd, VIDIOC_G_SELECTION, &selection); if (ret) { av_log(avctx, AV_LOG_WARNING, "VIDIOC_G_SELECTION ioctl\n"); } else { @@ -113,7 +113,7 @@ static int v4l2_prepare_decoder(V4L2m2mContext *s) */ memset(&sub, 0, sizeof(sub)); sub.type = V4L2_EVENT_SOURCE_CHANGE; - ret = ioctl(s->fd, VIDIOC_SUBSCRIBE_EVENT, &sub); + ret = ioctl(s->device.fd, VIDIOC_SUBSCRIBE_EVENT, &sub); if ( ret < 0) { if (output->height == 0 || output->width == 0) { av_log(s->avctx, AV_LOG_ERROR, @@ -125,7 +125,7 @@ static int v4l2_prepare_decoder(V4L2m2mContext *s) memset(&sub, 0, sizeof(sub)); sub.type = V4L2_EVENT_EOS; - ret = ioctl(s->fd, VIDIOC_SUBSCRIBE_EVENT, &sub); + ret = ioctl(s->device.fd, VIDIOC_SUBSCRIBE_EVENT, &sub); if (ret < 0) av_log(s->avctx, AV_LOG_WARNING, "the v4l2 driver does not support end of stream VIDIOC_SUBSCRIBE_EVENT\n"); diff --git a/libavcodec/v4l2_m2m_enc.c b/libavcodec/v4l2_m2m_enc.c index 20f81df750..36f66397d9 100644 --- a/libavcodec/v4l2_m2m_enc.c +++ b/libavcodec/v4l2_m2m_enc.c @@ -46,7 +46,7 @@ static inline void v4l2_set_timeperframe(V4L2m2mContext *s, unsigned int num, un parm.parm.output.timeperframe.denominator = den; parm.parm.output.timeperframe.numerator = num; - if (ioctl(s->fd, VIDIOC_S_PARM, &parm) < 0) + if (ioctl(s->device.fd, VIDIOC_S_PARM, &parm) < 0) av_log(s->avctx, AV_LOG_WARNING, "Failed to set timeperframe"); } @@ -64,7 +64,7 @@ static inline void v4l2_set_ext_ctrl(V4L2m2mContext *s, unsigned int id, signed ctrl.value = value; ctrl.id = id; - if (ioctl(s->fd, VIDIOC_S_EXT_CTRLS, &ctrls) < 0) + if (ioctl(s->device.fd, VIDIOC_S_EXT_CTRLS, &ctrls) < 0) av_log(s->avctx, log_warning || errno != EINVAL ? AV_LOG_WARNING : AV_LOG_DEBUG, "Failed to set %s: %s\n", name, strerror(errno)); else @@ -85,7 +85,7 @@ static inline int v4l2_get_ext_ctrl(V4L2m2mContext *s, unsigned int id, signed i /* set ctrl*/ ctrl.id = id ; - ret = ioctl(s->fd, VIDIOC_G_EXT_CTRLS, &ctrls); + ret = ioctl(s->device.fd, VIDIOC_G_EXT_CTRLS, &ctrls); if (ret < 0) { av_log(s->avctx, log_warning || errno != EINVAL ? 
AV_LOG_WARNING : AV_LOG_DEBUG, "Failed to get %s\n", name); @@ -166,7 +166,7 @@ static inline void v4l2_subscribe_eos_event(V4L2m2mContext *s) memset(&sub, 0, sizeof(sub)); sub.type = V4L2_EVENT_EOS; - if (ioctl(s->fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0) + if (ioctl(s->device.fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0) av_log(s->avctx, AV_LOG_WARNING, "the v4l2 driver does not support end of stream VIDIOC_SUBSCRIBE_EVENT\n"); } diff --git a/libavcodec/v4l2_request.c b/libavcodec/v4l2_request.c new file mode 100644 index 0000000000..15d498ef41 --- /dev/null +++ b/libavcodec/v4l2_request.c @@ -0,0 +1,892 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include + +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "decode.h" +#include "internal.h" +#include "v4l2_request.h" + +#define OUTPUT_BUFFER_PADDING_SIZE (AV_INPUT_BUFFER_PADDING_SIZE * 4) + +uint64_t ff_v4l2_request_get_capture_timestamp(AVFrame *frame) +{ + V4L2RequestDescriptor *req = (V4L2RequestDescriptor*)frame->data[0]; + return req ? v4l2_timeval_to_ns(&req->capture.buffer.timestamp) : 0; +} + +int ff_v4l2_request_reset_frame(AVCodecContext *avctx, AVFrame *frame) +{ + V4L2RequestDescriptor *req = (V4L2RequestDescriptor*)frame->data[0]; + memset(&req->drm, 0, sizeof(AVDRMFrameDescriptor)); + req->output.used = 0; + return 0; +} + +int ff_v4l2_request_append_output_buffer(AVCodecContext *avctx, AVFrame *frame, + const uint8_t *data, uint32_t size) +{ + V4L2RequestDescriptor *req = (V4L2RequestDescriptor*)frame->data[0]; + if (req->output.used + size + OUTPUT_BUFFER_PADDING_SIZE <= req->output.size) { + memcpy(req->output.addr + req->output.used, data, size); + req->output.used += size; + } else { + av_log(avctx, AV_LOG_DEBUG, "%s: output.used=%u output.size=%u size=%u\n", + __func__, req->output.used, req->output.size, size); + } + return 0; +} + +static int v4l2_request_controls(V4L2DeviceVideo *device, int request_fd, + unsigned long type, struct v4l2_ext_control *control, int count) +{ + struct v4l2_ext_controls controls = { + .controls = control, + .count = count, + .request_fd = request_fd, + .which = (request_fd >= 0) ? 
V4L2_CTRL_WHICH_REQUEST_VAL : 0, + }; + + if (!control || !count) + return 0; + + return ioctl(device->fd, type, &controls); +} + +static int v4l2_request_set_controls(V4L2DeviceVideo *device, int request_fd, + struct v4l2_ext_control *control, int count) +{ + return v4l2_request_controls(device, request_fd, VIDIOC_S_EXT_CTRLS, control, count); +} + +int ff_v4l2_request_set_controls(AVCodecContext *avctx, struct v4l2_ext_control *control, int count) +{ + V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data; + int ret; + + ret = v4l2_request_controls(&ctx->video_device, -1, VIDIOC_S_EXT_CTRLS, control, count); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: set controls failed, %s (%d)\n", __func__, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + return ret; +} + +int ff_v4l2_request_get_controls(AVCodecContext *avctx, struct v4l2_ext_control *control, int count) +{ + V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data; + int ret; + + ret = v4l2_request_controls(&ctx->video_device, -1, VIDIOC_G_EXT_CTRLS, control, count); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: get controls failed, %s (%d)\n", __func__, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + return ret; +} + +int ff_v4l2_request_query_control(AVCodecContext *avctx, struct v4l2_query_ext_ctrl *control) +{ + V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data; + int ret; + + ret = ioctl(ctx->video_device.fd, VIDIOC_QUERY_EXT_CTRL, control); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: query control failed, %s (%d)\n", __func__, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + return 0; +} + +int ff_v4l2_request_query_control_default_value(AVCodecContext *avctx, uint32_t id) +{ + V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data; + struct v4l2_queryctrl control = { .id = id }; + int ret; + + ret = ioctl(ctx->video_device.fd, VIDIOC_QUERYCTRL, &control); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: query control failed, %s (%d)\n", __func__, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + return control.default_value; +} + +static int v4l2_request_queue_buffer(V4L2RequestContext *ctx, int request_fd, + V4L2RequestBuffer *buf, uint32_t flags) +{ + struct v4l2_plane planes[1] = {}; + struct v4l2_buffer buffer = { + .type = buf->buffer.type, + .memory = buf->buffer.memory, + .index = buf->index, + .timestamp.tv_usec = ctx->timestamp, + .bytesused = buf->used, + .request_fd = request_fd, + .flags = ((request_fd >= 0) ? 
V4L2_BUF_FLAG_REQUEST_FD : 0) | flags, + }; + + buf->buffer.flags = buffer.flags; + buf->buffer.timestamp = buffer.timestamp; + + if (V4L2_TYPE_IS_MULTIPLANAR(buf->buffer.type)) { + planes[0].bytesused = buf->used; + buffer.bytesused = 0; + buffer.length = 1; + buffer.m.planes = planes; + } + + return ioctl(ctx->video_device.fd, VIDIOC_QBUF, &buffer); +} + +static int v4l2_request_dequeue_buffer(V4L2RequestContext *ctx, V4L2RequestBuffer *buf) +{ + int ret; + struct v4l2_plane planes[1] = {}; + struct v4l2_buffer buffer = { + .type = buf->buffer.type, + .memory = buf->buffer.memory, + .index = buf->index, + }; + + if (V4L2_TYPE_IS_MULTIPLANAR(buf->buffer.type)) { + buffer.length = 1; + buffer.m.planes = planes; + } + + ret = ioctl(ctx->video_device.fd, VIDIOC_DQBUF, &buffer); + if (ret < 0) + return ret; + + buf->buffer.flags = buffer.flags; + buf->buffer.timestamp = buffer.timestamp; + return 0; +} + +const uint32_t v4l2_request_capture_pixelformats[] = { + V4L2_PIX_FMT_NV12, +#ifdef DRM_FORMAT_MOD_ALLWINNER_TILED + V4L2_PIX_FMT_SUNXI_TILED_NV12, +#endif +}; + +static int v4l2_request_set_drm_descriptor(V4L2RequestDescriptor *req, struct v4l2_format *format) +{ + AVDRMFrameDescriptor *desc = &req->drm; + AVDRMLayerDescriptor *layer = &desc->layers[0]; + uint32_t pixelformat = V4L2_TYPE_IS_MULTIPLANAR(format->type) ? format->fmt.pix_mp.pixelformat : format->fmt.pix.pixelformat; + + switch (pixelformat) { + case V4L2_PIX_FMT_NV12: + layer->format = DRM_FORMAT_NV12; + desc->objects[0].format_modifier = DRM_FORMAT_MOD_LINEAR; + break; +#ifdef DRM_FORMAT_MOD_ALLWINNER_TILED + case V4L2_PIX_FMT_SUNXI_TILED_NV12: + layer->format = DRM_FORMAT_NV12; + desc->objects[0].format_modifier = DRM_FORMAT_MOD_ALLWINNER_TILED; + break; +#endif + default: + return -1; + } + + desc->nb_objects = 1; + desc->objects[0].fd = req->capture.fd; + desc->objects[0].size = req->capture.size; + + desc->nb_layers = 1; + layer->nb_planes = 2; + + layer->planes[0].object_index = 0; + layer->planes[0].offset = 0; + layer->planes[0].pitch = V4L2_TYPE_IS_MULTIPLANAR(format->type) ? format->fmt.pix_mp.plane_fmt[0].bytesperline : format->fmt.pix.bytesperline; + + layer->planes[1].object_index = 0; + layer->planes[1].offset = layer->planes[0].pitch * (V4L2_TYPE_IS_MULTIPLANAR(format->type) ? format->fmt.pix_mp.height : format->fmt.pix.height); + layer->planes[1].pitch = layer->planes[0].pitch; + + return 0; +} + +static int v4l2_request_queue_decode(AVCodecContext *avctx, AVFrame *frame, struct v4l2_ext_control *control, + int count, int first_slice, int last_slice) +{ + V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data; + V4L2RequestDescriptor *req = (V4L2RequestDescriptor*)frame->data[0]; + struct timeval tv = { 2, 0 }; + fd_set except_fds; + int ret; + + av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p used=%u controls=%d index=%d fd=%d request_fd=%d first_slice=%d last_slice=%d\n", + __func__, avctx, req->output.used, count, req->capture.index, req->capture.fd, req->request_fd, first_slice, last_slice); + + if (first_slice) + ctx->timestamp++; + + ret = v4l2_request_set_controls(&ctx->video_device, req->request_fd, control, count); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: set controls failed for request %d, %s (%d)\n", + __func__, req->request_fd, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + memset(req->output.addr + req->output.used, 0, OUTPUT_BUFFER_PADDING_SIZE); + + ret = v4l2_request_queue_buffer(ctx, req->request_fd, &req->output, last_slice ? 
0 : V4L2_BUF_FLAG_M2M_HOLD_CAPTURE_BUF); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: queue output buffer %d failed for request %d, %s (%d)\n", + __func__, req->output.index, req->request_fd, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + if (first_slice) { + ret = v4l2_request_queue_buffer(ctx, -1, &req->capture, 0); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: queue capture buffer %d failed for request %d, %s (%d)\n", + __func__, req->capture.index, req->request_fd, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + } + + ret = ioctl(req->request_fd, MEDIA_REQUEST_IOC_QUEUE, NULL); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: queue request %d failed, %s (%d)\n", + __func__, req->request_fd, av_err2str(AVERROR(errno)), errno); + goto fail; + } + + FD_ZERO(&except_fds); + FD_SET(req->request_fd, &except_fds); + + ret = select(req->request_fd + 1, NULL, NULL, &except_fds, &tv); + if (ret == 0) { + av_log(avctx, AV_LOG_ERROR, "%s: request %d timeout\n", __func__, req->request_fd); + goto fail; + } else if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: select request %d failed, %s (%d)\n", + __func__, req->request_fd, av_err2str(AVERROR(errno)), errno); + goto fail; + } + + ret = v4l2_request_dequeue_buffer(ctx, &req->output); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: dequeue output buffer %d failed for request %d, %s (%d)\n", + __func__, req->output.index, req->request_fd, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + if (last_slice) { + ret = v4l2_request_dequeue_buffer(ctx, &req->capture); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: dequeue capture buffer %d failed for request %d, %s (%d)\n", + __func__, req->capture.index, req->request_fd, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + if (req->capture.buffer.flags & V4L2_BUF_FLAG_ERROR) { + av_log(avctx, AV_LOG_WARNING, "%s: capture buffer %d flagged with error for request %d\n", + __func__, req->capture.index, req->request_fd); + frame->flags |= AV_FRAME_FLAG_CORRUPT; + } + } + + ret = ioctl(req->request_fd, MEDIA_REQUEST_IOC_REINIT, NULL); + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: reinit request %d failed, %s (%d)\n", + __func__, req->request_fd, av_err2str(AVERROR(errno)), errno); + return AVERROR(errno); + } + + if (last_slice) { + ret = v4l2_request_set_drm_descriptor(req, &ctx->format); + if(ret < 0) + return AVERROR(EINVAL); + } + + return 0; + +fail: + ret = v4l2_request_dequeue_buffer(ctx, &req->output); + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "%s: dequeue output buffer %d failed for request %d, %s (%d)\n", + __func__, req->output.index, req->request_fd, av_err2str(AVERROR(errno)), errno); + + ret = v4l2_request_dequeue_buffer(ctx, &req->capture); + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "%s: dequeue capture buffer %d failed for request %d, %s (%d)\n", + __func__, req->capture.index, req->request_fd, av_err2str(AVERROR(errno)), errno); + + ret = ioctl(req->request_fd, MEDIA_REQUEST_IOC_REINIT, NULL); + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "%s: reinit request %d failed, %s (%d)\n", + __func__, req->request_fd, av_err2str(AVERROR(errno)), errno); + + return AVERROR(EINVAL); +} + +int ff_v4l2_request_decode_slice(AVCodecContext *avctx, AVFrame *frame, + struct v4l2_ext_control *control, int count, + int first_slice, int last_slice) +{ + V4L2RequestDescriptor *req = (V4L2RequestDescriptor*)frame->data[0]; + + /* fall back to queue each slice as a full frame */ + if 
((req->output.capabilities & V4L2_BUF_CAP_SUPPORTS_M2M_HOLD_CAPTURE_BUF) != V4L2_BUF_CAP_SUPPORTS_M2M_HOLD_CAPTURE_BUF) + return v4l2_request_queue_decode(avctx, frame, control, count, 1, 1); + + return v4l2_request_queue_decode(avctx, frame, control, count, first_slice, last_slice); +} + +int ff_v4l2_request_decode_frame(AVCodecContext *avctx, AVFrame *frame, struct v4l2_ext_control *control, int count) +{ + return v4l2_request_queue_decode(avctx, frame, control, count, 1, 1); +} + +static int v4l2_request_try_format(AVCodecContext *avctx, V4L2DeviceVideo *device, enum v4l2_buf_type type, uint32_t pixelformat) +{ + struct v4l2_fmtdesc fmtdesc = { + .index = 0, + .type = type, + }; + + if (V4L2_TYPE_IS_OUTPUT(type)) { + struct v4l2_create_buffers buffers = { + .count = 0, + .memory = V4L2_MEMORY_MMAP, + .format.type = type, + }; + + if (ioctl(device->fd, VIDIOC_CREATE_BUFS, &buffers) < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: create buffers failed for type %u, %s (%d)\n", + __func__, type, av_err2str(AVERROR(errno)), errno); + return -1; + } + + if ((buffers.capabilities & V4L2_BUF_CAP_SUPPORTS_REQUESTS) != V4L2_BUF_CAP_SUPPORTS_REQUESTS) { + av_log(avctx, AV_LOG_INFO, "%s: output buffer type do not support requests, capabilities %u\n", + __func__, buffers.capabilities); + return -1; + } + } + + while (ioctl(device->fd, VIDIOC_ENUM_FMT, &fmtdesc) >= 0) { + if (fmtdesc.pixelformat == pixelformat) + return 0; + + fmtdesc.index++; + } + + av_log(avctx, AV_LOG_INFO, "%s: pixelformat %u not supported for type %u\n", + __func__, pixelformat, type); + return -1; +} + +static int v4l2_request_set_format(V4L2DeviceVideo *device, enum v4l2_buf_type type, + uint32_t width, uint32_t height, + uint32_t pixelformat, uint32_t buffersize) +{ + struct v4l2_format format = { + .type = type, + }; + + if (V4L2_TYPE_IS_MULTIPLANAR(type)) { + format.fmt.pix_mp.width = width; + format.fmt.pix_mp.height = height; + format.fmt.pix_mp.pixelformat = pixelformat; + format.fmt.pix_mp.plane_fmt[0].sizeimage = buffersize; + format.fmt.pix_mp.num_planes = 1; + } else { + format.fmt.pix.width = width; + format.fmt.pix.height = height; + format.fmt.pix.pixelformat = pixelformat; + format.fmt.pix.sizeimage = buffersize; + } + + return ioctl(device->fd, VIDIOC_S_FMT, &format); +} + +static int v4l2_request_select_capture_format(AVCodecContext *avctx, V4L2DeviceVideo *device) +{ + V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data; + enum v4l2_buf_type type = ctx->format.type; + + /* V4L2 documentation for stateless decoders suggests the driver should choose the preferred/optimal format */ + /* but drivers do not fully implement this, so use v4l2_request_capture_pixelformats. */ +#if 0 + struct v4l2_format format = { + .type = type, + }; + struct v4l2_fmtdesc fmtdesc = { + .index = 0, + .type = type, + }; + uint32_t pixelformat; + int i; + + if (ioctl(ctx->video_device.fd, VIDIOC_G_FMT, &format) < 0) { + av_log(avctx, AV_LOG_ERROR, "%s: get capture format failed, %s (%d)\n", __func__, av_err2str(AVERROR(errno)), errno); + return -1; + } + + pixelformat = V4L2_TYPE_IS_MULTIPLANAR(type) ? 
format.fmt.pix_mp.pixelformat : format.fmt.pix.pixelformat;
+
+    for (i = 0; i < FF_ARRAY_ELEMS(v4l2_request_capture_pixelformats); i++) {
+        if (pixelformat == v4l2_request_capture_pixelformats[i])
+            return v4l2_request_set_format(avctx, type, pixelformat, 0);
+    }
+
+    while (ioctl(ctx->video_device.fd, VIDIOC_ENUM_FMT, &fmtdesc) >= 0) {
+        for (i = 0; i < FF_ARRAY_ELEMS(v4l2_request_capture_pixelformats); i++) {
+            if (fmtdesc.pixelformat == v4l2_request_capture_pixelformats[i])
+                return v4l2_request_set_format(avctx, type, fmtdesc.pixelformat, 0);
+        }
+
+        fmtdesc.index++;
+    }
+#else
+    for (int i = 0; i < FF_ARRAY_ELEMS(v4l2_request_capture_pixelformats); i++) {
+        uint32_t pixelformat = v4l2_request_capture_pixelformats[i];
+        if (!v4l2_request_try_format(avctx, device, type, pixelformat))
+            return v4l2_request_set_format(device, type, avctx->coded_width, avctx->coded_height, pixelformat, 0);
+    }
+#endif
+
+    return -1;
+}
+
+static bool check_video_device(V4L2DeviceVideo *device, void *opaque)
+{
+    V4L2RequestParams *params = opaque;
+    AVCodecContext *avctx = params->avctx;
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    int ret;
+    unsigned int capabilities = 0;
+
+    if (device->capability.capabilities & V4L2_CAP_DEVICE_CAPS)
+        capabilities = device->capability.device_caps;
+    else
+        capabilities = device->capability.capabilities;
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p ctx=%p path=%s capabilities=%u\n",
+           __func__, avctx, ctx, device->devname, capabilities);
+
+    if ((capabilities & V4L2_CAP_STREAMING) != V4L2_CAP_STREAMING) {
+        av_log(avctx, AV_LOG_ERROR, "%s: missing required streaming capability\n", __func__);
+        return false;
+    }
+
+    if ((capabilities & V4L2_CAP_VIDEO_M2M_MPLANE) == V4L2_CAP_VIDEO_M2M_MPLANE) {
+        ctx->output_type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+        ctx->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+    } else if ((capabilities & V4L2_CAP_VIDEO_M2M) == V4L2_CAP_VIDEO_M2M) {
+        ctx->output_type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+        ctx->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+    } else {
+        av_log(avctx, AV_LOG_ERROR, "%s: missing required mem2mem capability\n", __func__);
+        return false;
+    }
+
+    ret = v4l2_request_try_format(avctx, device, ctx->output_type, params->pixelformat);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_WARNING, "%s: try output format failed\n", __func__);
+        return false;
+    }
+
+    ret = v4l2_request_set_format(device, ctx->output_type, avctx->coded_width, avctx->coded_height, params->pixelformat, params->buffersize);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: set output format failed, %s (%d)\n",
+               __func__, av_err2str(AVERROR(errno)), errno);
+        return false;
+    }
+
+    ret = v4l2_request_set_controls(device, -1, params->ptr_controls, params->num_controls);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: set controls failed, %s (%d)\n",
+               __func__, av_err2str(AVERROR(errno)), errno);
+        return false;
+    }
+
+    ret = v4l2_request_select_capture_format(avctx, device);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_WARNING, "%s: select capture format failed\n", __func__);
+        return false;
+    }
+
+    return true;
+}
+
+static int v4l2_request_init_context(AVCodecContext *avctx)
+{
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    int ret;
+
+    ret = ioctl(ctx->video_device.fd, VIDIOC_G_FMT, &ctx->format);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: get capture format failed, %s (%d)\n",
+               __func__, av_err2str(AVERROR(errno)), errno);
+        ret = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    if (V4L2_TYPE_IS_MULTIPLANAR(ctx->format.type)) {
+        av_log(avctx, AV_LOG_DEBUG,
+               "%s: pixelformat=%d width=%u height=%u bytesperline=%u sizeimage=%u num_planes=%u\n",
+               __func__, ctx->format.fmt.pix_mp.pixelformat,
+               ctx->format.fmt.pix_mp.width, ctx->format.fmt.pix_mp.height,
+               ctx->format.fmt.pix_mp.plane_fmt[0].bytesperline, ctx->format.fmt.pix_mp.plane_fmt[0].sizeimage,
+               ctx->format.fmt.pix_mp.num_planes);
+    } else {
+        av_log(avctx, AV_LOG_DEBUG,
+               "%s: pixelformat=%d width=%u height=%u bytesperline=%u sizeimage=%u\n",
+               __func__, ctx->format.fmt.pix.pixelformat,
+               ctx->format.fmt.pix.width, ctx->format.fmt.pix.height,
+               ctx->format.fmt.pix.bytesperline, ctx->format.fmt.pix.sizeimage);
+    }
+
+    ret = ff_decode_get_hw_frames_ctx(avctx, AV_HWDEVICE_TYPE_DRM);
+    if (ret < 0)
+        goto fail;
+
+    ret = ioctl(ctx->video_device.fd, VIDIOC_STREAMON, &ctx->output_type);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: output stream on failed, %s (%d)\n",
+               __func__, av_err2str(AVERROR(errno)), errno);
+        ret = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    ret = ioctl(ctx->video_device.fd, VIDIOC_STREAMON, &ctx->format.type);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: capture stream on failed, %s (%d)\n",
+               __func__, av_err2str(AVERROR(errno)), errno);
+        ret = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    return 0;
+
+fail:
+    ff_v4l2_request_uninit(avctx);
+    return ret;
+}
+
+int ff_v4l2_request_init(V4L2RequestParams params)
+{
+    AVCodecContext *avctx = params.avctx;
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    int ret = AVERROR(EINVAL);
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p hw_device_ctx=%p hw_frames_ctx=%p\n",
+           __func__, avctx, avctx->hw_device_ctx, avctx->hw_frames_ctx);
+
+    ctx->media_device.fd = -1;
+    ctx->video_device.fd = -1;
+    ctx->timestamp = 0;
+
+    ret = ff_v4l2_device_probe_video(avctx, &ctx->video_device, &params, check_video_device);
+    if (ret)
+        return ret;
+
+    ret = ff_v4l2_device_retrieve_media(avctx, &ctx->video_device, &ctx->media_device);
+    if (ret)
+        return ret;
+
+    return v4l2_request_init_context(avctx);
+}
+
+int ff_v4l2_request_uninit(AVCodecContext *avctx)
+{
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    int ret;
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p ctx=%p\n", __func__, avctx, ctx);
+
+    if (ctx->video_device.fd >= 0) {
+        ret = ioctl(ctx->video_device.fd, VIDIOC_STREAMOFF, &ctx->output_type);
+        if (ret < 0)
+            av_log(avctx, AV_LOG_ERROR, "%s: output stream off failed, %s (%d)\n",
+                   __func__, av_err2str(AVERROR(errno)), errno);
+
+        ret = ioctl(ctx->video_device.fd, VIDIOC_STREAMOFF, &ctx->format.type);
+        if (ret < 0)
+            av_log(avctx, AV_LOG_ERROR, "%s: capture stream off failed, %s (%d)\n",
+                   __func__, av_err2str(AVERROR(errno)), errno);
+    }
+
+    if (avctx->hw_frames_ctx) {
+        AVHWFramesContext *hwfc = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
+        av_buffer_pool_flush(hwfc->pool);
+    }
+
+    if (ctx->video_device.fd >= 0)
+        close(ctx->video_device.fd);
+
+    if (ctx->media_device.fd >= 0)
+        close(ctx->media_device.fd);
+
+    return 0;
+}
+
+static int v4l2_request_buffer_alloc(AVCodecContext *avctx, V4L2RequestBuffer *buf, enum v4l2_buf_type type)
+{
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    int ret;
+    struct v4l2_plane planes[1] = {};
+    struct v4l2_create_buffers buffers = {
+        .count = 1,
+        .memory = V4L2_MEMORY_MMAP,
+        .format.type = type,
+    };
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p buf=%p type=%u\n", __func__, avctx, buf, type);
+
+    ret = ioctl(ctx->video_device.fd, VIDIOC_G_FMT, &buffers.format);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR,
+               "%s: get format failed for type %u, %s (%d)\n",
+               __func__, type, av_err2str(AVERROR(errno)), errno);
+        return ret;
+    }
+
+    if (V4L2_TYPE_IS_MULTIPLANAR(buffers.format.type)) {
+        av_log(avctx, AV_LOG_DEBUG,
+               "%s: pixelformat=%d width=%u height=%u bytesperline=%u sizeimage=%u num_planes=%u\n",
+               __func__, buffers.format.fmt.pix_mp.pixelformat,
+               buffers.format.fmt.pix_mp.width, buffers.format.fmt.pix_mp.height,
+               buffers.format.fmt.pix_mp.plane_fmt[0].bytesperline, buffers.format.fmt.pix_mp.plane_fmt[0].sizeimage,
+               buffers.format.fmt.pix_mp.num_planes);
+    } else {
+        av_log(avctx, AV_LOG_DEBUG, "%s: pixelformat=%d width=%u height=%u bytesperline=%u sizeimage=%u\n",
+               __func__, buffers.format.fmt.pix.pixelformat,
+               buffers.format.fmt.pix.width, buffers.format.fmt.pix.height,
+               buffers.format.fmt.pix.bytesperline, buffers.format.fmt.pix.sizeimage);
+    }
+
+    ret = ioctl(ctx->video_device.fd, VIDIOC_CREATE_BUFS, &buffers);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: create buffers failed for type %u, %s (%d)\n",
+               __func__, type, av_err2str(AVERROR(errno)), errno);
+        return ret;
+    }
+
+    if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
+        buf->width = buffers.format.fmt.pix_mp.width;
+        buf->height = buffers.format.fmt.pix_mp.height;
+        buf->size = buffers.format.fmt.pix_mp.plane_fmt[0].sizeimage;
+        buf->buffer.length = 1;
+        buf->buffer.m.planes = planes;
+    } else {
+        buf->width = buffers.format.fmt.pix.width;
+        buf->height = buffers.format.fmt.pix.height;
+        buf->size = buffers.format.fmt.pix.sizeimage;
+    }
+
+    buf->index = buffers.index;
+    buf->capabilities = buffers.capabilities;
+    buf->used = 0;
+
+    buf->buffer.type = type;
+    buf->buffer.memory = V4L2_MEMORY_MMAP;
+    buf->buffer.index = buf->index;
+
+    ret = ioctl(ctx->video_device.fd, VIDIOC_QUERYBUF, &buf->buffer);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: query buffer %d failed, %s (%d)\n",
+               __func__, buf->index, av_err2str(AVERROR(errno)), errno);
+        return ret;
+    }
+
+    if (V4L2_TYPE_IS_OUTPUT(type)) {
+        off_t offset = V4L2_TYPE_IS_MULTIPLANAR(type) ?
+                       buf->buffer.m.planes[0].m.mem_offset : buf->buffer.m.offset;
+        void *addr = mmap(NULL, buf->size, PROT_READ | PROT_WRITE, MAP_SHARED, ctx->video_device.fd, offset);
+        if (addr == MAP_FAILED) {
+            av_log(avctx, AV_LOG_ERROR, "%s: mmap failed, %s (%d)\n",
+                   __func__, av_err2str(AVERROR(errno)), errno);
+            return -1;
+        }
+
+        buf->addr = (uint8_t*)addr;
+    } else {
+        struct v4l2_exportbuffer exportbuffer = {
+            .type = type,
+            .index = buf->index,
+            .flags = O_RDONLY,
+        };
+
+        ret = ioctl(ctx->video_device.fd, VIDIOC_EXPBUF, &exportbuffer);
+        if (ret < 0) {
+            av_log(avctx, AV_LOG_ERROR, "%s: export buffer %d failed, %s (%d)\n",
+                   __func__, buf->index, av_err2str(AVERROR(errno)), errno);
+            return ret;
+        }
+
+        buf->fd = exportbuffer.fd;
+    }
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: buf=%p index=%d fd=%d addr=%p width=%u height=%u size=%u\n",
+           __func__, buf, buf->index, buf->fd, buf->addr, buf->width, buf->height, buf->size);
+    return 0;
+}
+
+static void v4l2_request_buffer_free(V4L2RequestBuffer *buf)
+{
+    av_log(NULL, AV_LOG_DEBUG, "%s: buf=%p index=%d fd=%d addr=%p width=%u height=%u size=%u\n",
+           __func__, buf, buf->index, buf->fd, buf->addr, buf->width, buf->height, buf->size);
+
+    if (buf->addr)
+        munmap(buf->addr, buf->size);
+
+    if (buf->fd >= 0)
+        close(buf->fd);
+}
+
+static void v4l2_request_frame_free(void *opaque, uint8_t *data)
+{
+    AVCodecContext *avctx = opaque;
+    V4L2RequestDescriptor *req = (V4L2RequestDescriptor*)data;
+
+    av_log(NULL, AV_LOG_DEBUG, "%s: avctx=%p data=%p request_fd=%d\n",
+           __func__, avctx, data, req->request_fd);
+
+    if (req->request_fd >= 0)
+        close(req->request_fd);
+
+    v4l2_request_buffer_free(&req->capture);
+    v4l2_request_buffer_free(&req->output);
+
+    av_free(data);
+}
+
+static AVBufferRef *v4l2_request_frame_alloc(void *opaque, size_t size)
+{
+    AVCodecContext *avctx = opaque;
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    V4L2RequestDescriptor *req;
+    AVBufferRef *ref;
+    uint8_t *data;
+    int ret;
+
+    data = av_mallocz(size);
+    if (!data)
+        return NULL;
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p size=%zu data=%p\n",
+           __func__, avctx, size, data);
+
+    ref = av_buffer_create(data, size, v4l2_request_frame_free, avctx, 0);
+    if (!ref) {
+        av_freep(&data);
+        return NULL;
+    }
+
+    req = (V4L2RequestDescriptor*)data;
+    req->request_fd = -1;
+    req->output.fd = -1;
+    req->capture.fd = -1;
+
+    ret = v4l2_request_buffer_alloc(avctx, &req->output, ctx->output_type);
+    if (ret < 0) {
+        av_buffer_unref(&ref);
+        return NULL;
+    }
+
+    ret = v4l2_request_buffer_alloc(avctx, &req->capture, ctx->format.type);
+    if (ret < 0) {
+        av_buffer_unref(&ref);
+        return NULL;
+    }
+
+    ret = ioctl(ctx->media_device.fd, MEDIA_IOC_REQUEST_ALLOC, &req->request_fd);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "%s: request alloc failed, %s (%d)\n",
+               __func__, av_err2str(AVERROR(errno)), errno);
+        av_buffer_unref(&ref);
+        return NULL;
+    }
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p size=%zu data=%p request_fd=%d\n",
+           __func__, avctx, size, data, req->request_fd);
+    return ref;
+}
+
+static void v4l2_request_pool_free(void *opaque)
+{
+    av_log(NULL, AV_LOG_DEBUG, "%s: opaque=%p\n", __func__, opaque);
+}
+
+static void v4l2_request_hwframe_ctx_free(AVHWFramesContext *hwfc)
+{
+    av_log(NULL, AV_LOG_DEBUG, "%s: hwfc=%p pool=%p\n", __func__, hwfc, hwfc->pool);
+
+    av_buffer_pool_flush(hwfc->pool);
+    av_buffer_pool_uninit(&hwfc->pool);
+}
+
+int ff_v4l2_request_frame_params(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx)
+{
+    V4L2RequestContext *ctx = avctx->internal->hwaccel_priv_data;
+    AVHWFramesContext *hwfc = (AVHWFramesContext*)hw_frames_ctx->data;
+
+    hwfc->format = AV_PIX_FMT_DRM_PRIME;
+    hwfc->sw_format = AV_PIX_FMT_NV12;
+    if (V4L2_TYPE_IS_MULTIPLANAR(ctx->format.type)) {
+        hwfc->width = ctx->format.fmt.pix_mp.width;
+        hwfc->height = ctx->format.fmt.pix_mp.height;
+    } else {
+        hwfc->width = ctx->format.fmt.pix.width;
+        hwfc->height = ctx->format.fmt.pix.height;
+    }
+
+    hwfc->pool = av_buffer_pool_init2(sizeof(V4L2RequestDescriptor), avctx, v4l2_request_frame_alloc, v4l2_request_pool_free);
+    if (!hwfc->pool)
+        return AVERROR(ENOMEM);
+
+    hwfc->free = v4l2_request_hwframe_ctx_free;
+
+    hwfc->initial_pool_size = 1;
+
+    switch (avctx->codec_id) {
+    case AV_CODEC_ID_VP9:
+        hwfc->initial_pool_size += 8;
+        break;
+    case AV_CODEC_ID_VP8:
+        hwfc->initial_pool_size += 3;
+        break;
+    default:
+        hwfc->initial_pool_size += 2;
+    }
+
+    av_log(avctx, AV_LOG_DEBUG, "%s: avctx=%p ctx=%p hw_frames_ctx=%p hwfc=%p pool=%p width=%d height=%d initial_pool_size=%d\n",
+           __func__, avctx, ctx, hw_frames_ctx, hwfc, hwfc->pool, hwfc->width, hwfc->height, hwfc->initial_pool_size);
+
+    return 0;
+}
diff --git a/libavcodec/v4l2_request.h b/libavcodec/v4l2_request.h
new file mode 100644
index 0000000000..25a0deb312
--- /dev/null
+++ b/libavcodec/v4l2_request.h
@@ -0,0 +1,87 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_V4L2_REQUEST_H
+#define AVCODEC_V4L2_REQUEST_H
+
+#include <linux/videodev2.h>
+
+#include "libavutil/hwcontext_drm.h"
+
+#include "v4l2_device.h"
+
+typedef struct V4L2RequestParams {
+    AVCodecContext *avctx;
+    uint32_t pixelformat;
+    uint32_t buffersize;
+    struct v4l2_ext_control *ptr_controls;
+    size_t num_controls;
+} V4L2RequestParams;
+
+typedef struct V4L2RequestContext {
+    V4L2DeviceVideo video_device;
+    V4L2DeviceMedia media_device;
+    enum v4l2_buf_type output_type;
+    struct v4l2_format format;
+    int timestamp;
+} V4L2RequestContext;
+
+typedef struct V4L2RequestBuffer {
+    int index;
+    int fd;
+    uint8_t *addr;
+    uint32_t width;
+    uint32_t height;
+    uint32_t size;
+    uint32_t used;
+    uint32_t capabilities;
+    struct v4l2_buffer buffer;
+} V4L2RequestBuffer;
+
+typedef struct V4L2RequestDescriptor {
+    AVDRMFrameDescriptor drm;
+    int request_fd;
+    V4L2RequestBuffer output;
+    V4L2RequestBuffer capture;
+} V4L2RequestDescriptor;
+
+uint64_t ff_v4l2_request_get_capture_timestamp(AVFrame *frame);
+
+int ff_v4l2_request_reset_frame(AVCodecContext *avctx, AVFrame *frame);
+
+int ff_v4l2_request_append_output_buffer(AVCodecContext *avctx, AVFrame *frame, const uint8_t *data, uint32_t size);
+
+int ff_v4l2_request_set_controls(AVCodecContext *avctx, struct v4l2_ext_control *control, int count);
+
+int ff_v4l2_request_get_controls(AVCodecContext *avctx, struct v4l2_ext_control *control, int count);
+
+int ff_v4l2_request_query_control(AVCodecContext *avctx, struct v4l2_query_ext_ctrl *control);
+
+int ff_v4l2_request_query_control_default_value(AVCodecContext *avctx, uint32_t id);
+
+int ff_v4l2_request_decode_slice(AVCodecContext *avctx, AVFrame *frame, struct v4l2_ext_control *control, int count, int first_slice, int last_slice);
+
+int ff_v4l2_request_decode_frame(AVCodecContext *avctx, AVFrame *frame, struct v4l2_ext_control *control, int count);
+
+int ff_v4l2_request_init(V4L2RequestParams params);
+
+int ff_v4l2_request_uninit(AVCodecContext *avctx);
+
+int ff_v4l2_request_frame_params(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx);
+
+#endif /* AVCODEC_V4L2_REQUEST_H */
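
For reviewers unfamiliar with the entry points declared above, here is a rough, illustrative sketch of how a codec-specific hwaccel might drive them. It is not part of the patch: the example_hwaccel_init() name, the control array, the V4L2_PIX_FMT_H264_SLICE pixelformat and the buffer size are placeholder assumptions, not values taken from this series; only the V4L2RequestParams fields and ff_v4l2_request_init() come from the code above.

/* Illustrative only: placeholder control IDs and sizes, not from this patch. */
#include "libavutil/macros.h"
#include "avcodec.h"
#include "v4l2_request.h"

static int example_hwaccel_init(AVCodecContext *avctx)
{
    /* Hypothetical per-codec stateless controls; real code would fill the
     * codec's V4L2_CID_STATELESS_* payloads parsed from the bitstream. */
    struct v4l2_ext_control controls[] = {
        { .id = 0 /* placeholder control id */ },
    };

    V4L2RequestParams params = {
        .avctx        = avctx,
        .pixelformat  = V4L2_PIX_FMT_H264_SLICE, /* assumed coded pixelformat */
        .buffersize   = 4 * 1024 * 1024,         /* assumed output buffer size */
        .ptr_controls = controls,
        .num_controls = FF_ARRAY_ELEMS(controls),
    };

    /* Probes a media/video device pair, negotiates formats and starts streaming. */
    return ff_v4l2_request_init(params);
}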