From patchwork Tue Apr 4 05:45:41 2017
X-Patchwork-Submitter: wm4
X-Patchwork-Id: 3279
From: wm4
To: ffmpeg-devel@ffmpeg.org
Date: Tue, 4 Apr 2017 07:45:41 +0200
Message-Id: <20170404054541.22058-1-nfxjfg@googlemail.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <828ed2cb-eaa6-dfad-5a91-9a9fb38ddca6@gmail.com>
References: <828ed2cb-eaa6-dfad-5a91-9a9fb38ddca6@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2] Add MediaFoundation wrapper
List-Id: FFmpeg development discussions and patches

Can do audio decoding, audio encoding, video decoding, video encoding,
and video HW encoding. I also had video HW decoding, but removed it for
now, as the hwframes integration wasn't very sane. Some of the MS codecs
aren't well tested and might not work properly.
---
Didn't actually test compilation again, but should be fine.
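
For reviewers wanting to try it: assuming a mingw-w64 toolchain that provides mftransform.h and a Windows machine where the relevant MFTs are registered, the wrapper should be exercisable roughly like this (codec names as registered in the diff below; input/output file names are placeholders):

```shell
# build with the MediaFoundation wrapper enabled
./configure --enable-mf
make

# force the MediaFoundation H.264 decoder instead of the native one
ffmpeg -c:v h264_mf -i input.mp4 -f null -

# encode via the MediaFoundation H.264 encoder (SW or HW, whichever MFT gets picked)
ffmpeg -i input.mp4 -c:v h264_mf -b:v 2M output.mp4
```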
---
 configure              |   34 +
 libavcodec/Makefile    |    1 +
 libavcodec/allcodecs.c |   23 +
 libavcodec/mf.c        | 1951 ++++++++++++++++++++++++++++++++++++++++++++++++
 libavcodec/mf_utils.c  |  696 +++++++++++++++++
 libavcodec/mf_utils.h  |  200 +++
 6 files changed, 2905 insertions(+)
 create mode 100644 libavcodec/mf.c
 create mode 100644 libavcodec/mf_utils.c
 create mode 100644 libavcodec/mf_utils.h

diff --git a/configure b/configure
index 6d76cf7e1e..9909e7ab77 100755
--- a/configure
+++ b/configure
@@ -280,6 +280,7 @@ External library support:
   --disable-lzma           disable lzma [autodetect]
   --enable-decklink        enable Blackmagic DeckLink I/O support [no]
   --enable-mediacodec      enable Android MediaCodec support [no]
+  --enable-mf              enable decoding/encoding via MediaFoundation [no]
   --enable-netcdf          enable NetCDF, needed for sofalizer filter [no]
   --enable-openal          enable OpenAL 1.1 capture support [no]
   --enable-opencl          enable OpenCL code
@@ -1582,6 +1583,7 @@ EXTERNAL_LIBRARY_LIST="
     libzmq
     libzvbi
     mediacodec
+    mf
     netcdf
     openal
     opencl
@@ -2124,6 +2126,7 @@ CONFIG_EXTRA="
     lpc
     lzf
     me_cmp
+    mf
     mpeg_er
     mpegaudio
     mpegaudiodsp
@@ -2584,6 +2587,8 @@ zmbv_encoder_select="zlib"
 # platform codecs
 audiotoolbox_deps="AudioToolbox_AudioToolbox_h"
 audiotoolbox_extralibs="-framework CoreFoundation -framework AudioToolbox -framework CoreMedia"
+mf_deps="mftransform_h"
+mf_extralibs="-lmfplat -lole32 -lstrmiids -luuid -loleaut32 -lshlwapi"
 
 # hardware accelerators
 crystalhd_deps="libcrystalhd_libcrystalhd_if_h"
@@ -2892,6 +2897,34 @@ libx265_encoder_deps="libx265"
 libxavs_encoder_deps="libxavs"
 libxvid_encoder_deps="libxvid"
 libzvbi_teletext_decoder_deps="libzvbi"
+aac_mf_decoder_deps="mf"
+aac_mf_encoder_deps="mf"
+ac3_mf_decoder_deps="mf"
+ac3_mf_encoder_deps="mf"
+eac3_mf_decoder_deps="mf"
+h264_mf_decoder_deps="mf"
+h264_mf_encoder_deps="mf"
+hevc_mf_decoder_deps="mf"
+hevc_mf_encoder_deps="mf"
+mjpeg_mf_decoder_deps="mf"
+mp1_mf_decoder_deps="mf"
+mp2_mf_decoder_deps="mf"
+mp3_mf_decoder_deps="mf"
+mp3_mf_encoder_deps="mf"
+mpeg2_mf_decoder_deps="mf"
+mpeg4_mf_decoder_deps="mf"
+msmpeg4v1_mf_decoder_deps="mf"
+msmpeg4v2_mf_decoder_deps="mf"
+msmpeg4v3_mf_decoder_deps="mf"
+vc1_mf_decoder_deps="mf"
+wmav1_mf_decoder_deps="mf"
+wmav2_mf_decoder_deps="mf"
+wmalossless_mf_decoder_deps="mf"
+wmapro_mf_decoder_deps="mf"
+wmavoice_mf_decoder_deps="mf"
+wmv1_mf_decoder_deps="mf"
+wmv2_mf_decoder_deps="mf"
+wmv3_mf_decoder_deps="mf"
 videotoolbox_deps="VideoToolbox_VideoToolbox_h"
 videotoolbox_extralibs="-framework CoreFoundation -framework VideoToolbox -framework CoreMedia -framework CoreVideo"
 videotoolbox_encoder_deps="videotoolbox VTCompressionSessionPrepareToEncodeFrames"
@@ -5632,6 +5665,7 @@ check_header io.h
 check_header libcrystalhd/libcrystalhd_if.h
 check_header mach/mach_time.h
 check_header malloc.h
+check_header mftransform.h
 check_header net/udplite.h
 check_header poll.h
 check_header sys/mman.h
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 876a69e013..d26124f60c 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -915,6 +915,7 @@ OBJS-$(CONFIG_LIBX265_ENCODER)            += libx265.o
 OBJS-$(CONFIG_LIBXAVS_ENCODER)            += libxavs.o
 OBJS-$(CONFIG_LIBXVID_ENCODER)            += libxvid.o
 OBJS-$(CONFIG_LIBZVBI_TELETEXT_DECODER)   += libzvbi-teletextdec.o ass.o
+OBJS-$(CONFIG_MF)                         += mf.o mf_utils.o mpeg4audio.o
 
 # parsers
 OBJS-$(CONFIG_AAC_LATM_PARSER)         += latm_parser.o
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index b7d03ad601..c8704a267b 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -631,6 +631,29 @@ static void register_all(void)
     REGISTER_ENCODER(LIBXAVS,           libxavs);
     REGISTER_ENCODER(LIBXVID,           libxvid);
     REGISTER_DECODER(LIBZVBI_TELETEXT,  libzvbi_teletext);
+    REGISTER_ENCDEC (AAC_MF,            aac_mf);
+    REGISTER_ENCDEC (AC3_MF,            ac3_mf);
+    REGISTER_DECODER(EAC3_MF,           eac3_mf);
+    REGISTER_ENCDEC (H264_MF,           h264_mf);
+    REGISTER_ENCDEC (HEVC_MF,           hevc_mf);
+    REGISTER_DECODER(MJPEG_MF,          mjpeg_mf);
+    REGISTER_DECODER(MP1_MF,            mp1_mf);
+    REGISTER_DECODER(MP2_MF,            mp2_mf);
+    REGISTER_ENCDEC (MP3_MF,            mp3_mf);
+    REGISTER_DECODER(MPEG2_MF,          mpeg2_mf);
+    REGISTER_DECODER(MPEG4_MF,          mpeg4_mf);
+    REGISTER_DECODER(MSMPEG4V1_MF,      msmpeg4v1_mf);
+    REGISTER_DECODER(MSMPEG4V2_MF,      msmpeg4v2_mf);
+    REGISTER_DECODER(MSMPEG4V3_MF,      msmpeg4v3_mf);
+    REGISTER_DECODER(VC1_MF,            vc1_mf);
+    REGISTER_DECODER(WMALOSSLESS_MF,    wmalossless_mf);
+    REGISTER_DECODER(WMAPRO_MF,         wmapro_mf);
+    REGISTER_DECODER(WMAV1_MF,          wmav1_mf);
+    REGISTER_DECODER(WMAV2_MF,          wmav2_mf);
+    REGISTER_DECODER(WMAVOICE_MF,       wmavoice_mf);
+    REGISTER_DECODER(WMV1_MF,           wmv1_mf);
+    REGISTER_DECODER(WMV2_MF,           wmv2_mf);
+    REGISTER_DECODER(WMV3_MF,           wmv3_mf);
 
     /* text */
     REGISTER_DECODER(BINTEXT,           bintext);
diff --git a/libavcodec/mf.c b/libavcodec/mf.c
new file mode 100644
index 0000000000..d2213f8fa3
--- /dev/null
+++ b/libavcodec/mf.c
@@ -0,0 +1,1951 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+#define COBJMACROS
+#if !defined(_WIN32_WINNT) || _WIN32_WINNT < 0x0601
+#undef _WIN32_WINNT
+#define _WIN32_WINNT 0x0601
+#endif
+
+#include "mf_utils.h"
+
+// Include after mf_utils.h due to Windows include mess.
+#include "mpeg4audio.h"
+
+typedef struct MFContext {
+    AVClass *av_class;
+    int is_dec, is_enc, is_video, is_audio;
+    GUID main_subtype;
+    IMFTransform *mft;
+    IMFMediaEventGenerator *async_events;
+    DWORD in_stream_id, out_stream_id;
+    MFT_INPUT_STREAM_INFO in_info;
+    MFT_OUTPUT_STREAM_INFO out_info;
+    int out_stream_provides_samples;
+    int draining, draining_done;
+    int sample_sent;
+    int async_need_input, async_have_output, async_marker;
+    int lavc_init_done;
+    uint8_t *send_extradata;
+    int send_extradata_size;
+    ICodecAPI *codec_api;
+    AVBSFContext *bsfc;
+    // Important parameters which might be overwritten by decoding.
+    int original_channels;
+    // set by AVOption
+    int opt_enc_rc;
+    int opt_enc_quality;
+    int opt_enc_d3d;
+} MFContext;
+
+static int mf_choose_output_type(AVCodecContext *avctx);
+static int mf_setup_context(AVCodecContext *avctx);
+
+#define MF_TIMEBASE (AVRational){1, 10000000}
+// Sentinel value only used by us.
+#define MF_INVALID_TIME AV_NOPTS_VALUE
+
+static int mf_wait_events(AVCodecContext *avctx)
+{
+    MFContext *c = avctx->priv_data;
+
+    if (!c->async_events)
+        return 0;
+
+    while (!(c->async_need_input || c->async_have_output || c->draining_done || c->async_marker)) {
+        IMFMediaEvent *ev = NULL;
+        MediaEventType ev_id = 0;
+        HRESULT hr = IMFMediaEventGenerator_GetEvent(c->async_events, 0, &ev);
+        if (FAILED(hr)) {
+            av_log(avctx, AV_LOG_ERROR, "IMFMediaEventGenerator_GetEvent() failed: %s\n",
+                   ff_hr_str(hr));
+            return AVERROR_EXTERNAL;
+        }
+        IMFMediaEvent_GetType(ev, &ev_id);
+        switch (ev_id) {
+        case ff_METransformNeedInput:
+            if (!c->draining)
+                c->async_need_input = 1;
+            break;
+        case ff_METransformHaveOutput:
+            c->async_have_output = 1;
+            break;
+        case ff_METransformDrainComplete:
+            c->draining_done = 1;
+            break;
+        case ff_METransformMarker:
+            c->async_marker = 1;
+            break;
+        default: ;
+        }
+        IMFMediaEvent_Release(ev);
+    }
+
+    return 0;
+}
+
+static AVRational mf_get_tb(AVCodecContext *avctx)
+{
+    if (avctx->pkt_timebase.num > 0 && avctx->pkt_timebase.den > 0)
+        return avctx->pkt_timebase;
+    if (avctx->time_base.num > 0 && avctx->time_base.den > 0)
+        return avctx->time_base;
+    return MF_TIMEBASE;
+}
+
+static LONGLONG mf_to_mf_time(AVCodecContext *avctx, int64_t av_pts)
+{
+    if (av_pts == AV_NOPTS_VALUE)
+        return MF_INVALID_TIME;
+    return av_rescale_q(av_pts, mf_get_tb(avctx), MF_TIMEBASE);
+}
+
+static void mf_sample_set_pts(AVCodecContext *avctx, IMFSample *sample, int64_t av_pts)
+{
+    LONGLONG stime = mf_to_mf_time(avctx, av_pts);
+    if (stime != MF_INVALID_TIME)
+        IMFSample_SetSampleTime(sample, stime);
+}
+
+static int64_t mf_from_mf_time(AVCodecContext *avctx, LONGLONG stime)
+{
+    return av_rescale_q(stime, MF_TIMEBASE, mf_get_tb(avctx));
+}
+
+static int64_t mf_sample_get_pts(AVCodecContext *avctx, IMFSample *sample)
+{
+    LONGLONG pts;
+    HRESULT hr = IMFSample_GetSampleTime(sample, &pts);
+    if (FAILED(hr))
+        return AV_NOPTS_VALUE;
+    return mf_from_mf_time(avctx, pts);
+}
+
+static IMFSample *mf_avpacket_to_sample(AVCodecContext *avctx, const AVPacket *avpkt)
+{
+    MFContext *c = avctx->priv_data;
+    IMFSample *sample = NULL;
+    AVPacket tmp = {0};
+    int ret;
+
+    if ((ret = av_packet_ref(&tmp, avpkt)) < 0)
+        goto done;
+
+    if (c->bsfc) {
+        AVPacket tmp2 = {0};
+        if ((ret = av_bsf_send_packet(c->bsfc, &tmp)) < 0)
+            goto done;
+        if ((ret = av_bsf_receive_packet(c->bsfc, &tmp)) < 0)
+            goto done;
+        // We don't support any 1:m BSF filtering - but at least don't get stuck.
+        while ((ret = av_bsf_receive_packet(c->bsfc, &tmp2)) >= 0) {
+            av_log(avctx, AV_LOG_ERROR, "Discarding unsupported sub-packet.\n");
+            av_packet_unref(&tmp2); // unref inside the loop, or repeated sub-packets leak
+        }
+    }
+
+    sample = ff_create_memory_sample(tmp.data, tmp.size, c->in_info.cbAlignment);
+    if (sample) {
+        int64_t pts = avpkt->pts;
+        if (pts == AV_NOPTS_VALUE)
+            pts = avpkt->dts;
+        mf_sample_set_pts(avctx, sample, pts);
+        if (avpkt->flags & AV_PKT_FLAG_KEY)
+            IMFAttributes_SetUINT32(sample, &MFSampleExtension_CleanPoint, TRUE);
+    }
+
+done:
+    av_packet_unref(&tmp);
+    return sample;
+}
+
+static int mf_deca_output_type_get(AVCodecContext *avctx, IMFMediaType *type)
+{
+    UINT32 t;
+    HRESULT hr;
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, &t);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+    avctx->channels = t;
+    avctx->channel_layout = av_get_default_channel_layout(t);
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_CHANNEL_MASK, &t);
+    if (!FAILED(hr))
+        avctx->channel_layout = t;
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, &t);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+    avctx->sample_rate = t;
+
+    avctx->sample_fmt = ff_media_type_to_sample_fmt((IMFAttributes *)type);
+
+    if (avctx->sample_fmt == AV_SAMPLE_FMT_NONE || !avctx->channels)
+        return AVERROR_EXTERNAL;
+
+    return 0;
+}
+
+static int mf_decv_output_type_get(AVCodecContext *avctx, IMFMediaType *type)
+{
+    HRESULT hr;
+    UINT32 w, h, cw, ch, t, t2;
+    MFVideoArea area = {0};
+    int ret;
+
+    avctx->pix_fmt = ff_media_type_to_pix_fmt((IMFAttributes *)type);
+
+    hr = ff_MFGetAttributeSize((IMFAttributes *)type, &MF_MT_FRAME_SIZE, &cw, &ch);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+
+    // Cropping rectangle. Ignore the fractional offset, because nobody uses that anyway.
+    // (libavcodec native decoders still try to crop away mod-2 offset pixels by
+    // adjusting the pixel plane pointers.)
+    hr = IMFAttributes_GetBlob(type, &MF_MT_MINIMUM_DISPLAY_APERTURE, (void *)&area, sizeof(area), NULL);
+    if (FAILED(hr)) {
+        w = cw;
+        h = ch;
+    } else {
+        w = area.OffsetX.value + area.Area.cx;
+        h = area.OffsetY.value + area.Area.cy;
+    }
+
+    if (w > cw || h > ch)
+        return AVERROR_EXTERNAL;
+
+    hr = ff_MFGetAttributeRatio((IMFAttributes *)type, &MF_MT_PIXEL_ASPECT_RATIO, &t, &t2);
+    if (!FAILED(hr)) {
+        avctx->sample_aspect_ratio.num = t;
+        avctx->sample_aspect_ratio.den = t2;
+    }
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_YUV_MATRIX, &t);
+    if (!FAILED(hr)) {
+        switch (t) {
+        case MFVideoTransferMatrix_BT709:     avctx->colorspace = AVCOL_SPC_BT709; break;
+        case MFVideoTransferMatrix_BT601:     avctx->colorspace = AVCOL_SPC_BT470BG; break;
+        case MFVideoTransferMatrix_SMPTE240M: avctx->colorspace = AVCOL_SPC_SMPTE240M; break;
+        }
+    }
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_VIDEO_PRIMARIES, &t);
+    if (!FAILED(hr)) {
+        switch (t) {
+        case MFVideoPrimaries_BT709:         avctx->color_primaries = AVCOL_PRI_BT709; break;
+        case MFVideoPrimaries_BT470_2_SysM:  avctx->color_primaries = AVCOL_PRI_BT470M; break;
+        case MFVideoPrimaries_BT470_2_SysBG: avctx->color_primaries = AVCOL_PRI_BT470BG; break;
+        case MFVideoPrimaries_SMPTE170M:     avctx->color_primaries = AVCOL_PRI_SMPTE170M; break;
+        case MFVideoPrimaries_SMPTE240M:     avctx->color_primaries = AVCOL_PRI_SMPTE240M; break;
+        }
+    }
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_TRANSFER_FUNCTION, &t);
+    if (!FAILED(hr)) {
+        switch (t) {
+        case MFVideoTransFunc_10:   avctx->color_trc = AVCOL_TRC_LINEAR; break;
+        case MFVideoTransFunc_22:   avctx->color_trc = AVCOL_TRC_GAMMA22; break;
+        case MFVideoTransFunc_709:  avctx->color_trc = AVCOL_TRC_BT709; break;
+        case MFVideoTransFunc_240M: avctx->color_trc = AVCOL_TRC_SMPTE240M; break;
+        case MFVideoTransFunc_sRGB: avctx->color_trc = AVCOL_TRC_IEC61966_2_1; break;
+        case MFVideoTransFunc_28:   avctx->color_trc = AVCOL_TRC_GAMMA28; break;
+        // mingw doesn't define these yet
+        //case MFVideoTransFunc_Log_100: avctx->color_trc = AVCOL_TRC_LOG; break;
+        //case MFVideoTransFunc_Log_316: avctx->color_trc = AVCOL_TRC_LOG_SQRT; break;
+        }
+    }
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_VIDEO_CHROMA_SITING, &t);
+    if (!FAILED(hr)) {
+        switch (t) {
+        case MFVideoChromaSubsampling_MPEG2: avctx->chroma_sample_location = AVCHROMA_LOC_LEFT; break;
+        case MFVideoChromaSubsampling_MPEG1: avctx->chroma_sample_location = AVCHROMA_LOC_CENTER; break;
+        }
+    }
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_VIDEO_NOMINAL_RANGE, &t);
+    if (!FAILED(hr)) {
+        switch (t) {
+        case MFNominalRange_0_255:  avctx->color_range = AVCOL_RANGE_JPEG; break;
+        case MFNominalRange_16_235: avctx->color_range = AVCOL_RANGE_MPEG; break;
+        }
+    }
+
+    if ((ret = ff_set_dimensions(avctx, cw, ch)) < 0)
+        return ret;
+
+    avctx->width = w;
+    avctx->height = h;
+
+    return ret;
+}
+
+static int mf_enca_output_type_get(AVCodecContext *avctx, IMFMediaType *type)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr;
+    UINT32 sz;
+
+    if (avctx->codec_id != AV_CODEC_ID_MP3 && avctx->codec_id != AV_CODEC_ID_AC3) {
+        hr = IMFAttributes_GetBlobSize(type, &MF_MT_USER_DATA, &sz);
+        if (!FAILED(hr) && sz > 0) {
+            avctx->extradata = av_mallocz(sz + AV_INPUT_BUFFER_PADDING_SIZE);
+            if (!avctx->extradata)
+                return AVERROR(ENOMEM);
+            avctx->extradata_size = sz;
+            hr = IMFAttributes_GetBlob(type, &MF_MT_USER_DATA, avctx->extradata, sz, NULL);
+            if (FAILED(hr))
+                return AVERROR_EXTERNAL;
+
+            if (avctx->codec_id == AV_CODEC_ID_AAC && avctx->extradata_size >= 12) {
+                // Get rid of HEAACWAVEINFO (after wfx field, 12 bytes).
+                avctx->extradata_size = avctx->extradata_size - 12;
+                memmove(avctx->extradata, avctx->extradata + 12, avctx->extradata_size);
+            }
+        }
+    }
+
+    // I don't know where it's documented that we need this. It happens with the
+    // MS mp3 encoder MFT. The idea for the workaround is taken from NAudio.
+    // (Certainly any lossy codec will have frames much smaller than 1 second.)
+    if (!c->out_info.cbSize && !c->out_stream_provides_samples) {
+        hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_AVG_BYTES_PER_SECOND, &sz);
+        if (!FAILED(hr)) {
+            av_log(avctx, AV_LOG_VERBOSE, "MFT_OUTPUT_STREAM_INFO.cbSize set to 0, "
+                   "assuming %d bytes instead.\n", (int)sz);
+            c->out_info.cbSize = sz;
+        }
+    }
+
+    return 0;
+}
+
+static int mf_encv_output_type_get(AVCodecContext *avctx, IMFMediaType *type)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr;
+    UINT32 sz;
+
+    hr = IMFAttributes_GetBlobSize(type, &MF_MT_MPEG_SEQUENCE_HEADER, &sz);
+    if (!FAILED(hr) && sz > 0) {
+        uint8_t *extradata = av_mallocz(sz + AV_INPUT_BUFFER_PADDING_SIZE);
+        if (!extradata)
+            return AVERROR(ENOMEM);
+        hr = IMFAttributes_GetBlob(type, &MF_MT_MPEG_SEQUENCE_HEADER, extradata, sz, NULL);
+        if (FAILED(hr)) {
+            av_free(extradata);
+            return AVERROR_EXTERNAL;
+        }
+        if (c->lavc_init_done) {
+            // At least the Intel QSV h264 MFT sets up extradata when the first
+            // frame is encoded, and after the AVCodecContext was opened.
+            // Send it as side-data with the next packet.
+            av_freep(&c->send_extradata);
+            c->send_extradata = extradata;
+            c->send_extradata_size = sz;
+        } else {
+            av_freep(&avctx->extradata);
+            avctx->extradata = extradata;
+            avctx->extradata_size = sz;
+        }
+    }
+
+    return 0;
+}
+
+static int mf_output_type_get(AVCodecContext *avctx)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr;
+    IMFMediaType *type;
+    int ret;
+
+    hr = IMFTransform_GetOutputCurrentType(c->mft, c->out_stream_id, &type);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "could not get output type\n");
+        return AVERROR_EXTERNAL;
+    }
+
+    av_log(avctx, AV_LOG_VERBOSE, "final output type:\n");
+    ff_media_type_dump(avctx, type);
+
+    ret = 0;
+    if (c->is_dec && c->is_video) {
+        ret = mf_decv_output_type_get(avctx, type);
+    } else if (c->is_dec && c->is_audio) {
+        ret = mf_deca_output_type_get(avctx, type);
+    } else if (c->is_enc && c->is_video) {
+        ret = mf_encv_output_type_get(avctx, type);
+    } else if (c->is_enc && c->is_audio) {
+        ret = mf_enca_output_type_get(avctx, type);
+    }
+
+    if (ret < 0)
+        av_log(avctx, AV_LOG_ERROR, "output type not supported\n");
+
+    IMFMediaType_Release(type);
+    return ret;
+}
+
+static int mf_sample_to_a_avframe(AVCodecContext *avctx, IMFSample *sample, AVFrame *frame)
+{
+    HRESULT hr;
+    int ret;
+    DWORD len;
+    IMFMediaBuffer *buffer;
+    BYTE *data;
+    size_t bps;
+
+    hr = IMFSample_GetTotalLength(sample, &len);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+
+    bps = av_get_bytes_per_sample(avctx->sample_fmt) * avctx->channels;
+
+    frame->nb_samples = len / bps;
+    if (frame->nb_samples * bps != len)
+        return AVERROR_EXTERNAL; // unaligned crap -> assume not possible
+
+    if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
+        return ret;
+
+    hr = IMFSample_ConvertToContiguousBuffer(sample, &buffer);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+
+    hr = IMFMediaBuffer_Lock(buffer, &data, NULL, NULL);
+    if (FAILED(hr)) {
+        IMFMediaBuffer_Release(buffer);
+        return AVERROR_EXTERNAL;
+    }
+
+    memcpy(frame->data[0], data, len);
+
+    IMFMediaBuffer_Unlock(buffer);
+    IMFMediaBuffer_Release(buffer);
+
+    return 0;
+}
+
+static int mf_transfer_data_from(AVFrame *dst, IMFSample *sample,
+                                 int src_width, int src_height)
+{
+    HRESULT hr;
+    DWORD num_buffers;
+    IMFMediaBuffer *buffer;
+    IMF2DBuffer *buffer_2d = NULL;
+    IMF2DBuffer2 *buffer_2d2 = NULL;
+    uint8_t *src_data[4] = {0};
+    int src_linesizes[4] = {0};
+    int locked_1d = 0;
+    int locked_2d = 0;
+    int copy_w = FFMIN(dst->width, src_width);
+    int copy_h = FFMIN(dst->height, src_height);
+    int ret = 0;
+
+    hr = IMFSample_GetBufferCount(sample, &num_buffers);
+    if (FAILED(hr) || num_buffers != 1)
+        return AVERROR_EXTERNAL;
+
+    hr = IMFSample_GetBufferByIndex(sample, 0, &buffer);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+
+    // Prefer IMF2DBuffer(2) if supported - it's faster, but usually only
+    // present if hwaccel is used.
+    hr = IMFMediaBuffer_QueryInterface(buffer, &IID_IMF2DBuffer, (void **)&buffer_2d);
+    if (!FAILED(hr) && (dst->format == AV_PIX_FMT_NV12 ||
+                        dst->format == AV_PIX_FMT_P010)) {
+        BYTE *sc = NULL;
+        LONG pitch = 0;
+
+        // Prefer IMF2DBuffer2 if supported.
+        IMFMediaBuffer_QueryInterface(buffer, &IID_IMF2DBuffer2, (void **)&buffer_2d2);
+        if (buffer_2d2) {
+            BYTE *start = NULL;
+            DWORD length = 0;
+            hr = IMF2DBuffer2_Lock2DSize(buffer_2d2, MF2DBuffer_LockFlags_Read, &sc, &pitch, &start, &length);
+        } else {
+            hr = IMF2DBuffer_Lock2D(buffer_2d, &sc, &pitch);
+        }
+        if (FAILED(hr)) {
+            ret = AVERROR_EXTERNAL;
+            goto done;
+        }
+        locked_2d = 1; // (always uses IMF2DBuffer_Unlock2D)
+
+        src_data[0] = (uint8_t *)sc;
+        src_linesizes[0] = pitch;
+        src_data[1] = (uint8_t *)sc + pitch * src_height;
+        src_linesizes[1] = pitch;
+    } else {
+        BYTE *data;
+        DWORD length;
+
+        hr = IMFMediaBuffer_Lock(buffer, &data, NULL, &length);
+        if (FAILED(hr)) {
+            ret = AVERROR_EXTERNAL;
+            goto done;
+        }
+        locked_1d = 1;
+
+        av_image_fill_arrays(src_data, src_linesizes, data, dst->format,
+                             src_width, src_height, 1);
+    }
+
+    av_image_copy(dst->data, dst->linesize, (void *)src_data, src_linesizes,
+                  dst->format, copy_w, copy_h);
+
+done:
+
+    if (locked_1d)
+        IMFMediaBuffer_Unlock(buffer);
+    if (locked_2d)
+        IMF2DBuffer_Unlock2D(buffer_2d);
+    if (buffer_2d)
+        IMF2DBuffer_Release(buffer_2d);
+    if (buffer_2d2)
+        IMF2DBuffer2_Release(buffer_2d2);
+
+    IMFMediaBuffer_Release(buffer);
+    return ret;
+}
+
+static int mf_sample_to_v_avframe(AVCodecContext *avctx, IMFSample *sample, AVFrame *frame)
+{
+    int ret = 0;
+
+    frame->width = avctx->width;
+    frame->height = avctx->height;
+    frame->format = avctx->pix_fmt;
+
+    if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
+        return ret;
+
+    return mf_transfer_data_from(frame, sample, avctx->coded_width, avctx->coded_height);
+}
+
+// Allocate the given frame and copy the sample to it.
+// Format must have been set on the avctx.
+static int mf_sample_to_avframe(AVCodecContext *avctx, IMFSample *sample, AVFrame *frame)
+{
+    MFContext *c = avctx->priv_data;
+    int ret;
+
+    if (c->is_audio) {
+        ret = mf_sample_to_a_avframe(avctx, sample, frame);
+    } else {
+        ret = mf_sample_to_v_avframe(avctx, sample, frame);
+    }
+
+    frame->pts = mf_sample_get_pts(avctx, sample);
+    frame->pkt_dts = AV_NOPTS_VALUE;
+
+    return ret;
+}
+
+static int mf_sample_to_avpacket(AVCodecContext *avctx, IMFSample *sample, AVPacket *avpkt)
+{
+    HRESULT hr;
+    int ret;
+    DWORD len;
+    IMFMediaBuffer *buffer;
+    BYTE *data;
+    UINT64 t;
+    UINT32 t32;
+
+    hr = IMFSample_GetTotalLength(sample, &len);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+
+    if ((ret = av_new_packet(avpkt, len)) < 0)
+        return ret;
+
+    hr = IMFSample_ConvertToContiguousBuffer(sample, &buffer);
+    if (FAILED(hr))
+        return AVERROR_EXTERNAL;
+
+    hr = IMFMediaBuffer_Lock(buffer, &data, NULL, NULL);
+    if (FAILED(hr)) {
+        IMFMediaBuffer_Release(buffer);
+        return AVERROR_EXTERNAL;
+    }
+
+    memcpy(avpkt->data, data, len);
+
+    IMFMediaBuffer_Unlock(buffer);
+    IMFMediaBuffer_Release(buffer);
+
+    avpkt->pts = avpkt->dts = mf_sample_get_pts(avctx, sample);
+
+    hr = IMFAttributes_GetUINT32(sample, &MFSampleExtension_CleanPoint, &t32);
+    if (!FAILED(hr) && t32 != 0)
+        avpkt->flags |= AV_PKT_FLAG_KEY;
+
+    hr = IMFAttributes_GetUINT64(sample, &MFSampleExtension_DecodeTimestamp, &t);
+    if (!FAILED(hr))
+        avpkt->dts = mf_from_mf_time(avctx, t);
+
+    return 0;
+}
+
+static IMFSample *mf_a_avframe_to_sample(AVCodecContext *avctx, const AVFrame *frame)
+{
+    MFContext *c = avctx->priv_data;
+    size_t len;
+    size_t bps;
+    IMFSample *sample;
+
+    bps = av_get_bytes_per_sample(avctx->sample_fmt) * avctx->channels;
+    len = frame->nb_samples * bps;
+
+    sample = ff_create_memory_sample(frame->data[0], len, c->in_info.cbAlignment);
+    if (sample)
+        IMFSample_SetSampleDuration(sample, mf_to_mf_time(avctx, frame->nb_samples));
+    return sample;
+}
+
+static IMFSample *mf_v_avframe_to_sample(AVCodecContext *avctx, const AVFrame *frame)
+{
+    MFContext *c = avctx->priv_data;
+    IMFSample *sample;
+    IMFMediaBuffer *buffer;
+    BYTE *data;
+    HRESULT hr;
+    int ret;
+    int size;
+
+    size = av_image_get_buffer_size(avctx->pix_fmt, avctx->width, avctx->height, 1);
+    if (size < 0)
+        return NULL;
+
+    sample = ff_create_memory_sample(NULL, size, c->in_info.cbAlignment);
+    if (!sample)
+        return NULL;
+
+    hr = IMFSample_GetBufferByIndex(sample, 0, &buffer);
+    if (FAILED(hr)) {
+        IMFSample_Release(sample);
+        return NULL;
+    }
+
+    hr = IMFMediaBuffer_Lock(buffer, &data, NULL, NULL);
+    if (FAILED(hr)) {
+        IMFMediaBuffer_Release(buffer);
+        IMFSample_Release(sample);
+        return NULL;
+    }
+
+    ret = av_image_copy_to_buffer((uint8_t *)data, size, (void *)frame->data, frame->linesize,
+                                  avctx->pix_fmt, avctx->width, avctx->height, 1);
+    IMFMediaBuffer_SetCurrentLength(buffer, size);
+    IMFMediaBuffer_Unlock(buffer);
+    IMFMediaBuffer_Release(buffer);
+    if (ret < 0) {
+        IMFSample_Release(sample);
+        return NULL;
+    }
+
+    IMFSample_SetSampleDuration(sample, mf_to_mf_time(avctx, frame->pkt_duration));
+
+    return sample;
+}
+
+static IMFSample *mf_avframe_to_sample(AVCodecContext *avctx, const AVFrame *frame)
+{
+    MFContext *c = avctx->priv_data;
+    IMFSample *sample;
+
+    if (c->is_audio) {
+        sample = mf_a_avframe_to_sample(avctx, frame);
+    } else {
+        sample = mf_v_avframe_to_sample(avctx, frame);
+    }
+
+    if (sample)
+        mf_sample_set_pts(avctx, sample, frame->pts);
+
+    return sample;
+}
+
+static int mf_send_sample(AVCodecContext *avctx, IMFSample *sample)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr;
+    int ret;
+
+    if (sample) {
+        if (c->async_events) {
+            if ((ret = mf_wait_events(avctx)) < 0)
+                return ret;
+            if (!c->async_need_input)
+                return AVERROR(EAGAIN);
+        }
+        if (!c->sample_sent)
+            IMFSample_SetUINT32(sample, &MFSampleExtension_Discontinuity, TRUE);
+        c->sample_sent = 1;
+        hr = IMFTransform_ProcessInput(c->mft, c->in_stream_id, sample, 0);
+        if (hr == MF_E_NOTACCEPTING) {
+            return AVERROR(EAGAIN);
+        } else if (FAILED(hr)) {
+            av_log(avctx, AV_LOG_ERROR, "failed processing input: %s\n", ff_hr_str(hr));
+            return AVERROR_EXTERNAL;
+        }
+        c->async_need_input = 0;
+    } else if (!c->draining) {
+        hr = IMFTransform_ProcessMessage(c->mft, MFT_MESSAGE_COMMAND_DRAIN, 0);
+        if (FAILED(hr))
+            av_log(avctx, AV_LOG_ERROR, "failed draining: %s\n", ff_hr_str(hr));
+        // Some MFTs (AC3) will send a frame after each drain command (???), so
+        // this is required to make draining actually terminate.
+        c->draining = 1;
+        c->async_need_input = 0;
+    } else {
+        return AVERROR_EOF;
+    }
+    return 0;
+}
+
+static int mf_send_frame(AVCodecContext *avctx, const AVFrame *frame)
+{
+    MFContext *c = avctx->priv_data;
+    int ret;
+    IMFSample *sample = NULL;
+    if (frame) {
+        sample = mf_avframe_to_sample(avctx, frame);
+        if (!sample)
+            return AVERROR(ENOMEM);
+        if (c->is_enc && c->is_video && c->codec_api) {
+            if (frame->pict_type == AV_PICTURE_TYPE_I || !c->sample_sent)
+                ICodecAPI_SetValue(c->codec_api, &ff_CODECAPI_AVEncVideoForceKeyFrame, FF_VAL_VT_UI4(1));
+        }
+    }
+    ret = mf_send_sample(avctx, sample);
+    if (sample)
+        IMFSample_Release(sample);
+    return ret;
+}
+
+static int mf_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)
+{
+    int ret;
+    IMFSample *sample = NULL;
+    if (avpkt) {
+        sample = mf_avpacket_to_sample(avctx, avpkt);
+        if (!sample)
+            return AVERROR(ENOMEM);
+    }
+    ret = mf_send_sample(avctx, sample);
+    if (sample)
+        IMFSample_Release(sample);
+    return ret;
+}
+
+static int mf_receive_sample(AVCodecContext *avctx, IMFSample **out_sample)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr;
+    DWORD st;
+    MFT_OUTPUT_DATA_BUFFER out_buffers;
+    IMFSample *sample;
+    int ret = 0;
+
+    while (1) {
+        *out_sample = NULL;
+        sample = NULL;
+
+        if (c->async_events) {
+            if ((ret = mf_wait_events(avctx)) < 0)
+                return ret;
+            if (!c->async_have_output || c->draining_done) {
+                ret = 0;
+                break;
+            }
+        }
+
+        if (!c->out_stream_provides_samples) {
+            sample = ff_create_memory_sample(NULL, c->out_info.cbSize, c->out_info.cbAlignment);
+            if (!sample)
+                return AVERROR(ENOMEM);
+        }
+
+        out_buffers = (MFT_OUTPUT_DATA_BUFFER) {
+            .dwStreamID = c->out_stream_id,
+            .pSample = sample,
+        };
+
+        st = 0;
+        hr = IMFTransform_ProcessOutput(c->mft, 0, 1, &out_buffers, &st);
+
+        if (out_buffers.pEvents)
+            IMFCollection_Release(out_buffers.pEvents);
+
+        if (!FAILED(hr)) {
+            *out_sample = out_buffers.pSample;
+            ret = 0;
+            break;
+        }
+
+        if (out_buffers.pSample)
+            IMFSample_Release(out_buffers.pSample);
+
+        if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT) {
+            if (c->draining)
+                c->draining_done = 1;
+            ret = 0;
+        } else if (hr == MF_E_TRANSFORM_STREAM_CHANGE) {
+            av_log(avctx, AV_LOG_WARNING, "stream format change\n");
+            ret = mf_choose_output_type(avctx);
+            if (ret == 0) // we don't expect renegotiating the input type
+                ret = AVERROR_EXTERNAL;
+            if (ret > 0) {
+                ret = mf_setup_context(avctx);
+                if (ret >= 0) {
+                    c->async_have_output = 0;
+                    continue;
+                }
+            }
+        } else {
+            av_log(avctx, AV_LOG_ERROR, "failed processing output: %s\n", ff_hr_str(hr));
+            ret = AVERROR_EXTERNAL;
+        }
+
+        break;
+    }
+
+    c->async_have_output = 0;
+
+    if (ret >= 0 && !*out_sample)
+        ret = c->draining_done ? AVERROR_EOF : AVERROR(EAGAIN);
+
+    return ret;
+}
+
+static int mf_receive_frame(AVCodecContext *avctx, AVFrame *frame)
+{
+    IMFSample *sample;
+    int ret;
+
+    ret = mf_receive_sample(avctx, &sample);
+    if (ret < 0)
+        return ret;
+
+    ret = mf_sample_to_avframe(avctx, sample, frame);
+    IMFSample_Release(sample);
+    return ret;
+}
+
+static int mf_receive_packet(AVCodecContext *avctx, AVPacket *avpkt)
+{
+    MFContext *c = avctx->priv_data;
+    IMFSample *sample;
+    int ret;
+
+    ret = mf_receive_sample(avctx, &sample);
+    if (ret < 0)
+        return ret;
+
+    ret = mf_sample_to_avpacket(avctx, sample, avpkt);
+    IMFSample_Release(sample);
+
+    if (c->send_extradata) {
+        ret = av_packet_add_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA,
+                                      c->send_extradata,
+                                      c->send_extradata_size);
+        if (ret < 0) {
+            av_log(avctx, AV_LOG_ERROR, "Failed to add extradata: %i\n", ret);
+            return ret;
+        }
+        c->send_extradata = NULL;
+        c->send_extradata_size = 0;
+    }
+
+    return ret;
+}
+
+static void mf_flush(AVCodecContext *avctx)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr = IMFTransform_ProcessMessage(c->mft, MFT_MESSAGE_COMMAND_FLUSH, 0);
+    if (FAILED(hr))
+        av_log(avctx, AV_LOG_ERROR, "flushing failed\n");
+
+    hr = IMFTransform_ProcessMessage(c->mft, MFT_MESSAGE_NOTIFY_END_OF_STREAM, 0);
+    if (FAILED(hr))
+        av_log(avctx, AV_LOG_ERROR, "could not end streaming (%s)\n", ff_hr_str(hr));
+
+    // In async mode, we have to wait until previous events have been flushed.
+    if (c->async_events) {
+        hr = IMFMediaEventGenerator_QueueEvent(c->async_events, ff_METransformMarker,
+                                               &GUID_NULL, S_OK, NULL);
+        if (FAILED(hr)) {
+            av_log(avctx, AV_LOG_ERROR, "sending marker failed\n");
+        } else {
+            while (!c->async_marker) {
+                if (mf_wait_events(avctx) < 0)
+                    break; // just don't lock up
+                c->async_need_input = c->async_have_output = c->draining_done = 0;
+            }
+            c->async_marker = 0;
+        }
+    }
+
+    c->draining = 0;
+    c->sample_sent = 0;
+    c->draining_done = 0;
+    c->async_need_input = c->async_have_output = 0;
+    hr = IMFTransform_ProcessMessage(c->mft, MFT_MESSAGE_NOTIFY_START_OF_STREAM, 0);
+    if (FAILED(hr))
+        av_log(avctx, AV_LOG_ERROR, "stream restart failed\n");
+}
+
+// Most encoders seem to enumerate supported audio formats on the output types,
+// at least as far as channel configuration and sample rate is concerned. Pick
+// the one which seems to match best.
+static int64_t mf_enca_output_score(AVCodecContext *avctx, IMFMediaType *type)
+{
+    MFContext *c = avctx->priv_data;
+    HRESULT hr;
+    UINT32 t;
+    GUID tg;
+    int64_t score = 0;
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, &t);
+    if (!FAILED(hr) && t == avctx->sample_rate)
+        score |= 1LL << 32;
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, &t);
+    if (!FAILED(hr) && t == avctx->channels)
+        score |= 2LL << 32;
+
+    hr = IMFAttributes_GetGUID(type, &MF_MT_SUBTYPE, &tg);
+    if (!FAILED(hr)) {
+        if (IsEqualGUID(&c->main_subtype, &tg))
+            score |= 4LL << 32;
+    }
+
+    // Select the bitrate (lowest priority).
+ hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_AVG_BYTES_PER_SECOND, &t); + if (!FAILED(hr)) { + int diff = (int)t - avctx->bit_rate / 8; + if (diff >= 0) { + score |= (1LL << 31) - diff; // prefer lower bitrate + } else { + score |= (1LL << 30) + diff; // prefer higher bitrate + } + } + + hr = IMFAttributes_GetUINT32(type, &MF_MT_AAC_PAYLOAD_TYPE, &t); + if (!FAILED(hr) && t != 0) + return -1; + + return score; +} + +static int64_t mf_enca_input_score(AVCodecContext *avctx, IMFMediaType *type) +{ + HRESULT hr; + UINT32 t; + int64_t score = 0; + + enum AVSampleFormat sformat = ff_media_type_to_sample_fmt((IMFAttributes *)type); + if (sformat == AV_SAMPLE_FMT_NONE) + return -1; // can not use + + if (sformat == avctx->sample_fmt) + score |= 1; + + hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, &t); + if (!FAILED(hr) && t == avctx->sample_rate) + score |= 2; + + hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, &t); + if (!FAILED(hr) && t == avctx->channels) + score |= 4; + + return score; +} + +static int mf_enca_input_adjust(AVCodecContext *avctx, IMFMediaType *type) +{ + HRESULT hr; + UINT32 t; + + enum AVSampleFormat sformat = ff_media_type_to_sample_fmt((IMFAttributes *)type); + if (sformat != avctx->sample_fmt) { + av_log(avctx, AV_LOG_ERROR, "unsupported input sample format set\n"); + return AVERROR(EINVAL); + } + + hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, &t); + if (FAILED(hr) || t != avctx->sample_rate) { + av_log(avctx, AV_LOG_ERROR, "unsupported input sample rate set\n"); + return AVERROR(EINVAL); + } + + hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, &t); + if (FAILED(hr) || t != avctx->channels) { + av_log(avctx, AV_LOG_ERROR, "unsupported input channel number set\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static int64_t mf_encv_output_score(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + GUID tg; + HRESULT hr; + int score = -1; + 
+ hr = IMFAttributes_GetGUID(type, &MF_MT_SUBTYPE, &tg); + if (!FAILED(hr)) { + if (IsEqualGUID(&c->main_subtype, &tg)) + score = 1; + } + + return score; +} + +static int mf_encv_output_adjust(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + AVRational frame_rate = av_inv_q(avctx->time_base); + frame_rate.den *= avctx->ticks_per_frame; + + ff_MFSetAttributeSize((IMFAttributes *)type, &MF_MT_FRAME_SIZE, avctx->width, avctx->height); + IMFAttributes_SetUINT32(type, &MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive); + + ff_MFSetAttributeRatio((IMFAttributes *)type, &MF_MT_FRAME_RATE, frame_rate.num, frame_rate.den); + + // (MS HEVC supports eAVEncH265VProfile_Main_420_8 only.) + if (avctx->codec_id == AV_CODEC_ID_H264) { + UINT32 profile = eAVEncH264VProfile_Base; + switch (avctx->profile) { + case FF_PROFILE_H264_MAIN: + profile = eAVEncH264VProfile_Main; + break; + case FF_PROFILE_H264_HIGH: + profile = eAVEncH264VProfile_High; + break; + } + IMFAttributes_SetUINT32(type, &MF_MT_MPEG2_PROFILE, profile); + } + + IMFAttributes_SetUINT32(type, &MF_MT_AVG_BITRATE, avctx->bit_rate); + + // Note that some of the ICodecAPI options must be set before SetOutputType. 
+ if (c->codec_api) { + if (avctx->bit_rate) + ICodecAPI_SetValue(c->codec_api, &ff_CODECAPI_AVEncCommonMeanBitRate, FF_VAL_VT_UI4(avctx->bit_rate)); + + if (c->opt_enc_rc >= 0) + ICodecAPI_SetValue(c->codec_api, &ff_CODECAPI_AVEncCommonRateControlMode, FF_VAL_VT_UI4(c->opt_enc_rc)); + + if (c->opt_enc_quality >= 0) + ICodecAPI_SetValue(c->codec_api, &ff_CODECAPI_AVEncCommonQuality, FF_VAL_VT_UI4(c->opt_enc_quality)); + + if (avctx->max_b_frames > 0) + ICodecAPI_SetValue(c->codec_api, &ff_CODECAPI_AVEncMPVDefaultBPictureCount, FF_VAL_VT_UI4(avctx->max_b_frames)); + + ICodecAPI_SetValue(c->codec_api, &ff_CODECAPI_AVEncH264CABACEnable, FF_VAL_VT_BOOL(1)); + } + + return 0; +} + +static int64_t mf_encv_input_score(AVCodecContext *avctx, IMFMediaType *type) +{ + enum AVPixelFormat pix_fmt = ff_media_type_to_pix_fmt((IMFAttributes *)type); + if (pix_fmt != avctx->pix_fmt) + return -1; // can not use + + return 0; +} + +static int mf_encv_input_adjust(AVCodecContext *avctx, IMFMediaType *type) +{ + enum AVPixelFormat pix_fmt = ff_media_type_to_pix_fmt((IMFAttributes *)type); + if (pix_fmt != avctx->pix_fmt) { + av_log(avctx, AV_LOG_ERROR, "unsupported input pixel format set\n"); + return AVERROR(EINVAL); + } + + return 0; +} + +static int mf_deca_input_adjust(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + + int sample_rate = avctx->sample_rate; + int channels = avctx->channels; + + IMFAttributes_SetGUID(type, &MF_MT_MAJOR_TYPE, &MFMediaType_Audio); + IMFAttributes_SetGUID(type, &MF_MT_SUBTYPE, &c->main_subtype); + + if (avctx->codec_id == AV_CODEC_ID_AAC) { + int assume_adts = avctx->extradata_size == 0; + // The first 12 bytes are the remainder of HEAACWAVEINFO. + // Fortunately all fields can be left 0. 
+ size_t ed_size = 12 + (size_t)avctx->extradata_size; + uint8_t *ed = av_mallocz(ed_size); + if (!ed) + return AVERROR(ENOMEM); + if (assume_adts) + ed[0] = 1; // wPayloadType=1 (ADTS) + if (avctx->extradata_size) { + MPEG4AudioConfig c = {0}; + memcpy(ed + 12, avctx->extradata, avctx->extradata_size); + if (avpriv_mpeg4audio_get_config(&c, avctx->extradata, avctx->extradata_size * 8, 0) >= 0) { + if (c.channels > 0) + channels = c.channels; + sample_rate = c.sample_rate; + } + } + IMFAttributes_SetBlob(type, &MF_MT_USER_DATA, ed, ed_size); + av_free(ed); + IMFAttributes_SetUINT32(type, &MF_MT_AAC_PAYLOAD_TYPE, assume_adts ? 1 : 0); + } else if (avctx->extradata_size) { + IMFAttributes_SetBlob(type, &MF_MT_USER_DATA, avctx->extradata, avctx->extradata_size); + } + + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, sample_rate); + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, channels); + + // WAVEFORMATEX stuff; might be required by some codecs. + if (avctx->block_align) + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_BLOCK_ALIGNMENT, avctx->block_align); + if (avctx->bit_rate) + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_AVG_BYTES_PER_SECOND, avctx->bit_rate / 8); + if (avctx->bits_per_coded_sample) + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_BITS_PER_SAMPLE, avctx->bits_per_coded_sample); + + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_PREFER_WAVEFORMATEX, 1); + + return 0; +} + +static int64_t mf_decv_input_score(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + uint32_t fourcc; + GUID tg; + HRESULT hr; + int score = -1; + + hr = IMFAttributes_GetGUID(type, &MF_MT_SUBTYPE, &tg); + if (!FAILED(hr)) { + if (IsEqualGUID(&c->main_subtype, &tg)) + score = 1; + + // For the MPEG-4 decoder (selects MPEG-4 variant via FourCC). 
+ if (ff_fourcc_from_guid(&tg, &fourcc) >= 0 && fourcc == avctx->codec_tag) + score = 2; + } + + return score; +} + +static int mf_decv_input_adjust(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + int use_extradata = avctx->extradata_size && !c->bsfc; + + IMFAttributes_SetGUID(type, &MF_MT_MAJOR_TYPE, &MFMediaType_Video); + + hr = IMFAttributes_GetItem(type, &MF_MT_SUBTYPE, NULL); + if (FAILED(hr)) + IMFAttributes_SetGUID(type, &MF_MT_SUBTYPE, &c->main_subtype); + + ff_MFSetAttributeSize((IMFAttributes *)type, &MF_MT_FRAME_SIZE, avctx->width, avctx->height); + + IMFAttributes_SetUINT32(type, &MF_MT_INTERLACE_MODE, MFVideoInterlace_MixedInterlaceOrProgressive); + + if (avctx->sample_aspect_ratio.num) + ff_MFSetAttributeRatio((IMFAttributes *)type, &MF_MT_PIXEL_ASPECT_RATIO, + avctx->sample_aspect_ratio.num, avctx->sample_aspect_ratio.den); + + if (avctx->bit_rate) + IMFAttributes_SetUINT32(type, &MF_MT_AVG_BITRATE, avctx->bit_rate); + + if (IsEqualGUID(&c->main_subtype, &MFVideoFormat_MP4V) || + IsEqualGUID(&c->main_subtype, &MFVideoFormat_MP43) || + IsEqualGUID(&c->main_subtype, &ff_MFVideoFormat_MP42)) { + if (avctx->extradata_size < 3 || + avctx->extradata[0] || avctx->extradata[1] || + avctx->extradata[2] != 1) + use_extradata = 0; + } + + if (use_extradata) + IMFAttributes_SetBlob(type, &MF_MT_USER_DATA, avctx->extradata, avctx->extradata_size); + + return 0; +} + +static int64_t mf_deca_input_score(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + GUID tg; + int score = -1; + + hr = IMFAttributes_GetGUID(type, &MF_MT_SUBTYPE, &tg); + if (!FAILED(hr)) { + if (IsEqualGUID(&c->main_subtype, &tg)) + score = 1; + } + + return score; +} + +// Sort the types by preference: +// - float sample format (highest) +// - sample depth +// - channel count +// - sample rate (lowest) +// Assume missing information means any is allowed. 
+static int64_t mf_deca_output_score(AVCodecContext *avctx, IMFMediaType *type) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + UINT32 t; + int sample_fmt; + int64_t score = 0; + + hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, &t); + if (!FAILED(hr)) + score |= t; + + // MF doesn't seem to tell us the native channel count. Try to get the + // same number of channels by looking at the input codec parameters. + // (With some luck they are correct, or even come from a parser.) + // Prefer equal or larger channel count. + hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, &t); + if (!FAILED(hr)) { + int channels = av_get_channel_layout_nb_channels(avctx->request_channel_layout); + int64_t ch_score = 0; + int diff; + if (channels < 1) + channels = c->original_channels; + diff = (int)t - channels; + if (diff >= 0) { + ch_score |= (1LL << 7) - diff; + } else { + ch_score |= (1LL << 6) + diff; + } + score |= ch_score << 20; + } + + sample_fmt = ff_media_type_to_sample_fmt((IMFAttributes *)type); + if (sample_fmt == AV_SAMPLE_FMT_NONE) { + score = -1; + } else { + score |= av_get_bytes_per_sample(sample_fmt) << 28; + if (sample_fmt == AV_SAMPLE_FMT_FLT) + score |= 1LL << 32; + } + + return score; +} + +static int mf_deca_output_adjust(AVCodecContext *avctx, IMFMediaType *type) +{ + int block_align; + HRESULT hr; + + // Some decoders (wmapro) do not list any output types. I have no clue + // what we're supposed to do, and this is surely a MFT bug. Setting an + // arbitrary output type helps. 
+ hr = IMFAttributes_GetItem(type, &MF_MT_MAJOR_TYPE, NULL); + if (!FAILED(hr)) + return 0; + + IMFAttributes_SetGUID(type, &MF_MT_MAJOR_TYPE, &MFMediaType_Audio); + + block_align = 4; + IMFAttributes_SetGUID(type, &MF_MT_SUBTYPE, &MFAudioFormat_Float); + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_BITS_PER_SAMPLE, 32); + + block_align *= avctx->channels; + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_NUM_CHANNELS, avctx->channels); + + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_BLOCK_ALIGNMENT, block_align); + + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, avctx->sample_rate); + + IMFAttributes_SetUINT32(type, &MF_MT_AUDIO_AVG_BYTES_PER_SECOND, block_align * avctx->sample_rate); + + return 0; +} + +static int64_t mf_decv_output_score(AVCodecContext *avctx, IMFMediaType *type) +{ + enum AVPixelFormat pix_fmt = ff_media_type_to_pix_fmt((IMFAttributes *)type); + if (pix_fmt == AV_PIX_FMT_NONE) + return -1; + if (pix_fmt == AV_PIX_FMT_P010) + return 2; + if (pix_fmt == AV_PIX_FMT_NV12) + return 1; + return 0; +} + +static int mf_choose_output_type(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + int ret; + IMFMediaType *out_type = NULL; + int64_t out_type_score = -1; + int out_type_index = -1; + int n; + + av_log(avctx, AV_LOG_VERBOSE, "output types:\n"); + for (n = 0; ; n++) { + IMFMediaType *type; + int64_t score = -1; + + hr = IMFTransform_GetOutputAvailableType(c->mft, c->out_stream_id, n, &type); + if (hr == MF_E_NO_MORE_TYPES || hr == E_NOTIMPL) + break; + if (hr == MF_E_TRANSFORM_TYPE_NOT_SET) { + av_log(avctx, AV_LOG_VERBOSE, "(need to set input type)\n"); + ret = 0; + goto done; + } + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "error getting output type: %s\n", ff_hr_str(hr)); + ret = AVERROR_EXTERNAL; + goto done; + } + + av_log(avctx, AV_LOG_VERBOSE, "output type %d:\n", n); + ff_media_type_dump(avctx, type); + + if (c->is_dec && c->is_video) { + score = mf_decv_output_score(avctx, type); + } else if 
(c->is_dec && c->is_audio) { + score = mf_deca_output_score(avctx, type); + } else if (c->is_enc && c->is_video) { + score = mf_encv_output_score(avctx, type); + } else if (c->is_enc && c->is_audio) { + score = mf_enca_output_score(avctx, type); + } + + if (score > out_type_score) { + if (out_type) + IMFMediaType_Release(out_type); + out_type = type; + out_type_score = score; + out_type_index = n; + IMFMediaType_AddRef(out_type); + } + + IMFMediaType_Release(type); + } + + if (out_type) { + av_log(avctx, AV_LOG_VERBOSE, "picking output type %d.\n", out_type_index); + } else { + hr = MFCreateMediaType(&out_type); + if (FAILED(hr)) { + ret = AVERROR(ENOMEM); + goto done; + } + } + + ret = 0; + if (c->is_dec && c->is_video) { + // not needed + } else if (c->is_dec && c->is_audio) { + ret = mf_deca_output_adjust(avctx, out_type); + } else if (c->is_enc && c->is_video) { + ret = mf_encv_output_adjust(avctx, out_type); + } else if (c->is_enc && c->is_audio) { + // not needed + } + + if (ret >= 0) { + av_log(avctx, AV_LOG_VERBOSE, "setting output type:\n"); + ff_media_type_dump(avctx, out_type); + + hr = IMFTransform_SetOutputType(c->mft, c->out_stream_id, out_type, 0); + if (!FAILED(hr)) { + ret = 1; + } else if (hr == MF_E_TRANSFORM_TYPE_NOT_SET) { + av_log(avctx, AV_LOG_VERBOSE, "rejected - need to set input type\n"); + ret = 0; + } else { + av_log(avctx, AV_LOG_ERROR, "could not set output type (%s)\n", ff_hr_str(hr)); + ret = AVERROR_EXTERNAL; + } + } + +done: + if (out_type) + IMFMediaType_Release(out_type); + return ret; +} + +static int mf_choose_input_type(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + int ret; + IMFMediaType *in_type = NULL; + int64_t in_type_score = -1; + int in_type_index = -1; + int n; + + av_log(avctx, AV_LOG_VERBOSE, "input types:\n"); + for (n = 0; ; n++) { + IMFMediaType *type = NULL; + int64_t score = -1; + + hr = IMFTransform_GetInputAvailableType(c->mft, c->in_stream_id, n, &type); + if (hr == 
MF_E_NO_MORE_TYPES || hr == E_NOTIMPL) + break; + if (hr == MF_E_TRANSFORM_TYPE_NOT_SET) { + av_log(avctx, AV_LOG_VERBOSE, "(need to set output type 1)\n"); + ret = 0; + goto done; + } + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "error getting input type: %s\n", ff_hr_str(hr)); + ret = AVERROR_EXTERNAL; + goto done; + } + + av_log(avctx, AV_LOG_VERBOSE, "input type %d:\n", n); + ff_media_type_dump(avctx, type); + + if (c->is_dec && c->is_video) { + score = mf_decv_input_score(avctx, type); + } else if (c->is_dec && c->is_audio) { + score = mf_deca_input_score(avctx, type); + } else if (c->is_enc && c->is_video) { + score = mf_encv_input_score(avctx, type); + } else if (c->is_enc && c->is_audio) { + score = mf_enca_input_score(avctx, type); + } + + if (score > in_type_score) { + if (in_type) + IMFMediaType_Release(in_type); + in_type = type; + in_type_score = score; + in_type_index = n; + IMFMediaType_AddRef(in_type); + } + + IMFMediaType_Release(type); + } + + if (in_type) { + av_log(avctx, AV_LOG_VERBOSE, "picking input type %d.\n", in_type_index); + } else { + // Some buggy MFTs (WMA encoder) fail to return MF_E_TRANSFORM_TYPE_NOT_SET. 
+ if (c->is_enc) { + av_log(avctx, AV_LOG_VERBOSE, "(need to set output type 2)\n"); + ret = 0; + goto done; + } + hr = MFCreateMediaType(&in_type); + if (FAILED(hr)) { + ret = AVERROR(ENOMEM); + goto done; + } + } + + ret = 0; + if (c->is_dec && c->is_video) { + ret = mf_decv_input_adjust(avctx, in_type); + } else if (c->is_dec && c->is_audio) { + ret = mf_deca_input_adjust(avctx, in_type); + } else if (c->is_enc && c->is_video) { + ret = mf_encv_input_adjust(avctx, in_type); + } else if (c->is_enc && c->is_audio) { + ret = mf_enca_input_adjust(avctx, in_type); + } + + if (ret >= 0) { + av_log(avctx, AV_LOG_VERBOSE, "setting input type:\n"); + ff_media_type_dump(avctx, in_type); + + hr = IMFTransform_SetInputType(c->mft, c->in_stream_id, in_type, 0); + if (!FAILED(hr)) { + ret = 1; + } else if (hr == MF_E_TRANSFORM_TYPE_NOT_SET) { + av_log(avctx, AV_LOG_VERBOSE, "rejected - need to set output type\n"); + ret = 0; + } else { + av_log(avctx, AV_LOG_ERROR, "could not set input type (%s)\n", ff_hr_str(hr)); + ret = AVERROR_EXTERNAL; + } + } + +done: + if (in_type) + IMFMediaType_Release(in_type); + return ret; +} + +static int mf_negotiate_types(AVCodecContext *avctx) +{ + // This follows steps 1-5 on: + // https://msdn.microsoft.com/en-us/library/windows/desktop/aa965264(v=vs.85).aspx + // If every MFT implementer does this correctly, this loop should at worst + // be repeated once. 
+ int need_input = 1, need_output = 1; + int n; + for (n = 0; n < 2 && (need_input || need_output); n++) { + int ret; + ret = mf_choose_input_type(avctx); + if (ret < 0) + return ret; + need_input = ret < 1; + ret = mf_choose_output_type(avctx); + if (ret < 0) + return ret; + need_output = ret < 1; + } + if (need_input || need_output) { + av_log(avctx, AV_LOG_ERROR, "format negotiation failed (%d/%d)\n", + need_input, need_output); + return AVERROR_EXTERNAL; + } + return 0; +} + +static int mf_setup_context(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + int ret; + + hr = IMFTransform_GetInputStreamInfo(c->mft, c->in_stream_id, &c->in_info); + if (FAILED(hr)) + return AVERROR_EXTERNAL; + av_log(avctx, AV_LOG_VERBOSE, "in_info: size=%d, align=%d\n", + (int)c->in_info.cbSize, (int)c->in_info.cbAlignment); + + hr = IMFTransform_GetOutputStreamInfo(c->mft, c->out_stream_id, &c->out_info); + if (FAILED(hr)) + return AVERROR_EXTERNAL; + c->out_stream_provides_samples = + (c->out_info.dwFlags & MFT_OUTPUT_STREAM_PROVIDES_SAMPLES) || + (c->out_info.dwFlags & MFT_OUTPUT_STREAM_CAN_PROVIDE_SAMPLES); + av_log(avctx, AV_LOG_VERBOSE, "out_info: size=%d, align=%d%s\n", + (int)c->out_info.cbSize, (int)c->out_info.cbAlignment, + c->out_stream_provides_samples ? 
" (provides samples)" : ""); + + if ((ret = mf_output_type_get(avctx)) < 0) + return ret; + + return 0; +} + +static LONG mf_codecapi_get_int(ICodecAPI *capi, const GUID *guid, LONG def) +{ + LONG ret = def; + VARIANT v; + HRESULT hr = ICodecAPI_GetValue(capi, &ff_CODECAPI_AVDecVideoMaxCodedWidth, &v); + if (FAILED(hr)) + return ret; + if (v.vt == VT_I4) + ret = v.lVal; + if (v.vt == VT_UI4) + ret = v.ulVal; + VariantClear(&v); + return ret; +} + +static int mf_check_codec_requirements(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + + if (c->is_dec && c->is_video && c->codec_api) { + LONG w = mf_codecapi_get_int(c->codec_api, &ff_CODECAPI_AVDecVideoMaxCodedWidth, 0); + LONG h = mf_codecapi_get_int(c->codec_api, &ff_CODECAPI_AVDecVideoMaxCodedHeight, 0); + + if (w <= 0 || h <= 0) + return 0; + + av_log(avctx, AV_LOG_VERBOSE, "Max. supported video size: %dx%d\n", (int)w, (int)h); + + // avctx generally has only the cropped size. Assume the coded size is + // the same size, rounded up to the next macroblock boundary. + if (avctx->width > w || avctx->height > h) { + av_log(avctx, AV_LOG_ERROR, "Video size %dx%d larger than supported size.\n", + avctx->width, avctx->height); + return AVERROR(EINVAL); + } + } + + return 0; +} + +static int mf_unlock_async(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + IMFAttributes *attrs; + UINT32 v; + int res = AVERROR_EXTERNAL; + + // For hw encoding we unfortunately need it, otherwise don't risk it. 
+ if (!(c->is_enc && c->is_video && c->opt_enc_d3d)) + return 0; + + hr = IMFTransform_GetAttributes(c->mft, &attrs); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "error retrieving MFT attributes: %s\n", ff_hr_str(hr)); + goto err; + } + + hr = IMFAttributes_GetUINT32(attrs, &MF_TRANSFORM_ASYNC, &v); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "error querying async: %s\n", ff_hr_str(hr)); + goto err; + } + + if (!v) { + av_log(avctx, AV_LOG_ERROR, "hardware MFT is not async\n"); + goto err; + } + + hr = IMFAttributes_SetUINT32(attrs, &MF_TRANSFORM_ASYNC_UNLOCK, TRUE); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "could not set async unlock: %s\n", ff_hr_str(hr)); + goto err; + } + + hr = IMFTransform_QueryInterface(c->mft, &IID_IMFMediaEventGenerator, (void **)&c->async_events); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "could not get async interface\n"); + goto err; + } + + res = 0; + +err: + IMFAttributes_Release(attrs); + return res; +} + +static int mf_create(void *log, IMFTransform **mft, const AVCodec *codec, int use_hw) +{ + int is_audio = codec->type == AVMEDIA_TYPE_AUDIO; + int is_dec = av_codec_is_decoder(codec); + const CLSID *subtype = ff_codec_to_mf_subtype(codec->id); + MFT_REGISTER_TYPE_INFO reg = {0}; + GUID category; + int ret; + + *mft = NULL; + + if (!subtype) + return AVERROR(ENOSYS); + + reg.guidSubtype = *subtype; + + if (is_dec) { + if (is_audio) { + reg.guidMajorType = MFMediaType_Audio; + category = MFT_CATEGORY_AUDIO_DECODER; + } else { + reg.guidMajorType = MFMediaType_Video; + category = MFT_CATEGORY_VIDEO_DECODER; + } + + if ((ret = ff_instantiate_mf(log, category, &reg, NULL, use_hw, mft)) < 0) + return ret; + } else { + if (is_audio) { + reg.guidMajorType = MFMediaType_Audio; + category = MFT_CATEGORY_AUDIO_ENCODER; + } else { + reg.guidMajorType = MFMediaType_Video; + category = MFT_CATEGORY_VIDEO_ENCODER; + } + + if ((ret = ff_instantiate_mf(log, category, NULL, &reg, use_hw, mft)) < 0) + return ret; + } + + return
0; +} + +static int mf_init(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + HRESULT hr; + int ret; + const CLSID *subtype = ff_codec_to_mf_subtype(avctx->codec_id); + int use_hw = 0; + + c->original_channels = avctx->channels; + + c->is_dec = av_codec_is_decoder(avctx->codec); + c->is_enc = !c->is_dec; + c->is_audio = avctx->codec_type == AVMEDIA_TYPE_AUDIO; + c->is_video = !c->is_audio; + + if (c->is_video && c->is_enc && c->opt_enc_d3d) + use_hw = 1; + + if (!subtype) + return AVERROR(ENOSYS); + + c->main_subtype = *subtype; + + if ((ret = mf_create(avctx, &c->mft, avctx->codec, use_hw)) < 0) + return ret; + + if ((ret = mf_unlock_async(avctx)) < 0) + return ret; + + hr = IMFTransform_QueryInterface(c->mft, &IID_ICodecAPI, (void **)&c->codec_api); + if (!FAILED(hr)) + av_log(avctx, AV_LOG_VERBOSE, "MFT supports ICodecAPI.\n"); + + if (c->is_dec) { + const char *bsf = NULL; + + if (avctx->codec->id == AV_CODEC_ID_H264 && avctx->extradata && avctx->extradata[0] == 1) + bsf = "h264_mp4toannexb"; + + if (avctx->codec->id == AV_CODEC_ID_HEVC && avctx->extradata && avctx->extradata[0] == 1) + bsf = "hevc_mp4toannexb"; + + if (bsf) { + const AVBitStreamFilter *bsfc = av_bsf_get_by_name(bsf); + if (!bsfc) { + ret = AVERROR(ENOSYS); + goto bsf_done; + } + if ((ret = av_bsf_alloc(bsfc, &c->bsfc)) < 0) + goto bsf_done; + if ((ret = avcodec_parameters_from_context(c->bsfc->par_in, avctx)) < 0) + goto bsf_done; + if ((ret = av_bsf_init(c->bsfc)) < 0) + goto bsf_done; + ret = 0; + bsf_done: + if (ret < 0) { + av_log(avctx, AV_LOG_ERROR, "Cannot open the %s BSF!\n", bsf); + return ret; + } + } + } + + if ((ret = mf_check_codec_requirements(avctx)) < 0) + return ret; + + hr = IMFTransform_GetStreamIDs(c->mft, 1, &c->in_stream_id, 1, &c->out_stream_id); + if (hr == E_NOTIMPL) { + c->in_stream_id = c->out_stream_id = 0; + } else if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "could not get stream IDs (%s)\n", ff_hr_str(hr)); + return AVERROR_EXTERNAL; + } + + if 
((ret = mf_negotiate_types(avctx)) < 0) + return ret; + + if ((ret = mf_setup_context(avctx)) < 0) + return ret; + + hr = IMFTransform_ProcessMessage(c->mft, MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, 0); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "could not start streaming (%s)\n", ff_hr_str(hr)); + return AVERROR_EXTERNAL; + } + + hr = IMFTransform_ProcessMessage(c->mft, MFT_MESSAGE_NOTIFY_START_OF_STREAM, 0); + if (FAILED(hr)) { + av_log(avctx, AV_LOG_ERROR, "could not start stream (%s)\n", ff_hr_str(hr)); + return AVERROR_EXTERNAL; + } + + c->lavc_init_done = 1; + + return 0; +} + +static int mf_close(AVCodecContext *avctx) +{ + MFContext *c = avctx->priv_data; + int uninit_com = c->mft != NULL; + HANDLE lib; + + if (c->codec_api) + ICodecAPI_Release(c->codec_api); + + if (c->async_events) + IMFMediaEventGenerator_Release(c->async_events); + + av_bsf_free(&c->bsfc); + + // At least async MFTs require this to be called to truly terminate it. + // Of course, mingw is missing both the import lib stub for + // MFShutdownObject, as well as the entire IMFShutdown interface. 
+ lib = LoadLibraryW(L"mf.dll"); + if (lib) { + HRESULT (WINAPI *MFShutdownObject_ptr)(IUnknown *pUnk) + = (void *)GetProcAddress(lib, "MFShutdownObject"); + if (MFShutdownObject_ptr) + MFShutdownObject_ptr((IUnknown *)c->mft); + FreeLibrary(lib); + } + + IMFTransform_Release(c->mft); + + if (uninit_com) + CoUninitialize(); + + if (c->is_enc) { + av_freep(&avctx->extradata); + avctx->extradata_size = 0; + + av_freep(&c->send_extradata); + c->send_extradata_size = 0; + } + + return 0; +} + +#define OFFSET(x) offsetof(MFContext, x) + +#define MF_DECODER(MEDIATYPE, NAME, ID, OPTS) \ + static const AVClass ff_ ## NAME ## _mf_decoder_class = { \ + .class_name = #NAME "_mf", \ + .item_name = av_default_item_name, \ + .option = OPTS, \ + .version = LIBAVUTIL_VERSION_INT, \ + }; \ + AVCodec ff_ ## NAME ## _mf_decoder = { \ + .priv_class = &ff_ ## NAME ## _mf_decoder_class, \ + .name = #NAME "_mf", \ + .long_name = NULL_IF_CONFIG_SMALL(#ID " via MediaFoundation"), \ + .type = AVMEDIA_TYPE_ ## MEDIATYPE, \ + .id = AV_CODEC_ID_ ## ID, \ + .priv_data_size = sizeof(MFContext), \ + .init = mf_init, \ + .close = mf_close, \ + .send_packet = mf_send_packet, \ + .receive_frame = mf_receive_frame, \ + .flush = mf_flush, \ + .capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_AVOID_PROBING, \ + .caps_internal = FF_CODEC_CAP_SETS_PKT_DTS | \ + FF_CODEC_CAP_INIT_THREADSAFE | \ + FF_CODEC_CAP_INIT_CLEANUP, \ + }; + +MF_DECODER(AUDIO, ac3, AC3, NULL); +MF_DECODER(AUDIO, eac3, EAC3, NULL); +MF_DECODER(AUDIO, aac, AAC, NULL); +MF_DECODER(AUDIO, mp1, MP1, NULL); +MF_DECODER(AUDIO, mp2, MP2, NULL); +MF_DECODER(AUDIO, mp3, MP3, NULL); +MF_DECODER(AUDIO, wmav1, WMAV1, NULL); +MF_DECODER(AUDIO, wmav2, WMAV2, NULL); +MF_DECODER(AUDIO, wmalossless, WMALOSSLESS, NULL); +MF_DECODER(AUDIO, wmapro, WMAPRO, NULL); +MF_DECODER(AUDIO, wmavoice, WMAVOICE, NULL); + +#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM +static const AVOption vdec_opts[] = { + {NULL} +}; + +#define 
MF_VIDEO_DECODER(NAME, ID) \ + MF_DECODER(VIDEO, NAME, ID, vdec_opts); + +MF_VIDEO_DECODER(h264, H264); +MF_VIDEO_DECODER(hevc, HEVC); +MF_VIDEO_DECODER(vc1, VC1); +MF_VIDEO_DECODER(wmv1, WMV1); +MF_VIDEO_DECODER(wmv2, WMV2); +MF_VIDEO_DECODER(wmv3, WMV3); +MF_VIDEO_DECODER(mpeg2, MPEG2VIDEO); +MF_VIDEO_DECODER(mpeg4, MPEG4); +MF_VIDEO_DECODER(msmpeg4v1, MSMPEG4V1); +MF_VIDEO_DECODER(msmpeg4v2, MSMPEG4V2); +MF_VIDEO_DECODER(msmpeg4v3, MSMPEG4V3); +MF_VIDEO_DECODER(mjpeg, MJPEG); + +#define MF_ENCODER(MEDIATYPE, NAME, ID, OPTS, EXTRA) \ + static const AVClass ff_ ## NAME ## _mf_encoder_class = { \ + .class_name = #NAME "_mf", \ + .item_name = av_default_item_name, \ + .option = OPTS, \ + .version = LIBAVUTIL_VERSION_INT, \ + }; \ + AVCodec ff_ ## NAME ## _mf_encoder = { \ + .priv_class = &ff_ ## NAME ## _mf_encoder_class, \ + .name = #NAME "_mf", \ + .long_name = NULL_IF_CONFIG_SMALL(#ID " via MediaFoundation"), \ + .type = AVMEDIA_TYPE_ ## MEDIATYPE, \ + .id = AV_CODEC_ID_ ## ID, \ + .priv_data_size = sizeof(MFContext), \ + .init = mf_init, \ + .close = mf_close, \ + .send_frame = mf_send_frame, \ + .receive_packet = mf_receive_packet, \ + EXTRA \ + .capabilities = AV_CODEC_CAP_DELAY, \ + .caps_internal = FF_CODEC_CAP_INIT_THREADSAFE | \ + FF_CODEC_CAP_INIT_CLEANUP, \ + }; + +#define AFMTS \ + .sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_S16, \ + AV_SAMPLE_FMT_NONE }, + +MF_ENCODER(AUDIO, aac, AAC, NULL, AFMTS); +MF_ENCODER(AUDIO, ac3, AC3, NULL, AFMTS); +MF_ENCODER(AUDIO, mp3, MP3, NULL, AFMTS); + +#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM +static const AVOption venc_opts[] = { + {"rate_control", "Select rate control mode", OFFSET(opt_enc_rc), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, VE, "rate_control"}, + { "default", "Default mode", 0, AV_OPT_TYPE_CONST, {.i64 = -1}, 0, 0, VE, "rate_control"}, + { "cbr", "CBR mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_CBR}, 0, 0, VE, "rate_control"}, + { "pc_vbr", 
"Peak constrained VBR mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_PeakConstrainedVBR}, 0, 0, VE, "rate_control"}, + { "u_vbr", "Unconstrained VBR mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_UnconstrainedVBR}, 0, 0, VE, "rate_control"}, + { "quality", "Quality mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_Quality}, 0, 0, VE, "rate_control" }, + // The following rate_control modes require Windows 8. + { "ld_vbr", "Low delay VBR mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_LowDelayVBR}, 0, 0, VE, "rate_control"}, + { "g_vbr", "Global VBR mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_GlobalVBR}, 0, 0, VE, "rate_control" }, + { "gld_vbr", "Global low delay VBR mode", 0, AV_OPT_TYPE_CONST, {.i64 = ff_eAVEncCommonRateControlMode_GlobalLowDelayVBR}, 0, 0, VE, "rate_control"}, + {"quality", "Quality", OFFSET(opt_enc_quality), AV_OPT_TYPE_INT, {.i64 = -1}, -1, 100, VE}, + {"hw_encoding", "Force hardware encoding", OFFSET(opt_enc_d3d), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 1, VE, "hw_encoding"}, + {NULL} +}; + +#define VFMTS \ + .pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12, \ + AV_PIX_FMT_YUV420P, \ + AV_PIX_FMT_NONE }, + +MF_ENCODER(VIDEO, h264, H264, venc_opts, VFMTS); +MF_ENCODER(VIDEO, hevc, HEVC, venc_opts, VFMTS); diff --git a/libavcodec/mf_utils.c b/libavcodec/mf_utils.c new file mode 100644 index 0000000000..f9f0a9e39f --- /dev/null +++ b/libavcodec/mf_utils.c @@ -0,0 +1,696 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. 
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+#define COBJMACROS
+#if !defined(_WIN32_WINNT) || _WIN32_WINNT < 0x0601
+#undef _WIN32_WINNT
+#define _WIN32_WINNT 0x0601
+#endif
+
+#include "mf_utils.h"
+
+HRESULT ff_MFGetAttributeSize(
+    _In_  IMFAttributes *pAttributes,
+    _In_  REFGUID guidKey,
+    _Out_ UINT32 *punWidth,
+    _Out_ UINT32 *punHeight
+)
+{
+    UINT64 t;
+    HRESULT hr = IMFAttributes_GetUINT64(pAttributes, guidKey, &t);
+    if (!FAILED(hr)) {
+        *punWidth  = t >> 32;
+        *punHeight = (UINT32)t;
+    }
+    return hr;
+}
+
+HRESULT ff_MFSetAttributeSize(
+    _In_ IMFAttributes *pAttributes,
+    _In_ REFGUID guidKey,
+    _In_ UINT32 unWidth,
+    _In_ UINT32 unHeight
+)
+{
+    UINT64 t = (((UINT64)unWidth) << 32) | unHeight;
+    return IMFAttributes_SetUINT64(pAttributes, guidKey, t);
+}
+
+#define ff_MFSetAttributeRatio ff_MFSetAttributeSize
+#define ff_MFGetAttributeRatio ff_MFGetAttributeSize
+
+// Awesome: mingw lacks MFTEnumEx in the mfplat.a import lib. Screw it.
+HRESULT ff_MFTEnumEx(
+    _In_ GUID guidCategory,
+    _In_ UINT32 Flags,
+    _In_ const MFT_REGISTER_TYPE_INFO *pInputType,
+    _In_ const MFT_REGISTER_TYPE_INFO *pOutputType,
+    _Out_ IMFActivate ***pppMFTActivate,
+    _Out_ UINT32 *pcMFTActivate
+)
+{
+    HRESULT (WINAPI *MFTEnumEx_ptr)(
+        _In_ GUID guidCategory,
+        _In_ UINT32 Flags,
+        _In_ const MFT_REGISTER_TYPE_INFO *pInputType,
+        _In_ const MFT_REGISTER_TYPE_INFO *pOutputType,
+        _Out_ IMFActivate ***pppMFTActivate,
+        _Out_ UINT32 *pcMFTActivate
+    ) = NULL;
+    HANDLE lib = GetModuleHandleW(L"mfplat.dll");
+    if (lib)
+        MFTEnumEx_ptr = (void *)GetProcAddress(lib, "MFTEnumEx");
+    if (!MFTEnumEx_ptr)
+        return E_FAIL;
+    return MFTEnumEx_ptr(guidCategory,
+                         Flags,
+                         pInputType,
+                         pOutputType,
+                         pppMFTActivate,
+                         pcMFTActivate);
+}
+
+char *ff_hr_str_buf(char *buf, size_t size, HRESULT hr)
+{
+#define HR(x) case x: return (char *) # x;
+    switch (hr) {
+    HR(S_OK)
+    HR(E_UNEXPECTED)
+    HR(MF_E_INVALIDMEDIATYPE)
+    HR(MF_E_INVALIDSTREAMNUMBER)
+    HR(MF_E_INVALIDTYPE)
+    HR(MF_E_TRANSFORM_CANNOT_CHANGE_MEDIATYPE_WHILE_PROCESSING)
+    HR(MF_E_TRANSFORM_TYPE_NOT_SET)
+    HR(MF_E_UNSUPPORTED_D3D_TYPE)
+    HR(MF_E_TRANSFORM_NEED_MORE_INPUT)
+    HR(MF_E_TRANSFORM_STREAM_CHANGE)
+    HR(MF_E_NOTACCEPTING)
+    HR(MF_E_NO_SAMPLE_TIMESTAMP)
+    HR(MF_E_NO_SAMPLE_DURATION)
+#undef HR
+    }
+    snprintf(buf, size, "%x", (unsigned)hr);
+    return buf;
+}
+
+// If fill_data!=NULL, initialize the buffer and set the length. (This is a
+// subtle but important difference: some decoders want CurrentLength==0 on
+// provided output buffers.)
+IMFSample *ff_create_memory_sample(void *fill_data, size_t size, size_t align)
+{
+    HRESULT hr;
+    IMFSample *sample;
+    IMFMediaBuffer *buffer;
+
+    hr = MFCreateSample(&sample);
+    if (FAILED(hr))
+        return NULL;
+
+    align = FFMAX(align, 16); // 16 is "recommended", even if not required
+
+    hr = MFCreateAlignedMemoryBuffer(size, align - 1, &buffer);
+    if (FAILED(hr)) {
+        IMFSample_Release(sample); // don't leak the sample on failure
+        return NULL;
+    }
+
+    if (fill_data) {
+        BYTE *tmp;
+
+        hr = IMFMediaBuffer_Lock(buffer, &tmp, NULL, NULL);
+        if (FAILED(hr)) {
+            IMFMediaBuffer_Release(buffer);
+            IMFSample_Release(sample);
+            return NULL;
+        }
+        memcpy(tmp, fill_data, size);
+
+        IMFMediaBuffer_SetCurrentLength(buffer, size);
+        IMFMediaBuffer_Unlock(buffer);
+    }
+
+    IMFSample_AddBuffer(sample, buffer);
+    IMFMediaBuffer_Release(buffer);
+
+    return sample;
+}
+
+enum AVSampleFormat ff_media_type_to_sample_fmt(IMFAttributes *type)
+{
+    HRESULT hr;
+    UINT32 bits;
+    GUID subtype;
+
+    hr = IMFAttributes_GetUINT32(type, &MF_MT_AUDIO_BITS_PER_SAMPLE, &bits);
+    if (FAILED(hr))
+        return AV_SAMPLE_FMT_NONE;
+
+    hr = IMFAttributes_GetGUID(type, &MF_MT_SUBTYPE, &subtype);
+    if (FAILED(hr))
+        return AV_SAMPLE_FMT_NONE;
+
+    if (IsEqualGUID(&subtype, &MFAudioFormat_PCM)) {
+        switch (bits) {
+        case 8:  return AV_SAMPLE_FMT_U8;
+        case 16: return AV_SAMPLE_FMT_S16;
+        case 32: return AV_SAMPLE_FMT_S32;
+        }
+    } else if (IsEqualGUID(&subtype, &MFAudioFormat_Float)) {
+        switch (bits) {
+        case 32: return AV_SAMPLE_FMT_FLT;
+        case 64: return AV_SAMPLE_FMT_DBL;
+        }
+    }
+
+    return AV_SAMPLE_FMT_NONE;
+}
+
+struct mf_pix_fmt_entry {
+    const GUID *guid;
+    enum AVPixelFormat pix_fmt;
+};
+
+static const struct mf_pix_fmt_entry mf_pix_fmts[] = {
+    {&MFVideoFormat_IYUV, AV_PIX_FMT_YUV420P},
+    {&MFVideoFormat_I420, AV_PIX_FMT_YUV420P},
+    {&MFVideoFormat_NV12, AV_PIX_FMT_NV12},
+    {&MFVideoFormat_P010, AV_PIX_FMT_P010},
+    {&MFVideoFormat_P016, AV_PIX_FMT_P016},
+    {&MFVideoFormat_YUY2, AV_PIX_FMT_YUYV422},
+};
+
+enum AVPixelFormat ff_media_type_to_pix_fmt(IMFAttributes *type)
+{ + HRESULT hr; + GUID subtype; + int i; + + hr = IMFAttributes_GetGUID(type, &MF_MT_SUBTYPE, &subtype); + if (FAILED(hr)) + return AV_PIX_FMT_NONE; + + for (i = 0; i < FF_ARRAY_ELEMS(mf_pix_fmts); i++) { + if (IsEqualGUID(&subtype, mf_pix_fmts[i].guid)) + return mf_pix_fmts[i].pix_fmt; + } + + return AV_PIX_FMT_NONE; +} + +const GUID *ff_pix_fmt_to_guid(enum AVPixelFormat pix_fmt) +{ + int i; + + for (i = 0; i < FF_ARRAY_ELEMS(mf_pix_fmts); i++) { + if (mf_pix_fmts[i].pix_fmt == pix_fmt) + return mf_pix_fmts[i].guid; + } + + return NULL; +} + +// If this GUID is of the form XXXXXXXX-0000-0010-8000-00AA00389B71, then +// extract the XXXXXXXX prefix as FourCC (oh the pain). +int ff_fourcc_from_guid(GUID *guid, uint32_t *out_fourcc) +{ + if (guid->Data2 == 0 && guid->Data3 == 0x0010 && + guid->Data4[0] == 0x80 && + guid->Data4[1] == 0x00 && + guid->Data4[2] == 0x00 && + guid->Data4[3] == 0xAA && + guid->Data4[4] == 0x00 && + guid->Data4[5] == 0x38 && + guid->Data4[6] == 0x9B && + guid->Data4[7] == 0x71) { + *out_fourcc = guid->Data1; + return 0; + } + + *out_fourcc = 0; + return AVERROR_UNKNOWN; +} + +struct GUID_Entry { + const GUID *guid; + const char *name; +}; + +#define GUID_ENTRY(var) {&(var), # var} + +static struct GUID_Entry guid_names[] = { + GUID_ENTRY(MFT_FRIENDLY_NAME_Attribute), + GUID_ENTRY(MFT_TRANSFORM_CLSID_Attribute), + GUID_ENTRY(MFT_ENUM_HARDWARE_URL_Attribute), + GUID_ENTRY(MFT_CONNECTED_STREAM_ATTRIBUTE), + GUID_ENTRY(MFT_CONNECTED_TO_HW_STREAM), + GUID_ENTRY(MF_SA_D3D_AWARE), + GUID_ENTRY(ff_MF_SA_MINIMUM_OUTPUT_SAMPLE_COUNT), + GUID_ENTRY(ff_MF_SA_MINIMUM_OUTPUT_SAMPLE_COUNT_PROGRESSIVE), + GUID_ENTRY(ff_MF_SA_D3D11_BINDFLAGS), + GUID_ENTRY(ff_MF_SA_D3D11_USAGE), + GUID_ENTRY(ff_MF_SA_D3D11_AWARE), + GUID_ENTRY(ff_MF_SA_D3D11_SHARED), + GUID_ENTRY(ff_MF_SA_D3D11_SHARED_WITHOUT_MUTEX), + GUID_ENTRY(MF_MT_SUBTYPE), + GUID_ENTRY(MF_MT_MAJOR_TYPE), + GUID_ENTRY(MF_MT_AUDIO_SAMPLES_PER_SECOND), + GUID_ENTRY(MF_MT_AUDIO_NUM_CHANNELS), + 
GUID_ENTRY(MF_MT_AUDIO_CHANNEL_MASK), + GUID_ENTRY(MF_MT_FRAME_SIZE), + GUID_ENTRY(MF_MT_INTERLACE_MODE), + GUID_ENTRY(MF_MT_USER_DATA), + GUID_ENTRY(MF_MT_PIXEL_ASPECT_RATIO), + GUID_ENTRY(MFMediaType_Audio), + GUID_ENTRY(MFMediaType_Video), + GUID_ENTRY(MFAudioFormat_PCM), + GUID_ENTRY(MFAudioFormat_Float), + GUID_ENTRY(MFVideoFormat_H264), + GUID_ENTRY(MFVideoFormat_H264_ES), + GUID_ENTRY(MFVideoFormat_HEVC), + GUID_ENTRY(MFVideoFormat_HEVC_ES), + GUID_ENTRY(MFVideoFormat_MPEG2), + GUID_ENTRY(MFVideoFormat_MP43), + GUID_ENTRY(MFVideoFormat_MP4V), + GUID_ENTRY(MFVideoFormat_WMV1), + GUID_ENTRY(MFVideoFormat_WMV2), + GUID_ENTRY(MFVideoFormat_WMV3), + GUID_ENTRY(MFVideoFormat_WVC1), + GUID_ENTRY(MFAudioFormat_Dolby_AC3), + GUID_ENTRY(MFAudioFormat_Dolby_DDPlus), + GUID_ENTRY(MFAudioFormat_AAC), + GUID_ENTRY(MFAudioFormat_MP3), + GUID_ENTRY(MFAudioFormat_MSP1), + GUID_ENTRY(ff_MFAudioFormat_MSAUDIO1), + GUID_ENTRY(MFAudioFormat_WMAudioV8), + GUID_ENTRY(MFAudioFormat_WMAudioV9), + GUID_ENTRY(MFAudioFormat_WMAudio_Lossless), + GUID_ENTRY(MF_MT_ALL_SAMPLES_INDEPENDENT), + GUID_ENTRY(MF_MT_AM_FORMAT_TYPE), + GUID_ENTRY(MF_MT_COMPRESSED), + GUID_ENTRY(MF_MT_FIXED_SIZE_SAMPLES), + GUID_ENTRY(MF_MT_SAMPLE_SIZE), + GUID_ENTRY(MF_MT_WRAPPED_TYPE), + GUID_ENTRY(MF_MT_AAC_AUDIO_PROFILE_LEVEL_INDICATION), + GUID_ENTRY(MF_MT_AAC_PAYLOAD_TYPE), + GUID_ENTRY(MF_MT_AUDIO_AVG_BYTES_PER_SECOND), + GUID_ENTRY(MF_MT_AUDIO_BITS_PER_SAMPLE), + GUID_ENTRY(MF_MT_AUDIO_BLOCK_ALIGNMENT), + GUID_ENTRY(MF_MT_AUDIO_CHANNEL_MASK), + GUID_ENTRY(MF_MT_AUDIO_FLOAT_SAMPLES_PER_SECOND), + GUID_ENTRY(MF_MT_AUDIO_FOLDDOWN_MATRIX), + GUID_ENTRY(MF_MT_AUDIO_NUM_CHANNELS), + GUID_ENTRY(MF_MT_AUDIO_PREFER_WAVEFORMATEX), + GUID_ENTRY(MF_MT_AUDIO_SAMPLES_PER_BLOCK), + GUID_ENTRY(MF_MT_AUDIO_SAMPLES_PER_SECOND), + GUID_ENTRY(MF_MT_AUDIO_VALID_BITS_PER_SAMPLE), + GUID_ENTRY(MF_MT_AUDIO_WMADRC_AVGREF), + GUID_ENTRY(MF_MT_AUDIO_WMADRC_AVGTARGET), + GUID_ENTRY(MF_MT_AUDIO_WMADRC_PEAKREF), + 
GUID_ENTRY(MF_MT_AUDIO_WMADRC_PEAKTARGET), + GUID_ENTRY(MF_MT_ORIGINAL_WAVE_FORMAT_TAG), + GUID_ENTRY(MF_MT_AVG_BIT_ERROR_RATE), + GUID_ENTRY(MF_MT_AVG_BITRATE), + GUID_ENTRY(MF_MT_CUSTOM_VIDEO_PRIMARIES), + GUID_ENTRY(MF_MT_DEFAULT_STRIDE), + GUID_ENTRY(MF_MT_DRM_FLAGS), + GUID_ENTRY(MF_MT_FRAME_RATE), + GUID_ENTRY(MF_MT_FRAME_RATE_RANGE_MAX), + GUID_ENTRY(MF_MT_FRAME_RATE_RANGE_MIN), + GUID_ENTRY(MF_MT_FRAME_SIZE), + GUID_ENTRY(MF_MT_GEOMETRIC_APERTURE), + GUID_ENTRY(MF_MT_INTERLACE_MODE), + GUID_ENTRY(MF_MT_MAX_KEYFRAME_SPACING), + GUID_ENTRY(MF_MT_MINIMUM_DISPLAY_APERTURE), + GUID_ENTRY(MF_MT_MPEG_SEQUENCE_HEADER), + GUID_ENTRY(MF_MT_MPEG_START_TIME_CODE), + GUID_ENTRY(MF_MT_MPEG2_FLAGS), + GUID_ENTRY(MF_MT_MPEG2_LEVEL), + GUID_ENTRY(MF_MT_MPEG2_PROFILE), + GUID_ENTRY(MF_MT_ORIGINAL_4CC), + GUID_ENTRY(MF_MT_PAD_CONTROL_FLAGS), + GUID_ENTRY(MF_MT_PALETTE), + GUID_ENTRY(MF_MT_PAN_SCAN_APERTURE), + GUID_ENTRY(MF_MT_PAN_SCAN_ENABLED), + GUID_ENTRY(MF_MT_PIXEL_ASPECT_RATIO), + GUID_ENTRY(MF_MT_SOURCE_CONTENT_HINT), + GUID_ENTRY(MF_MT_TRANSFER_FUNCTION), + GUID_ENTRY(MF_MT_VIDEO_CHROMA_SITING), + GUID_ENTRY(MF_MT_VIDEO_LIGHTING), + GUID_ENTRY(MF_MT_VIDEO_NOMINAL_RANGE), + GUID_ENTRY(MF_MT_VIDEO_PRIMARIES), + GUID_ENTRY(ff_MF_MT_VIDEO_ROTATION), + GUID_ENTRY(MF_MT_YUV_MATRIX), + GUID_ENTRY(ff_CODECAPI_AVDecVideoThumbnailGenerationMode), + GUID_ENTRY(ff_CODECAPI_AVDecVideoDropPicWithMissingRef), + GUID_ENTRY(ff_CODECAPI_AVDecVideoSoftwareDeinterlaceMode), + GUID_ENTRY(ff_CODECAPI_AVDecVideoFastDecodeMode), + GUID_ENTRY(ff_CODECAPI_AVLowLatencyMode), + GUID_ENTRY(ff_CODECAPI_AVDecVideoH264ErrorConcealment), + GUID_ENTRY(ff_CODECAPI_AVDecVideoMPEG2ErrorConcealment), + GUID_ENTRY(ff_CODECAPI_AVDecVideoCodecType), + GUID_ENTRY(ff_CODECAPI_AVDecVideoDXVAMode), + GUID_ENTRY(ff_CODECAPI_AVDecVideoDXVABusEncryption), + GUID_ENTRY(ff_CODECAPI_AVDecVideoSWPowerLevel), + GUID_ENTRY(ff_CODECAPI_AVDecVideoMaxCodedWidth), + GUID_ENTRY(ff_CODECAPI_AVDecVideoMaxCodedHeight), + 
GUID_ENTRY(ff_CODECAPI_AVDecNumWorkerThreads),
+    GUID_ENTRY(ff_CODECAPI_AVDecSoftwareDynamicFormatChange),
+    GUID_ENTRY(ff_CODECAPI_AVDecDisableVideoPostProcessing),
+};
+
+char *ff_guid_str_buf(char *buf, size_t buf_size, GUID *guid)
+{
+    uint32_t fourcc;
+    int n;
+    for (n = 0; n < FF_ARRAY_ELEMS(guid_names); n++) {
+        if (IsEqualGUID(guid, guid_names[n].guid)) {
+            snprintf(buf, buf_size, "%s", guid_names[n].name);
+            return buf;
+        }
+    }
+
+    if (ff_fourcc_from_guid(guid, &fourcc) >= 0) {
+        snprintf(buf, buf_size, "<fourcc '%s'>", av_fourcc2str(fourcc));
+        return buf;
+    }
+
+    snprintf(buf, buf_size,
+             "{%8.8x-%4.4x-%4.4x-%2.2x%2.2x-%2.2x%2.2x%2.2x%2.2x%2.2x%2.2x}",
+             (unsigned) guid->Data1, guid->Data2, guid->Data3,
+             guid->Data4[0], guid->Data4[1],
+             guid->Data4[2], guid->Data4[3],
+             guid->Data4[4], guid->Data4[5],
+             guid->Data4[6], guid->Data4[7]);
+    return buf;
+}
+
+void ff_attributes_dump(void *log, IMFAttributes *attrs)
+{
+    HRESULT hr;
+    UINT32 count;
+    int n;
+
+    hr = IMFAttributes_GetCount(attrs, &count);
+    if (FAILED(hr))
+        return;
+
+    for (n = 0; n < count; n++) {
+        GUID key;
+        MF_ATTRIBUTE_TYPE type;
+        char extra[80] = {0};
+        const char *name = NULL;
+
+        hr = IMFAttributes_GetItemByIndex(attrs, n, &key, NULL);
+        if (FAILED(hr))
+            goto err;
+
+        name = ff_guid_str(&key);
+
+        if (IsEqualGUID(&key, &MF_MT_AUDIO_CHANNEL_MASK)) {
+            UINT32 v;
+            hr = IMFAttributes_GetUINT32(attrs, &key, &v);
+            if (FAILED(hr))
+                goto err;
+            snprintf(extra, sizeof(extra), " (0x%x)", (unsigned)v);
+        } else if (IsEqualGUID(&key, &MF_MT_FRAME_SIZE)) {
+            UINT32 w, h;
+
+            hr = ff_MFGetAttributeSize(attrs, &MF_MT_FRAME_SIZE, &w, &h);
+            if (FAILED(hr))
+                goto err;
+            snprintf(extra, sizeof(extra), " (%dx%d)", (int)w, (int)h);
+        } else if (IsEqualGUID(&key, &MF_MT_PIXEL_ASPECT_RATIO) ||
+                   IsEqualGUID(&key, &MF_MT_FRAME_RATE)) {
+            UINT32 num, den;
+
+            hr = ff_MFGetAttributeRatio(attrs, &key, &num, &den);
+            if (FAILED(hr))
+                goto err;
+            snprintf(extra, sizeof(extra), " (%d:%d)", (int)num, (int)den);
+        }
+
+        hr = IMFAttributes_GetItemType(attrs, &key, &type);
+        if (FAILED(hr))
+            goto err;
+
+        switch (type) {
+        case MF_ATTRIBUTE_UINT32: {
+            UINT32 v;
+            hr = IMFAttributes_GetUINT32(attrs, &key, &v);
+            if (FAILED(hr))
+                goto err;
+            av_log(log, AV_LOG_VERBOSE, "   %s=%d%s\n", name, (int)v, extra);
+            break;
+        }
+        case MF_ATTRIBUTE_UINT64: {
+            UINT64 v;
+            hr = IMFAttributes_GetUINT64(attrs, &key, &v);
+            if (FAILED(hr))
+                goto err;
+            av_log(log, AV_LOG_VERBOSE, "   %s=%lld%s\n", name, (long long)v, extra);
+            break;
+        }
+        case MF_ATTRIBUTE_DOUBLE: {
+            DOUBLE v;
+            hr = IMFAttributes_GetDouble(attrs, &key, &v);
+            if (FAILED(hr))
+                goto err;
+            av_log(log, AV_LOG_VERBOSE, "   %s=%f%s\n", name, (double)v, extra);
+            break;
+        }
+        case MF_ATTRIBUTE_STRING: {
+            wchar_t s[512]; // being lazy here
+            hr = IMFAttributes_GetString(attrs, &key, s, sizeof(s), NULL);
+            if (FAILED(hr))
+                goto err;
+            av_log(log, AV_LOG_VERBOSE, "   %s='%ls'%s\n", name, s, extra);
+            break;
+        }
+        case MF_ATTRIBUTE_GUID: {
+            GUID v;
+            hr = IMFAttributes_GetGUID(attrs, &key, &v);
+            if (FAILED(hr))
+                goto err;
+            av_log(log, AV_LOG_VERBOSE, "   %s=%s%s\n", name, ff_guid_str(&v), extra);
+            break;
+        }
+        case MF_ATTRIBUTE_BLOB: {
+            UINT32 sz;
+            UINT8 buffer[100];
+            hr = IMFAttributes_GetBlobSize(attrs, &key, &sz);
+            if (FAILED(hr))
+                goto err;
+            if (sz <= sizeof(buffer)) {
+                // hex-dump it
+                char str[512] = {0};
+                size_t pos = 0;
+                hr = IMFAttributes_GetBlob(attrs, &key, buffer, sizeof(buffer), &sz);
+                if (FAILED(hr))
+                    goto err;
+                for (pos = 0; pos < sz; pos++) {
+                    const char *hex = "0123456789ABCDEF";
+                    if (pos * 3 + 3 > sizeof(str))
+                        break;
+                    str[pos * 3 + 0] = hex[buffer[pos] >> 4];
+                    str[pos * 3 + 1] = hex[buffer[pos] & 15];
+                    str[pos * 3 + 2] = ' ';
+                }
+                str[pos * 3 + 0] = 0;
+                av_log(log, AV_LOG_VERBOSE, "   %s=<blob size=%d values=%s>%s\n",
+                       name, (int)sz, str, extra);
+            } else {
+                av_log(log, AV_LOG_VERBOSE, "   %s=<blob size=%d>%s\n",
+                       name, (int)sz, extra);
+            }
+            break;
+        }
+        case MF_ATTRIBUTE_IUNKNOWN: {
+            av_log(log, AV_LOG_VERBOSE, "   %s=<IUnknown>%s\n", name, extra);
+            break;
+        }
+        default:
+            av_log(log, AV_LOG_VERBOSE, "   %s=<unknown type>%s\n", name, extra);
+            break;
+        }
+
+        if (IsEqualGUID(&key, &MF_MT_SUBTYPE)) {
+            const char *fmt;
+            fmt = av_get_sample_fmt_name(ff_media_type_to_sample_fmt(attrs));
+            if (fmt)
+                av_log(log, AV_LOG_VERBOSE, "   FF-sample-format=%s\n", fmt);
+
+            fmt = av_get_pix_fmt_name(ff_media_type_to_pix_fmt(attrs));
+            if (fmt)
+                av_log(log, AV_LOG_VERBOSE, "   FF-pixel-format=%s\n", fmt);
+        }
+
+        continue;
+    err:
+        av_log(log, AV_LOG_VERBOSE, "   %s=<failed to get value>\n", name ? name : "?");
+    }
+}
+
+void ff_media_type_dump(void *log, IMFMediaType *type)
+{
+    ff_attributes_dump(log, (IMFAttributes *)type);
+}
+
+const CLSID *ff_codec_to_mf_subtype(enum AVCodecID codec)
+{
+    switch (codec) {
+    case AV_CODEC_ID_H264:        return &MFVideoFormat_H264;
+    case AV_CODEC_ID_HEVC:        return &MFVideoFormat_HEVC;
+    case AV_CODEC_ID_MJPEG:       return &MFVideoFormat_MJPG;
+    case AV_CODEC_ID_MPEG2VIDEO:  return &MFVideoFormat_MPEG2;
+    case AV_CODEC_ID_MPEG4:       return &MFVideoFormat_MP4V;
+    case AV_CODEC_ID_MSMPEG4V1:
+    case AV_CODEC_ID_MSMPEG4V2:   return &ff_MFVideoFormat_MP42;
+    case AV_CODEC_ID_MSMPEG4V3:   return &MFVideoFormat_MP43;
+    case AV_CODEC_ID_WMV1:        return &MFVideoFormat_WMV1;
+    case AV_CODEC_ID_WMV2:        return &MFVideoFormat_WMV2;
+    case AV_CODEC_ID_WMV3:        return &MFVideoFormat_WMV3;
+    case AV_CODEC_ID_VC1:         return &MFVideoFormat_WVC1;
+    case AV_CODEC_ID_AC3:         return &MFAudioFormat_Dolby_AC3;
+    case AV_CODEC_ID_EAC3:        return &MFAudioFormat_Dolby_DDPlus;
+    case AV_CODEC_ID_AAC:         return &MFAudioFormat_AAC;
+    case AV_CODEC_ID_MP1:         return &MFAudioFormat_MPEG;
+    case AV_CODEC_ID_MP2:         return &MFAudioFormat_MPEG;
+    case AV_CODEC_ID_MP3:         return &MFAudioFormat_MP3;
+    case AV_CODEC_ID_WMAVOICE:    return &MFAudioFormat_MSP1;
+    case AV_CODEC_ID_WMAV1:       return &ff_MFAudioFormat_MSAUDIO1;
+    case AV_CODEC_ID_WMAV2:       return &MFAudioFormat_WMAudioV8;
+    case AV_CODEC_ID_WMAPRO:      return &MFAudioFormat_WMAudioV9;
+    case AV_CODEC_ID_WMALOSSLESS: return &MFAudioFormat_WMAudio_Lossless;
+
default: return NULL; + } +} + +int ff_init_com_mf(void *log) +{ + HRESULT hr; + + hr = CoInitializeEx(NULL, COINIT_MULTITHREADED); + if (hr == RPC_E_CHANGED_MODE) { + av_log(log, AV_LOG_ERROR, "COM must not be in STA mode\n"); + return AVERROR(EINVAL); + } else if (FAILED(hr)) { + av_log(log, AV_LOG_ERROR, "could not initialize COM\n"); + return AVERROR(ENOSYS); + } + + hr = MFStartup(MF_VERSION, MFSTARTUP_FULL); + if (FAILED(hr)) { + av_log(log, AV_LOG_ERROR, "could not initialize MediaFoundation\n"); + return AVERROR(ENOSYS); + } + + return 0; +} + +// Find and create a IMFTransform with the given input/output types. +int ff_instantiate_mf(void *log, + GUID category, + MFT_REGISTER_TYPE_INFO *in_type, + MFT_REGISTER_TYPE_INFO *out_type, + int use_hw, + IMFTransform **res) +{ + HRESULT hr; + int n; + int ret; + IMFActivate **activate; + UINT32 num_activate; + IMFActivate *winner = 0; + UINT32 flags; + + ret = ff_init_com_mf(log); + if (ret < 0) + return ret; + + flags = MFT_ENUM_FLAG_SORTANDFILTER; + + if (use_hw) { + flags |= MFT_ENUM_FLAG_HARDWARE; + } else { + flags |= MFT_ENUM_FLAG_SYNCMFT; + } + + hr = ff_MFTEnumEx(category, flags, in_type, out_type, &activate, &num_activate); + if (FAILED(hr)) + goto error_uninit_mf; + + if (log) { + if (!num_activate) + av_log(log, AV_LOG_ERROR, "could not find any MFT for the given media type\n"); + + for (n = 0; n < num_activate; n++) { + av_log(log, AV_LOG_VERBOSE, "MF %d attributes:\n", n); + ff_attributes_dump(log, (IMFAttributes *)activate[n]); + } + } + + for (n = 0; n < num_activate; n++) { + if (log) + av_log(log, AV_LOG_VERBOSE, "activate MFT %d\n", n); + hr = IMFActivate_ActivateObject(activate[n], &IID_IMFTransform, + (void **)res); + if (FAILED(hr)) + *res = NULL; + if (*res) { + winner = activate[n]; + IMFActivate_AddRef(winner); + break; + } + } + + for (n = 0; n < num_activate; n++) + IMFActivate_Release(activate[n]); + CoTaskMemFree(activate); + + if (!*res) { + if (log) + av_log(log, AV_LOG_ERROR, "could 
not create MFT\n"); + goto error_uninit_mf; + } + + if (log) { + wchar_t s[512]; // being lazy here + IMFAttributes *attrs; + hr = IMFTransform_GetAttributes(*res, &attrs); + if (!FAILED(hr) && attrs) { + + av_log(log, AV_LOG_VERBOSE, "MFT attributes\n"); + ff_attributes_dump(log, attrs); + IMFAttributes_Release(attrs); + } + + hr = IMFActivate_GetString(winner, &MFT_FRIENDLY_NAME_Attribute, s, sizeof(s), NULL); + if (!FAILED(hr)) + av_log(log, AV_LOG_INFO, "MFT name: '%ls'\n", s); + + } + + IMFActivate_Release(winner); + + return 0; + +error_uninit_mf: + return AVERROR(ENOSYS); +} + +void ff_free_mf(IMFTransform **mft) +{ + if (*mft) + IMFTransform_Release(*mft); + *mft = NULL; +} diff --git a/libavcodec/mf_utils.h b/libavcodec/mf_utils.h new file mode 100644 index 0000000000..f72c9ac12c --- /dev/null +++ b/libavcodec/mf_utils.h @@ -0,0 +1,200 @@ +/* + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_MF_UTILS_H
+#define AVCODEC_MF_UTILS_H
+
+#include <stdint.h>
+
+#include <windows.h>
+#include <initguid.h>
+#include <icodecapi.h>
+#include <codecapi.h>
+#include <mfapi.h>
+#include <mferror.h>
+#include <mfidl.h>
+#include <mfobjects.h>
+#include <mftransform.h>
+#include <strmif.h>
+
+#include "libavutil/avassert.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+
+#include "avcodec.h"
+#include "internal.h"
+
+// ugly wrappers for mingw not providing these functions
+
+// the attribute/ratio setter/getters are guarded by __cplusplus
+
+HRESULT ff_MFGetAttributeSize(
+    _In_  IMFAttributes *pAttributes,
+    _In_  REFGUID guidKey,
+    _Out_ UINT32 *punWidth,
+    _Out_ UINT32 *punHeight
+);
+
+HRESULT ff_MFSetAttributeSize(
+    _In_ IMFAttributes *pAttributes,
+    _In_ REFGUID guidKey,
+    _In_ UINT32 unWidth,
+    _In_ UINT32 unHeight
+);
+
+#define ff_MFSetAttributeRatio ff_MFSetAttributeSize
+#define ff_MFGetAttributeRatio ff_MFGetAttributeSize
+
+// mingw declares this, but it's missing from the import lib
+HRESULT ff_MFTEnumEx(
+    _In_ GUID guidCategory,
+    _In_ UINT32 Flags,
+    _In_ const MFT_REGISTER_TYPE_INFO *pInputType,
+    _In_ const MFT_REGISTER_TYPE_INFO *pOutputType,
+    _Out_ IMFActivate ***pppMFTActivate,
+    _Out_ UINT32 *pcMFTActivate
+);
+
+DEFINE_GUID(ff_MF_MT_VIDEO_ROTATION, 0xc380465d, 0x2271, 0x428c, 0x9b, 0x83, 0xec, 0xea, 0x3b, 0x4a, 0x85, 0xc1);
+DEFINE_GUID(ff_CODECAPI_AVDecVideoAcceleration_H264, 0xf7db8a2f, 0x4f48, 0x4ee8, 0xae, 0x31, 0x8b, 0x6e, 0xbe, 0x55, 0x8a, 0xe2);
+
+// WMA1. Apparently there is no official GUID symbol for this.
+DEFINE_GUID(ff_MFAudioFormat_MSAUDIO1, 0x00000160, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xAA, 0x00, 0x38, 0x9B, 0x71);
+
+// MP42 FourCC. I made up the symbol name.
+DEFINE_GUID(ff_MFVideoFormat_MP42, 0x3234504D, 0x0000, 0x0010, 0x80, 0x00, 0x00, 0xAA, 0x00, 0x38, 0x9B, 0x71); + +// All the following symbols are missing from mingw. +DEFINE_GUID(ff_CODECAPI_AVDecVideoThumbnailGenerationMode, 0x2efd8eee,0x1150,0x4328,0x9c,0xf5,0x66,0xdc,0xe9,0x33,0xfc,0xf4); +DEFINE_GUID(ff_CODECAPI_AVDecVideoDropPicWithMissingRef, 0xf8226383,0x14c2,0x4567,0x97,0x34,0x50,0x04,0xe9,0x6f,0xf8,0x87); +DEFINE_GUID(ff_CODECAPI_AVDecVideoSoftwareDeinterlaceMode, 0x0c08d1ce,0x9ced,0x4540,0xba,0xe3,0xce,0xb3,0x80,0x14,0x11,0x09); +DEFINE_GUID(ff_CODECAPI_AVDecVideoFastDecodeMode, 0x6b529f7d,0xd3b1,0x49c6,0xa9,0x99,0x9e,0xc6,0x91,0x1b,0xed,0xbf); +DEFINE_GUID(ff_CODECAPI_AVLowLatencyMode, 0x9c27891a,0xed7a,0x40e1,0x88,0xe8,0xb2,0x27,0x27,0xa0,0x24,0xee); +DEFINE_GUID(ff_CODECAPI_AVDecVideoH264ErrorConcealment, 0xececace8,0x3436,0x462c,0x92,0x94,0xcd,0x7b,0xac,0xd7,0x58,0xa9); +DEFINE_GUID(ff_CODECAPI_AVDecVideoMPEG2ErrorConcealment, 0x9d2bfe18,0x728d,0x48d2,0xb3,0x58,0xbc,0x7e,0x43,0x6c,0x66,0x74); +DEFINE_GUID(ff_CODECAPI_AVDecVideoCodecType, 0x434528e5,0x21f0,0x46b6,0xb6,0x2c,0x9b,0x1b,0x6b,0x65,0x8c,0xd1); +DEFINE_GUID(ff_CODECAPI_AVDecVideoDXVAMode, 0xf758f09e,0x7337,0x4ae7,0x83,0x87,0x73,0xdc,0x2d,0x54,0xe6,0x7d); +DEFINE_GUID(ff_CODECAPI_AVDecVideoDXVABusEncryption, 0x42153c8b,0xfd0b,0x4765,0xa4,0x62,0xdd,0xd9,0xe8,0xbc,0xc3,0x88); +DEFINE_GUID(ff_CODECAPI_AVDecVideoSWPowerLevel, 0xfb5d2347,0x4dd8,0x4509,0xae,0xd0,0xdb,0x5f,0xa9,0xaa,0x93,0xf4); +DEFINE_GUID(ff_CODECAPI_AVDecVideoMaxCodedWidth, 0x5ae557b8,0x77af,0x41f5,0x9f,0xa6,0x4d,0xb2,0xfe,0x1d,0x4b,0xca); +DEFINE_GUID(ff_CODECAPI_AVDecVideoMaxCodedHeight, 0x7262a16a,0xd2dc,0x4e75,0x9b,0xa8,0x65,0xc0,0xc6,0xd3,0x2b,0x13); +DEFINE_GUID(ff_CODECAPI_AVDecNumWorkerThreads, 0x9561c3e8,0xea9e,0x4435,0x9b,0x1e,0xa9,0x3e,0x69,0x18,0x94,0xd8); +DEFINE_GUID(ff_CODECAPI_AVDecSoftwareDynamicFormatChange, 0x862e2f0a,0x507b,0x47ff,0xaf,0x47,0x01,0xe2,0x62,0x42,0x98,0xb7); 
+DEFINE_GUID(ff_CODECAPI_AVDecDisableVideoPostProcessing, 0xf8749193,0x667a,0x4f2c,0xa9,0xe8,0x5d,0x4a,0xf9,0x24,0xf0,0x8f); + +DEFINE_GUID(ff_CODECAPI_AVEncCommonRateControlMode, 0x1c0608e9, 0x370c, 0x4710, 0x8a, 0x58, 0xcb, 0x61, 0x81, 0xc4, 0x24, 0x23); +DEFINE_GUID(ff_CODECAPI_AVEncCommonLowLatency, 0x9d3ecd55, 0x89e8, 0x490a, 0x97, 0x0a, 0x0c, 0x95, 0x48, 0xd5, 0xa5, 0x6e); +DEFINE_GUID(ff_CODECAPI_AVEncCommonRealTime, 0x143a0ff6, 0xa131, 0x43da, 0xb8, 0x1e, 0x98, 0xfb, 0xb8, 0xec, 0x37, 0x8e); +DEFINE_GUID(ff_CODECAPI_AVEncCommonQuality, 0xfcbf57a3, 0x7ea5, 0x4b0c, 0x96, 0x44, 0x69, 0xb4, 0x0c, 0x39, 0xc3, 0x91); +DEFINE_GUID(ff_CODECAPI_AVEncCommonQualityVsSpeed, 0x98332df8, 0x03cd, 0x476b, 0x89, 0xfa, 0x3f, 0x9e, 0x44, 0x2d, 0xec, 0x9f); +DEFINE_GUID(ff_CODECAPI_AVEncCommonTranscodeEncodingProfile, 0x6947787C, 0xF508, 0x4EA9, 0xB1, 0xE9, 0xA1, 0xFE, 0x3A, 0x49, 0xFB, 0xC9); +DEFINE_GUID(ff_CODECAPI_AVEncCommonMeanBitRate, 0xf7222374, 0x2144, 0x4815, 0xb5, 0x50, 0xa3, 0x7f, 0x8e, 0x12, 0xee, 0x52); +DEFINE_GUID(ff_CODECAPI_AVEncCommonMeanBitRateInterval, 0xbfaa2f0c, 0xcb82, 0x4bc0, 0x84, 0x74, 0xf0, 0x6a, 0x8a, 0x0d, 0x02, 0x58); +DEFINE_GUID(ff_CODECAPI_AVEncCommonMaxBitRate, 0x9651eae4, 0x39b9, 0x4ebf, 0x85, 0xef, 0xd7, 0xf4, 0x44, 0xec, 0x74, 0x65); +DEFINE_GUID(ff_CODECAPI_AVEncCommonMinBitRate, 0x101405b2, 0x2083, 0x4034, 0xa8, 0x06, 0xef, 0xbe, 0xdd, 0xd7, 0xc9, 0xff); +DEFINE_GUID(ff_CODECAPI_AVEncVideoCBRMotionTradeoff, 0x0d49451e, 0x18d5, 0x4367, 0xa4, 0xef, 0x32, 0x40, 0xdf, 0x16, 0x93, 0xc4); +DEFINE_GUID(ff_CODECAPI_AVEncVideoCodedVideoAccessUnitSize, 0xb4b10c15, 0x14a7, 0x4ce8, 0xb1, 0x73, 0xdc, 0x90, 0xa0, 0xb4, 0xfc, 0xdb); +DEFINE_GUID(ff_CODECAPI_AVEncVideoMaxKeyframeDistance, 0x2987123a, 0xba93, 0x4704, 0xb4, 0x89, 0xec, 0x1e, 0x5f, 0x25, 0x29, 0x2c); +DEFINE_GUID(ff_CODECAPI_AVEncH264CABACEnable, 0xee6cad62, 0xd305, 0x4248, 0xa5, 0xe, 0xe1, 0xb2, 0x55, 0xf7, 0xca, 0xf8); +DEFINE_GUID(ff_CODECAPI_AVEncVideoContentType, 0x66117aca, 0xeb77, 
0x459d, 0x93, 0xc, 0xa4, 0x8d, 0x9d, 0x6, 0x83, 0xfc); +DEFINE_GUID(ff_CODECAPI_AVEncNumWorkerThreads, 0xb0c8bf60, 0x16f7, 0x4951, 0xa3, 0xb, 0x1d, 0xb1, 0x60, 0x92, 0x93, 0xd6); +DEFINE_GUID(ff_CODECAPI_AVEncVideoEncodeQP, 0x2cb5696b, 0x23fb, 0x4ce1, 0xa0, 0xf9, 0xef, 0x5b, 0x90, 0xfd, 0x55, 0xca); +DEFINE_GUID(ff_CODECAPI_AVEncVideoMinQP, 0x0ee22c6a, 0xa37c, 0x4568, 0xb5, 0xf1, 0x9d, 0x4c, 0x2b, 0x3a, 0xb8, 0x86); +DEFINE_GUID(ff_CODECAPI_AVEncVideoForceKeyFrame, 0x398c1b98, 0x8353, 0x475a, 0x9e, 0xf2, 0x8f, 0x26, 0x5d, 0x26, 0x3, 0x45); +DEFINE_GUID(ff_CODECAPI_AVEncAdaptiveMode, 0x4419b185, 0xda1f, 0x4f53, 0xbc, 0x76, 0x9, 0x7d, 0xc, 0x1e, 0xfb, 0x1e); +DEFINE_GUID(ff_CODECAPI_AVEncVideoTemporalLayerCount, 0x19caebff, 0xb74d, 0x4cfd, 0x8c, 0x27, 0xc2, 0xf9, 0xd9, 0x7d, 0x5f, 0x52); +DEFINE_GUID(ff_CODECAPI_AVEncVideoUsage, 0x1f636849, 0x5dc1, 0x49f1, 0xb1, 0xd8, 0xce, 0x3c, 0xf6, 0x2e, 0xa3, 0x85); +DEFINE_GUID(ff_CODECAPI_AVEncVideoSelectLayer, 0xeb1084f5, 0x6aaa, 0x4914, 0xbb, 0x2f, 0x61, 0x47, 0x22, 0x7f, 0x12, 0xe7); +DEFINE_GUID(ff_CODECAPI_AVEncVideoRateControlParams, 0x87d43767, 0x7645, 0x44ec, 0xb4, 0x38, 0xd3, 0x32, 0x2f, 0xbc, 0xa2, 0x9f); +DEFINE_GUID(ff_CODECAPI_AVEncVideoSupportedControls, 0xd3f40fdd, 0x77b9, 0x473d, 0x81, 0x96, 0x6, 0x12, 0x59, 0xe6, 0x9c, 0xff); +DEFINE_GUID(ff_CODECAPI_AVEncVideoEncodeFrameTypeQP, 0xaa70b610, 0xe03f, 0x450c, 0xad, 0x07, 0x07, 0x31, 0x4e, 0x63, 0x9c, 0xe7); +DEFINE_GUID(ff_CODECAPI_AVEncSliceControlMode, 0xe9e782ef, 0x5f18, 0x44c9, 0xa9, 0x0b, 0xe9, 0xc3, 0xc2, 0xc1, 0x7b, 0x0b); +DEFINE_GUID(ff_CODECAPI_AVEncSliceControlSize, 0x92f51df3, 0x07a5, 0x4172, 0xae, 0xfe, 0xc6, 0x9c, 0xa3, 0xb6, 0x0e, 0x35); +DEFINE_GUID(ff_CODECAPI_AVEncVideoMaxNumRefFrame, 0x964829ed, 0x94f9, 0x43b4, 0xb7, 0x4d, 0xef, 0x40, 0x94, 0x4b, 0x69, 0xa0); +DEFINE_GUID(ff_CODECAPI_AVEncVideoMeanAbsoluteDifference, 0xe5c0c10f, 0x81a4, 0x422d, 0x8c, 0x3f, 0xb4, 0x74, 0xa4, 0x58, 0x13, 0x36); +DEFINE_GUID(ff_CODECAPI_AVEncVideoMaxQP, 
0x3daf6f66, 0xa6a7, 0x45e0, 0xa8, 0xe5, 0xf2, 0x74, 0x3f, 0x46, 0xa3, 0xa2); +DEFINE_GUID(ff_CODECAPI_AVEncMPVGOPSize, 0x95f31b26, 0x95a4, 0x41aa, 0x93, 0x03, 0x24, 0x6a, 0x7f, 0xc6, 0xee, 0xf1); +DEFINE_GUID(ff_CODECAPI_AVEncMPVGOPOpen, 0xb1d5d4a6, 0x3300, 0x49b1, 0xae, 0x61, 0xa0, 0x99, 0x37, 0xab, 0x0e, 0x49); +DEFINE_GUID(ff_CODECAPI_AVEncMPVDefaultBPictureCount, 0x8d390aac, 0xdc5c, 0x4200, 0xb5, 0x7f, 0x81, 0x4d, 0x04, 0xba, 0xba, 0xb2); +DEFINE_GUID(ff_CODECAPI_AVEncMPVProfile, 0xdabb534a, 0x1d99, 0x4284, 0x97, 0x5a, 0xd9, 0x0e, 0x22, 0x39, 0xba, 0xa1); +DEFINE_GUID(ff_CODECAPI_AVEncMPVLevel, 0x6ee40c40, 0xa60c, 0x41ef, 0x8f, 0x50, 0x37, 0xc2, 0x24, 0x9e, 0x2c, 0xb3); +DEFINE_GUID(ff_CODECAPI_AVEncMPVFrameFieldMode, 0xacb5de96, 0x7b93, 0x4c2f, 0x88, 0x25, 0xb0, 0x29, 0x5f, 0xa9, 0x3b, 0xf4); +DEFINE_GUID(ff_CODECAPI_AVEncMPVAddSeqEndCode, 0xa823178f, 0x57df, 0x4c7a, 0xb8, 0xfd, 0xe5, 0xec, 0x88, 0x87, 0x70, 0x8d); +DEFINE_GUID(ff_CODECAPI_AVEncMPVGOPSInSeq, 0x993410d4, 0x2691, 0x4192, 0x99, 0x78, 0x98, 0xdc, 0x26, 0x03, 0x66, 0x9f); +DEFINE_GUID(ff_CODECAPI_AVEncMPVUseConcealmentMotionVectors, 0xec770cf3, 0x6908, 0x4b4b, 0xaa, 0x30, 0x7f, 0xb9, 0x86, 0x21, 0x4f, 0xea); + +DEFINE_GUID(ff_MF_SA_D3D11_BINDFLAGS, 0xeacf97ad, 0x065c, 0x4408, 0xbe, 0xe3, 0xfd, 0xcb, 0xfd, 0x12, 0x8b, 0xe2); +DEFINE_GUID(ff_MF_SA_D3D11_USAGE, 0xe85fe442, 0x2ca3, 0x486e, 0xa9, 0xc7, 0x10, 0x9d, 0xda, 0x60, 0x98, 0x80); +DEFINE_GUID(ff_MF_SA_D3D11_AWARE, 0x206b4fc8, 0xfcf9, 0x4c51, 0xaf, 0xe3, 0x97, 0x64, 0x36, 0x9e, 0x33, 0xa0); +DEFINE_GUID(ff_MF_SA_D3D11_SHARED, 0x7b8f32c3, 0x6d96, 0x4b89, 0x92, 0x3, 0xdd, 0x38, 0xb6, 0x14, 0x14, 0xf3); +DEFINE_GUID(ff_MF_SA_D3D11_SHARED_WITHOUT_MUTEX, 0x39dbd44d, 0x2e44, 0x4931, 0xa4, 0xc8, 0x35, 0x2d, 0x3d, 0xc4, 0x21, 0x15); +DEFINE_GUID(ff_MF_SA_MINIMUM_OUTPUT_SAMPLE_COUNT, 0x851745d5, 0xc3d6, 0x476d, 0x95, 0x27, 0x49, 0x8e, 0xf2, 0xd1, 0xd, 0x18); +DEFINE_GUID(ff_MF_SA_MINIMUM_OUTPUT_SAMPLE_COUNT_PROGRESSIVE, 0xf5523a5, 0x1cb2, 0x47c5, 0xa5, 
0x50, 0x2e, 0xeb, 0x84, 0xb4, 0xd1, 0x4a);
+
+enum ff_eAVEncH264PictureType {
+    ff_eAVEncH264PictureType_IDR = 0,
+    ff_eAVEncH264PictureType_P,
+    ff_eAVEncH264PictureType_B
+};
+enum ff_eAVEncCommonRateControlMode {
+    ff_eAVEncCommonRateControlMode_CBR = 0,
+    ff_eAVEncCommonRateControlMode_PeakConstrainedVBR = 1,
+    ff_eAVEncCommonRateControlMode_UnconstrainedVBR = 2,
+    ff_eAVEncCommonRateControlMode_Quality = 3,
+    ff_eAVEncCommonRateControlMode_LowDelayVBR = 4,
+    ff_eAVEncCommonRateControlMode_GlobalVBR = 5,
+    ff_eAVEncCommonRateControlMode_GlobalLowDelayVBR = 6
+};
+
+enum {
+    ff_METransformUnknown = 600,
+    ff_METransformNeedInput,
+    ff_METransformHaveOutput,
+    ff_METransformDrainComplete,
+    ff_METransformMarker,
+};
+
+char *ff_hr_str_buf(char *buf, size_t size, HRESULT hr);
+#define ff_hr_str(hr) ff_hr_str_buf((char[80]){0}, 80, hr)
+
+// Possibly compiler-dependent; the MS/MinGW definition for this is just crazy.
+#define FF_VARIANT_VALUE(type, contents) &(VARIANT){ .vt = (type), contents }
+
+#define FF_VAL_VT_UI4(v) FF_VARIANT_VALUE(VT_UI4, .ulVal = (v))
+#define FF_VAL_VT_BOOL(v) FF_VARIANT_VALUE(VT_BOOL, .boolVal = (v))
+
+IMFSample *ff_create_memory_sample(void *fill_data, size_t size, size_t align);
+enum AVSampleFormat ff_media_type_to_sample_fmt(IMFAttributes *type);
+enum AVPixelFormat ff_media_type_to_pix_fmt(IMFAttributes *type);
+const GUID *ff_pix_fmt_to_guid(enum AVPixelFormat pix_fmt);
+int ff_fourcc_from_guid(GUID *guid, uint32_t *out_fourcc);
+char *ff_guid_str_buf(char *buf, size_t buf_size, GUID *guid);
+#define ff_guid_str(guid) ff_guid_str_buf((char[80]){0}, 80, guid)
+void ff_attributes_dump(void *log, IMFAttributes *attrs);
+void ff_media_type_dump(void *log, IMFMediaType *type);
+const CLSID *ff_codec_to_mf_subtype(enum AVCodecID codec);
+int ff_init_com_mf(void *log);
+int ff_instantiate_mf(void *log,
+                      GUID category,
+                      MFT_REGISTER_TYPE_INFO *in_type,
+                      MFT_REGISTER_TYPE_INFO *out_type,
+                      int use_hw,
+                      IMFTransform **res);
+void ff_free_mf(IMFTransform **mft);
+
+#endif