
[FFmpeg-devel] Limited timecode support for lavd/decklink

Message ID alpine.LSU.2.20.1806062240420.18032@iq
State New

Commit Message

Marton Balint June 6, 2018, 8:50 p.m. UTC
On Mon, 4 Jun 2018, Dave Rice wrote:

>
>>> 
>>> In my testing the timecode value set here has correctly been 
>>> associated with the first video frame (maintaining the 
>>> timecode-to-first-frame relationship as found on the source video 
>>> stream). Although only having the first timecode value known is limiting, 
>>> I think this is still quite useful. This function also mirrors how 
>>> BlackMagic Media Express and Adobe Premiere handle capturing 
>>> video+timecode where only the first value is documented and all 
>>> subsequent values are presumed.
>> 
>> Could you give me an example? (e.g. ffmpeg command line?)
>
> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>
> This worked for me to embed a QuickTime timecode track based upon the 
> timecode value of the first frame. If the input contained non-sequential 
> timecode values then the timecode track would not be accurate from that 
> point onward, but creating a timecode track based only upon the initial 
> value is what BlackMagic Media Express and Adobe Premiere are doing 
> anyhow.
>

Hmm, either the decklink drivers became better in hiding the first few 
NoSignal frames, or maybe that issue only affected old models (e.g. 
DeckLink SDI or DeckLink Duo 1). I did some tests with a Mini Recorder, 
and even the first frame was usable; in this case the timecode was indeed 
correct.

>>>> I'd rather see a new AVPacketSideData type which will contain the timecode as a string, so you can set it frame-by-frame.
>>> 
>>> Using side data for timecode would be preferable, but the possibility that a patch for that may someday arrive shouldn’t completely block this more limited patch.
>> 
>> I would like to make sure the code works reliably even for the limited use case and no race conditions are affecting the way it works.
>
> Feel welcome to suggest any testing. I’ll have access for testing again tomorrow.

I reworked the patch a bit (see attached), and added per-frame timecode 
support into the PKT_STRINGS_METADATA packet side data; this way the 
drawtext filter can also be used to blend the timecode into the frames, 
which seems like a useful feature.
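As an aside, a minimal sketch of what that side data holds, assuming the packed-dictionary layout used by FFmpeg's av_packet_pack_dictionary() (a flat buffer of alternating NUL-terminated key and value strings); the Python helper here is purely illustrative, not part of the patch:

```python
def unpack_strings_metadata(blob: bytes) -> dict:
    """Unpack a PKT_STRINGS_METADATA-style blob of NUL-terminated
    key/value string pairs into a dict (layout is an assumption based
    on av_packet_pack_dictionary)."""
    parts = blob.split(b"\0")
    if parts and parts[-1] == b"":   # drop the empty tail after the final NUL
        parts.pop()
    if len(parts) % 2:
        raise ValueError("odd number of strings in packed dictionary")
    return {k.decode(): v.decode() for k, v in zip(parts[0::2], parts[1::2])}

# A per-frame timecode entry would round-trip like this:
blob = b"timecode\x0001:41:44;06\x00"
print(unpack_strings_metadata(blob))  # {'timecode': '01:41:44;06'}
```

On the filter side, drawtext can then pick the value up once the side data is exported as frame metadata.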

Regards,
Marton

Comments

Dave Rice June 6, 2018, 9:13 p.m. UTC | #1
> On Jun 6, 2018, at 4:50 PM, Marton Balint <cus@passwd.hu> wrote:
> 
> On Mon, 4 Jun 2018, Dave Rice wrote:
> 
>> 
>>>> In my testing the timecode value set here has correctly been associated with the first video frame (maintaining the timecode-to-first-frame relationship as found on the source video stream). Although only having the first timecode value known is limiting, I think this is still quite useful. This function also mirrors how BlackMagic Media Express and Adobe Premiere handle capturing video+timecode where only the first value is documented and all subsequent values are presumed.
>>> Could you give me an example? (e.g. ffmpeg command line?)
>> 
>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>> 
>> This worked for me to embed a QuickTime timecode track based upon the timecode value of the first frame. If the input contained non-sequential timecode values then the timecode track would not be accurate from that point onward, but creating a timecode track based only upon the initial value is what BlackMagic Media Express and Adobe Premiere are doing anyhow.
>> 
> 
> Hmm, either the decklink drivers became better in hiding the first few NoSignal frames, or maybe that issue only affected old models (e.g. DeckLink SDI or DeckLink Duo 1). I did some tests with a Mini Recorder, and even the first frame was usable; in this case the timecode was indeed correct.
> 
>>>>> I'd rather see a new AVPacketSideData type which will contain the timecode as a string, so you can set it frame-by-frame.
>>>> Using side data for timecode would be preferable, but the possibility that a patch for that may someday arrive shouldn’t completely block this more limited patch.
>>> I would like to make sure the code works reliably even for the limited use case and no race conditions are affecting the way it works.
>> 
>> Feel welcome to suggest any testing. I’ll have access for testing again tomorrow.
> 
> I reworked the patch a bit (see attached), and added per-frame timecode support into the PKT_STRINGS_METADATA packet side data; this way the drawtext filter can also be used to blend the timecode into the frames, which seems like a useful feature.


That sounds helpful.

libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
                    DECKLINK_STR decklink_tc;

Dave
Marton Balint June 6, 2018, 9:32 p.m. UTC | #2
On Wed, 6 Jun 2018, Dave Rice wrote:

>
>> On Jun 6, 2018, at 4:50 PM, Marton Balint <cus@passwd.hu> wrote:
>> 
>> On Mon, 4 Jun 2018, Dave Rice wrote:
>> 
>>> 
>>>>> In my testing the timecode value set here has correctly been associated with the first video frame (maintaining the timecode-to-first-frame relationship as found on the source video stream). Although only having the first timecode value known is limiting, I think this is still quite useful. This function also mirrors how BlackMagic Media Express and Adobe Premiere handle capturing video+timecode where only the first value is documented and all subsequent values are presumed.
>>>> Could you give me an example? (e.g. ffmpeg command line?)
>>> 
>>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>>> 
>>> This worked for me to embed a QuickTime timecode track based upon the timecode value of the first frame. If the input contained non-sequential timecode values then the timecode track would not be accurate from that point onward, but creating a timecode track based only upon the initial value is what BlackMagic Media Express and Adobe Premiere are doing anyhow.
>>> 
>> 
>> Hmm, either the decklink drivers became better in hiding the first few NoSignal frames, or maybe that issue only affected old models (e.g. DeckLink SDI or DeckLink Duo 1). I did some tests with a Mini Recorder, and even the first frame was usable; in this case the timecode was indeed correct.
>> 
>>>>>> I'd rather see a new AVPacketSideData type which will contain the timecode as a string, so you can set it frame-by-frame.
>>>>> Using side data for timecode would be preferable, but the possibility that a patch for that may someday arrive shouldn’t completely block this more limited patch.
>>>> I would like to make sure the code works reliably even for the limited use case and no race conditions are affecting the way it works.
>>> 
>>> Feel welcome to suggest any testing. I’ll have access for testing again tomorrow.
>> 
>> I reworked the patch a bit (see attached), and added per-frame timecode support into the PKT_STRINGS_METADATA packet side data; this way the drawtext filter can also be used to blend the timecode into the frames, which seems like a useful feature.
>
>
> That sounds helpful.
>
> libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
>                    DECKLINK_STR decklink_tc;

The patch I sent only replaces the second patch, the first one:

http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20180526/185eb219/attachment.obj

is still needed.

Regards,
Marton
Dave Rice June 7, 2018, 4:12 p.m. UTC | #3
> On Jun 6, 2018, at 5:32 PM, Marton Balint <cus@passwd.hu> wrote:
> 
> On Wed, 6 Jun 2018, Dave Rice wrote:
> 
>>> On Jun 6, 2018, at 4:50 PM, Marton Balint <cus@passwd.hu> wrote:
>>> On Mon, 4 Jun 2018, Dave Rice wrote:
>>>>>> In my testing the timecode value set here has correctly been associated with the first video frame (maintaining the timecode-to-first-frame relationship as found on the source video stream). Although only having the first timecode value known is limiting, I think this is still quite useful. This function also mirrors how BlackMagic Media Express and Adobe Premiere handle capturing video+timecode where only the first value is documented and all subsequent values are presumed.
>>>>> Could you give me an example? (e.g. ffmpeg command line?)
>>>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>>>> This worked for me to embed a QuickTime timecode track based upon the timecode value of the first frame. If the input contained non-sequential timecode values then the timecode track would not be accurate from that point onward, but creating a timecode track based only upon the initial value is what BlackMagic Media Express and Adobe Premiere are doing anyhow.
>>> Hmm, either the decklink drivers became better in hiding the first few NoSignal frames, or maybe that issue only affected old models (e.g. DeckLink SDI or DeckLink Duo 1). I did some tests with a Mini Recorder, and even the first frame was usable; in this case the timecode was indeed correct.
>>>>>>> I'd rather see a new AVPacketSideData type which will contain the timecode as a string, so you can set it frame-by-frame.
>>>>>> Using side data for timecode would be preferable, but the possibility that a patch for that may someday arrive shouldn’t completely block this more limited patch.
>>>>> I would like to make sure the code works reliably even for the limited use case and no race conditions are affecting the way it works.
>>>> Feel welcome to suggest any testing. I’ll have access for testing again tomorrow.
>>> I reworked the patch a bit (see attached), and added per-frame timecode support into the PKT_STRINGS_METADATA packet side data; this way the drawtext filter can also be used to blend the timecode into the frames, which seems like a useful feature.
>> 
>> 
>> That sounds helpful.
>> 
>> libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
>>                   DECKLINK_STR decklink_tc;
> 
> The patch I sent only replaces the second patch, the first one:
> 
> http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20180526/185eb219/attachment.obj

Thanks for the update. I continued testing and found this very useful, particularly with the side data.

Before I only tested with vitc but now have a serial cable connected as well and found a source tape that has distinct values for LTC and VITC timecodes. The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 07:00:00 - 08:00:00.

With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to grab the LTC value:

deckcontrol gettimecode
Issued command 'gettimecode'
TC=07:37:56:21
Command sucessfully issued
Error sending command (No error)

With these patches, I can only grab the vitc values:

for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
rp188vitc: 
rp188vitc2: 
rp188ltc: 
rp188any: 
vitc: 01:41:44;06
vitc2: 01:41:44;21
serial: 
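(The semicolons in those values mark drop-frame timecode, as expected for NTSC 29.97. For reference only, not part of the patch, the drop-frame numbering behind that notation can be sketched as follows: frame numbers 0 and 1 are skipped at the start of every minute except each tenth minute, giving 17982 frames per 10 minutes.)

```python
def drop_frame_timecode(frame_number: int) -> str:
    """Convert a 29.97 fps frame count to a drop-frame timecode string.

    Drop-frame skips frame numbers 0 and 1 at the start of every minute,
    except for minutes divisible by 10.
    """
    frames_per_10min = 17982          # 10 * 60 * 30 - 9 * 2
    frames_per_min = 1798             # 60 * 30 - 2
    d, m = divmod(frame_number, frames_per_10min)
    if m > 2:
        # 18 dropped numbers per full 10-minute block, plus 2 per started minute
        frame_number += 18 * d + 2 * ((m - 2) // frames_per_min)
    else:
        frame_number += 18 * d
    frames = frame_number % 30
    seconds = frame_number // 30 % 60
    minutes = frame_number // 1800 % 60
    hours = frame_number // 108000 % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d};{frames:02d}"

print(drop_frame_timecode(1800))   # 00:01:00;02 -- frames ;00 and ;01 are skipped
```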

Also it may be interesting in cases like this to support accepting multiple timecode inputs at once, such as "-timecode_format vitc+rp188ltc” though it would need to be contextualized more in metadata.

With a serial cable connected, I can access LTC via the deckcontrol utility but not with this patch.

./ffprobe -timecode_format rp188ltc -v debug -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express"
ffprobe version N-91240-g3769aafb7c Copyright (c) 2007-2018 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.38)
  configuration: --enable-libfreetype --enable-nonfree --enable-decklink --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include --disable-muxer=mxf --disable-demuxer=mxf
  libavutil      56. 18.102 / 56. 18.102
  libavcodec     58. 19.105 / 58. 19.105
  libavformat    58. 17.100 / 58. 17.100
  libavdevice    58.  4.100 / 58.  4.100
  libavfilter     7. 25.100 /  7. 25.100
  libswscale      5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
[decklink @ 0x7fe2b8003000] Trying to find mode for frame size 0x0, frame timing 0/0, field order 0, direction 0, mode number 0, format code ntsc
[decklink @ 0x7fe2b8003000] Found Decklink mode 720 x 486 with rate 29.97(i)
[decklink @ 0x7fe2b8003000] Using 8 input audio channels
[decklink @ 0x7fe2b8003000] Unable to find timecode.
    Last message repeated 5 times
[decklink @ 0x7fe2b8003000] Probe buffer size limit of 5000000 bytes reached
Input #0, decklink, from 'UltraStudio Express':
  Duration: N/A, start: 0.000000, bitrate: 229869 kb/s
    Stream #0:0, 5, 1/1000000: Audio: pcm_s16le, 48000 Hz, 8 channels, s16, 6144 kb/s
    Stream #0:1, 6, 1/1000000: Video: v210, 1 reference frame (V210 / 0x30313256), yuv422p10le(bottom first), 720x486, 0/1, 223725 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc

./ffprobe -timecode_format serial -v debug -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express"
ffprobe version N-91240-g3769aafb7c Copyright (c) 2007-2018 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.38)
  configuration: --enable-libfreetype --enable-nonfree --enable-decklink --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include --disable-muxer=mxf --disable-demuxer=mxf
  libavutil      56. 18.102 / 56. 18.102
  libavcodec     58. 19.105 / 58. 19.105
  libavformat    58. 17.100 / 58. 17.100
  libavdevice    58.  4.100 / 58.  4.100
  libavfilter     7. 25.100 /  7. 25.100
  libswscale      5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
[decklink @ 0x7fb086003000] Trying to find mode for frame size 0x0, frame timing 0/0, field order 0, direction 0, mode number 0, format code ntsc
[decklink @ 0x7fb086003000] Found Decklink mode 720 x 486 with rate 29.97(i)
[decklink @ 0x7fb086003000] Using 8 input audio channels
[decklink @ 0x7fb086003000] Unable to find timecode.
    Last message repeated 5 times
[decklink @ 0x7fb086003000] Probe buffer size limit of 5000000 bytes reached
Input #0, decklink, from 'UltraStudio Express':
  Duration: N/A, start: 0.000000, bitrate: 229869 kb/s
    Stream #0:0, 5, 1/1000000: Audio: pcm_s16le, 48000 Hz, 8 channels, s16, 6144 kb/s
    Stream #0:1, 6, 1/1000000: Video: v210, 1 reference frame (V210 / 0x30313256), yuv422p10le(bottom first), 720x486, 0/1, 223725 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc

Thanks,
Dave Rice
Dave Rice June 7, 2018, 4:21 p.m. UTC | #4
> On Jun 7, 2018, at 12:12 PM, Dave Rice <dave@dericed.com> wrote:
> 
> 
>> On Jun 6, 2018, at 5:32 PM, Marton Balint <cus@passwd.hu> wrote:
>> 
>> On Wed, 6 Jun 2018, Dave Rice wrote:
>> 
>>>> On Jun 6, 2018, at 4:50 PM, Marton Balint <cus@passwd.hu> wrote:
>>>> On Mon, 4 Jun 2018, Dave Rice wrote:
>>>>>>> In my testing the timecode value set here has correctly been associated with the first video frame (maintaining the timecode-to-first-frame relationship as found on the source video stream). Although only having the first timecode value known is limiting, I think this is still quite useful. This function also mirrors how BlackMagic Media Express and Adobe Premiere handle capturing video+timecode where only the first value is documented and all subsequent values are presumed.
>>>>>> Could you give me an example? (e.g. ffmpeg command line?)
>>>>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>>>>> This worked for me to embed a QuickTime timecode track based upon the timecode value of the first frame. If the input contained non-sequential timecode values then the timecode track would not be accurate from that point onward, but creating a timecode track based only upon the initial value is what BlackMagic Media Express and Adobe Premiere are doing anyhow.
>>>> Hmm, either the decklink drivers became better in hiding the first few NoSignal frames, or maybe that issue only affected old models (e.g. DeckLink SDI or DeckLink Duo 1). I did some tests with a Mini Recorder, and even the first frame was usable; in this case the timecode was indeed correct.
>>>>>>>> I'd rather see a new AVPacketSideData type which will contain the timecode as a string, so you can set it frame-by-frame.
>>>>>>> Using side data for timecode would be preferable, but the possibility that a patch for that may someday arrive shouldn’t completely block this more limited patch.
>>>>>> I would like to make sure the code works reliably even for the limited use case and no race conditions are affecting the way it works.
>>>>> Feel welcome to suggest any testing. I’ll have access for testing again tomorrow.
>>>> I reworked the patch a bit (see attached), and added per-frame timecode support into the PKT_STRINGS_METADATA packet side data; this way the drawtext filter can also be used to blend the timecode into the frames, which seems like a useful feature.
>>> 
>>> 
>>> That sounds helpful.
>>> 
>>> libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
>>>                  DECKLINK_STR decklink_tc;
>> 
>> The patch I sent only replaces the second patch, the first one:
>> 
>> http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20180526/185eb219/attachment.obj
> 
> Thanks for the update. I continued testing and found this very useful, particularly with the side data.
> 
> Before I only tested with vitc but now have a serial cable connected as well and found a source tape that has distinct values for LTC and VITC timecodes. The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 07:00:00 - 08:00:00.

Realized a mix-up here: in the samples below the VITC values are from 1:00:00 to 2:00:00 and the LTC values are from 07:00:00 to 08:00:00.

> With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to grab the LTC value:
> 
> deckcontrol gettimecode
> Issued command 'gettimecode'
> TC=07:37:56:21
> Command sucessfully issued
> Error sending command (No error)
> 
> With these patches, I can only grab the vitc values:
> 
> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
> rp188vitc: 
> rp188vitc2: 
> rp188ltc: 
> rp188any: 
> vitc: 01:41:44;06
> vitc2: 01:41:44;21
> serial: 
> 
> Also it may be interesting in cases like this to support accepting multiple timecode inputs at once, such as "-timecode_format vitc+rp188ltc” though it would need to be contextualized more in metadata.
> 
> With a serial cable connected, I can access LTC via the deckcontrol utility but not with this patch.
> 
> ./ffprobe -timecode_format rp188ltc -v debug -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express"
> ffprobe version N-91240-g3769aafb7c Copyright (c) 2007-2018 the FFmpeg developers
>  built with Apple LLVM version 9.0.0 (clang-900.0.38)
>  configuration: --enable-libfreetype --enable-nonfree --enable-decklink --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include --disable-muxer=mxf --disable-demuxer=mxf
>  libavutil      56. 18.102 / 56. 18.102
>  libavcodec     58. 19.105 / 58. 19.105
>  libavformat    58. 17.100 / 58. 17.100
>  libavdevice    58.  4.100 / 58.  4.100
>  libavfilter     7. 25.100 /  7. 25.100
>  libswscale      5.  2.100 /  5.  2.100
>  libswresample   3.  2.100 /  3.  2.100
> [decklink @ 0x7fe2b8003000] Trying to find mode for frame size 0x0, frame timing 0/0, field order 0, direction 0, mode number 0, format code ntsc
> [decklink @ 0x7fe2b8003000] Found Decklink mode 720 x 486 with rate 29.97(i)
> [decklink @ 0x7fe2b8003000] Using 8 input audio channels
> [decklink @ 0x7fe2b8003000] Unable to find timecode.
>    Last message repeated 5 times
> [decklink @ 0x7fe2b8003000] Probe buffer size limit of 5000000 bytes reached
> Input #0, decklink, from 'UltraStudio Express':
>  Duration: N/A, start: 0.000000, bitrate: 229869 kb/s
>    Stream #0:0, 5, 1/1000000: Audio: pcm_s16le, 48000 Hz, 8 channels, s16, 6144 kb/s
>    Stream #0:1, 6, 1/1000000: Video: v210, 1 reference frame (V210 / 0x30313256), yuv422p10le(bottom first), 720x486, 0/1, 223725 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
> 
> ./ffprobe -timecode_format serial -v debug -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express"
> ffprobe version N-91240-g3769aafb7c Copyright (c) 2007-2018 the FFmpeg developers
>  built with Apple LLVM version 9.0.0 (clang-900.0.38)
>  configuration: --enable-libfreetype --enable-nonfree --enable-decklink --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include --disable-muxer=mxf --disable-demuxer=mxf
>  libavutil      56. 18.102 / 56. 18.102
>  libavcodec     58. 19.105 / 58. 19.105
>  libavformat    58. 17.100 / 58. 17.100
>  libavdevice    58.  4.100 / 58.  4.100
>  libavfilter     7. 25.100 /  7. 25.100
>  libswscale      5.  2.100 /  5.  2.100
>  libswresample   3.  2.100 /  3.  2.100
> [decklink @ 0x7fb086003000] Trying to find mode for frame size 0x0, frame timing 0/0, field order 0, direction 0, mode number 0, format code ntsc
> [decklink @ 0x7fb086003000] Found Decklink mode 720 x 486 with rate 29.97(i)
> [decklink @ 0x7fb086003000] Using 8 input audio channels
> [decklink @ 0x7fb086003000] Unable to find timecode.
>    Last message repeated 5 times
> [decklink @ 0x7fb086003000] Probe buffer size limit of 5000000 bytes reached
> Input #0, decklink, from 'UltraStudio Express':
>  Duration: N/A, start: 0.000000, bitrate: 229869 kb/s
>    Stream #0:0, 5, 1/1000000: Audio: pcm_s16le, 48000 Hz, 8 channels, s16, 6144 kb/s
>    Stream #0:1, 6, 1/1000000: Video: v210, 1 reference frame (V210 / 0x30313256), yuv422p10le(bottom first), 720x486, 0/1, 223725 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
> 
> Thanks,
> Dave Rice
> 
> 
> 
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Marton Balint June 7, 2018, 9:01 p.m. UTC | #5
On Thu, 7 Jun 2018, Dave Rice wrote:


[...]

>
> Before I only tested with vitc but now have a serial cable connected as 
> well and found a source tape that has distinct values for LTC and VITC 
> timecodes. The LTC values are from 1:00:00 to 2:00:00 and the VITC 
> values are from 07:00:00 - 08:00:00.
>
> With the deckcontrol utility at https://github.com/bavc/deckcontrol, 
> I can use the command gettimecode to grab the LTC value:
>
> deckcontrol gettimecode
> Issued command 'gettimecode'
> TC=07:37:56:21
> Command sucessfully issued
> Error sending command (No error)
>
> With these patches, I can only grab the vitc values:
>
> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
> rp188vitc: 
> rp188vitc2: 
> rp188ltc: 
> rp188any: 
> vitc: 01:41:44;06
> vitc2: 01:41:44;21
> serial:
>
> Also it may be interesting in cases like this to support accepting 
> multiple timecode inputs at once, such as "-timecode_format 
> vitc+rp188ltc” though it would need to be contextualized more in 
> metadata.
>
> With a serial cable connected, I can access LTC via the deckcontrol 
> utility but not with this patch.

Well, the way I understand it, deckcontrol is using a totally different 
timecode source: the RS422 deck control interface. In contrast, the
timecode capture in the patch is using the SDI (video) source.

If the deck does not put the LTC timecode into SDI line 10, then the 
driver won't be able to capture it if you specify 'rp188ltc'. I am not 
sure however why 'serial' does not work, but from a quick look at the 
SDK maybe that only works if you use the deck control capture functions...
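(For context on what the driver pulls off that ancillary line: VITC and RP188 carry the timecode as 4-bit BCD digit groups with flag bits mixed in, per SMPTE 12M. A rough illustrative sketch of decoding such groups into the string form used above; the group layout here is my reading of the spec, so treat it as an assumption rather than the SDK's actual wire format:)

```python
def decode_smpte12m_groups(groups):
    """Decode eight 4-bit SMPTE 12M BCD groups into a timecode string.

    Assumed group order: frame units, frame tens, second units,
    second tens, minute units, minute tens, hour units, hour tens.
    The drop-frame flag rides in bit 2 of the frame-tens group.
    """
    fu, ft, su, st, mu, mt, hu, ht = groups
    drop = bool(ft & 0x4)
    frames = (ft & 0x3) * 10 + fu
    seconds = (st & 0x7) * 10 + su
    minutes = (mt & 0x7) * 10 + mu
    hours = (ht & 0x3) * 10 + hu
    sep = ";" if drop else ":"
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}{sep}{frames:02d}"

# e.g. the vitc value captured earlier in the thread, 01:41:44;06:
print(decode_smpte12m_groups((6, 0x4, 4, 4, 1, 4, 1, 0)))  # 01:41:44;06
```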

Regards,
Marton
Dave Rice June 12, 2018, 6:22 p.m. UTC | #6
> On Jun 7, 2018, at 5:01 PM, Marton Balint <cus@passwd.hu> wrote:
> 
> On Thu, 7 Jun 2018, Dave Rice wrote:
> 
> [...]
> 
>> 
>> Before I only tested with vitc but now have a serial cable connected as well and found a source tape that has distinct values for LTC and VITC timecodes. The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 07:00:00 - 08:00:00.
>> 
>> With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to grab the LTC value:
>> 
>> deckcontrol gettimecode
>> Issued command 'gettimecode'
>> TC=07:37:56:21
>> Command sucessfully issued
>> Error sending command (No error)
>> 
>> With these patches, I can only grab the vitc values:
>> 
>> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
>> rp188vitc: rp188vitc2: rp188ltc: rp188any: vitc: 01:41:44;06
>> vitc2: 01:41:44;21
>> serial:
>> 
>> Also it may be interesting in cases like this to support accepting multiple timecode inputs at once, such as "-timecode_format vitc+rp188ltc” though it would need to be contextualized more in metadata.
>> 
>> With a serial cable connected, I can access LTC via the deckcontrol utility but not with this patch.
> 
> Well, the way I understand it, deckcontrol is using a totally different timecode source: the RS422 deck control interface. In contrast, the
> timecode capture in the patch is using the SDI (video) source.
> 
> If the deck does not put the LTC timecode into SDI line 10, then the driver won't be able to capture it if you specify 'rp188ltc'. I am not sure however why 'serial' does not work, but from a quick look at the SDK maybe that only works if you use the deck control capture functions…

I see at https://forum.blackmagicdesign.com/viewtopic.php?f=12&t=71730&p=400097&hilit=bmdTimecodeSerial#p400097 that capturing bmdTimecodeSerial is an issue there too, so this is likely an issue with the SDK rather than with your patch. Otherwise I’ve been testing this more and find it really useful. Hope to see this merged.
Dave
Marton Balint June 12, 2018, 10:27 p.m. UTC | #7
On Tue, 12 Jun 2018, Dave Rice wrote:

>
>> On Jun 7, 2018, at 5:01 PM, Marton Balint <cus@passwd.hu> wrote:
>> 
>> On Thu, 7 Jun 2018, Dave Rice wrote:
>> 
>> [...]
>> 
>>> 
>>> Before I only tested with vitc but now have a serial cable connected as well and found a source tape that has distinct values for LTC and VITC timecodes. The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 07:00:00 - 08:00:00.
>>> 
>>> With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to grab the LTC value:
>>> 
>>> deckcontrol gettimecode
>>> Issued command 'gettimecode'
>>> TC=07:37:56:21
>>> Command sucessfully issued
>>> Error sending command (No error)
>>> 
>>> With these patches, I can only grab the vitc values:
>>> 
>>> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
>>> rp188vitc: rp188vitc2: rp188ltc: rp188any: vitc: 01:41:44;06
>>> vitc2: 01:41:44;21
>>> serial:
>>> 
>>> Also it may be interesting in cases like this to support accepting multiple timecode inputs at once, such as "-timecode_format vitc+rp188ltc” though it would need to be contextualized more in metadata.
>>> 
>>> With a serial cable connected, I can access LTC via the deckcontrol utility but not with this patch.
>> 
>> Well, the way I understand it, deckcontrol is using a totally different timecode source: the RS422 deck control interface. In contrast, the
>> timecode capture in the patch is using the SDI (video) source.
>> 
>> If the deck does not put the LTC timecode into SDI line 10, then the driver won't be able to capture it if you specify 'rp188ltc'. I am not sure however why 'serial' does not work, but from a quick look at the SDK maybe that only works if you use the deck control capture functions…
>

> I see at 
> https://forum.blackmagicdesign.com/viewtopic.php?f=12&t=71730&p=400097&hilit=bmdTimecodeSerial#p400097 
> that capturing bmdTimecodeSerial is an issue there too, so this is 
> likely an issue with the sdk rather than with your patch. Otherwise I’ve 
> been testing this more and find it really useful. Hope to see this 
> merged.

Pushed, thanks for testing.

Regards,
Marton
Jonathan Morley July 10, 2018, 4:02 p.m. UTC | #8
Hi Marton et al,

I am revisiting this now that I have access to the BlackMagic DeckLink Duo again. I see what made it into master and had a few questions.

1) Can you please explain more about storing the timecodes as side packet data in each video packet? Basically is there some convention among demuxers and muxers to handle that specific metadata?

2) Is there any reason not to make a valid timecode track (à la an AVMEDIA_TYPE_DATA AVStream) with timecode packets? Would that conflict with the side data approach currently implemented?

I have found that (much as you originally predicted) my original approach relies on what could generously be called a race condition. Since I was using the decklink demuxer to feed the movenc format writer, I was relying on having set the timecode metadata on the video stream before either mov_init or mov_write_header gets called (since those two functions create the timecode track for “free” if they detect the metadata initially). This is not really a deterministic approach and seems worse than making a valid timecode stream to begin with.

What I am trying to understand now is if there is any overlap in responsibility between the video packet side data approach and creating a dedicated timecode data stream. Please let me know what you think.

Thanks,
Jon

> On Jun 12, 2018, at 3:27 PM, Marton Balint <cus@passwd.hu> wrote:
> 
> 
> 
> On Tue, 12 Jun 2018, Dave Rice wrote:
> 
>> 
>>> On Jun 7, 2018, at 5:01 PM, Marton Balint <cus@passwd.hu> wrote:
>>> On Thu, 7 Jun 2018, Dave Rice wrote:
>>> [...]
>>>> Before I only tested with vitc but now have a serial cable connected as well and found a source tape that has distinct values for LTC and VITC timecodes. The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 07:00:00 - 08:00:00.
>>>> With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to grab the LTC value:
>>>> deckcontrol gettimecode
>>>> Issued command 'gettimecode'
>>>> TC=07:37:56:21
>>>> Command sucessfully issued
>>>> Error sending command (No error)
>>>> With these patches, I can only grab the vitc values:
>>>> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
>>>> rp188vitc: rp188vitc2: rp188ltc: rp188any: vitc: 01:41:44;06
>>>> vitc2: 01:41:44;21
>>>> serial:
>>>> Also it may be interesting in cases like this to support accepting multiple timecode inputs at once, such as "-timecode_format vitc+rp188ltc” though it would need to be contextualized more in metadata.
>>>> With a serial cable connected, I can access LTC via the deckcontrol utility but not with this patch.
>>> Well, the way I understand it, deckcontrol is using a totally different timecode source: the RS422 deck control interface. In contrast, the
>>> timecode capture in the patch is using the SDI (video) source.
>>> If the deck does not put the LTC timecode into SDI line 10, then the driver won't be able to capture it if you specify 'rp188ltc'. I am not sure however why 'serial' does not work, but from a quick look at the SDK maybe that only works if you use the deck control capture functions…
>> 
> 
>> I see at https://forum.blackmagicdesign.com/viewtopic.php?f=12&t=71730&p=400097&hilit=bmdTimecodeSerial#p400097 that capturing bmdTimecodeSerial is an issue there too, so this is likely an issue with the sdk rather than with your patch. Otherwise I’ve been testing this more and find it really useful. Hope to see this merged.
> 
> Pushed, thanks for testing.
> 
> Regards,
> Marton
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Marton Balint July 15, 2018, 8:48 p.m. UTC | #9
On Tue, 10 Jul 2018, Jonathan Morley wrote:

> Hi Marton et al,
>
> I am revisiting this now that I have access to the BlackMagic DeckLink 
> Duo again. I see what made it into master and had a few questions.
>
> 1) Can you please explain more about storing the timecodes as side 
> packet data in each video packet? Basically is there some convention 
> among demuxers and muxers to handle that specific metadata?

In the current implementation per-frame timecode is stored as 
AV_PKT_DATA_STRINGS_METADATA side data. When AVPackets become AVFrames, 
the AV_PKT_DATA_STRINGS_METADATA is automatically converted to entries in 
the AVFrame->metadata AVDictionary. The dictionary key is "timecode".

There is no "standard" way to store per-frame timecode, neither in 
packets, nor in frames (other than the frame side data 
AV_FRAME_DATA_GOP_TIMECODE, but that is too specific to MPEG). Using 
AVFrame->metadata for this is also non-standard, but it allows us to 
implement the feature without worrying too much about defining / 
documenting it.

Also it is worth mentioning that the frame metadata is lost when encoding, 
so the muxers won't have access to it, unless the encoders export it in 
some way, such as packet metadata or side data (they currently don't).

>
> 2) Is there any reason not to make a valid timecode track (ala 
> AVMEDIA_TYPE_DATA AVStream) with timecode packets? Would that conflict 
> with the side data approach currently implemented?

I see no conflict; you might implement a timecode "track", but I don't see 
why that would make your life any easier.

>
> I have found that (much as you originally predicted) my original 
> approach relies on what could generously be called a race condition. Since I 
> was using the decklink demuxer to feed the movenc format writer I was 
> relying on having set the timecode metadata on the video stream before 
> either mov_init or mov_write_header get called (since those two 
> functions create the timecode track for “free” if they detect the 
> metadata initially). This is not really a deterministic approach and 
> seems to be worse than making a valid timecode stream to begin with.

AVStream->timecode is still set if the first decklink frame has a 
timecode, so anything that worked with your initial patch should work now 
as well. Obviously this approach has limitations, but it works well for 
some use cases.

>
> What I am trying to understand now is if there is any overlap in 
> responsibility between the video packet side data approach and creating 
> a dedicated timecode data stream. Please let me know what you think.

I don't think there is any overlap, but I am also not aware of any muxer 
which supports timecode tracks as separate data streams. Not even movenc, 
as far as I can see.

Regards,
Marton
Devin Heitmueller July 16, 2018, 1:32 p.m. UTC | #10
Hi Marton,

> 
> In the current implementation per-frame timecode is stored as AV_PKT_DATA_STRINGS_METADATA side data, when AVPackets become AVFrames, the AV_PKT_DATA_STRINGS_METADATA is automatically converted to entries in the AVFrame->metadata AVDictionary. The dictionary key is "timecode".
> 
> There is no "standard" way to store per-frame timecode, neither in packets, nor in frames (other than the frame side data AV_FRAME_DATA_GOP_TIMECODE, but that is too specific to MPEG). Using AVFrame->metadata for this is also non-standard, but it allows us to implement the feature without worrying too much about defining / documenting it.

For what it’s worth, I’ve got timecode support implemented here where I turned the uint32 defined in libavutil/timecode.h into a new frame side data type.  I’ve got the H.264 decoder extracting timecodes from SEI and creating these, which are then fed to the decklink output where they get converted into the appropriate VANC packets.  Seems to be working pretty well, although there are still a couple of edge cases to be ironed out with interlaced content and PAFF streams.

> 
> Also it is worth mentioning that the frame metadata is lost when encoding, so the muxers won't have access to it, unless the encoders export it in some way, such as packet metadata or side data (they currently don't).

Since for the moment I’m focused on the decoding case, I’ve changed the V210 encoder to convert the AVFrame side data into AVPacket side data (so the decklink output can get access to the data), and when I hook in the decklink capture support I will be submitting patches for the H.264 and HEVC encoders.

>> 
>> 2) Is there any reason not to make a valid timecode track (ala AVMEDIA_TYPE_DATA AVStream) with timecode packets? Would that conflict with the side data approach currently implemented?
> 
> I see no conflict, you might implement a timecode "track", but I don't see why that would make your life any easier.

The whole notion of supporting this via a stream versus side data is a long-standing issue.  It impacts not just timecodes but also stuff like closed captions, SCTE-104 triggers, and teletext.  In some cases like MOV it’s carried in the container as a separate stream; in other cases like MPEG2/H.264/HEVC it’s carried in the video stream.

At least for captions and timecodes the side data approach works fine in the video stream case, but it’s problematic when data carried as side data needs to be extracted into a stream.  The only way I could think of doing it was to insert a split filter on the video stream and feed both the actual video encoder and a second encoder instance which throws away the video frames and just acts on the side data to create the caption stream.

And of course you have the same problem in the other direction: if you receive the timecodes/captions via a stream, how do you get them into side data so they can be encoded by the video encoder?

---
Devin Heitmueller - LTN Global Communications
dheitmueller@ltnglobal.com
Jonathan Morley July 16, 2018, 6:56 p.m. UTC | #11
That is really interesting feedback, guys. I have been thinking about things mostly in a MOV independent timecode track (or tracks) kind of way, but I know OMF, MXF, AAF, etc. handle it more analogously to packet/frame side data.

Usually ffmpeg has some kind of superset of functionality for handling any one concept, in order to be able to handle all the various forms and implementations. I don’t really see that for timecode, though I don’t really know what it would look like either, especially given the compromises you both pointed out.

In my case it turned out that our Decklink Duo 2 was in a duplex state that caused the first few frames to get dropped and thus, timing-wise, miss the opening of the output format. That is why it appeared to fail setting the timecode in the output container. We don’t really need that duplex mode (or the Decklink at all for that matter) so I think we are set for now.

I will keep my thinking cap on about ffmpeg and timecode though. What I need may just be about adding understanding to mov.c and movenc.c for handling densely populated independent timecode tracks.

Thanks,
Jon

> On Jul 16, 2018, at 6:32 AM, Devin Heitmueller <dheitmueller@ltnglobal.com> wrote:
> 
> Hi Marton,
> 
>> 
>> In the current implementation per-frame timecode is stored as AV_PKT_DATA_STRINGS_METADATA side data, when AVPackets become AVFrames, the AV_PKT_DATA_STRINGS_METADATA is automatically converted to entries in the AVFrame->metadata AVDictionary. The dictionary key is "timecode".
>> 
>> There is no "standard" way to store per-frame timecode, neither in packets, nor in frames (other than the frame side data AV_FRAME_DATA_GOP_TIMECODE, but that is too specific to MPEG). Using AVFrame->metadata for this is also non-standard, but it allows us to implement the feature without worrying too much about defining / documenting it.
> 
> For what it’s worth, I’ve got timecode support implemented here where I turned the uint32 defined in libavutil/timecode.h into a new frame side data type.  I’ve got the H.264 decoder extracting timecodes from SEI and creating these, which are then fed to the decklink output where they get converted into the appropriate VANC packets.  Seems to be working pretty well, although there are still a couple of edge cases to be ironed out with interlaced content and PAFF streams.
> 
>> 
> Also it is worth mentioning that the frame metadata is lost when encoding, so the muxers won't have access to it, unless the encoders export it in some way, such as packet metadata or side data (they currently don't).
> 
> Since for the moment I’m focused on the decoding case, I’ve changed the V210 encoder to convert the AVFrame side data into AVPacket side data (so the decklink output can get access to the data), and when I hook in the decklink capture support I will be submitting patches for the H.264 and HEVC encoders.
> 
>>> 
>>> 2) Is there any reason not to make a valid timecode track (ala AVMEDIA_TYPE_DATA AVStream) with timecode packets? Would that conflict with the side data approach currently implemented?
>> 
>> I see no conflict, you might implement a timecode "track", but I don't see why that would make your life any easier.
> 
> The whole notion of supporting this via a stream versus side data is a long-standing issue.  It impacts not just timecodes but also stuff like closed captions, SCTE-104 triggers, and teletext.  In some cases like MOV it’s carried in the container as a separate stream; in other cases like MPEG2/H.264/HEVC it’s carried in the video stream.
> 
> At least for captions and timecodes the side data approach works fine in the video stream case, but it’s problematic when data carried as side data needs to be extracted into a stream.  The only way I could think of doing it was to insert a split filter on the video stream and feed both the actual video encoder and a second encoder instance which throws away the video frames and just acts on the side data to create the caption stream.
> 
> And of course you have the same problem in the other direction: if you receive the timecodes/captions via a stream, how do you get them into side data so they can be encoded by the video encoder?
> 
> ---
> Devin Heitmueller - LTN Global Communications
> dheitmueller@ltnglobal.com
> 
Devin Heitmueller July 16, 2018, 7:23 p.m. UTC | #12
> On Jul 16, 2018, at 2:56 PM, Jonathan Morley <jmorley@pixsystem.com> wrote:
> 
> That is really interesting feedback guys. I have been thinking about things mostly in a MOV independent timecode track (or tracks) kind of way, but I know OMF, MXF, AAF, etc handle it more analogously to packet/frame side data.
> 
> Usually ffmpeg has some kind of superset of functionality for handling any one concept in order to be able to handle all the various forms and implementations. I don’t really see that for timecode though I don’t really know what that would look like either. Especially given the compromises you both pointed out.
> 
> In my case it turned out that our Decklink Duo 2 was in a duplex state that caused the first few frames to get dropped and thus miss the opening of the output format timing wise. That is why it appeared to fail setting the timecode in the output container. We don’t really need that duplex mode (or the Decklink at all for that matter) so I think we are set for now.

I’ve run into this in my decklink libavdevice capture code for a number of other VANC types that result in streams having to be created (e.g. SMPTE 2038 and SCTE-104).  The way I approached the problem was to add an option to the demux to *always* create the stream rather than relying on detecting the presence of the data during the probing phase.  This helps in the case where a few frames may be thrown away, as well as the case where actual data is not necessarily always present (such as SCTE-104 triggers).

Devin
Jonathan Morley July 17, 2018, 12:52 p.m. UTC | #13
Ah yeah that makes a lot of sense, Devin. I will keep that in mind for the ntv2 avdevice I created for use with our Kona4.

I basically copied the exact structure of the decklink libavdevice as my starting point anyway. It isn’t nearly as flexible and focuses exclusively on capture (demuxing), but I had the advantage of making something that exactly fit our needs for now.

If things work out I would like to do a generalization pass and see if I can contribute it back here, however unlikely it is that others would use AJA hardware with ffmpeg.

Thanks for the information!

> On Jul 16, 2018, at 12:23 PM, Devin Heitmueller <dheitmueller@ltnglobal.com> wrote:
> 
> 
>> On Jul 16, 2018, at 2:56 PM, Jonathan Morley <jmorley@pixsystem.com> wrote:
>> 
>> That is really interesting feedback guys. I have been thinking about things mostly in a MOV independent timecode track (or tracks) kind of way, but I know OMF, MXF, AAF, etc handle it more analogously to packet/frame side data.
>> 
>> Usually ffmpeg has some kind of superset of functionality for handling any one concept in order to be able to handle all the various forms and implementations. I don’t really see that for timecode though I don’t really know what that would look like either. Especially given the compromises you both pointed out.
>> 
>> In my case it turned out that our Decklink Duo 2 was in a duplex state that caused the first few frames to get dropped and thus miss the opening of the output format timing wise. That is why it appeared to fail setting the timecode in the output container. We don’t really need that duplex mode (or the Decklink at all for that matter) so I think we are set for now.
> 
> I’ve run into this in my decklink libavdevice capture code for a number of other VANC types that result in streams having to be created (e.g. SMPTE 2038 and SCTE-104).  The way I approached the problem was to add an option to the demux to *always* create the stream rather than relying on detecting the presence of the data during the probing phase.  This helps in the case where a few frames may be thrown away, as well as the case where actual data is not necessarily always present (such as SCTE-104 triggers).
> 
> Devin

Patch

From bb69bdab51d46df6ad9ffe934e395dc959904051 Mon Sep 17 00:00:00 2001
From: Jon Morley <jmorley@pixsystem.com>
Date: Thu, 31 May 2018 02:45:07 -0700
Subject: [PATCH] avdevice/decklink_dec: capture timecode to metadata when
 requested

If the user provides a valid timecode_format look for timecode of that
format in the capture and if found store it on the video avstream's
metadata.

Slightly modified by Marton Balint to capture per-frame timecode as well.

Signed-off-by: Marton Balint <cus@passwd.hu>
---
 doc/indevs.texi                 |  6 ++++++
 libavdevice/decklink_common.h   | 12 ++++++++++++
 libavdevice/decklink_common_c.h |  1 +
 libavdevice/decklink_dec.cpp    | 41 +++++++++++++++++++++++++++++++++++++++++
 libavdevice/decklink_dec_c.c    |  9 +++++++++
 5 files changed, 69 insertions(+)

diff --git a/doc/indevs.texi b/doc/indevs.texi
index 6951940a93..632d1e4743 100644
--- a/doc/indevs.texi
+++ b/doc/indevs.texi
@@ -326,6 +326,12 @@  Defaults to @samp{2}.
 Sets the decklink device duplex mode. Must be @samp{unset}, @samp{half} or @samp{full}.
 Defaults to @samp{unset}.
 
+@item timecode_format
+Timecode type to include in the frame and video stream metadata. Must be
+@samp{none}, @samp{rp188vitc}, @samp{rp188vitc2}, @samp{rp188ltc},
+@samp{rp188any}, @samp{vitc}, @samp{vitc2}, or @samp{serial}. Defaults to
+@samp{none} (not included).
+
 @item video_input
 Sets the video input source. Must be @samp{unset}, @samp{sdi}, @samp{hdmi},
 @samp{optical_sdi}, @samp{component}, @samp{composite} or @samp{s_video}.
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index 8064abdcb9..96b001c2d8 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -93,6 +93,7 @@  struct decklink_ctx {
     BMDDisplayMode bmd_mode;
     BMDVideoConnection video_input;
     BMDAudioConnection audio_input;
+    BMDTimecodeFormat tc_format;
     int bmd_width;
     int bmd_height;
     int bmd_field_dominance;
@@ -169,6 +170,17 @@  static const BMDVideoConnection decklink_video_connection_map[] = {
     bmdVideoConnectionSVideo,
 };
 
+static const BMDTimecodeFormat decklink_timecode_format_map[] = {
+    (BMDTimecodeFormat)0,
+    bmdTimecodeRP188VITC1,
+    bmdTimecodeRP188VITC2,
+    bmdTimecodeRP188LTC,
+    bmdTimecodeRP188Any,
+    bmdTimecodeVITC,
+    bmdTimecodeVITCField2,
+    bmdTimecodeSerial,
+};
+
 HRESULT ff_decklink_get_display_name(IDeckLink *This, const char **displayName);
 int ff_decklink_set_configs(AVFormatContext *avctx, decklink_direction_t direction);
 int ff_decklink_set_format(AVFormatContext *avctx, int width, int height, int tb_num, int tb_den, enum AVFieldOrder field_order, decklink_direction_t direction = DIRECTION_OUT, int num = 0);
diff --git a/libavdevice/decklink_common_c.h b/libavdevice/decklink_common_c.h
index 08e9f9bbd5..32a5d70ee1 100644
--- a/libavdevice/decklink_common_c.h
+++ b/libavdevice/decklink_common_c.h
@@ -50,6 +50,7 @@  struct decklink_cctx {
     DecklinkPtsSource video_pts_source;
     int audio_input;
     int video_input;
+    int tc_format;
     int draw_bars;
     char *format_code;
     int raw_format;
diff --git a/libavdevice/decklink_dec.cpp b/libavdevice/decklink_dec.cpp
index 974ee1d94c..f32b1b44af 100644
--- a/libavdevice/decklink_dec.cpp
+++ b/libavdevice/decklink_dec.cpp
@@ -752,6 +752,36 @@  HRESULT decklink_input_callback::VideoInputFrameArrived(
                         "- Frames dropped %u\n", ctx->frameCount, ++ctx->dropped);
             }
             no_video = 0;
+
+            // Handle Timecode (if requested)
+            if (ctx->tc_format)
+            {
+                IDeckLinkTimecode *timecode;
+                if (videoFrame->GetTimecode(ctx->tc_format, &timecode) == S_OK) {
+                    const char *tc = NULL;
+                    DECKLINK_STR decklink_tc;
+                    if (timecode->GetString(&decklink_tc) == S_OK) {
+                        tc = DECKLINK_STRDUP(decklink_tc);
+                        DECKLINK_FREE(decklink_tc);
+                    }
+                    timecode->Release();
+                    if (tc) {
+                        AVDictionary* metadata_dict = NULL;
+                        int metadata_len;
+                        uint8_t* packed_metadata;
+                        if (av_dict_set(&metadata_dict, "timecode", tc, AV_DICT_DONT_STRDUP_VAL) >= 0) {
+                            packed_metadata = av_packet_pack_dictionary(metadata_dict, &metadata_len);
+                            av_dict_free(&metadata_dict);
+                            if (packed_metadata) {
+                                if (av_packet_add_side_data(&pkt, AV_PKT_DATA_STRINGS_METADATA, packed_metadata, metadata_len) < 0)
+                                    av_freep(&packed_metadata);
+                            }
+                        }
+                    }
+                } else {
+                    av_log(avctx, AV_LOG_DEBUG, "Unable to find timecode.\n");
+                }
+            }
         }
 
         pkt.pts = get_pkt_pts(videoFrame, audioFrame, wallclock, abs_wallclock, ctx->video_pts_source, ctx->video_st->time_base, &initial_video_pts, cctx->copyts);
@@ -969,6 +999,8 @@  av_cold int ff_decklink_read_header(AVFormatContext *avctx)
     ctx->teletext_lines = cctx->teletext_lines;
     ctx->preroll      = cctx->preroll;
     ctx->duplex_mode  = cctx->duplex_mode;
+    if (cctx->tc_format > 0 && (unsigned int)cctx->tc_format < FF_ARRAY_ELEMS(decklink_timecode_format_map))
+        ctx->tc_format = decklink_timecode_format_map[cctx->tc_format];
     if (cctx->video_input > 0 && (unsigned int)cctx->video_input < FF_ARRAY_ELEMS(decklink_video_connection_map))
         ctx->video_input = decklink_video_connection_map[cctx->video_input];
     if (cctx->audio_input > 0 && (unsigned int)cctx->audio_input < FF_ARRAY_ELEMS(decklink_audio_connection_map))
@@ -1222,6 +1254,15 @@  int ff_decklink_read_packet(AVFormatContext *avctx, AVPacket *pkt)
 
     avpacket_queue_get(&ctx->queue, pkt, 1);
 
+    if (ctx->tc_format && !(av_dict_get(ctx->video_st->metadata, "timecode", NULL, 0))) {
+        int size;
+        const uint8_t *side_metadata = av_packet_get_side_data(pkt, AV_PKT_DATA_STRINGS_METADATA, &size);
+        if (side_metadata) {
+           if (av_packet_unpack_dictionary(side_metadata, size, &ctx->video_st->metadata) < 0)
+               av_log(avctx, AV_LOG_ERROR, "Unable to set timecode\n");
+        }
+    }
+
     return 0;
 }
 
diff --git a/libavdevice/decklink_dec_c.c b/libavdevice/decklink_dec_c.c
index 47018dc681..6ab3819375 100644
--- a/libavdevice/decklink_dec_c.c
+++ b/libavdevice/decklink_dec_c.c
@@ -48,6 +48,15 @@  static const AVOption options[] = {
     { "unset",         NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 0}, 0, 0,    DEC, "duplex_mode"},
     { "half",          NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 1}, 0, 0,    DEC, "duplex_mode"},
     { "full",          NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 2}, 0, 0,    DEC, "duplex_mode"},
+    { "timecode_format", "timecode format",           OFFSET(tc_format),  AV_OPT_TYPE_INT,   { .i64 = 0}, 0, 7,    DEC, "tc_format"},
+    { "none",          NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 0}, 0, 0,    DEC, "tc_format"},
+    { "rp188vitc",     NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 1}, 0, 0,    DEC, "tc_format"},
+    { "rp188vitc2",    NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 2}, 0, 0,    DEC, "tc_format"},
+    { "rp188ltc",      NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 3}, 0, 0,    DEC, "tc_format"},
+    { "rp188any",      NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 4}, 0, 0,    DEC, "tc_format"},
+    { "vitc",          NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 5}, 0, 0,    DEC, "tc_format"},
+    { "vitc2",         NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 6}, 0, 0,    DEC, "tc_format"},
+    { "serial",        NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 7}, 0, 0,    DEC, "tc_format"},
     { "video_input",  "video input",              OFFSET(video_input),    AV_OPT_TYPE_INT,   { .i64 = 0}, 0, 6,    DEC, "video_input"},
     { "unset",         NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 0}, 0, 0,    DEC, "video_input"},
     { "sdi",           NULL,                                          0,  AV_OPT_TYPE_CONST, { .i64 = 1}, 0, 0,    DEC, "video_input"},
-- 
2.16.3