Packaging for HTTP Dynamic Streaming (HDS)
When packaging content for delivery by Adobe Media Server (AMS) instead of Unified Origin, there are some additional requirements and limitations.
Limitations:
Only the streaming protocols supported by AMS are available (i.e. HDS and HLS). Your content won't be available in the formats MPEG-DASH and Smooth Streaming.
HDS does not support the audio codecs DTS, Dolby Digital and Dolby Digital Plus.
The progressive download files cannot reference the sample data in other files, so the audio and video sample data is duplicated.
Requirements:
The source content must be converted to the F4F file format.
An additional client manifest file (.f4m) must be generated.
A (default) audio track must be included in each F4F file.
An index file (.f4x) must be generated for each corresponding F4F file.
The names of the .f4f files must end with the special marker '-Seg1'.
Note
These limitations and requirements do not apply for content delivered with Unified Origin.
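The '-Seg1' naming requirement can be sanity-checked before packaging. The sketch below is illustrative only (the filenames are assumptions, not output of the packager):

```shell
#!/bin/bash
# Sketch: verify that segment files follow the '-Seg1' naming rule
# required by AMS. The filenames below are illustrative assumptions.
check_seg1_name() {
  local name="$1"
  case "${name%.f4f}" in
    *-Seg1) echo "ok: $name" ;;
    *)      echo "bad: $name (must end with '-Seg1.f4f')" ;;
  esac
}

check_seg1_name "video_400k-Seg1.f4f"
check_seg1_name "video_400k.f4f"
```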
Options for HDS packaging
The packager supports the following options.
--dry_run
Do not write the output.
--fragment_duration
The target duration of each fragment (in milliseconds), defaults to 4000.
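To get a feel for what the default of 4000 milliseconds implies, the fragment count for a presentation can be estimated with a quick calculation. The 60-second duration below is an assumed example, not taken from the sample content:

```shell
#!/bin/bash
# Sketch: estimate the fragment count, assuming an example duration of
# 60 seconds and the default 4000 ms target fragment duration.
duration_ms=60000
fragment_duration=4000
# Ceiling division: a trailing partial fragment still counts as one.
fragment_count=$(( (duration_ms + fragment_duration - 1) / fragment_duration ))
echo "$fragment_count"
```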
Creating the media files (.f4f and .f4x)
The first step is to package all the source content into the format that is used by AMS. This is the fragmented-MP4 format (using the .f4f file extension) and the corresponding index file (.f4x).
The example uses this Source Content.
#!/bin/bash
mp4split -o video_200k-Seg1.f4f \
video_200k.mp4 \
audio_aac-lc.mp4
mp4split -o video_400k-Seg1.f4f \
video_400k.mp4 \
audio_aac-lc.mp4
mp4split -o video_600k-Seg1.f4f \
video_600k.mp4 \
audio_aac-lc.mp4
mp4split -o video_800k-Seg1.f4f \
video_800k.mp4 \
audio_aac-lc.mp4
mp4split -o audio_he-aac-Seg1.f4f \
audio_he-aac.mp4
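Since the four video invocations above differ only in bitrate, they can also be expressed as a loop. The sketch below only echoes the commands so they can be inspected; replace `echo` with the real invocation once the output looks right:

```shell
#!/bin/bash
# Sketch: generate the four video packaging commands in a loop.
# 'echo' is used here so the commands are printed, not executed.
for bitrate in 200k 400k 600k 800k; do
  echo mp4split -o "video_${bitrate}-Seg1.f4f" \
    "video_${bitrate}.mp4" \
    audio_aac-lc.mp4
done
```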
Now that we have packaged all the audio and video, the next step is to create the two progressive download files. In this case the audio and video data is duplicated.
#!/bin/bash
mp4split -o video_400k.mp4 \
video_400k.mp4 \
audio_aac-lc.mp4
mp4split -o video_800k.mp4 \
video_800k.mp4 \
audio_he-aac.mp4
Using alternate audio
If you want to use alternate audio tracks, then the alternate audio tracks must be in separate .f4f files. No video should be included in these files.
The default audio track must be included in all the .f4f files containing the video.
When creating the .f4m manifest file, the packager marks the audio tracks in the audio-only .f4f files to be used as 'alternate' audio.
Note that when using alternate audio, the version of the manifest is changed to '2.0'.
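For reference, a version '2.0' client manifest declares the 2.0 F4M namespace in its root element. The snippet below is an illustrative fragment only, not actual packager output:

```xml
<!-- Illustrative fragment: a version 2.0 client manifest declares
     the 2.0 F4M namespace; alternate audio tracks are marked with
     the 'alternate' attribute on their media entries. -->
<manifest xmlns="http://ns.adobe.com/f4m/2.0">
  <!-- <media type="audio" alternate="true" .../> entries go here -->
</manifest>
```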
Creating the media files with Adobe Primetime DRM
Note
In case your input is pre-encrypted, Packager and Origin will pick up on any DRM signaling present in the input and automatically pass it through in the output they generate. This means that for any DRM system for which signaling is present in the input, you do not need to specify DRM configuration options when preparing your stream. However, do note that there are DRM systems for which such signaling can't be present by design, like FairPlay, because signaling for these systems is never stored in the media. This means that to support such DRM systems, you will always need to add the necessary DRM configuration options.
You can add Adobe Primetime DRM to the .f4f media files. Use the following options to do so:
--hds.key
The key id (KID) and content encryption key (CEK) are passed with the --hds.key option, where KID and CEK are separated by a colon, e.g. --hds.key=KID:CEK.
As Adobe Primetime DRM does not use a key id (KID), it can be left empty. The content encryption key (CEK) is a (random) 128 bit value which must be coded in hex (base16).
--hds.key_iv
The 128 bit AES Initialization Vector (IV). This is a random 128 bit value.
--hds.drm_specific_data
The Adobe Primetime DRM specific data.
Can either be a Base64 string, or a file containing Base64 data. The file name must contain a '.', for example: base64_data.drm
See Using the Primetime Java SDK for how to use the SDK to provide the encryption information.
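Both the CEK and the IV are random 128 bit values coded in hex. One way to generate such a value is with openssl (assuming openssl is available on your system):

```shell
#!/bin/bash
# Sketch: generate a random 128 bit (16 byte) value as 32 hex characters,
# matching the format expected for --hds.key_iv or the CEK in --hds.key.
iv=$(openssl rand -hex 16)
echo "$iv"
```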
Example
#!/bin/bash
mp4split -o oceans-1Seg1.f4f \
--hds.key=:0f0e0d0c0b0a090808090a0b0c0d0e0f \
--hds.key_iv=000102030405060708090a0b0c0d0e0f \
--hds.drm_specific_data=oceans.drmmeta \
oceans-300k.mp4
mp4split -o oceans-2Seg1.f4f \
--hds.key=:0f0e0d0c0b0a090808090a0b0c0d0e0f \
--hds.key_iv=000102030405060708090a0b0c0d0e0f \
--hds.drm_specific_data=oceans.drmmeta \
oceans-800k.mp4
Creating the manifest file (.f4m)
As a last step we create the client manifest file. The client manifest file is used by the OSMF player.
#!/bin/bash
mp4split -o video.f4m \
video_200k-Seg1.f4f \
video_400k-Seg1.f4f \
video_600k-Seg1.f4f \
video_800k-Seg1.f4f \
audio_he-aac-Seg1.f4f --track_description=he_aac
At this point we have the following files stored for our presentation.
| Files | Description |
|---|---|
| video_200k-Seg1.f4f | AAC-LC, 200 kbps video |
| video_200k-Seg1.f4x | Index file |
| video_400k-Seg1.f4f | AAC-LC, 400 kbps video |
| video_400k-Seg1.f4x | Index file |
| video_600k-Seg1.f4f | AAC-LC, 600 kbps video |
| video_600k-Seg1.f4x | Index file |
| video_800k-Seg1.f4f | AAC-LC, 800 kbps video |
| video_800k-Seg1.f4x | Index file |
| audio_he-aac-Seg1.f4f | HE-AAC alternate audio track |
| audio_he-aac-Seg1.f4x | Index file |
| video_400k.mp4 | AAC-LC, 400 kbps video |
| video_800k.mp4 | HE-AAC, 800 kbps video |
| video.f4m | Client manifest file |
Please download the advanced-ams.sh sample script, which creates the various media and manifest files as discussed above.
The sample content is Tears of Steel.