HLS Discussion Forum

I’ve set up a discussion forum so readers of my book can ask questions about its content and engage with others who are interested in HTTP Live Streaming (HLS).

Right now it’s a private forum; access is restricted to those who have bought the book. However, at some point in the future I may open it up to everyone, depending on the level of participation. What I’d like to do is build a community where people can share their knowledge of video streaming and that they will (hopefully) find useful. Let’s see how it goes.

If you’ve already bought a copy of the book, you should have received an invite to join the forum. If you haven’t, contact me directly and I’ll chase it up.

How to Encrypt Video for HLS

In this post, we’ll look at what encryption HLS supports and how to encrypt your videos with ffmpeg.

Encryption is the process of encoding information in such a way that only authorised parties can read it. The encryption process requires some kind of secret (key) together with an encryption algorithm.

There are many different types of encryption algorithms but HLS only supports AES-128. The Advanced Encryption Standard (AES) is an example of a block cipher, which encrypts (and decrypts) data in fixed-size blocks. It’s a symmetric key algorithm, which means that the key that is used to encrypt data is also used to decrypt it. AES-128 uses a key length of 128 bits (16 bytes).

HLS uses AES in cipher block chaining (CBC) mode. This means each block is encrypted using the ciphertext of the preceding block, which presents a problem: how do we encrypt the first block? There is no block before it! To get around this we use what is known as an initialisation vector (IV). In this case, it’s a 16-byte random value used to initialise the encryption process. It doesn’t need to be kept secret for the encryption to be secure.
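
If you’d like to see these pieces in isolation before we get to ffmpeg, here’s a quick OpenSSL sketch that encrypts a short message with AES-128 in CBC mode, passing a key and an IV as hex values. (The key and IV here are throwaway values for illustration only.)

$ echo 'not very secret' | openssl enc -aes-128-cbc \
    -K 000102030405060708090a0b0c0d0e0f \
    -iv ecd0d06eaf884d8226c33928e87efa33 | xxd

Run it again with a different IV (or key) and the ciphertext changes completely, even though the plaintext hasn’t.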

Before we can encrypt our videos, we need an encryption key. I’m going to use OpenSSL to create the key, which we can do like so:

$ openssl rand 16 > enc.key

This instructs OpenSSL to generate a random 16-byte value, which corresponds to the key length (128 bits).
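
If you want to double-check, count the bytes in the file; you should see exactly 16:

$ wc -c enc.key
16 enc.key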

The next step is to generate an IV. This step is optional. (If no value is provided, the segment sequence number will be used instead.)

$ openssl rand -hex 16
ecd0d06eaf884d8226c33928e87efa33

Make a note of the output as you’ll need it shortly.

To encrypt the video we need to tell ffmpeg what encryption key to use, the URI of the key, and so on. We do this with the -hls_key_info_file option, passing it the location of a key info file. The file must be in the following format:

Key URI
Path to key file
IV (optional)

The first line specifies the URI of the key, which will be written to the playlist. The second line is the path to the file containing the encryption key, and the (optional) third line contains the initialisation vector. Here’s an example (enc.keyinfo):

https://hlsbook.net/enc.key
enc.key
ecd0d06eaf884d8226c33928e87efa33

Now that we have everything we need, run the following command to encrypt the video segments:

$ ffmpeg -y \
    -i sample.mov \
    -hls_time 9 \
    -hls_key_info_file enc.keyinfo \
    -hls_playlist_type vod \
    -hls_segment_filename "fileSequence%d.ts" \
    prog_index.m3u8

Take a look at the generated playlist (prog_index.m3u8). It should look something like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:9
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-KEY:METHOD=AES-128,URI="https://hlsbook.net/enc.key",IV=0xecd0d06eaf884d8226c33928e87efa33
#EXTINF:8.33333
fileSequence0.ts
#EXTINF:8.33333
fileSequence1.ts
#EXTINF:8.33333
fileSequence2.ts
#EXTINF:8.33333
fileSequence3.ts
#EXTINF:8.33333
fileSequence4.ts
#EXTINF:5.66667
fileSequence5.ts
#EXT-X-ENDLIST

Note the URI of the encryption key. The player will retrieve the key from this location to decrypt the media segments. To protect the key from eavesdroppers, it should be served over HTTPS. You may also want to implement some form of authentication mechanism to restrict who has access to the key. If you’re interested, the book goes into some detail about how to achieve this. Click here to buy a copy.

To verify that the segments really are encrypted, try playing them with a media player like QuickTime or VLC. You shouldn’t be able to. Now run the command above without the -hls_key_info_file option and try playing a segment. Notice the difference.
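
If you want to go one step further, you can decrypt a segment by hand. HLS uses AES-128 in CBC mode with PKCS7 padding, so OpenSSL can reverse the encryption given the key and the IV. Here’s a sketch, assuming the segment names produced by the command above, the IV generated earlier, and that xxd is available to convert the key to hex:

# decrypt the first segment using the key file and the IV from enc.keyinfo
$ openssl aes-128-cbc -d \
    -in fileSequence0.ts \
    -out fileSequence0_dec.ts \
    -K $(xxd -p enc.key) \
    -iv ecd0d06eaf884d8226c33928e87efa33

The decrypted fileSequence0_dec.ts should play normally in QuickTime or VLC.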

Even though HLS supports encryption, which provides some measure of content protection, it isn’t a full DRM solution. If that kind of thing interests you, you may want to take a look at Apple’s FairPlay Streaming solution.

Segmenting Video with ffmpeg – Part 2

In a previous post I showed how to segment video for HLS using ffmpeg’s segment muxer. In this one, I’ll demonstrate how to use ffmpeg’s hls muxer. It has more HLS-specific features, such as support for encryption, subtitles, specifying the playlist type, and so on.

To follow along, you’ll need a recent version of ffmpeg with support for HLS. (I used version 3.1.1 on Ubuntu 14.04.) To see if your version supports HLS, run the command below. You should see something like this:

$ ffmpeg -formats | grep hls
...
 E hls              Apple HTTP Live Streaming
D  hls,applehttp    Apple HTTP Live Streaming

If ffmpeg complains about an unrecognized option when executing the commands in this post, you can see what options are supported by running the following from a terminal:

$ ffmpeg -h muxer=hls

If an option is not supported, you’ll need to upgrade your version of ffmpeg.
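
To give you a flavour of what’s to come, here’s a minimal sketch of the hls muxer in action. It mirrors the segment muxer example from the previous post; note that without any codec options ffmpeg will re-encode the input using its defaults, and the file names here are just placeholders:

# segment sample.mov into ~10 second pieces and write a VOD playlist
$ ffmpeg -y \
    -i sample.mov \
    -hls_time 10 \
    -hls_playlist_type vod \
    -hls_segment_filename "fileSequence%d.ts" \
    prog_index.m3u8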

Validating HLS Video Streams

Once you’ve deployed your videos, it’s always a good idea to check them for errors and make sure they are valid. This is especially true if you are developing an iOS application that streams video, as Apple can reject your app if there are any problems with your streams. For example, if your application streams video over a cellular network, you must provide a stream with a maximum bit rate of 192 kbps. If you don’t, you aren’t going to make it into the App Store.
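
For context, the cellular-friendly stream is simply another variant in your master playlist, declared with its own EXT-X-STREAM-INF tag. Here’s a minimal sketch with hypothetical URIs and bandwidth values:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=192000,RESOLUTION=416x234
cell/prog_index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=1024x436
wifi/prog_index.m3u8

The 192 kbps variant satisfies the cellular requirement, while clients on faster connections can switch up to the higher bit rate stream.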

We can use Apple’s mediastreamvalidator tool to validate our video streams. It checks that the playlist and the media segments conform to the HTTP Live Streaming specification and will report any problems it finds so we can fix them.

To check if a video stream is valid, call mediastreamvalidator with the URL of the playlist. If you run the following command, you should see something like this:

$ mediastreamvalidator http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8

[...] Started root playlist download
[...] Started media playlist download
[...] All media files delivered and have end tag, stopping

--------------------------------------------------------------------------------
http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8
--------------------------------------------------------------------------------
Processed 7 out of 7 segments
Average segment duration: 10.000000
Total segment bitrates (all discontinuities): average: 867.51 kb/s, max: 1199.59 kb/s

Discontinuity: sequence: 0, parsed segment count: 7 of 7, duration: 70.000 sec, average: 867.51 kb/s, max: 1199.59 kb/s
Track ID: 1
Video Codec: avc1
Video profile: High
Video level: 3.1
Video resolution: 1024x436
Video average IDR interval: 3.646930, Standard deviation: 3.265633
Video frame rate: 24.000
Track ID: 2
Audio Codec: AAC-LC
Audio sample rate: 48000 Hz
Audio channel layout: Stereo (L R)

No problems to report here. In addition to validating the stream, the tool reports information such as the average bit rate, segment duration, and resolution. You can also use this information to check that you’ve encoded your videos correctly.

The latest version (1.1) of mediastreamvalidator supports writing the data to a file in JSON format. You can then pass this JSON file to another of Apple’s tools, hlsreport, which will generate a nicely formatted summary for you as an HTML page. Let’s look at the commands we need to run to do this:

$ mediastreamvalidator -O validation.json \
       http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8

$ hlsreport.py -o report.html validation.json

This time we run mediastreamvalidator with the -O option specifying the name of the file we want to write the data to. (I’ve omitted the output.) Next, we run hlsreport.py on the data to generate the HTML page. Here’s an example report showing the findings of the validation tool.

(Interestingly, the report flags a number of issues that weren’t reported when running the validation tool on its own. This could be because it appears to check the results against the HLS Authoring Specification for Apple TV. Whatever the reason, it’s probably best to run both just to be certain.)

Now that you know how to validate your HLS video streams, you should have no problem getting your app into the App Store.

Adding Session Data to a Playlist

In this post we’ll look at how to add session data to a playlist and then how to access it from a client application. We can use this feature to add arbitrary metadata to the playlist, such as the title of the movie, who the director was, and so on. To add session data to a playlist, we can use the EXT-X-SESSION-DATA tag.

Let’s say we have a movie and we want to add its title and the name of the director as session data to the playlist. In this example, we’ll use the computer-animated film Sintel. Let’s start by embedding the values directly in the playlist:

#EXT-X-SESSION-DATA:DATA-ID="net.hlsbook.movie.title",VALUE="Sintel"
#EXT-X-SESSION-DATA:DATA-ID="net.hlsbook.movie.director",VALUE="Colin Levy"

The DATA-ID attribute is the key and VALUE contains, unsurprisingly, the value. The convention is to use reverse DNS notation for the DATA-ID attribute. In this instance, we’re using net.hlsbook.movie.title to represent the title of the movie and net.hlsbook.movie.director for the name of the director. (On the client, we can use the DATA-ID to look up the corresponding value.)
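
Incidentally, the spec also allows session data to be supplied indirectly: instead of an inline VALUE, a tag can carry a URI attribute pointing to a JSON file, and an optional LANGUAGE attribute identifies the language of the value. A hypothetical example (the file name is made up):

#EXT-X-SESSION-DATA:DATA-ID="net.hlsbook.movie.synopsis",URI="synopsis.json",LANGUAGE="en"

Each EXT-X-SESSION-DATA tag must contain either a VALUE or a URI attribute, but not both.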

How to Start Playing a Video at a Specific Point in Time

Quick tip. How do you start playing a video from a specific point in time with HLS? Use the EXT-X-START tag. It’s optional, but if it’s present in a playlist it specifies the preferred point from which to start playback.

The following attributes are defined in the spec for the EXT-X-START tag:

  • TIME-OFFSET – (required) The offset in seconds at which to start playing the video. If the value is negative, it specifies a time offset from the end of the playlist. The offset must not be larger than the playlist duration.
  • PRECISE – (optional) Valid string values: YES, NO. If YES, playback starts at the point specified by TIME-OFFSET but does not render frames in the segment prior to the offset. Default is NO.

Let’s look at an example. Say we want to start playback precisely 25 seconds from the beginning of the video. We’d add the following tag to the playlist:

#EXT-X-START:TIME-OFFSET=25,PRECISE=YES

If we want to start 15 seconds from the end of the playlist and we aren’t concerned with precision, the tag will look like this:

#EXT-X-START:TIME-OFFSET=-15

To see how this works in practice, let’s look at some examples. (I recommend watching the videos with Safari on an iOS device; I had mixed results with Safari on the desktop.) The first example starts playing at the beginning of the video, while the second (same video, different playlist) starts playing 25 seconds in.

There’s one more thing to consider if you want to use the EXT-X-START tag: it first appeared in version 6 of the protocol, so you need to set the EXT-X-VERSION tag in the playlist accordingly.

Here’s the complete playlist for the example with the EXT-X-START tag:

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-VERSION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-START:TIME-OFFSET=25,PRECISE=YES
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.00000,
sintel0.ts
#EXTINF:10.00000,
sintel1.ts
#EXTINF:10.00000,
sintel2.ts
#EXTINF:10.00000,
sintel3.ts
#EXTINF:10.00000,
sintel4.ts
#EXTINF:10.00000,
sintel5.ts
#EXTINF:10.00000,
sintel6.ts
#EXT-X-ENDLIST

If you want to play the videos in a browser other than Safari, you may want to check out an earlier post that describes how to do this. If it doesn’t work as expected, this may be because the client doesn’t support the appropriate version of HLS. Your mileage may vary.

Media Playback on Apple TV

I’ve just watched a tech talk session about media playback on Apple TV.

There were some interesting uses of HLS. The video player has a sliding thumbnail view, built on I-frame playlists, that allows viewers to seek to a specific point in the video, and there are additional properties for displaying metadata about the video, such as chapter markers and interstitial content. What was particularly interesting, though, was the integration with Siri. In the presentation, the presenter pauses the video and asks Siri what the character in the video just said. The video then seeks back 15 seconds and the subtitles are enabled temporarily for that part of the video; after 15 seconds, they are turned off again. A novel use of subtitles!

It’s given me a few ideas for some new content for the book. (Looks like I’ll have to get myself an Apple TV now!)

Check it out if you have the time. It’s worth watching.

Playing HLS Video in the Browser

HLS video works in all browsers on iOS, but on the desktop the only browser that supports HLS natively is Safari. The good news is that it is possible to play HLS video in other browsers, such as Chrome or Firefox, but you need to use a workaround. There are two options available:

  • A Flash player with a plugin that handles HLS video
  • The HTML5 video element with Media Source Extensions (MSE)

We’ll look at an example of each in turn and discuss how they work in more detail.

Flash Player with Plugin

Flash works across practically all desktop browsers (and some mobile ones), so you can be confident that your HLS videos will be viewable across a variety of different browsers and platforms (Windows, Mac, Linux).

For the example, I’m going to use the Flashls plugin. It works with a number of different Flash players and supports a variety of HLS features such as adaptive streaming, encryption, and so on. You can view the example here.

Given that Flash is being phased out, this solution may not be the best one going forward. Having said that, Flash is unlikely to disappear completely in the next year or two, but it is on the way out. The future is HTML5 video, and that’s what we’ll look at next.

HTML5 + Media Source Extensions

The introduction of the video element in HTML5 means that it’s now possible to play video on the web without having to install plugins. However, not all browsers support the same video formats. The Media Source Extensions (MSE) API allows JavaScript applications to generate media streams dynamically for playback, which opens the door for third parties to add support for different formats, as we’ll see shortly.

To demonstrate how to use the HTML5 video element to play HLS video, I’m going to use the hls.js library. Check out this introduction for more information.

Try viewing the example in a compatible browser.

The drawback of this approach is browser compatibility. Although the MSE specification provides some guidelines on how the API should behave, its behaviour does vary across browsers. You also need a relatively recent browser, both on the desktop and on mobile devices. (On iOS, MSE is not supported at all!)

Conclusion

As the approach outlined above shows, it is possible to play HLS video across different browsers (without Flash!) using only the HTML5 video element and MSE. Hopefully, in the not-so-distant future, browsers will support HLS natively and there will be no need for third-party JavaScript libraries.

How to Add Subtitles to a Live HLS Stream

In this post I’ll describe how to add subtitles to a live HLS stream.

Subtitles can be added to a live video stream by creating a live subtitle playlist. Before I delve into the details, let’s recap how the playlist for a live video stream works. A live playlist contains a fixed number of entries. Entries are added and removed accordingly as time progresses. Let’s look at an example:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:2
#EXT-X-TARGETDURATION:10
#EXTINF:10,
fileSequence2.ts
#EXTINF:10,
fileSequence3.ts
#EXTINF:10,
fileSequence4.ts
#EXTINF:10,
fileSequence5.ts
#EXTINF:10,
fileSequence6.ts

In this instance there are five entries in the playlist, each segment 10 seconds in duration. Roughly every 10 seconds, the first segment is removed and a new one is added to the end of the playlist. The target duration tells the client how often it should go back to the server to fetch the updated playlist. The client will continue to fetch the playlist until the stream ends, which occurs when the #EXT-X-ENDLIST tag is added to the playlist. (There’s more detail in the book.)
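
To make the sliding window concrete, here’s roughly what the same playlist would look like one target duration later. The oldest segment (fileSequence2.ts) has dropped off the front, a new segment has been appended, and the media sequence number has been bumped to 3:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:3
#EXT-X-TARGETDURATION:10
#EXTINF:10,
fileSequence3.ts
#EXTINF:10,
fileSequence4.ts
#EXTINF:10,
fileSequence5.ts
#EXTINF:10,
fileSequence6.ts
#EXTINF:10,
fileSequence7.ts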

Segmenting Video with ffmpeg

To stream video with HLS, you need to divide your video into segments of a fixed duration and add them to a playlist. In the book I use Apple’s HTTP Live Streaming tools to do this. Here’s an example using mediafilesegmenter:

$ mediafilesegmenter -f /Library/WebServer/Documents/vod sample.mov

This command takes the video (sample.mov) and writes out the segments and the playlist to the /Library/WebServer/Documents/vod directory. Unfortunately, Apple’s tools will only work on a Mac.

However, recent versions of ffmpeg can also output HLS-compatible files. Given a video as input, it will divide it into segments and create a playlist for us.

Here’s the equivalent of the command above using ffmpeg:

$ ffmpeg -y \
 -i sample.mov \
 -codec copy \
 -bsf h264_mp4toannexb \
 -map 0 \
 -f segment \
 -segment_time 10 \
 -segment_format mpegts \
 -segment_list "/Library/WebServer/Documents/vod/prog_index.m3u8" \
 -segment_list_type m3u8 \
 "/Library/WebServer/Documents/vod/fileSequence%d.ts"

We use ffmpeg’s segment muxer to segment the video. We can specify the segment duration with the -segment_time option. (The h264_mp4toannexb bitstream filter converts the H.264 stream from the format used in MP4/MOV files to the Annex B format required by MPEG-TS.) The last argument passed to ffmpeg is the path to where the segments should be written; it contains a format specifier (%d) similar to those supported by the printf function in C. The %d will be replaced with the current sequence number. In this example, the segments will be named fileSequence0.ts, fileSequence1.ts, and so on.

And that’s how you process a video for streaming with HLS using ffmpeg. There are other examples in the book, including how to use ffmpeg to segment a live video stream, so if you want to learn how, buy your copy today.

In part 2 we’ll look at how to segment video using ffmpeg’s hls muxer.