Streaming HLS with Nginx’s RTMP Module

In this post I’m going to show you how to use Nginx’s RTMP module to stream a live video (over RTMP) from your desktop and then play it back with HLS. I’ll also cover how to encrypt the video.

RTMP was originally a proprietary protocol developed by Adobe for streaming video and audio between a Flash player and a server. Despite the demise of Flash, it's still widely used today for broadcasting video.

Continue reading

Generating HLS Playlists with Bento4

In this post we’re going to look at how to use Bento4 to generate HLS playlists from MP4 files. In previous posts I’ve used ffmpeg to encode the videos and generate the playlist. However, sometimes it’s better to separate the encoding from the packaging. For example, you may want to support both DASH and HLS from the same encodings.

Bento4 also has support for MPEG DASH and many other formats, but in this post I’ll focus specifically on using the command line tools mp4hls and mp42hls to create HLS playlists. You can download the latest version from the website. Once you’ve downloaded it, make sure the bin directory is on your PATH. I used version in the examples.

If you are familiar with Apple’s HLS tools, mp4hls and mp42hls can be used as a replacement for variantplaylistcreator and mediafilesegmenter respectively. Or put another way, if you want to create a master playlist for more than one video, use mp4hls. If you only want to create a playlist for a single video, use mp42hls instead.

For the examples, I’m going to assume there are 3 videos available: sample-1080p.mp4, sample-720p.mp4 and sample-360p.mp4.

Let’s begin.

Continue reading

Creating a Master Playlist with Ffmpeg

In this post you’ll see how to create an HLS master playlist with ffmpeg for video on-demand.

A master playlist contains references to different variant streams (typically encoded at different bit rates) and can also include links to alternative audio tracks and audio-only renditions. It allows a client device to choose the most appropriate stream based on factors such as the capabilities of the device, available bandwidth, and so on. This is known as adaptive streaming.

The examples below use the 1080p version of the Sintel trailer which you can download here. You can of course use your own videos. Apple publishes guidelines for authoring HLS video streams that include – among other things – recommended video and audio bit rates. We’ll create two variants: a 540p and a 360p version. The bit rates will be 2000 kb/s and 365 kb/s respectively (as per Apple’s guidelines).
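For reference, the resulting master playlist would look something like this (the variant playlist names are illustrative, and the BANDWIDTH values correspond roughly to the bit rates above):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=960x540
sintel-540p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=365000,RESOLUTION=640x360
sintel-360p.m3u8
```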

Continue reading

HLS and Fragmented MP4

At WWDC 2016, Apple announced support for fragmented MP4 (fMP4) as an alternative to MPEG-TS, which prior to their announcement was the only supported format.

So why use fragmented MP4 files? Well, according to Apple’s video encoding requirements in their HLS Authoring Specification, if you want to use HEVC/H.265, you have to use fMP4 (section 1.5). Fragmented MP4 files are also compatible with MPEG-DASH – an alternative to HLS – so you can use the same files; only the manifest file (playlist) is different. This means less encoding and lower storage requirements, which should reduce costs.

In this post, I’ll demonstrate how to generate fMP4 files using ffmpeg. You’ll need a fairly recent version of ffmpeg that supports fMP4 – I used version 4.1.1 to create the files in this post.

Continue reading

Playing HLS Video in the Browser – Part 2

I often get emails from people asking if there are any examples in the book of how to play HLS video in the browser, and my answer is always the same: no. In this short post, I’ll try to explain why not, with an example of how to play encrypted HLS video in the browser (also something I get asked about a lot).

The main reason there aren’t any specific examples in the book showing how to play HLS video in the browser is that, most of the time, all you need to do is tell the player where the playlist is and hit play; it takes care of the rest. Plus, there are plenty of players out there that support HLS video.

Most of the effort involved in playing HLS video goes into encoding the videos, segmenting them, preparing the playlist, and so on, not configuring the front-end. And this is what the book focuses on.

Let me highlight this with an example. I’m going to use Video.js, but it should work in just about any other HLS-capable player. To play HLS video using Video.js you’ll need to use the Video.js HLS source handler. I’ve already written about how to encrypt video for HLS, so I won’t go over the details here – I did say this was going to be a short post.

Here’s some HTML that plays the video:

<link href="" rel="stylesheet">
<video id="example-video" class="video-js vjs-default-skin" controls="controls" width="640" height="360">
   <source src="" type="application/x-mpegURL" />
<script src=""></script>
var player = videojs('example-video');

As you can see, all you have to do to play the video is tell the player where the playlist is. Even though this particular video is encrypted, there is nothing more you have to do on the front-end.

In this instance, the playlist and the encrypted video are stored in an S3 bucket. (If you are interested, I wrote a post recently about how to serve HLS video from an S3 bucket that goes into more detail.)


How to Serve HLS Video from an S3 Bucket

This post will describe how to configure an S3 bucket to serve video using HLS.

I’ve also written some example code that creates a bucket and configures it as outlined in this post.

Amazon S3 is a storage solution that allows you to store large amounts of data for relatively little cost, perfect for something like video files, which tend to be quite large. Files (or objects as they are often referred to) are accessed over HTTP, which makes it a great solution for storing (and serving) your HLS videos.

You’ll need an Amazon Web Services (AWS) account to use S3. If you haven’t got one, you can sign up here. Sign in to your account and navigate to the S3 console. Create a bucket. Next, upload your video segments and playlist etc. to the bucket.

Make sure the content type of the playlist and the video segments (.ts) is set to “application/x-mpegURL” and “video/MP2T” respectively. You can do this by selecting the Properties tab for each file and then clicking on Metadata.

Before you can start serving your videos, you need to grant read access to the files in the bucket; files are private by default. Select the Permissions tab and set the bucket policy to the following:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<your bucket name>/*"

Most browsers don’t support HLS natively, so HLS video playback is typically achieved by using some sort of JavaScript-based video player like Video.js, for example. To play an HLS video, all you typically need to do is configure the player with the playlist URL and it takes care of the rest. If you serve your videos from the same domain as the video player, there’s nothing more for you to do. However, for the purposes of this post I’m going to assume the video player references the playlist stored in the S3 bucket.

This presents a bit of a problem. If you request a resource (a file) from a different domain, you need permission to do so; otherwise it violates the same-origin policy that browsers enforce. Cross-origin resource sharing (CORS) is a mechanism that allows user agents to request permission to access resources that reside on a different server. In this instance, you need to grant the player permission to access the video(s) in the S3 bucket.

Thankfully, Amazon S3 supports CORS, so you can selectively allow cross-origin access to the files in your S3 bucket. To enable CORS on your bucket, select the Permissions tab, then click on CORS configuration. By default, the configuration file allows access from anywhere. If I wanted to restrict access to the videos to this domain, I would modify the CORS configuration to look like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://hlsbook.net</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>

Once you have granted public access to files in the S3 bucket and configured CORS appropriately, you should now be able to serve your HLS videos from an S3 bucket.

Restricting Access

Setting the allowed origin(s) in the CORS policy will prevent somebody from embedding your video on their website if they use a player like Video.js, but it won’t prevent it if the browser supports HLS natively because, in that case, the same-origin policy doesn’t apply. One thing you can do is check for the presence of the Referer header and block the request if it hasn’t come from your domain. You can do this by modifying the bucket policy. Here’s the policy for my bucket:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::net.hlsbook/*"
            "Sid": "Explicit deny to ensure requests are allowed only from a specific domain.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::net.hlsbook/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [

This policy will check the referrer header and if it doesn’t match or it’s missing, access to the requested resource will be blocked. (You could include the check in the allowed action but I prefer to use an explicit deny because it overrides any allows.) However, bear in mind that the check can be circumvented because the referrer header can be spoofed.

Unfortunately, Chrome Mobile doesn’t include the referrer header, so the above policy will block access to your videos in that particular browser. Fortunately, there is a workaround (or “hack” if you prefer).

The solution is to tell the player to use Media Source Extensions (if the platform supports them) for HLS playback instead of playing it natively. You can do this with Video.js by setting the overrideNative property to true. If you view the source for this page, you’ll see this:

videojs.options.hls.overrideNative = true;
videojs.options.html5.nativeAudioTracks = false;
videojs.options.html5.nativeTextTracks = false;

var player = videojs('example-video');

The referrer header will now be included in the requests so access to the video will no longer be blocked on Chrome Mobile.

One More Thing

HLS playlists support the EXT-X-BYTERANGE tag. This indicates to the player that each video segment is part of a (larger) resource. (There’s more about this in the book if you are interested.) Here’s an example from a playlist that uses the tag:
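(The byte counts and offsets below are made up for illustration; each range is given as length@offset, and every segment references the same file.)

```
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
#EXT-X-BYTERANGE:326744@0
main.ts
#EXTINF:10.0,
#EXT-X-BYTERANGE:326744@326744
main.ts
#EXT-X-ENDLIST
```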


With the current configuration, any requests for the video file (main.ts) will result in a 403 Access Denied response from S3. To grant access to the file, you need to allow the use of the HTTP Range header. Modify the CORS configuration file so it looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://hlsbook.net</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>Range</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Now your video player will be able to make range requests for parts of the video file.




Amazon S3 is a cost-effective solution for storing videos, as video files can take up a lot of space. Because files in S3 are accessed over HTTP, and CORS is supported, it’s also a viable solution for serving your HLS videos to your viewers.

S3 also has other features that I haven’t mentioned here, such as versioning and archiving, that could be useful for managing your video assets. Check out the S3 documentation for more information.

How to Encrypt Video for HLS

In this post, we’ll look at what encryption HLS supports and how to encrypt your videos with ffmpeg.

Encryption is the process of encoding information in such a way that only authorised parties can read it. The encryption process requires some kind of secret (key) together with an encryption algorithm.

There are many different types of encryption algorithms but HLS only supports AES-128. The Advanced Encryption Standard (AES) is an example of a block cipher, which encrypts (and decrypts) data in fixed-size blocks. It’s a symmetric key algorithm, which means that the key that is used to encrypt data is also used to decrypt it. AES-128 uses a key length of 128 bits (16 bytes).

HLS uses AES in cipher block chaining (CBC) mode. This means each block is encrypted using the cipher text of the preceding block, but this gives us a problem: how do we encrypt the first block? There is no block before it! To get around this problem we use what is known as an initialisation vector (IV). In this instance, it’s a 16-byte random value that is used to initialise the encryption process. It doesn’t need to be kept secret for the encryption to be secure.
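To make this concrete, here’s a quick round trip with openssl’s command-line cipher – the same AES-128-CBC operation a player performs on each segment (the file names are just for illustration):

```shell
# Generate a random 128-bit key and IV (hex encoded).
key=$(openssl rand -hex 16)
iv=$(openssl rand -hex 16)

# Encrypt a file with AES-128-CBC, then decrypt it with the same key and IV.
echo "not yet encrypted" > segment.txt
openssl enc -aes-128-cbc -K "$key" -iv "$iv" -in segment.txt -out segment.enc
openssl enc -d -aes-128-cbc -K "$key" -iv "$iv" -in segment.enc -out segment.dec
```

Because AES is a symmetric algorithm, the key that encrypted segment.txt also decrypts it, so segment.dec ends up identical to the original.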

Before we can encrypt our videos, we need an encryption key. I’m going to use OpenSSL to create the key, which we can do like so:

$ openssl rand 16 > enc.key

This instructs OpenSSL to generate a random 16-byte value, which corresponds to the key length (128 bits).

The next step is to generate an IV. This step is optional. (If no value is provided, the segment sequence number will be used instead.)

$ openssl rand -hex 16

Make a note of the output as you’ll need it shortly.

To encrypt the video, we need to tell ffmpeg what encryption key to use, the URI of the key, and so on. We do this with the -hls_key_info_file option, passing it the location of a key info file. The file must be in the following format:

Key URI
Path to key file
IV (optional)

The first line specifies the URI of the key, which will be written to the playlist. The second line is the path to the file containing the encryption key, and the (optional) third line contains the initialisation vector. Here’s an example (enc.keyinfo):
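For instance, enc.keyinfo might look like this (the key URI is a placeholder – use whatever location you’ll actually serve the key from – and the third line is the IV generated earlier):

```
https://example.com/enc.key
enc.key
ecd0d06eaf884d82ffaa6a6912a54b61
```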

Now that we have everything we need, run the following command to encrypt the video segments:

ffmpeg -y \
    -i <input video> \
    -hls_time 9 \
    -hls_key_info_file enc.keyinfo \
    -hls_playlist_type vod \
    -hls_segment_filename "fileSequence%d.ts" \
    prog_index.m3u8
Take a look at the generated playlist (prog_index.m3u8). It should look something like this:
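Something along these lines (the key URI and IV shown here are placeholders):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:9
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/enc.key",IV=0xecd0d06eaf884d82ffaa6a6912a54b61
#EXTINF:9.0,
fileSequence0.ts
#EXTINF:9.0,
fileSequence1.ts
#EXT-X-ENDLIST
```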


Note the URI of the encryption key. The player will retrieve the key from this location to decrypt the media segments. To protect the key from eavesdroppers, it should be served over HTTPS. You may also want to implement some sort of authentication mechanism to restrict who has access to the key. If you’re interested, the book goes into some detail about how to achieve this. Click here to buy a copy.

To verify that the segments really are encrypted, try playing them using a media player like QuickTime or VLC. You shouldn’t be able to. Now run the command above without the encryption and then try playing a segment. Notice the difference.

In this instance, all the segments are encrypted with the same key. It can be beneficial to periodically change the encryption keys to minimise the impact if a particular key is exposed. This is known as key rotation, and the amount of time between successive key generations is referred to as the key rotation period.

To enable key rotation, set the -hls_flags option to periodic_rekey. When enabled, the key info file will be checked periodically. If the file has changed, segments will then be encrypted with the new encryption key. However, we still need to come up with a way of generating new keys and updating the key info file. Here’s an example of how it could be done:

#!/bin/bash
# Generate a new encryption key every 15 seconds and update the key info file.
base_url=https://example.com/keys   # set this to wherever the keys are served from
tmpfile=$(mktemp)
i=1
while true
do
    sleep 15
    openssl rand 16 > enc$i.key
    echo $base_url/enc$i.key > $tmpfile
    echo enc$i.key >> $tmpfile
    echo `openssl rand -hex 16` >> $tmpfile
    mv $tmpfile enc.keyinfo
    let i++
done
The script uses the same values as before for the base URL (for each key) and key info file. It generates a new encryption key every 15 seconds, updating the key info file accordingly. (In practice you would choose a much longer key rotation period.)

To test it, run ffmpeg again but this time enable key rotation. At the same time, open a new terminal and run the script from the same directory. (The script will run forever until you terminate it.) When ffmpeg has finished, terminate the script.

Take a look at the playlist. You should now see a number of different key entries in the playlist. All the segments that follow a key tag (#EXT-X-KEY) are now encrypted using the key specified by the tag, instead of using one key to encrypt all segments like before.
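For example (again, the key URIs and IVs here are placeholders):

```
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/enc1.key",IV=0x5c8321e7a3c466e1b9a30f2a1c8a33d1
#EXTINF:9.0,
fileSequence0.ts
#EXTINF:9.0,
fileSequence1.ts
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/enc2.key",IV=0x0f91c3d2a8be4f6c9d1e2a3b4c5d6e7f
#EXTINF:9.0,
fileSequence2.ts
```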

Even though HLS supports encryption, which provides some sort of content protection, it isn’t a full DRM solution. If that kind of thing interests you then you may want to take a look at Apple’s FairPlay Streaming solution.

Segmenting Video with ffmpeg – Part 2

In a previous post I showed how to segment video for HLS using ffmpeg’s segment muxer. In this one, I’ll demonstrate how to use ffmpeg’s hls muxer. It has more features specific to HLS like support for encryption, subtitles, specifying the type of playlist, and so on.

To follow along, you’ll need a recent version of ffmpeg with support for HLS. (I used version 3.1.1 on Ubuntu 14.04.) To see if your version supports HLS, run the command below. You should see something like this:

$ ffmpeg -formats | grep hls
 E hls              Apple HTTP Live Streaming
D  hls,applehttp    Apple HTTP Live Streaming

If ffmpeg complains about an unrecognized option when executing the commands in this post, you can see what options are supported by running the following from a terminal:

$ ffmpeg -h muxer=hls

If an option is not supported, you’ll need to upgrade your version of ffmpeg.

Continue reading

Validating HLS Video Streams

Once you’ve deployed your videos, it’s always a good idea to check them for any errors and make sure they are valid. This is especially true if you are developing an iOS application that streams video as Apple can reject your app if there are any problems with your streams. For example, if your application streams video over a cellular network, you must provide a stream with a maximum bit rate of 192 kbps. If you don’t, you aren’t going to make it into the App Store.

We can use Apple’s mediastreamvalidator tool to validate our video streams. It checks that the playlist and the media segments conform to the HTTP Live Streaming specification and will report any problems it finds so we can fix them.

To check if a video stream is valid, call mediastreamvalidator with the URL of the playlist. If you run the following command, you should see something like this:

$ mediastreamvalidator <playlist url>

[...] Started root playlist download
[...] Started media playlist download
[...] All media files delivered and have end tag, stopping

Processed 7 out of 7 segments
Average segment duration: 10.000000
Total segment bitrates (all discontinuities): average: 867.51 kb/s, max: 1199.59 kb/s

Discontinuity: sequence: 0, parsed segment count: 7 of 7, duration: 70.000 sec, average: 867.51 kb/s, max: 1199.59 kb/s
Track ID: 1
Video Codec: avc1
Video profile: High
Video level: 3.1
Video resolution: 1024x436
Video average IDR interval: 3.646930, Standard deviation: 3.265633
Video frame rate: 24.000
Track ID: 2
Audio Codec: AAC-LC
Audio sample rate: 48000 Hz
Audio channel layout: Stereo (L R)

No problems to report here. In addition to validating the stream, it reports information such as the average bit rate, segment duration, and resolution. You can also use this information to check that you’ve encoded your videos correctly.

The latest version (1.1) of mediastreamvalidator supports outputting the data to a file in JSON format. You can then pass this JSON file to another of Apple’s tools, hlsreport.py, which will generate a nicely formatted summary for you as an HTML page. Let’s look at the commands we need to run to do this:

$ mediastreamvalidator -O validation.json <playlist url>

$ hlsreport.py -o report.html validation.json

This time we run mediastreamvalidator with the -O option, specifying the name of the file we want to write the data to. (I’ve omitted the output.) Next, we run hlsreport.py on the data to generate the HTML page. Here’s an example report showing the findings of the validation tool.

(Interestingly the report flags a number of issues that weren’t reported when only running the validation tool. This could be because it appears to be checking the results against the HLS Authoring Specification for Apple TV. Whatever the reason, probably best to run both just to be absolutely certain.)

So now you know how to validate your HLS video streams, you should have no problems getting your app into the App Store.

Adding Session Data to a Playlist

In this post we’ll look at how to add session data to a playlist and then how to access it from a client application. We can use this feature to add arbitrary metadata to the playlist, such as the title of the movie, who the director was, and so on. To add session data to a playlist, we can use the EXT-X-SESSION-DATA tag.

Let’s say we have our movie and we want to add the title and the name of the director as session data to the playlist. In this example, we’ll use the computer animated film, Sintel. Let’s start by embedding the values directly in the playlist:
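It might look something like this (the DATA-ID names are illustrative; any reverse-DNS-style names will do):

```
#EXTM3U
#EXT-X-SESSION-DATA:DATA-ID="com.example.movie.title",VALUE="Sintel"
#EXT-X-SESSION-DATA:DATA-ID="com.example.movie.director",VALUE="Colin Levy"
```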


The DATA-ID attribute is the key and VALUE contains, unsurprisingly, the value. The convention is to use reverse DNS notation for the name of the DATA-ID attribute. In this instance, we’re using one DATA-ID to represent the title of the movie and another for the name of the director. (On the client – as you’ll see shortly – we’ll use the name of the DATA-ID attribute to look up its value.)
Continue reading