Streaming HLS with Nginx’s RTMP Module

In this post I’m going to show you how to use Nginx’s RTMP module to stream a live video (over RTMP) from your desktop and then play it back with HLS. I’ll also cover how to encrypt the video.

RTMP was originally developed by Adobe as a proprietary protocol for streaming video and audio between a Flash player and a server, but it is still widely used today for broadcasting video despite the demise of Flash.

Installation

I built the RTMP module on Ubuntu 18.04 but the installation instructions should work for any Debian-based Linux distribution.

The first thing to do is install Nginx if you haven’t already. You can install it from your distribution’s default package repositories but the provided version is usually outdated. A better option is to install it from the official Nginx repository. The current stable version is 1.18.

To install Nginx, run the following command: $ sudo apt install nginx

To build the RTMP module, first install the required dependencies then check out the source code of the module. We also need the source code of the version of Nginx that is installed. To build the module, run the following commands:

$ sudo apt install build-essential dpkg-dev libpcre3 libpcre3-dev libssl-dev zlib1g zlib1g-dev
$ git clone https://github.com/arut/nginx-rtmp-module.git
$ apt-get source nginx
$ cd nginx-1.18.0/
$ ./configure --with-compat --add-dynamic-module=../nginx-rtmp-module
$ make modules
$ sudo cp objs/ngx_rtmp_module.so /etc/nginx/modules/

This will build the RTMP module as a dynamic module. To load the module at runtime, add the following line to the Nginx configuration file (/etc/nginx/nginx.conf):
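load_module modules/ngx_rtmp_module.so;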

Now that Nginx has been installed and the RTMP module has been built, we need to configure it.

Configuring RTMP

The first thing we need to do is define an RTMP section in the Nginx configuration file:
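rtmp {
    server {
        listen 1935;

        application live {
            live on;

            hls on;
            hls_path /srv/www;
            hls_fragment 6s;

            # Optional: encrypt the segments. The key path and URL below
            # are illustrative – adapt them to your own setup.
            hls_keys on;
            hls_key_path /srv/www/keys;
            hls_key_url http://localhost/keys/;
            hls_fragments_per_key 10;
        }
    }
}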

The server will listen for connections on port 1935. Within the server block, we define an application called live. (You can call it whatever you like; I just happened to call it live.) We’ll refer back to this later on when we start streaming. To enable HLS, set hls to on. The next two parameters determine where the playlist and the segments will be written, and the segment duration. In this instance, the playlist and segments are written to the /srv/www directory and the segment duration is set to 6 seconds.

To enable encryption (optional) set hls_keys to on. The next two values specify the directory where the keys will be written, and the base URL of the keys. Finally, we specify the key rotation period. This determines how many segments are encrypted with a key before a new key is generated. For example, if hls_fragments_per_key is set to 10 and the segment duration is 6 seconds, the encryption key will be rotated every minute.

The next step is to configure Nginx to serve the HLS playlist and segments. We do that by defining an http block in the configuration file:
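http {
    # A sketch of the configuration described below – adjust to taste.
    # Map the content type to a caching policy: playlists are never
    # cached, segments can be cached for 24 hours.
    map $sent_http_content_type $expires {
        default                         off;
        application/vnd.apple.mpegurl   epoch;
        video/mp2t                      24h;
    }

    server {
        listen 80;

        location / {
            root /srv/www;
            expires $expires;

            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
        }
    }
}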

The root location is set to /srv/www, which you’ll recall is the same location the playlist and segments are written to. The expires directive specifies how long the playlist and the segments can be cached for; the value is looked up from a map based on the content type. In this case, the expiry time of the playlist (application/vnd.apple.mpegurl) is set to some time in the past (epoch) so it won’t be cached. Segments can be cached for 24 hours.

Note: Everything is served over HTTP. If you are encrypting the video then you should serve the keys over HTTPS. This is left as an exercise for the reader.

To start Nginx, run the following: $ sudo nginx

The next step is to prepare the live stream.

Preparing the Live Stream

If you want to stream the output of a webcam, I suggest downloading and installing OBS Studio. Alternatively, you can use ffmpeg to “live” stream a video.

However you decide to set up your live stream, the format of the RTMP URL is the same. It must be in the following format: rtmp://<host>/<application name>/<stream key>

The host is either the IP address or the name of the server. As we are running it locally, it will be localhost. The application name must match the name in the Nginx configuration file, which in this example is live. The stream key can be anything – we’ll use test – and is used by the RTMP module to name the playlist and the segments. For example, if the stream key is test, the playlist will be called test.m3u8 and the segments will be test-0.ts, test-1.ts, test-2.ts, and so on.

Setting the RTMP URL in OBS

Here’s an example of how to use ffmpeg to stream a video over RTMP:
$ ffmpeg -re -i big_buck_bunny_720p_h264.mov -c copy -f flv rtmp://localhost/live/test

Start the live stream. After a few seconds you should see some files in /srv/www.
Next, copy the following HTML and save it as index.html in the /srv/www directory:
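<!DOCTYPE html>
<html>
<head>
<link href="https://vjs.zencdn.net/7.4.1/video-js.css" rel="stylesheet">
</head>
<body>
<!-- A minimal player using Video.js; the playlist name matches the stream key (test) -->
<video class="video-js" width="960" height="540" data-setup='{}' controls>
	<source src="test.m3u8" type="application/x-mpegURL">
</video>
<script src="https://vjs.zencdn.net/7.4.1/video.js"></script>
</body>
</html>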

(If you are using a different stream key, you’ll need to change the name of the playlist in the HTML.)

To play the live video, open up a browser and navigate to http://localhost/.

If you have any questions, feel free to leave a comment.

Generating HLS Playlists with Bento4

In this post we’re going to look at how to use Bento4 to generate HLS playlists from MP4 files. In previous posts I’ve used ffmpeg to encode the videos and generate the playlist. However, sometimes it’s better to separate the encoding from the packaging. For example, you may want to support both DASH and HLS from the same encodings.

Bento4 also has support for MPEG-DASH and many other formats, but in this post I’ll focus specifically on using the command line tools mp4hls and mp42hls to create HLS playlists. You can download the latest version from the website. Once you’ve downloaded it, make sure the bin directory is on your PATH. I used version 1.5.1.0 in the examples.

If you are familiar with Apple’s HLS tools, mp4hls and mp42hls can be used as a replacement for variantplaylistcreator and mediafilesegmenter respectively. Or put another way, if you want to create a master playlist for more than one video, use mp4hls. If you only want to create a playlist for a single video, use mp42hls instead.

For the examples, I’m going to assume there are 3 videos available: sample-1080p.mp4, sample-720p.mp4 and sample-360p.mp4.

Let’s begin.

Media Playlists

To segment a video and create a playlist for it, open up a terminal and run the following command:

$ mp42hls sample-1080p.mp4

If you look in the directory where you ran the command, you’ll see a number of segments (segment-0.ts, segment-1.ts and so on) and the playlist (stream.m3u8). The default segment duration is 6 seconds, which is what Apple recommends, but you can change this if you want to with the --segment-duration option.
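For example, to use 4-second segments instead:

$ mp42hls --segment-duration 4 sample-1080p.mp4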

If you want to choose the name of the playlist and the segments, you can do so like this:

$ mp42hls --index-filename prog_index.m3u8 \
     --segment-filename-template "fileSequence%d.ts" sample-1080p.mp4

The segment filename template uses a printf-style format string with a specifier (%d) for the segment number.

Depending on the duration of your videos, you could potentially end up with a large number of segments. Instead of having multiple segments, HLS supports creating a single file and referencing the segments in the file using a byte range. (You can think of it as taking all of the segments and concatenating them together to create one file.) To output a single file, run:

$ mp42hls --output-single-file sample-1080p.mp4

If you look in the output directory you should see the playlist (stream.m3u8) like before but only one segment file (stream.ts). The playlist now contains byte range tags that specify the length and the offset from the beginning of the file of each segment.

This feature requires HLS version 4 (or greater). The version number is set automatically by the tool when the single file option is specified. There is an option (--hls-version) that can be used to set the version of HLS your playlist is compatible with.
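For example, to declare version 5 explicitly while outputting a single file:

$ mp42hls --hls-version 5 --output-single-file sample-1080p.mp4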

You may have noticed after running the previous command that a third file was created: iframes.m3u8. This playlist contains the locations of the I-frames in the video segments. The previous command created it automatically, but if you want to create an I-frame playlist (and you aren’t outputting a single file) you can do so with the --iframe-index-filename option:

$ mp42hls --iframe-index-filename iframes.m3u8 sample-1080p.mp4

Just like outputting a single file, I-frame playlists are a version 4 feature because they also make use of the byte range tag.

Master Playlists

To generate a master playlist from our imaginary videos, run this command:

$ mp4hls sample*

This command takes the list of videos, segments them, creates the playlist for each variant then outputs the master playlist, which references each individual playlist. In this example, instead of specifying the name of each video separately, we let the shell do it for us by using glob expansion. When the command finishes, there will be an output directory. Inside that directory will be the master playlist (master.m3u8) and 3 sub-directories: media-1, media-2, and media-3; there’s one sub-directory for each video. In each of those are the segments and the playlist that correspond to that particular variant.

It’s also possible to specify the name of the master playlist and media playlist. Here’s an example:

$ mp4hls --master-playlist-name=all.m3u8 \
     --media-playlist-name=prog_index.m3u8 sample*

If you want to change the location where the files are output, you can do so by using the -o option and specifying the path of the directory to write to, for example:
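$ mp4hls -o hls-output sample*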

In addition to the video streams, it’s sometimes necessary to provide an audio-only stream (typically 64kbps). To do that, we can extract the audio from one of the videos and encode it appropriately using ffmpeg:

$ ffmpeg -i sample-1080p.mp4 -vn -c:a aac -b:a 64k -ac 2 audio_64kbps.mp4

To create the master playlist with the audio-only stream, run this command:

$ mp4hls sample* [+audio_fallback=yes]audio_64kbps.mp4

Inside the output directory, in addition to the previous 3 sub-directories, there is now a media-4 directory. Inside the master playlist are the following media playlist entries:

# Media Playlists
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=2359034,BANDWIDTH=4135200,CODECS="avc1.640032,mp4a.40.2",RESOLUTION=1920x1080
media-1/stream.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=748116,BANDWIDTH=1295505,CODECS="avc1.64001E,mp4a.40.2",RESOLUTION=854x480
media-2/stream.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=1256065,BANDWIDTH=2203392,CODECS="avc1.64001F,mp4a.40.2",RESOLUTION=1280x720
media-3/stream.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=133642,BANDWIDTH=136500,CODECS="mp4a.40.2"
media-4/stream.m3u8

The last entry corresponds to the audio stream. You can tell because the CODECS attribute is set to AAC-LC audio (mp4a.40.2) only; there is no video codec component.

Encryption

In this section, we’ll see how to encrypt videos.

The first step is to generate an encryption key. HLS supports AES-128, which means that keys have to be 128 bits in length (16 bytes). We can use openssl to create the key and write it out to a file:

$ openssl rand 16 > enc.key

This key will be used for both the encryption and decryption. Next, we run the command to create the playlist and encrypt the video segments:

$ mp42hls --encryption-key $(hexdump -v -e '/1 "%02x"' enc.key) \
    --encryption-key-uri https://hlsbook.net/enc.key sample-1080p.mp4

We specify the encryption key with the --encryption-key option but mp42hls expects the key to be a hexadecimal string so we have to convert it first. Next, we specify the location of the key with the --encryption-key-uri option. The client will attempt to fetch the key from this location in order to decrypt the video segments.

You can also encrypt each variant when creating a master playlist. To do this with the key we’ve just generated, run this command:

$ mp4hls --encryption-key=$(hexdump -v -e '/1 "%02x"' enc.key) \
     --encryption-key-uri https://hlsbook.net/enc.key sample*

If the --signal-session-key option is present, an EXT-X-SESSION-KEY tag will be added to the master playlist. This is an optimisation that allows the client to preload the decryption key; otherwise, it would have to fetch the media playlist first to determine the location of the key.

If you want to learn more about HLS encryption, check out this post.

Conclusion

Bento4 is a useful tool that works across multiple platforms, so if you don’t have access to Apple’s HLS tools, it’s a good alternative. It’s also simpler to use than ffmpeg. If you don’t believe me, take a look at my post about how to create a master playlist with ffmpeg.

In addition to some of the features highlighted here, it also supports subtitles, multilingual audio, and DRM like Apple’s FairPlay Streaming.

Creating a Master Playlist with ffmpeg

In this post you’ll see how to create an HLS master playlist with ffmpeg for video on-demand.

A master playlist contains references to different variant streams (typically encoded at different bit rates) and can also include links to alternative audio tracks and audio-only renditions. It allows a client device to choose the most appropriate stream based on factors such as the capabilities of the device, available bandwidth, and so on. This is known as adaptive streaming.

The examples below use the 1080p version of the Sintel trailer which you can download here. You can of course use your own videos. Apple publishes guidelines for authoring HLS video streams that include – among other things – recommended video and audio bit rates. We’ll create two variants: a 540p and a 360p version. The bit rates will be 2,000 kb/s and 365 kb/s respectively (as per Apple’s guidelines).

The first thing to do is determine what streams are in the video, which we can do with the following command:

$ ffmpeg -i sintel_trailer-1080p.mp4 -hide_banner
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'sintel_trailer-1080p.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01T00:00:00.000000Z
    title           : Sintel Trailer
    artist          : Durian Open Movie Team
    encoder         : Lavf52.62.0
    copyright       : (c) copyright Blender Foundation | durian.blender.org
    description     : Trailer for the Sintel open movie project
  Duration: 00:00:52.21, start: 0.000000, bitrate: 2240 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 2108 kb/s, 24 fps, 24 tbr, 24 tbn, 48 tbc (default)
    Metadata:
      creation_time   : 1970-01-01T00:00:00.000000Z
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 126 kb/s (default)
    Metadata:
      creation_time   : 1970-01-01T00:00:00.000000Z
      handler_name    : SoundHandler
At least one output file must be specified

Ignore the warning about the missing output file.

We can see there are two available streams: a video stream (#0:0) and an audio stream (#0:1) – make a note of the stream references as we’ll need them shortly. We’ll need to encode the video stream for each variant, but we can just copy the audio as is because it’s already encoded as AAC and the bit rate lies within the recommended range (32 to 160 kb/s).

Here’s the ffmpeg command to generate the variants and the master playlist:

ffmpeg -y -i sintel_trailer-1080p.mp4 \
  -preset slow -g 48 -sc_threshold 0 \
  -map 0:0 -map 0:1 -map 0:0 -map 0:1 \
  -s:v:0 640x360 -c:v:0 libx264 -b:v:0 365k \
  -s:v:1 960x540 -c:v:1 libx264 -b:v:1 2000k  \
  -c:a copy \
  -var_stream_map "v:0,a:0 v:1,a:1" \
  -master_pl_name master.m3u8 \
  -f hls -hls_time 6 -hls_list_size 0 \
  -hls_segment_filename "v%v/fileSequence%d.ts" \
  v%v/prog_index.m3u8

Let’s break it down. First, we specify the input file (-i).

This is followed by a number of general encoding options. The preset is set to slow; the default is medium. A preset is a collection of values that determines the “quality” of the encoding. A slower preset will achieve a better compression ratio but will take longer to run. You may want to play around with the different values to see what works best for you. Apple’s authoring guidelines specify that a key frame should be present every 2 seconds. We achieve this by setting the GOP size (-g) to 48, or twice the frame rate of the video input. Lastly, scene change detection is disabled by setting the threshold to zero.

Next, we need to specify the input streams that should be included in the output, which we can do with the map command. (The map command means “include this stream in the output file”.) In this instance, 0:0 refers to the video stream and 0:1 to the audio stream in the input file. The first pair of map commands represent the video and audio stream of the first variant and will be referred to subsequently in the output as v:0 and a:0; the next pair of commands represent the second variant and will be labelled as v:1 and a:1, and so on. (If we wanted to generate 3 variant streams, we would need 6 map statements, assuming that each variant has a video and an audio stream.)

The next two lines specify how each video stream should be encoded. We don’t need to change the audio so it can be copied directly to the output using -c:a copy which means copy all audio streams from the input(s). The video and audio streams for each output variant are then specified using the -var_stream_map option. The -master_pl_name option sets the name of the master playlist. Finally, we set a number of HLS options including the segment filename and the duration.

Now according to Apple’s guidelines, a video on-demand playlist must have the EXT-X-PLAYLIST-TYPE tag set to VOD. In theory, this can be done with ffmpeg by setting the -hls_playlist_type option to vod. However, running the command above with this option set using the latest release of ffmpeg (4.1.3) causes a segmentation fault. Turns out it’s a bug. I downloaded a recent version of the source code and compiled ffmpeg from scratch and it works so the problem has been fixed. For the time being, you can either do what I did and compile the latest source code or just add the tag to each playlist afterwards.

Run the command. When it completes, you should find the master playlist (master.m3u8) along with a directory for each variant containing its playlist and segments.

Open the master playlist (master.m3u8) in a text editor. It should look like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=540863,RESOLUTION=640x360,CODECS="avc1.64001e,mp4a.40.2"
v0/prog_index.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=2339363,RESOLUTION=960x540,CODECS="avc1.64001f,mp4a.40.2"
v1/prog_index.m3u8

You can see the references to the alternative streams. In the directories v0 and v1 you will find the playlist and segments for each variant stream.

HLS and Fragmented MP4

At WWDC 2016, Apple announced support for fragmented MP4 (fMP4) as an alternative to MPEG-TS, which prior to their announcement was the only supported format.

So why use fragmented MP4 files? Well, according to Apple’s video encoding requirements in their HLS Authoring Specification, if you want to use HEVC/H.265, you have to use fragmented MP4 (requirement 1.5). Fragmented MP4 files are also compatible with MPEG-DASH – an alternative to HLS – so you can use the same files; only the manifest file (playlist) is different. This means less encoding and lower storage requirements, which should reduce costs.

In this post, I’ll demonstrate how to generate fMP4 files using ffmpeg. You’ll need a fairly recent version of ffmpeg that supports fMP4 – I used version 4.1.1 to create the files in this post.

For the video, I used the 1080p version of the Sintel trailer. You can download it here.

Let’s take the original video (sintel_trailer-1080p.mp4) and re-encode it. We’ll alter the size to 1280×720 and reduce the video bitrate but leave the audio as it is. Open up a terminal and run the following command:

ffmpeg -y \
  -i sintel_trailer-1080p.mp4 \
  -force_key_frames "expr:gte(t,n_forced*2)" \
  -sc_threshold 0 \
  -s 1280x720 \
  -c:v libx264 -b:v 1500k \
  -c:a copy \
  -hls_time 6 \
  -hls_playlist_type vod \
  -hls_segment_type fmp4 \
  -hls_segment_filename "fileSequence%d.m4s" \
  prog_index.m3u8

To generate fMP4 files instead of MPEG-TS, which is the default, set the segment type to fmp4 using the -hls_segment_type option.

The above command will generate the following files: the playlist file (prog_index.m3u8), a number of files with the extension .m4s (instead of .ts), and a file called init.mp4. You’ll need all of these files when you come to play the video.

Let’s take a look at the playlist and see how it compares:

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="init.mp4"
#EXTINF:6.000000,
fileSequence0.m4s
#EXTINF:6.000000,
fileSequence1.m4s
#EXTINF:6.000000,
fileSequence2.m4s
#EXTINF:6.000000,
fileSequence3.m4s
#EXTINF:6.000000,
fileSequence4.m4s
#EXTINF:6.000000,
fileSequence5.m4s
#EXTINF:6.000000,
fileSequence6.m4s
#EXTINF:6.000000,
fileSequence7.m4s
#EXTINF:4.208333,
fileSequence8.m4s
#EXT-X-ENDLIST

I’ve highlighted the differences. The #EXT-X-MAP tag specifies the location of the resource that contains the information required to parse the segments (the media initialization section). It applies to every segment that appears after it. A playlist containing an MPEG-2 transport stream can also apply the #EXT-X-MAP tag, but each segment typically includes the information required to parse it so it’s not needed.

The other difference is the file extension of the segments is .m4s instead of .ts.

HLS also supports byte range addressing. To use this feature with fMP4, set the -hls_flags option to single_file. Instead of creating several (smaller) files, it will create a single file containing all the media segments. If you look at the playlist, the location of each segment is specified as a byte offset in the file.
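For example, taking the earlier command and adding the flag:

ffmpeg -y \
  -i sintel_trailer-1080p.mp4 \
  -force_key_frames "expr:gte(t,n_forced*2)" \
  -sc_threshold 0 \
  -s 1280x720 \
  -c:v libx264 -b:v 1500k \
  -c:a copy \
  -hls_time 6 \
  -hls_playlist_type vod \
  -hls_segment_type fmp4 \
  -hls_flags single_file \
  prog_index.m3u8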

Now that we have the fMP4 files and the playlist, the next step is to play the video in the browser.

Most browsers don’t support HLS and fMP4 natively, so you’ll need to use a video player that utilises Media Source Extensions (MSE). Thankfully, some players do, but be aware that others don’t. For example, I tried using hls.js but it didn’t work – according to the documentation, support for fragmented MP4 is still in beta.

Here’s an HTML example that uses Video.js, which does support fMP4:


<!DOCTYPE html>
<html>
<head>
<link href="https://vjs.zencdn.net/7.4.1/video-js.css" rel="stylesheet">
</head>
<body>
<video class="video-js" width="1280" height="720" data-setup='{}' controls>
        <source src="prog_index.m3u8" type="application/x-mpegURL">
</video>
<script src='https://vjs.zencdn.net/7.4.1/video.js'></script>
</body>
</html>

Save the HTML. You’ll need to upload this along with all the other files created in the previous step to a web server. I tested playback in Chrome 72 and Firefox 65.0 on Ubuntu and the video played with no problems.

Conclusion

Using ffmpeg makes it easy to generate fMP4 files for HLS. If you need to support other streaming protocols like MPEG-DASH then it’s worth considering. If you encode videos with HEVC/H.265 then fMP4 is mandatory.

Playing HLS Video in the Browser – Part 2

I often get email from people asking if there are any examples in the book about how to play HLS video in the browser, and my answer is always the same: no. In this short post, I’ll try to explain why not with an example of how to play encrypted HLS video in the browser, which is also something I get asked about a lot.

The main reason there aren’t any specific examples in the book that show how to play HLS video in the browser is because most of the time, all you need to do is tell the player where the playlist is and hit play; it takes care of the rest. Plus, there are a lot of players out there that support HLS video.

Most of the effort involved in playing HLS video goes into encoding the videos, segmenting them, preparing the playlist, and so on, not configuring the front-end. And this is what the book focuses on.

Let me highlight this with an example. I’m going to use Video.js, but it should work in just about every other HLS capable player. To play HLS video using Video.js you’ll need to use the Video.js HLS source handler. I’ve already written something about how to encrypt video for HLS so I won’t go over the details here – I did say this was going to be a short post.

Here’s some HTML that plays the video:

<!DOCTYPE html>
<html>
<head>
<link href="https://vjs.zencdn.net/7.4.1/video-js.css" rel="stylesheet">
</head>
<body>
<video id="example-video" class="video-js vjs-default-skin" controls="controls" width="640" height="360">
   <source src="https://s3.eu-west-1.amazonaws.com/net.hlsbook.hlsvideo/prog_index.m3u8" type="application/x-mpegURL" />
</video>
<script src="https://vjs.zencdn.net/7.4.1/video.min.js"></script>
<script>
var player = videojs('example-video');
</script>
</body>
</html>

As you can see from the source element, all you have to do to play the video is tell the player where the playlist is. Even though this particular video is encrypted, there is nothing more you have to do on the front-end.

In this instance, the playlist and the encrypted video are stored in an S3 bucket. (If you are interested, I wrote a post recently about how to serve HLS video from an S3 bucket that goes into more detail.)

How to Serve HLS Video from an S3 Bucket

This post will describe how to configure an S3 bucket to serve video using HLS.

I’ve also written some example code that creates a bucket and configures it as outlined in this post.

Amazon S3 is a storage solution that allows you to store large amounts of data for relatively little cost, perfect for something like video files, which tend to be quite large. Files (or objects as they are often referred to) are accessed over HTTP, which makes it a great solution for storing (and serving) your HLS videos.

You’ll need an Amazon Web Services (AWS) account to use S3. If you haven’t got one, you can sign up here. Sign in to your account and navigate to the S3 console. Create a bucket. Next, upload your video segments and playlist etc. to the bucket.

Make sure the content type of the playlist and the video segments (.ts) is set to “application/x-mpegURL” and “video/MP2T” respectively. You can do this by selecting the Properties tab for each file and then clicking on Metadata.
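If you prefer the command line, you can also set the content type when uploading with the AWS CLI. For example (the bucket name is a placeholder):

$ aws s3 cp prog_index.m3u8 s3://your-bucket/ --content-type "application/x-mpegURL"
$ aws s3 cp . s3://your-bucket/ --recursive --exclude "*" \
    --include "*.ts" --content-type "video/MP2T"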

Before you can start serving your videos, you need to grant read access to the files in the bucket; files are private by default. Select the Permissions tab and set the bucket policy to the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<your bucket name>/*"
        }
    ]
}

Most browsers don’t support HLS natively, so HLS video playback is typically achieved by using some sort of Javascript-based video player like Video.js, for example. To play an HLS video all you typically need to do is configure the player with the playlist URL and it takes care of the rest. If you serve your videos from the same domain as the video player, there are no additional steps for you to do. However, for the purposes of this post I’m going to assume the video player references the playlist stored in the S3 bucket.

This presents a bit of a problem. If you request a resource (a file) from a different domain, you need permission to do so; it violates the same-origin policy that browsers enforce. Cross-origin resource sharing (CORS) is a mechanism that allows user agents to request permission to access resources that reside on a different server. In this instance, you need to grant permission to the player to allow it to access to the video(s) in the S3 bucket.

Thankfully, Amazon S3 supports CORS so you can selectively allow cross-origin access to the files in your S3 bucket. To enable CORS on your bucket, select the Permissions tab then click on CORS configuration. By default, the configuration file allows access from anywhere. If I wanted to restrict access to the videos to this domain (hlsbook.net), I would modify the CORS configuration to look like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
   <AllowedOrigin>http://hlsbook.net</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Once you have granted public access to files in the S3 bucket and configured CORS appropriately, you should now be able to serve your HLS videos from an S3 bucket.

Restricting Access

Setting the allowed origin(s) in the CORS policy will prevent somebody from embedding your video on their website if they use a player like Video.js, but it won’t prevent it if the browser supports HLS natively because in this instance, the same-origin policy doesn’t apply. One thing you can do is check for the presence of the Referer header, and if the request hasn’t come from your domain, block it. You can do this by modifying the bucket policy. Here’s the policy for my bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::net.hlsbook/*"
        },
        {
            "Sid": "Explicit deny to ensure requests are allowed only from a specific domain.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::net.hlsbook/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://hlsbook.net/*"
                    ]
                }
            }
        }
    ]
}

This policy will check the referrer header and if it doesn’t match or it’s missing, access to the requested resource will be blocked. (You could include the check in the allowed action but I prefer to use an explicit deny because it overrides any allows.) However, bear in mind that the check can be circumvented because the referrer header can be spoofed.

Unfortunately, Chrome Mobile doesn’t include the referrer header so the above policy will block access to your videos on that particular browser. Fortunately there is a workaround (or “hack” if you prefer).

The solution is to tell the player to use the Media Source Extensions (if the platform supports it) for HLS playback instead of playing it natively. You can do this with Video.js by setting the overrideNative property to true. If you view the source for this page, you’ll see this:

videojs.options.hls.overrideNative = true;
videojs.options.html5.nativeAudioTracks = false;
videojs.options.html5.nativeTextTracks = false;

var player = videojs('example-video');
player.src('https://s3.eu-west-2.amazonaws.com/net.hlsbook/prog_index.m3u8');

The referrer header will now be included in the requests so access to the video will no longer be blocked on Chrome Mobile.

One More Thing

HLS playlists support the EXT-X-BYTERANGE tag. This indicates to the player that each video segment is part of a (larger) resource. (There’s more about this in the book if you are interested.) Here’s an example from a playlist that uses the tag:

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-VERSION:4
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.00000,
#EXT-X-BYTERANGE:808400@0
main.ts
#EXTINF:10.00000,
#EXT-X-BYTERANGE:848068@808400
main.ts
#EXTINF:10.00000,
#EXT-X-BYTERANGE:811784@1656468
main.ts

With the current configuration, any requests for the video file (main.ts) will result in a 403 Access Denied response from S3. To grant access to the file, you need to allow the use of the HTTP Range header. Modify the CORS configuration file so it looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
   <AllowedOrigin>http://hlsbook.net</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <AllowedHeader>Authorization</AllowedHeader>
   <AllowedHeader>Range</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Now your video player will be able to make range requests for parts of the video file.


Conclusion

Amazon S3 is a cost-effective solution for storing videos, as video files can take up a lot of space. Because files in S3 are accessed over HTTP, and CORS is supported, it’s also a viable solution for serving your HLS videos to your viewers.

S3 also has other features that I haven’t mentioned here, such as versioning and archiving, that could be useful for managing your video assets. Check out the S3 documentation for more information.

How to Encrypt Video for HLS

In this post, we’ll look at what encryption HLS supports and how to encrypt your videos with ffmpeg.

Encryption is the process of encoding information in such a way that only authorised parties can read it. The encryption process requires some kind of secret (key) together with an encryption algorithm.

There are many different types of encryption algorithms but HLS only supports AES-128. The Advanced Encryption Standard (AES) is an example of a block cipher, which encrypts (and decrypts) data in fixed-size blocks. It’s a symmetric key algorithm, which means that the key that is used to encrypt data is also used to decrypt it. AES-128 uses a key length of 128 bits (16 bytes).

HLS uses AES in cipher block chaining (CBC) mode. This means each block is encrypted using the cipher text of the preceding block, but this gives us a problem: how do we encrypt the first block? There is no block before it! To get around this problem we use what is known as an initialisation vector (IV). In this instance, it’s a 16-byte random value that is used to initialise the encryption process. It doesn’t need to be kept secret for the encryption to be secure.

Before we can encrypt our videos, we need an encryption key. I’m going to use OpenSSL to create the key, which we can do like so:

$ openssl rand 16 > enc.key

This instructs OpenSSL to generate a random 16-byte value, which corresponds to the key length (128 bits).

The next step is to generate an IV. This step is optional. (If no value is provided, the segment sequence number will be used instead.)

$ openssl rand -hex 16
ecd0d06eaf884d8226c33928e87efa33

Make a note of the output as you’ll need it shortly.

To encrypt the video, we need to tell ffmpeg what encryption key to use, the URI of the key, and so on. We do this with the -hls_key_info_file option, passing it the location of a key info file. The file must be in the following format:

Key URI
Path to key file
IV (optional)

The first line specifies the URI of the key, which will be written to the playlist. The second line is the path to the file containing the encryption key, and the (optional) third line contains the initialisation vector. Here’s an example (enc.keyinfo):

https://hlsbook.net/enc.key
enc.key
ecd0d06eaf884d8226c33928e87efa33

Now that we have everything we need, run the following command to encrypt the video segments:

ffmpeg -y \
    -i sample.mov \
    -hls_time 9 \
    -hls_key_info_file enc.keyinfo \
    -hls_playlist_type vod \
    -hls_segment_filename "fileSequence%d.ts" \
    prog_index.m3u8

Take a look at the generated playlist (prog_index.m3u8). It should look something like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:9
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-KEY:METHOD=AES-128,URI="https://hlsbook.net/enc.key",IV=0xecd0d06eaf884d8226c33928e87efa33
#EXTINF:8.33333,
fileSequence0.ts
#EXTINF:8.33333,
fileSequence1.ts
#EXTINF:8.33333,
fileSequence2.ts
#EXTINF:8.33333,
fileSequence3.ts
#EXTINF:8.33333,
fileSequence4.ts
#EXTINF:5.66667,
fileSequence5.ts
#EXT-X-ENDLIST

Note the URI of the encryption key. The player will retrieve the key from this location to decrypt the media segments. To protect the key from eavesdroppers it should be served over HTTPS. You may also want to implement some sort of authentication mechanism to restrict who has access to the key. If you’re interested, the book goes into some detail about how to achieve this. Click here to buy a copy.

To verify that the segments really are encrypted, try playing them using a media player like QuickTime or VLC. You shouldn’t be able to. Now run the command above without the encryption and then try playing a segment. Notice the difference.

In this instance, all the segments are encrypted with the same key. It can be beneficial to periodically change the encryption keys to minimise the impact if a particular key is exposed. This is known as key rotation, and the amount of time between successive key generations is referred to as the key rotation period.

To enable key rotation, set the -hls_flags option to periodic_rekey. When enabled, the key info file will be checked periodically. If the file has changed, segments will then be encrypted with the new encryption key. However, we still need to come up with a way of generating new keys and updating the key info file. Here’s an example of how it could be done:

#!/bin/bash
i=2
while true
do
    sleep 15
    tmpfile=`mktemp`
    openssl rand 16 > enc$i.key
    echo https://hlsbook.net/enc$i.key > $tmpfile
    echo enc$i.key >> $tmpfile
    echo `openssl rand -hex 16` >> $tmpfile
    mv $tmpfile enc.keyinfo
    let i++
done

The script uses the same values as before for the base URL (for each key) and key info file. It generates a new encryption key every 15 seconds, updating the key info file accordingly. (In practice you would choose a much longer key rotation period.)

To test it, run ffmpeg again but this time enable key rotation by adding the -hls_flags periodic_rekey option – for example, the earlier command becomes:
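ffmpeg -y \
    -i sample.mov \
    -hls_time 9 \
    -hls_key_info_file enc.keyinfo \
    -hls_flags periodic_rekey \
    -hls_playlist_type vod \
    -hls_segment_filename "fileSequence%d.ts" \
    prog_index.m3u8

At the same time, open a new terminal and run the script from the same directory. (The script will run forever until you terminate it.) When ffmpeg has finished, terminate the script.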

Take a look at the playlist. You should now see a number of different key entries in the playlist. All the segments that follow a key tag (#EXT-X-KEY) are now encrypted using the key specified by the tag, instead of using one key to encrypt all segments like before.

Even though HLS supports encryption, which provides some sort of content protection, it isn’t a full DRM solution. If that kind of thing interests you then you may want to take a look at Apple’s FairPlay Streaming solution.

Segmenting Video with ffmpeg – Part 2

In a previous post I showed how to segment video for HLS using ffmpeg’s segment muxer. In this one, I’ll demonstrate how to use ffmpeg’s hls muxer. It has more features specific to HLS like support for encryption, subtitles, specifying the type of playlist, and so on.

To follow along, you’ll need a recent version of ffmpeg with support for HLS. (I used version 3.1.1 on Ubuntu 14.04.) To see if your version supports HLS, run the command below. You should see something like this:

$ ffmpeg -formats | grep hls
...
 E hls              Apple HTTP Live Streaming
D  hls,applehttp    Apple HTTP Live Streaming

If ffmpeg complains about an unrecognized option when executing the commands in this post, you can see what options are supported by running the following from a terminal:

$ ffmpeg -h muxer=hls

If an option is not supported, you’ll need to upgrade your version of ffmpeg.

Assuming the correct version of ffmpeg is installed, let’s jump right in and see how to use it to segment a video and create a video on-demand (VOD) playlist:

$ ffmpeg -y \
    -i sample.mov \
    -hls_time 9 \
    -hls_segment_filename "/Library/WebServer/Documents/vod/fileSequence%d.ts" \
    -hls_playlist_type vod \
    /Library/WebServer/Documents/vod/prog_index.m3u8

Let’s break down the relevant parts of the above command. The segment duration (in seconds) is specified using the -hls_time option. The location of where the segments should be written to is defined with the -hls_segment_filename option; the path contains a format specifier (%d) similar to those supported by the printf function in C. The %d will be replaced with the current sequence number. In this case, the segments will be named fileSequence0.ts, fileSequence1.ts, fileSequence2.ts, and so on.

To create a VOD playlist we set the playlist type (-hls_playlist_type) to vod – the other possible value is event. (For a live playlist, you don’t need to specify the type.) If your version of ffmpeg doesn’t support this option, omit it and add the #EXT-X-PLAYLIST-TYPE:VOD tag to the playlist manually. However, if you do this you must set the list size to 0 (see below to see how to do this).

Let’s have a quick look at some of the other available options:

You can set the maximum number of entries in the playlist by using the -hls_list_size option. Specifying the number of entries is only relevant when streaming live video; the default value is 5. Setting the playlist type to either vod or event implicitly sets the list size to 0, which means the playlist will contain all the segments.

Streaming live video can create a lot of segment files, particularly if the broadcast lasts several hours. This can require a lot of disk space. To avoid this problem, ffmpeg supports the -hls_wrap option. Once the sequence number hits the specified limit, it wraps back to whatever the start number was set to. This effectively limits the maximum number of segment files written to disk.
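For example, a live-style command might look like this (-re reads the input at its native frame rate to simulate a live source; the values are illustrative):

$ ffmpeg -re \
    -i sample.mov \
    -hls_time 9 \
    -hls_list_size 5 \
    -hls_wrap 10 \
    prog_index.m3u8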

For a complete list of the available options, take a look at the formats page on the official website.

Validating HLS Video Streams

Once you’ve deployed your videos, it’s always a good idea to check them for any errors and make sure they are valid. This is especially true if you are developing an iOS application that streams video, as Apple can reject your app if there are any problems with your streams. For example, if your application streams video over a cellular network, you must provide a stream with a maximum bit rate of 192 kbps. If you don’t, you aren’t going to make it into the App Store.

We can use Apple’s mediastreamvalidator tool to validate our video streams. It checks that the playlist and the media segments conform to the HTTP Live Streaming specification and will report any problems it finds so we can fix them.

To check if a video stream is valid, call mediastreamvalidator with the URL of the playlist. If you run the following command, you should see something like this:

$ mediastreamvalidator http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8

[...] Started root playlist download
[...] Started media playlist download
[...] All media files delivered and have end tag, stopping

--------------------------------------------------------------------------------
http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8
--------------------------------------------------------------------------------
Processed 7 out of 7 segments
Average segment duration: 10.000000
Total segment bitrates (all discontinuties): average: 867.51 kb/s, max: 1199.59 kb/s

Discontinuity: sequence: 0, parsed segment count: 7 of 7, duration: 70.000 sec, average: 867.51 kb/s, max: 1199.59 kb/s
Track ID: 1
Video Codec: avc1
Video profile: High
Video level: 3.1
Video resolution: 1024x436
Video average IDR interval: 3.646930, Standard deviation: 3.265633
Video frame rate: 24.000
Track ID: 2
Audio Codec: AAC-LC
Audio sample rate: 48000 Hz
Audio channel layout: Stereo (L R)

No problems to report here. In addition to validating the stream, it also reports information such as the average bit rate, segment duration, and resolution. You can use this information to also check that you’ve encoded your videos correctly.

The latest version (1.1) of mediastreamvalidator supports outputting the data to a file in JSON format. You can then pass this JSON file to another of Apple’s tools hlsreport, which will generate a nicely formatted summary for you in an HTML page. Let’s look at the commands we need to run to do this:

$ mediastreamvalidator -O validation.json \
       http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8

$ hlsreport.py -o report.html validation.json

This time we run mediastreamvalidator with the -O option specifying the name of the file we want to write the data to. (I’ve omitted the output.) Next, we run hlsreport.py on the data to generate the HTML page. Here’s an example report showing the findings of the validation tool.

(Interestingly the report flags a number of issues that weren’t reported when only running the validation tool. This could be because it appears to be checking the results against the HLS Authoring Specification for Apple TV. Whatever the reason, probably best to run both just to be absolutely certain.)

So now you know how to validate your HLS video streams, you should have no problems getting your app into the App Store.

Adding Session Data to a Playlist

In this post we’ll look at how to add session data to a playlist and then how to access it from a client application. We can use this feature to add arbitrary metadata to the playlist, such as the title of the movie, who the director was, and so on. To add session data to a playlist, we can use the EXT-X-SESSION-DATA tag.

Let’s say we have our movie and we want to add the title and the name of the director as session data to the playlist. In this example, we’ll use the computer animated film, Sintel. Let’s start by embedding the values directly in the playlist:

#EXT-X-SESSION-DATA:DATA-ID="net.hlsbook.movie.title",VALUE="Sintel"
#EXT-X-SESSION-DATA:DATA-ID="net.hlsbook.movie.director",VALUE="Colin Levy"

The DATA-ID attribute is the key and VALUE contains, unsurprisingly, the value. The convention is to use reverse DNS notation for the name of the DATA-ID attribute. In this instance, we’re using net.hlsbook.movie.title to represent the title of the movie and net.hlsbook.movie.director for the name of the director. (On the client – as you’ll see shortly – we’ll use the name of the DATA-ID attribute to look up its value.)

It’s also possible to provide the URI of a file that contains the data for a given DATA-ID. The contents of the file must be valid JSON. For example, instead of embedding the values directly in the playlist, we can add them to a file (data.json) like this:

{
"title":"Sintel",
"director":"Colin Levy"
}

Then we add a tag to the playlist setting the URI attribute to the location of the JSON file like so:

#EXT-X-SESSION-DATA:DATA-ID="net.hlsbook.movie.metadata",URI="data.json"

In this instance, the JSON file is in the same directory as the playlist so the URI is relative. Also note that the EXT-X-SESSION-DATA tag must contain either a VALUE or a URI attribute, but not both. You can choose to embed some values directly in the playlist and reference a JSON file for others if you prefer.

Retrieving the Session Data on the Client

Now that we know how to add session data to a playlist, how can we access it from a client application? If you are using AVFoundation you can retrieve the session data values from the metadata property of the underlying AVAsset. Let’s look at an example. (If you’ve already bought a copy of the book, you can follow along by modifying the code of the AV Foundation client example in chapter 5.)

First, we construct the AVAsset representing our movie:

NSString *movie = @"http://hlsbook.net/wp-content/examples/sintel/sintel_index.m3u8"; 
NSURL *url = [NSURL URLWithString:movie]; 
AVAsset *asset = [AVAsset assetWithURL:url];

Next, we call the loadValuesAsynchronouslyForKeys:completionHandler: method on the asset, passing in an array containing the metadata property – this is where we will access the session data. AVAsset defers loading the properties of an asset until they are requested. Property access is synchronous, so if a property is requested but hasn’t been loaded, it will block until it is ready. For this reason, we load the metadata property asynchronously. Here’s the complete code:

[asset loadValuesAsynchronouslyForKeys:@[@"metadata"] completionHandler:^{
	for (AVMetadataItem *item in asset.metadata) {
		NSString *keyString = (NSString *)[item key];
		if ([keyString isEqualToString:@"net.hlsbook.movie.metadata"]) {
			NSDictionary *dict = (NSDictionary *)[item value];
			NSLog(@"Title: %@", [dict valueForKey:@"title"]);
			NSLog(@"Directed by: %@", [dict valueForKey:@"director"]);
		}
		else if ([keyString isEqualToString:@"net.hlsbook.movie.title"]) {
			NSLog(@"Title: %@", [item value]);
		}
		else if ([keyString isEqualToString:@"net.hlsbook.movie.director"]) {
			NSLog(@"Directed by: %@", [item value]);
		}
	}
}];

As you can see from the code, we can retrieve the session data by comparing the value of the key to the name of the corresponding DATA-ID attribute(s) in the playlist. If the session data was embedded directly with VALUE, the value is returned as a string. If the item references a JSON file via a URI, we get back an NSDictionary, and we then use the property names (in the JSON) to look up the appropriate values from the dictionary.

Summary

We’ve seen how to add session data to a playlist, either by embedding it directly or via a JSON file, and how to retrieve the data from a client application using AV Foundation. If you want to learn more, there’s a section in the book about how to add timed metadata to an HLS video. In the example in the book, you’ll learn how to add geo-location information to a video stream and plot the location dynamically on a map. Check it out.
