In this post I’ll describe how to add subtitles to a live HLS stream.
Subtitles can be added to a live video stream by creating a live subtitle playlist. Before I delve into the details, let’s recap how the playlist for a live video stream works. A live playlist contains a fixed number of entries; entries are added and removed as time progresses. Let’s look at an example:
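A representative live playlist with five 10-second segments might look like this (the sequence numbers and segment names are illustrative):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fileSequence0.ts
#EXTINF:10.0,
fileSequence1.ts
#EXTINF:10.0,
fileSequence2.ts
#EXTINF:10.0,
fileSequence3.ts
#EXTINF:10.0,
fileSequence4.ts
```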
In this instance there are 5 entries in the playlist, each segment being 10 seconds in duration. Every 10 seconds or so, the first segment is removed and a new one is added to the end of the playlist. The target duration tells the client how often it should go back to the server to retrieve new content. The client will continue to fetch the playlist until the stream ends, which occurs when the #EXT-X-ENDLIST tag is added to the playlist. (There’s more detail in the book.)
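To make those mechanics concrete, here’s a minimal sketch in Python of the fields a live client reads from a playlist — the target duration, the segment entries, and whether the stream has ended. The sample playlist and function name are illustrative, not from the book:

```python
# Minimal sketch: extract the fields a live HLS client cares about.
def parse_playlist(text):
    target = None      # how often to re-fetch the playlist
    segments = []      # segment URIs currently in the window
    ended = False      # True once #EXT-X-ENDLIST appears
    for line in text.splitlines():
        if line.startswith("#EXT-X-TARGETDURATION:"):
            target = int(line.split(":", 1)[1])
        elif line.startswith("#EXT-X-ENDLIST"):
            ended = True
        elif line and not line.startswith("#"):
            segments.append(line)
    return target, segments, ended

sample = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
fileSequence0.ts
#EXT-X-ENDLIST"""
print(parse_playlist(sample))  # (10, ['fileSequence0.ts'], True)
```

A real client would re-fetch the playlist roughly every target duration and stop once `ended` is true.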
To stream video with HLS, you need to divide your video into segments of a fixed duration and add them to a playlist. In the book I use Apple’s HTTP Live Streaming tools to do this. Here’s an example using mediafilesegmenter:
$ mediafilesegmenter -f /Library/WebServer/Documents/vod sample.mov
This command takes the video (sample.mov) and writes out the segments and the playlist to the /Library/WebServer/Documents/vod directory. Unfortunately, Apple’s tools will only work on a Mac.
However, recent versions of ffmpeg can also output HLS-compatible files. Given a video as input, it will divide it into segments and create a playlist for us. Here’s the equivalent of the command above using ffmpeg:
$ ffmpeg -y \
-i sample.mov \
-codec copy \
-bsf h264_mp4toannexb \
-map 0 \
-f segment \
-segment_time 10 \
-segment_format mpegts \
-segment_list "/Library/WebServer/Documents/vod/prog_index.m3u8" \
-segment_list_type m3u8 \
"/Library/WebServer/Documents/vod/fileSequence%d.ts"
This command uses ffmpeg’s segment muxer to segment the video. We can specify the segment duration with the -segment_time option. The last argument passed to ffmpeg is the path to where the segments should be written; it contains a format specifier (%d) similar to those supported by the printf function in C. The %d will be replaced with the current sequence number. In this example, the segments will be named fileSequence0.ts, fileSequence1.ts, and so on.
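As a quick illustration of how that printf-style pattern expands (shown here with Python’s % operator, which behaves like C’s printf for this purpose):

```python
# Expanding a printf-style %d pattern into sequential segment names,
# mirroring what ffmpeg's segment muxer does with its output path.
pattern = "fileSequence%d.ts"
names = [pattern % i for i in range(3)]
print(names)  # ['fileSequence0.ts', 'fileSequence1.ts', 'fileSequence2.ts']
```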
And that’s how you process a video for streaming with HLS using ffmpeg. There are other examples in the book, including how to use ffmpeg to segment a live video stream, so if you want to learn how, buy your copy today.

In part 2, we’ll look at another way to segment video with ffmpeg.
The following article is an excerpt from Chapter 4 of HTTP Live Streaming: A Practical Guide. This is the introductory section of the chapter and is meant to give you the background you need to understand how adaptive streaming works and how to do it with HLS.
In previous chapters, we’ve seen how to stream a video with HLS for both on-demand and live video. However, a single video stream at a particular bit rate isn’t going to work on all the devices that people watch video on today. Some devices are more powerful than others, screen sizes are different, some support different H.264 profiles, and so on. Connection speeds can also vary depending on how you are connected to the internet.
For example, you can watch an HD video stream on your flat screen TV at home, but you won’t get the same experience watching the same stream on a mobile device over a cellular network. Playback will stall as the device won’t be able to download the video data fast enough. In some cases, the video may not play at all because the device won’t be powerful enough to decode the video. Then there’s the wasted bandwidth: most screens on mobile devices do not have the same resolution as a TV, so all those pixels will be wasted.
The amount of bandwidth available also plays a part. Your internet provider may promise you super-fast connection speeds, but what you actually get may fluctuate at any given time. If everybody in your neighbourhood starts streaming the latest movie from Netflix or watching YouTube videos, you can be fairly certain that your connection speed will drop off. The result? More stalls and buffering.
We need a solution that allows us to deliver a video stream that’s optimal for the device the stream is being watched on, and that is where adaptive streaming comes into play.
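In HLS, adaptive streaming is driven by a master playlist that points to the same content encoded at several bit rates; the client picks the variant that best fits its screen and current connection, and can switch between variants as conditions change. A representative master playlist might look like this (the bandwidth values, resolutions, and paths are illustrative):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/prog_index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
mid/prog_index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
hi/prog_index.m3u8
```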
Here’s the plan for what we’ll cover in this chapter:
- First we’ll look at what adaptive streaming is, how it works, and why it’s useful.
- Then you’ll learn how to take a video and stream it on-demand using adaptive streaming.
We’ll be doing some video encoding in this chapter, so if you come across any terms you aren’t familiar with, refer to Appendix A, An Encoding Primer.