HLS distribution with Amazon CloudFront

I’ve blogged extensively about Wowza RTMP distribution with edge/origin and load balancing, but streaming distribution is moving more to HTTP-based systems such as Apple’s HTTP Live Streaming (known inside Wowza as “cupertino”), Adobe’s HTTP Dynamic Streaming (Wowza: “sanjose”), and Microsoft’s Smooth Streaming (Wowza: “smooth”). Future trends suggest a move to MPEG-DASH, a standard that draws on all three proprietary methods (I’ll get into DASH in a future post as the standard coalesces – we’re talking bleeding edge here).

The common element in all of them is that they use HTTP as the distribution method, which makes it much easier to leverage CDNs that are geared towards non-live HTTP content. One of these CDNs is Amazon’s CloudFront service. With edges in 41 locations around the world and 12 cents a gigabyte for transfer (pricing may vary by region), it’s a good way to get into an HTTP CDN without paying a huge amount of money or committing to a big contract with a provider like Akamai.

On the player side, JW Player V6 now supports HLS, and you can do Adobe HDS with the Strobe Media Player.

With the 3.5 release, Wowza Media Server can act as an HTTP caching origin for any HTTP-based CDN, including CloudFront. Doing so is exceedingly simple. First, configure your Wowza server as an HTTP caching origin. Then create a CloudFront distribution (use a “download” type rather than a streaming type – it seems counterintuitive, but trust me on this one!) and, under the origin domain name, put the hostname of your Wowza server. You can leave the rest as defaults, and it will work.

It’ll take Amazon a few minutes to provision the distribution, but once it’s ready, you’ll get a URL that looks something like “d1ed7b1ghbj64o.cloudfront.net”. You can verify that the distribution is working by opening a browser to that address; you should see the Wowza version information. Put that same CloudFront URL in the player in place of the Wowza server address, and your players will start playing from the nearest CloudFront edge cache.
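That last step is just a host substitution in the playback URL. Here’s a minimal sketch in Python; the Wowza hostname and stream path are placeholders, and the CloudFront domain is the example above:

```python
from urllib.parse import urlsplit, urlunsplit

# Placeholder domain that CloudFront assigned to the distribution.
CLOUDFRONT_HOST = "d1ed7b1ghbj64o.cloudfront.net"

def to_cloudfront(playback_url: str) -> str:
    """Point an HLS playback URL at the CloudFront edge instead of Wowza."""
    parts = urlsplit(playback_url)
    # Keep scheme, path, and query intact; swap only the host (and port).
    return urlunsplit((parts.scheme, CLOUDFRONT_HOST, parts.path,
                       parts.query, parts.fragment))

print(to_cloudfront("http://wowza.example.com:1935/live/myStream/playlist.m3u8"))
# http://d1ed7b1ghbj64o.cloudfront.net/live/myStream/playlist.m3u8
```

The application name and stream path stay the same because CloudFront forwards the request path to the origin untouched.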

See? Easy.

  • Tim

    Hey Ian!
    Great blog.

    We’ve got Wowza set up to deliver to Android devices via RTSP through edge servers, and to everything else via HTTP (multi-bitrate).

    I’ve added in a CloudFront distribution and changed all the player URLs accordingly.

    However, when I check the load balancer information (loadbalancer?serverInfoXML), I see a connection for every HTTP client that connects, even though I can see that the video chunks are being delivered via CloudFront. These connections don’t get spread across the load balancer.

    Does that sound right to you? I was expecting to see just one connection from CloudFront, not one per client.

    • http://blog.ianbeyer.com/ Ian B

      The Wowza load balancer only reports which server is least loaded; you’ll need to build your own mechanism to distribute those connections. With CloudFront, requests will only go to the origin servers you specify in your distribution.

  • Paul

    Thanks for a very helpful post.
    Can you please explain why one would want to use Wowza as an origin for CloudFront rather than an S3 bucket? I’m guessing the primary advantage is that Wowza segments files on demand, so there is no need to “pre-segment” files and no need to actually store the segments.

    If my assumptions are correct, are there any other advantages I am missing?

    How secure is the Wowza content? What prevents someone from getting the m3u8 file, parsing the URLs, and just downloading the content?

    Thanks for your insight! & great site :-)

    Paul

    • http://blog.ianbeyer.com/ Ian B

      You can’t do live from an S3 bucket. To secure the content, the best bet is to use CloudFront signed URLs.
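To give a feel for what a signed URL involves, here is a sketch of CloudFront’s canned-policy flavor. The policy document and the custom base64 alphabet below follow the CloudFront documentation; the actual RSA-SHA1 signing step (done with your CloudFront key pair’s private key, e.g. via the `cryptography` library or `botocore.signers.CloudFrontSigner`) is only described in a comment, and the URL and expiry are placeholders:

```python
import base64
import json

def canned_policy(resource_url: str, expires_epoch: int) -> bytes:
    """Build CloudFront's canned policy document for a signed URL."""
    policy = {
        "Statement": [{
            "Resource": resource_url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }
    # CloudFront expects the policy serialized without whitespace.
    return json.dumps(policy, separators=(",", ":")).encode("utf-8")

def cloudfront_b64(data: bytes) -> str:
    """Base64 with CloudFront's URL-safe substitutions: + -> -, = -> _, / -> ~."""
    return (base64.b64encode(data).decode("ascii")
            .replace("+", "-").replace("=", "_").replace("/", "~"))

# Placeholder resource and expiry time (Unix epoch seconds).
url = "http://d1ed7b1ghbj64o.cloudfront.net/live/myStream/playlist.m3u8"
policy = canned_policy(url, 1700000000)

# The final signed URL appends Expires, Signature, and Key-Pair-Id query
# parameters, where Signature is the RSA-SHA1 signature of this policy,
# encoded with cloudfront_b64 above. The signing itself is omitted here.
print(cloudfront_b64(policy))
```

The viewer can still fetch the playlist and chunks, but only while the signature is valid, which closes the “parse the m3u8 and download everything later” hole Paul asked about.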

  • Stefan

    Have you ever run into the 1 Gbit/s limit when using CloudFront? The AWS documentation says that this limit applies per edge location. That would mean that only 1,000 users could watch a 1 Mbit/s stream…

  • Jesse Decker

    We did exactly this last week for a 5-day live event in LA, using a single origin c1.xlarge for transcoding and CloudFront for distribution. Our viewers in the US west of Texas, London, and Paris experienced horrendous latency and buffering, while those in Madagascar could never load the stream on a 7 Mbps connection (though the website assets came through just fine). Others in Asia were just fine. We were incredibly dissatisfied with CloudFront’s spotty performance. Our throughput globally shouldn’t have exceeded the 1 Gbps limit, especially not for as long as our users claim they were having issues. Have you heard of persistent problems like this?

  • niemi

    Would I benefit from CloudFront if all my viewers are in Denmark, which is reasonably close to Ireland?

    Example:

    Our Digital Rapids encoder in Copenhagen streams to a cc2.8xlarge instance (running Wowza) in Ireland at 5000 Kbit/s for one hour, for a total of 2.15 GB.

    The 5000 Kbit/s stream is transcoded to three bitrates: 500 Kbps, 1000 Kbps, and 2000 Kbps. 500 simultaneous viewers residing in Denmark consume the stream across the three bitrates, for a total of 250 GB during this hour.

    Should I expect problems doing this, and would CloudFront come in handy for me?
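For what it’s worth, a quick back-of-envelope check of the numbers in this scenario (decimal units, rough math) puts the aggregate comfortably under the 1 Gbit/s per-edge figure mentioned earlier in the thread:

```python
# Back-of-envelope check of the bandwidth numbers in this scenario.
viewers = 500
total_gb = 250            # claimed transfer over one hour, all viewers
seconds = 3600

# GB -> gigabits -> megabits per second
aggregate_mbps = total_gb * 8 * 1000 / seconds
per_viewer_mbps = aggregate_mbps / viewers

print(round(aggregate_mbps, 1))   # 555.6  (Mbit/s aggregate)
print(round(per_viewer_mbps, 2))  # 1.11   (Mbit/s average per viewer)
```

So the audience averages a little over the 1000 Kbps rendition, and the aggregate is roughly 556 Mbit/s, which a single well-provisioned origin can plausibly serve; the main thing a CDN would add here is headroom and fault tolerance rather than raw capacity.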