It’s been a while since Wowza has updated their EC2 performance numbers (they date back to about 2009), and both Amazon and Wowza have made great improvements to their products. Since I have access to a high-capacity system outside of Amazon’s cloud, I am able to use Wowza’s load test tool on a variety of instance sizes to see how they perform.
The test methodology was as follows:
- Start up a Wowza instance on EC2 with no startup packages (us-east)
- Install the server-side piece of Willow (from Solid Thinking Interactive)
- Configure a 1Mbps stream in Wirecast
- Monitor the stream in JWPlayer 5 with the Quality Monitor Plugin
- Configure the Wowza Load Test Tool on one of my Wowza Hotrods located at Softlayer's Washington DC datacenter
  - Server is 14 hops/2ms from us-east-1
- Increase the load until:
  - the measured bandwidth in JW Player drops below the stream bandwidth
  - frame drops become frequent
  - bandwidth levels out on the Willow graphs while the connection count increases
- Let it run in that condition for a few minutes
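The ramp-and-watch logic above can be sketched in code. This is only my interpretation of the methodology: the actual Wowza Load Test Tool is driven by hand, and the 2% growth tolerance and the step sizes are assumptions, not anything from the tool itself.

```python
# Hedged sketch of the ramp-up methodology: step the connection count up
# and flag the point where throughput stops growing even though the
# connection count keeps rising (the "leveled out" condition above).

def ramp_schedule(start=100, steps=(50, 50, 50, 25, 25)):
    """Connection counts for each ramp step, e.g. 100, 150, ... 300."""
    counts = [start]
    for step in steps:
        counts.append(counts[-1] + step)
    return counts

def saturated(conns, mbps, tol=0.02):
    """True once measured bandwidth levels out while connections rise.

    conns: connection count at each step; mbps: measured throughput at
    each step; tol: minimum fractional growth still counted as growth.
    """
    for i in range(1, len(conns)):
        grew = (mbps[i] - mbps[i - 1]) / mbps[i - 1] > tol
        if conns[i] > conns[i - 1] and not grew:
            return True
    return False

# The m1.small run from the text: throughput tracks 1 Mbps per stream
# until it pins at roughly 250 Mbps on the last three steps.
counts = ramp_schedule()            # [100, 150, 200, 250, 275, 300]
measured = [100, 150, 200, 250, 251, 250]
print(saturated(counts, measured))  # True: capped at ~250 Mbps
```

In practice I eyeballed this on the Willow graphs rather than computing it, but the condition being checked is the same.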
In Willow, it basically looked like this (this was from the m1.small test). You can see ramping up to 100, 150, 200, 250, 275, and 300 streams. The last 3 look very similar because the server maxed out at 250 Mbps. (Yes, the graph says MBytes, that was a bug in Willow which Graeme fixed as soon as I told him about it)
Meanwhile, this is what happens on the server: the CPU is maxed out.
So that’s the basic methodology. Here are the results:
| Size | Capacity (Mb/sec) | Capacity (GB/Hr) | Instance Cost | Cents/Mb |
|------|-------------------|------------------|---------------|----------|
There are a couple of things to note here. Naturally, if you're not expecting a huge audience, stick to the m1.small. But the best bang for the buck is the c1.medium (High-CPU Medium), a relatively new instance type that gives you 4x the performance of an m1.small at less than 2x the price. The big surprise here was the m2.xlarge: it performs only marginally better than an m1.small at 4x the price.
All the instances that show 950 are effectively giving you the full benefit of the gigabit connection on that server; they max out the interface long before the CPU does. In the case of the c1.xlarge, there's lots of CPU to spare for things like transcoding if you're using a BYOL image. If you want to go faster, you'll need to roll your own Cluster Quad or do a load-balanced set.
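For reference, here is the arithmetic behind the table's capacity and cost columns. The $0.10/hr instance price is a hypothetical placeholder, not a real quote, and I'm reading "Cents/Mb" as cents per megabyte served at full capacity; check current AWS pricing for real numbers.

```python
# Unit conversions behind the table columns. The price used below is a
# hypothetical example, not actual EC2/Wowza pricing.

def mbps_to_gb_per_hour(mbps):
    # Mb/s * 3600 s/hr / 8 bits per byte / 1000 MB per GB
    return mbps * 3600 / 8 / 1000

def cents_per_mb(instance_cost_per_hour, mbps):
    # cents per megabyte of traffic, served flat-out for an hour
    gb_per_hour = mbps_to_gb_per_hour(mbps)
    return instance_cost_per_hour * 100 / (gb_per_hour * 1000)

print(mbps_to_gb_per_hour(250))        # 112.5 GB/hr (the m1.small cap)
print(cents_per_mb(0.10, 250))         # hypothetical $0.10/hr instance
```

A 950 Mbps instance works out to about 427.5 GB/hr by the same conversion, which is why the gigabit interface, not the CPU, is the ceiling on the bigger instances.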
Disclaimers: Your mileage may vary; these are just guidelines, although I think they're pretty close. I have not tested this anywhere but us-east-1, so if you're using one of Amazon's other datacenters, you may get different results. I hope to test the other regions soon and see how the results compare.