Streaming to multiple simultaneous destinations

Live streaming has been a “thing” for some time. I work with many churches to help them solve their streaming challenges and develop their technology strategy for streaming. One of the most frequent questions I hear is, “can I stream to Facebook Live and still keep my other stream?” Fortunately, this is a lot easier than it used to be. There are variations on this question, but they all boil down to wanting to know how to send one stream to multiple outlets to expand audience reach.

Method 1: Multiple outputs from your encoder

Several software encoder platforms support multiple outputs. The easiest among these is probably Telestream’s Wirecast. (The free, open-source OBS Studio does this as well, but I don’t have much experience with it, and I prefer the Wirecast interface, which is much more polished.) With Wirecast, it’s merely a matter of adding additional outputs for the various streaming services that are supported. The downside to this approach is that you’ll need more upstream bandwidth, as you are sending the same stream multiple times.
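A quick back-of-the-envelope check makes that bandwidth cost concrete (the bitrate and destination count here are hypothetical):

```shell
# Pushing the same encode to several services multiplies your upload requirement.
BITRATE_KBPS=2500    # e.g., a single 720p stream
DESTINATIONS=3       # e.g., Facebook Live, YouTube, and your own Wowza server
echo "$(( BITRATE_KBPS * DESTINATIONS )) kbps of sustained upload, before protocol overhead"
```

At typical church or small-office upload speeds, that multiplication is often what decides between this method and one of the cloud-based approaches below.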


Method 2: The Cloud

1. Teradek Core

This is a vendor-specific approach that integrates with Teradek’s pro-grade encoders (Cube, Bond, Slice, and T-Rax). It provides a single pane of glass that lets you manage your entire fleet of encoder devices (and control/configure them remotely), and then virtually patch the output of those encoders to one or more outputs. You can also use their Live::Air apps for iOS as an input (stay tuned for a post about using Live::Air). If you are using a Bond product, the input is via their Sputnik server, which allows you to spread the stream across multiple connections for extra bandwidth and redundancy, and then it’s reassembled before sending it on to the next step.

In this example, I’m taking an input stream from the Live::Air Solo app on my iPhone and sending it to both Wowza Streaming Cloud and Facebook Live, all while recording the incoming stream:

[Screenshot: Core workspace routing the Live::Air source to Wowza Streaming Cloud and Facebook Live]

This is a simple drag-and-drop operation: drag a source from the left into the workspace, and then drag in one or more destinations. These can be:

Teradek decoders (this is great for a multisite church scenario)

Channels (which are external stream destinations):

[Screenshot: channel destinations in Core]

Groups (a combination of the above):

[Screenshot: a destination group in Core]

If you click the “Auto” box on the outputs, it will start that output automatically when the stream is available from the input.

When you create stream destinations for social sites, Core authenticates you against that site and stores the authentication for reuse.

You can manage a lot of inputs and outputs this way; Teradek’s marketing materials show entire fleets of encoders and destinations patched through a single workspace.

2. Wowza Streaming Engine/Streaming Cloud

Similar to Core, but not tied to a specific encoder vendor, Wowza Streaming Engine provides Stream Targets as of version 4.4 (the functionality has been in the software since sometime in version 2 as the PushPublish module; Stream Targets integrates it into the UI). Facebook Live has been supported almost since Facebook Live launched. YouTube Live is supported as well, but as a standard RTMP destination.
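For the curious, Stream Targets are persisted in a per-application map file that the UI manages for you. A rough sketch of what an entry looks like (format recalled from Wowza 4.x, and the YouTube host/application values shown are illustrative, so treat every field here as an assumption and check the current Wowza docs before hand-editing):

```text
# [wowza-install]/conf/myApplication/PushPublishMap.txt (hypothetical entry)
myStream={"entryName":"youtube", "profile":"rtmp", "host":"a.rtmp.youtube.com", "application":"live2", "streamName":"<your-stream-key>"}
```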

Similarly, Wowza Streaming Cloud offers this capability under the “Advanced” menu:

[Screenshot: Wowza Streaming Cloud advanced menu]

From there, you can create a stream target:

[Screenshot: creating a stream target]


Once that target is created, simply go into a transcoder output and add it (you can also create a target directly from there):

[Screenshot: adding a stream target to a transcoder output]


As with Core, you can add multiple destinations to a transcoder output. Generally speaking, you’ll want to send your best rendition to services like Facebook Live and YouTube, as they do their own internal transcoding.



Method 3: Multiple Encoders

This is the obvious approach, but also the least efficient in terms of both hardware and bandwidth: each encoder goes to its own destination, and you generally need signal distribution amplifiers and other extra hardware to feed them all.




Going Serverless: Office 365

I recently completed a project for a small church in Kansas. Several months ago, the senior pastor asked me to quote a Windows server to provide authentication as well as file and print services. During the conversation, a few things became clear:

  1. Their desktop infrastructure was completely on Windows 10. Files were being kept locally or in a shared OneDrive account.
  2. The budget they had for this project was not going to allow for a proper server infrastructure with data protection, etc.
  3. This church already uses a web-based Church Management System, so they’re somewhat used to “the cloud” already as part of their workflows.

One of the key features of Windows 10 is the ability to use an Office 365 account to log in to your desktop (Windows 8 allowed this with a Microsoft Live account). Another is that for churches and other nonprofits, Office 365’s E2 plan is free of charge.

I set about seeing how we could go completely serverless and provide access not only to the staff for shared documents, but also give access to key volunteer teams and church committees.

The first step was to make sure everybody was on Windows 10 Pro (we found a couple of machines running Windows 10 Home). Tech Soup gave us inexpensive access to licenses to get everyone up to Pro.

Then we needed to make sure the internet connection and internal networking at the site were sufficient to take their data to the cloud. We bumped up the internet speed and overhauled the internal network, replacing a couple of consumer-grade unmanaged switches and access points with a Ubiquiti UniFi solution for the firewall/router, network switch, and access points. This allows me and key church staff to manage the network remotely, as the UniFi controller operates on an Amazon Web Services EC2 instance (t2.micro). The new network also gave the church the ability to offer guest wifi access without compromising their office systems.

The next step was to join everyone to the Azure domain provided by Office 365. At this point, all e-mail was still on Google Apps, until we made the cutover.

Once we had login authentication in place, I set about building the file sharing infrastructure. OneDrive seemed to be the obvious solution, as they were already using a shared OneDrive For Business account.

One of OneDrive’s biggest challenges is that, like FedEx, it is actually several different products trying to behave as a single, seamless product. At this, OneDrive still misses the mark. The OneDrive brand consists of the following:

  • OneDrive Personal
  • OneDrive for Business
  • OneDrive for Business in Office 365 (a product formerly known as Groove)
  • Sharepoint Online

All the OneDrive for Business offerings are Sharepoint/Groove under the hood. If you’re not on Office 2016, you’ll want to upgrade, because getting the right OneDrive for Business client in previous versions of Office is a nightmare. Once you get it sorted, it generally works. If you have to pay full price for Office 365, I would recommend Dropbox for Business as an alternative, but it’s hard to beat the price of Office 365 when you’re a small business.

It is very important to understand some of the limitations of OneDrive for Business versus products like Dropbox for Business. Your “personal” OneDrive for Business files can be shared with others by sending them a link, and they can download the file, but you can’t give other users permission to modify it and collaborate on a document. For that, you need the concept of shared folders, and ODB just doesn’t do them. This is where Sharepoint Online comes into play.

Naturally, this being Sharepoint, it’s not the easiest thing in the world to set up. It’s powerful once you get it going, but I wasn’t able to simply drop all the shared files into a single Sharepoint document library: the software imposes a 5,000-item limit. Because the church’s shared files included a photo archive, there were WAY more than 5,000 files.

Sharepoint is very picky about getting the right information architecture (IA) set up to begin with. Some things you can’t change after the fact, if you decide you got them wrong. Careful planning is a must.

What I ended up doing for this church is creating a single site collection for the whole organization, and several sites within that collection for each ministry/volunteer team. Each site in Sharepoint has 3 main security groups for objects within a site collection:

  • Visitors (Read-Only)
  • Members (Read/Write)
  • Owners (Read/Write/Admin)

In Office 365, much as with on-premises Sharepoint, you’re much better off creating your security groups outside of Sharepoint and then adding them to the groups Sharepoint creates. So in this case, I created a “Worship Production” team, added the team members to it, and then added that group to the Worship Site Owners group in Sharepoint. The Staff group was added to all the Owners groups, and the Visitors group was left empty in most cases. This makes group membership administration substantially easier for the on-site admin who will be handling user accounts most of the time. It’s tedious to set up, but once it’s going, it’s smooth sailing.

Once the security permissions were set up for the various team sites, I went into the existing flat document repository and began moving files to the Sharepoint document libraries. The easiest way to do this is to go to the library in Sharepoint, and click the “Sync” button, which then syncs them to a local folder on the computer, much like OneDrive (although it’s listed as Sharepoint). There is no limit to how many folders you can sync to the local machine (well, there probably is, but for all practical purposes, there isn’t). From there it’s a matter of drag and drop. For the photos repository, I created a separate document library in the main site, and told Sharepoint it was a photo library. This gives the user some basic Digital Asset Management capabilities such as adding tags and other metadata to each picture in the library.

So far, it’s going well, and the staff enjoys having access to their Sharepoint libraries as well as Microsoft Office on their mobile devices (iOS and Android). Being able to work from anywhere also gives this church some easy business continuity should a disaster befall the facility — all they have to do is relocate to the local café that has net access, and they can continue their ministry work. Their data has now been decoupled from their facility. I have encountered dozens of churches over the years whose idea of data backup is either “what backup?” or a hard drive sitting next to the computer 24×7, which is of no use if the building burns to the ground or is spontaneously relocated to adjacent counties by a tornado. The staff doesn’t have to worry about the intricacies of running Exchange or Sharepoint on Windows Small Business Server/Essentials. Everything is a web-based administrative panel, and support from Microsoft is excellent in case there’s trouble.

If you’re interested in how to take your church or small business serverless, contact me and I’ll come up with a custom solution.

Microsoft Azure

Nonprofit Tech Deals: Microsoft Azure

Last week while I was at the Church IT Network National Conference in Anderson, SC, a colleague pointed me to a fantastic donation from Microsoft via TechSoup: $5000/year in Azure credit. At a hair over $400/month, this means you can run a pretty substantial amount of stuff. Microsoft just announced this program at the end of September, so it’s still very new. And very cool. Credits are good any time within the 12-month period, so you don’t have to split them up month by month. They do not, however, roll over to the following year.
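The monthly figure checks out with some quick integer math (amounts in cents to avoid floating point):

```shell
# $5,000/year of Azure credit, expressed per month
ANNUAL_CENTS=500000
echo "$(( ANNUAL_CENTS / 12 )) cents/month"   # just over $416/month
```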

The context of the conversation was hosting the open-source RockRMS Church/Relationship Management System, but Wowza Streaming Engine is also available ready to go on Azure, along with many other things. (And for those of us in the Midwest, Microsoft’s biggest Azure datacenter is “US Central,” located in the Des Moines area, as Iowa is currently a very business-friendly place to put a huge datacenter.)

If you’re a registered 501c3 non-profit (or your local country’s equivalent if you’re outside the US), head on over to Tech Soup to take advantage of this fantastic deal.

As an added bonus, if you have Windows Server Datacenter licenses from TechSoup, or licenses your organization purchased with Software Assurance, each 2-socket license can be run on up to two Azure compute instances of up to 8 virtual cores each, reducing the cost of your instances even further (standard Windows instances bake the full, non-discounted Windows license into the hourly price). This also applies to SQL Server.

Here’s the process:

  1. Read the FAQ.
  2. Register your organization with TechSoup if you haven’t already done so.
  3. Head over to Microsoft’s Azure Product Donations page and hit “Get Started”
  4. At some point in the process you’ll also want to create an Azure account to associate the credits with. If you’re already using Office 365 for nonprofits, it’s best to tie an account to your O365 domain.

Multi-tenant Virtual Hosting with Wowza on EC2

That’s a mouthful, isn’t it?

I recently needed to migrate a couple of Wowza Streaming Engine tenants off a bare-metal server that was getting long in the tooth and had become rather expensive. These tenants were low-volume DVR or HTTP transmuxing customers, plus one transcoding customer that required more CPU power, but the box sat idle most of the time. So I decided to move everything to AWS and fire up the box only when necessary. This used to be a cumbersome process with the old Java-based AWS command-line tools; the current incarnation is quite intuitive and runs in Python, so there’s not a lot of insane configuration and scripting to do.

You may recall my post from a few years back about multi-tenant virtual hosting. I’m going to expand on that here and describe how to do it within the Amazon EC2 environment, which has historically limited you to a single IP address per system.

The first step to getting multiple network interfaces on EC2 is to create a Virtual Private Cloud (VPC) and start your EC2 instances within your VPC. “Classic” EC2 does not support multiple network interfaces.

Once you’ve started your Wowza instance within your VPC (for purposes of transcoding a single stream, I’m using a c4.2xlarge instance), you then go to the EC2 console, and on the left-hand toolbar, under “network and security” is a link labeled “Network Interfaces”. When you click on that, you have a page listing all your active interfaces.

To add an interface to an instance, simply create a network interface, select the VPC subnet it’s on, and optionally set its IP (the VPC subnet is all yours, in dedicated RFC1918 space, so you can select your IP). Once it’s created, you can then assign that interface to any running instance. It shows up immediately within the instance without needing to reboot.

Since this interface is within the VPC, it doesn’t get an external IP address by default, so you’ll want to assign an Elastic IP to it if you wish to have it reachable externally (in most cases, that’s the whole point of this exercise).
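The console steps above can also be scripted. The sketch below only prints the three CLI calls so you can review them first; every ID is a placeholder you’d swap for your own before actually running the commands:

```shell
# Placeholder IDs - substitute your own subnet, instance, ENI, and EIP allocation.
SUBNET=subnet-XXXXXXXX
INSTANCE=i-XXXXXXXX
ENI=eni-XXXXXXXX
EIP=eipalloc-XXXXXXXX

# Print (not run) the calls: create the interface, attach it, then map the Elastic IP.
cat <<EOF | tee /tmp/eni-commands.sh
aws ec2 create-network-interface --subnet-id $SUBNET --private-ip-address 10.0.0.51
aws ec2 attach-network-interface --network-interface-id $ENI --instance-id $INSTANCE --device-index 1
aws ec2 associate-address --network-interface-id $ENI --allocation-id $EIP
EOF
```

In real life, create-network-interface returns the new ENI ID in its output; it’s shown as a separate placeholder here because we’re only printing the commands.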

Once you have the new interface assigned, simply configure the VHosts.xml and associated VHost.xml files to listen to those specific internal IP addresses, and you’re in business.
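For reference, the two files fit together roughly like this (element names as I recall them from a stock Wowza install; the tenant name and address are made up, so adapt them to your own configuration):

```xml
<!-- conf/VHosts.xml: declare an additional vhost with its own config directory -->
<VHost>
	<Name>tenant2</Name>
	<ConfigDir>${com.wowza.wms.ConfigHome}/conf/tenant2</ConfigDir>
</VHost>

<!-- conf/tenant2/VHost.xml: bind this vhost's streaming HostPort to one private IP -->
<HostPort>
	<Name>Default Streaming</Name>
	<Type>Streaming</Type>
	<IpAddress>10.0.0.51</IpAddress>
	<Port>1935</Port>
</HostPort>
```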
As for scheduling the instance? On another machine that IS running 24/7 (if you want to stick to the AWS universe, you can do this in a free tier micro instance), set up the AWS command line tools and then make a crontab entry like this:

30 12 * * 1-5 aws ec2 start-instances --instance-ids i-XXXXXXXX
35 12 * * 1-5 aws ec2 associate-address --network-interface-id eni-XXXXXXXX --allocation-id eipalloc-XXXXXXXX
35 12 * * 1-5 aws ec2 associate-address --network-interface-id eni-XXXXXXXX --allocation-id eipalloc-XXXXXXXX
30 15 * * 1-5 aws ec2 stop-instances --instance-ids i-XXXXXXXX 

This fires up the instance at 12:30pm on weekdays, assigns the Elastic IPs to the interfaces, and then shuts it all down 3 hours later. (Because this is an EBS-backed instance in a VPC, stopping the instance doesn’t nuke it the way terminating does, so any configuration you make on the system is persistent.)

Another way you can use this is to put multiple interfaces on an instance with high networking performance and gain the additional bandwidth of the multiple interfaces (due to Java limitations, there’s no point in going past 4 interfaces in this use case), and then put the IP addresses in either a round-robin DNS or a load balancer, and simply have Wowza bind to all IPs (which it does by default).

HLS distribution with Amazon CloudFront

I’ve blogged extensively about Wowza RTMP distribution with edge/origin and load balancing, but streaming distribution is moving more to HTTP-based systems such as Apple’s HTTP Live Streaming (known inside Wowza as “cupertino”), Adobe’s HTTP Dynamic Streaming (Wowza: “sanjose”), and Microsoft’s Smooth Streaming (Wowza: “smooth”). Future trends suggest a move to MPEG-DASH, which is a standard based on all three proprietary methods (I’ll get into DASH in a future post as the standard coalesces – we’re talking bleeding edge here). The common element in all of them, however, is that they use HTTP as a distribution method, which makes it much easier to leverage CDNs that are geared towards non-live content on HTTP. One of these CDNs is Amazon’s CloudFront service. With edges in 41 locations around the world and 12 cents a gigabyte for transfer (pricing may vary by region), it’s a good way to get into an HTTP CDN without paying a huge amount of money or committing to a big contract with a provider like Akamai.

On the player side, JW Player V6 now supports HLS, and you can do Adobe HDS with the Strobe Media Player.

With the 3.5 release, Wowza Media Server can now act as an HTTP caching origin for any HTTP based CDN, including CloudFront. Doing so is exceedingly simple. First, configure your Wowza server as an HTTP caching origin, and then create a CloudFront distribution (use a “download” type rather than a streaming type – it seems counterintuitive, but trust me on this one!), and then under the origin domain name, put the hostname of your Wowza server. You can leave the rest as defaults, and it will work. It’ll take Amazon a few minutes to provision the distribution, but once it’s ready, you’ll get a URL that looks something like “d1ed7b1ghbj64o.cloudfront.net”. You can verify that the distribution is working by opening a browser to that address, and you should see the Wowza version information. Put that same CloudFront URL in the player URL in place of the Wowza server address, and your players will now start playing from the nearest CloudFront edge cache.

See? Easy.

Wowza EC2 Capacity Update

It’s been a while since Wowza has updated their EC2 performance numbers (they date back to about 2009), and both Amazon and Wowza have made great improvements to their products. Since I have access to a high-capacity system outside of Amazon’s cloud, I am able to use Wowza’s load test tool on a variety of instance sizes to see how they perform.

The test methodology was as follows:

  • Start up a Wowza instance on EC2 with no startup packages (us-east)
  • Install the server-side piece of Willow (from Solid Thinking Interactive)
  • Configure a 1Mbps stream in Wirecast
  • Monitor the stream in JWPlayer 5 with the Quality Monitor Plugin
  • Configure the Wowza Load Test Tool on one of my Wowza Hotrods located at Softlayer’s Washington DC datacenter
    • Server is 14 hops/2ms from us-east-1
  • Increase the load until:
    • the measured bandwidth on JW player drops below stream bandwidth
    • frame drops get frequent
    • Bandwidth levels out on the Willow Graphs while connection count increases
  • Let it run in that condition for a few minutes

In Willow, it basically looked like this (this was from the m1.small test). You can see ramping up to 100, 150, 200, 250, 275, and 300 streams. The last 3 look very similar because the server maxed out at 250 Mbps. (Yes, the graph says MBytes, that was a bug in Willow which Graeme fixed as soon as I told him about it)

[Graph: Willow bandwidth report]

Meanwhile, here’s what happened on the server: the CPU maxed out.

[Graph: EC2 CPU usage]

So that’s the basic methodology. Here are the results:

[Table: load test results by EC2 instance type]

There are a couple of things to note here. Naturally, if you’re not expecting a huge audience, stick with the m1.small. But the best bang for the buck is the c1.medium (High-CPU Medium), a relatively new instance type that gives you 4x the performance of an m1.small at less than 2x the price. The big surprise here was the m2.xlarge: it performs only marginally better than an m1.small at 4x the price.
All the instances that show 950 are effectively giving you the full benefit of the server’s gigabit connection, maxing out the interface long before the CPU does. In the case of the c1.xlarge, there’s lots of CPU to spare for things like transcoding if you’re using a BYOL image. If you want to go faster, you’ll need to roll your own Cluster Quad or build a load-balanced set.
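A sanity check on those ceilings: the viewer count is simply link speed divided by stream bitrate (using the 1 Mbps test stream from the methodology above):

```shell
LINK_MBPS=950       # maxed-out gigabit interface, per the results table
STREAM_KBPS=1000    # the 1 Mbps test stream
echo "$(( LINK_MBPS * 1000 / STREAM_KBPS )) concurrent viewers, ignoring overhead"
```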

Disclaimers: Your mileage may vary; these are just guidelines, although I think they’re pretty close. I have not tested anywhere but us-east-1, so if you’re using one of Amazon’s other datacenters, you may get different results. I hope to test the other regions soon and see how the results compare.

Creating a global streaming CDN with Wowza

I’ve posted in the past about the various components involved in doing edge/origin setups with Wowza, as well as startup packages and Route 53 DNS magic. In this post, I’ll tie the various pieces together to put together a geographically-distributed CDN using Wowza.

For the purposes of this hypothetical CDN, we’ll do it all within EC2 (although there’s no reason it has to be), with three edge servers in each region.

There are two ways we can do load balancing within each region. One is to use the Wowza load balancer, and the other is round-robin DNS. The latter is not as evenly balanced as the former, but should work reasonably well. If you’re using set-top devices like Roku, you’ll likely want to use DNS, as redirects aren’t well supported in the Roku code.

For each region, create the following DNS entries in Route 53. If you use your registrar’s DNS, you’ll probably want to register a separate domain, as many registrar DNS servers don’t deal well with third-level domains (sub.domain.tld). For the purposes of this example, I’ll use nhicdn.net, the domain I use for Nerd Herd streaming services. None of these hostnames actually exist.

We’ll start with Wowza’s load balancer module. Since the load balancer is not in any way tied to the origin server (although they frequently run on the same system), you can run the load balancer on one of the edge servers in that region. Point loadbalancertargets.txt on each edge server in the region to that server, and point each edge’s Application.xml at the origin stream. Once the load balancer is up, create a simple CNAME entry in Route 53 called lb-<region>.nhicdn.net, pointing to the hostname of the load balancer server:

lb-us-east.nhicdn.net CNAME ec2-10-20-30-40.compute-1.amazonaws.com

If you opt to use DNS for load balancing, again point Application.xml at the origin stream, and then create a weighted CNAME entry for each server in the region, all with the same host record and equal weight (1 works):

lb-us-east.nhicdn.net CNAME ec2-10-20-30-40.compute-1.amazonaws.com
lb-us-east.nhicdn.net CNAME ec2-10-20-30-41.compute-1.amazonaws.com
lb-us-east.nhicdn.net CNAME ec2-10-20-30-42.compute-1.amazonaws.com
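If you prefer the command line to the Route 53 console, the same weighted records can be expressed as a change batch for the AWS CLI. This is a sketch: the zone ID is a placeholder, only the first of the three records is shown (repeat the Changes entry for the -41 and -42 hosts with their own SetIdentifier values), and the actual submission command is shown as a comment.

```shell
# Write a change batch for one weighted CNAME record.
cat > /tmp/wrr-batch.json <<'EOF'
{"Changes": [{"Action": "CREATE", "ResourceRecordSet": {
  "Name": "lb-us-east.nhicdn.net", "Type": "CNAME", "TTL": 60,
  "SetIdentifier": "edge-us-east1", "Weight": 1,
  "ResourceRecords": [{"Value": "ec2-10-20-30-40.compute-1.amazonaws.com"}]}}]}
EOF
grep -c '"SetIdentifier"' /tmp/wrr-batch.json

# Real submission (placeholder zone ID):
# aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file:///tmp/wrr-batch.json
```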

If you wish, you can also create a simple CNAME alias for each server for easier tracking – this is strictly optional:

edge-us-east1.nhicdn.net CNAME ec2-10-20-30-40.compute-1.amazonaws.com
edge-us-east2.nhicdn.net CNAME ec2-10-20-30-41.compute-1.amazonaws.com
edge-us-east3.nhicdn.net CNAME ec2-10-20-30-42.compute-1.amazonaws.com

Once you’ve done this in each region where you want servers, create a set of latency-based CNAME entries pointing at each lb entry:

lb.nhicdn.net CNAME lb-us-east.nhicdn.net
lb.nhicdn.net CNAME lb-us-west.nhicdn.net
lb.nhicdn.net CNAME lb-europe.nhicdn.net
lb.nhicdn.net CNAME lb-brazil.nhicdn.net
lb.nhicdn.net CNAME lb-singapore.nhicdn.net
lb.nhicdn.net CNAME lb-japan.nhicdn.net

Once your DNS entries are in place, point your players at lb.nhicdn.net, using either the name of the edge application (if using DNS) or the redirect application (if using the Wowza load balancer), and clients will be directed to the edge server or load balancer for the closest region.

One other thing to consider is setting up a load balancer on your origin and using it to gather stats from the various servers. You can put multiple load balancers in loadbalancertargets.txt, so each server can report both to its regional load balancer and to a global one that is used only for statistics gathering rather than for redirecting clients. You can also put multiple load balancers in each region for redundancy and use a weighted CNAME entry.

If you would like assistance setting up a system like this for your organization, I am available for hire for consulting and/or training.

Adding EC2 instances to Route53

Today’s nifty bit of code is a startup/shutdown script for Linux that adds an EC2 instance to Amazon’s Route 53 DNS automatically when the instance starts up, and removes it when the instance is shut down. The script can also add the instance to a weighted round-robin group.

This makes use of the very useful Python-based boto tool which is available in both Yum and Debian repositories under the package name python-boto.

Create this script in /etc/init.d and make it executable, and then add it to the requisite rcX.d directory for startup/shutdown.

#!/bin/bash
#       /etc/rc.d/init.d/<servicename>
#       Registers DNS with Route 53
# chkconfig: 345 20 80

# Source function library.
. /etc/init.d/functions

# Best practice here is to create an IAM user/group with access to only
# Route53 and use this keypair.
export AWS_ACCESS_KEY_ID=<Access Key ID Here>
export AWS_SECRET_ACCESS_KEY=<Secret Key ID here>

# Use the domain you have configured in Route 53
export DOMAIN=ianbeyer.com

# This is the hostname of the Weighted Round Robin Group.
export RR=wrr

# Gets the current public hostname from instance metadata - you don't want to
# use an A record with the IP because the CNAME works in conjunction with
# Amazon's internal DNS to correctly resolve instances to their internal address
export PUB_HOSTNAME=`curl -s --fail http://169.254.169.254/latest/meta-data/public-hostname`

# Gets the Route 53 Zone ID
export ZONEID=`route53 ls | awk '($2 == "ID:"){printf "%s ",$3;getline;printf "%s\n",$3}' | grep $DOMAIN | awk '{print $1}'`

start() {
        echo -n "Registering host with Route 53 : "

        # This is the base name of the host you want to use. It will increase
        # the index number until it finds one that's not in use, and then use that.
        HOST=amz
        HOSTINDEX=1

        INUSE=`route53 get $ZONEID | grep ${HOST}${HOSTINDEX}\.${DOMAIN} | wc -l`
        while [[ $INUSE -gt 0 ]]; do
                HOSTINDEX=$((HOSTINDEX + 1))
                INUSE=`route53 get $ZONEID | grep ${HOST}${HOSTINDEX}\.${DOMAIN} | wc -l`
        done

        # Set the new fqdn hostname and shortname from the first free index
        FQDN=${HOST}${HOSTINDEX}.${DOMAIN}

        # Set the instance hostname -- If you want to make sure that bash
        # updates the prompt, run "exec bash" after the script. If you do so
        # in the script, bad stuff happens on startup.
        hostname $FQDN
        echo -n $FQDN

        # Add the DNS record (route53 prints "PENDING" on success)
        RESULT=`route53 add_record $ZONEID $FQDN CNAME $PUB_HOSTNAME | grep "PENDING"`
        if [[ "$RESULT" == "" ]]; then
                echo "... failed."
        else
                echo "... success."
        fi

        # Add the weighted CNAME record
        echo -n "Adding host to round-robin group..."

        # Checking to make sure it's not already there.
        CNAME=`route53 get $ZONEID | grep $RR | grep $PUB_HOSTNAME | wc -l`
        if [[ $CNAME -eq 0 ]]; then
                RESULT=`route53 add_record $ZONEID $RR.$DOMAIN CNAME $PUB_HOSTNAME 60 ${HOST}${HOSTINDEX} 1 | grep "PENDING"`
                if [[ "$RESULT" == "" ]]; then
                        echo "... failed."
                else
                        echo "... success."
                fi
        else
                echo "already exists, ignoring"
        fi
}

stop() {
        echo -n "Deregistering host with Route 53 : "
        HOST=`hostname | cut -f1 -d.`
        FQDN=`hostname`

        # check to make sure it exists in Route53
        CNAME=`route53 get $ZONEID | grep $FQDN | grep $PUB_HOSTNAME | wc -l`
        if [[ $CNAME -gt 0 ]]; then
                RESULT=`route53 del_record $ZONEID $FQDN CNAME $PUB_HOSTNAME | grep "PENDING"`
                if [[ "$RESULT" == "" ]]; then
                        echo "... failed."
                else
                        echo "... success."
                fi
        else
                echo "... not found, ignoring"
        fi

        echo -n "Deregistering host from RR CNAME..."

        # Checking to make sure it exists
        CNAME=`route53 get $ZONEID | grep $RR | grep $PUB_HOSTNAME | wc -l`
        if [[ $CNAME -gt 0 ]]; then
                RESULT=`route53 del_record $ZONEID $RR.$DOMAIN CNAME $PUB_HOSTNAME 60 ${HOST} 1 | grep "PENDING"`
                if [[ "$RESULT" == "" ]]; then
                        echo "... failed."
                else
                        echo "... success."
                fi
        else
                echo "... not found, ignoring"
        fi

        # resets the hostname to the metadata default
        hostname `curl -s --fail http://169.254.169.254/latest/meta-data/local-hostname`
}

case "$1" in
        start)
                start
                ;;
        stop)
                stop
                ;;
        restart)
                stop
                start
                ;;
        *)
                echo "Usage: <servicename> {start|stop|restart}"
                exit 1
                ;;
esac
exit $?

Many thanks to Dave McCormick for his blog post on the subject from which I borrowed heavily.

Converting EC2 S3/instance-store image to EBS

Amazon’s instance-store images are convenient, but ephemeral in nature: once you terminate them, they’re history. If you want persistence of data, you want an EBS-backed instance that can be stopped and started at will without losing your info. Here’s the process I went through to convert a Wowza image to EBS for a client to use with a reserved instance. I’m going to assume no configuration changes for Wowza Media Server, as the default startup package is fairly full-featured. This process works for any other instance-store AMI; just ignore the Wowza bits if that’s your situation.

Boot up a 64-bit Wowza lickey instance. I was working in us-east-1, so I used ami-e6e4418f, which was the latest as of this blog post.

Once it’s booted up, log in.

Elevate yourself to root. You deserve it:

sudo su -

Stop the Wowza service:

service WowzaMediaServer stop

Delete the Server.guid file so that new instances regenerate their GUID:

rm /usr/local/WowzaMediaServer/conf/Server.guid

Go into the AWS management console and create a blank EBS volume in the same zone as your instance.

Attach that volume to your instance (I’m going to assume /dev/sdf here)

Create a filesystem on it (note: while the console refers to it as /dev/sdf, Amazon Linux uses the Xen virtual disk notation /dev/xvdf):

mkfs.ext4 /dev/xvdf

Create a mount point for it, and mount the volume:

mkdir /mnt/ebs
mount /dev/xvdf /mnt/ebs

Sync the root and dev filesystems to the EBS disk:

rsync -avHx / /mnt/ebs
rsync -avHx /dev /mnt/ebs

Label the disk:

tune2fs -L '/' /dev/xvdf

Flush all writes and unmount the disk:

sync;sync;sync;sync && umount /mnt/ebs

Using the web console, create a snapshot of the EBS volume. Make a note of the snapshot ID.

Still in the web console, go to the instances and make a note of the kernel ID your instance is using. This will be aki-something. In this case, it was aki-88aa75e1.

For the next step, you’ll need an EC2 X.509 certificate and private key. You get these through the web console’s “Security Credentials” area. This is NOT the private key you use to SSH into an instance. You can have as many as you want, just keep track of the private key because Amazon doesn’t keep it for you. If you lose it, it’s gone for good. Download both the private key and certificate. You can either upload them to the instance or open them in a text editor and copy the text, and paste it into a file. Best place to do this is in /root. Once you have the files, set some environment variables to make it easy:

export EC2_CERT=`pwd`/cert-*.pem
export EC2_PRIVATE_KEY=`pwd`/pk-*.pem

Once this is done, you’ll need to register the snapshot as an AMI. It’s important here to specify the root device name and to map out the ephemeral storage, as Wowza uses it for content and logs. Ephemeral storage persists through a reboot, but not a termination; if you have data that needs to persist through termination, use an additional EBS volume.

ec2-register --snapshot [snapshot ID] --description "Descriptive Text" --name "Unique-Name" --kernel [kernel ID] --block-device-mapping /dev/sdb=ephemeral0 --block-device-mapping /dev/sdc=ephemeral1 --block-device-mapping /dev/sdd=ephemeral2 --block-device-mapping /dev/sde=ephemeral3 --architecture x86_64 --root-device-name /dev/sda1

Once it’s registered, you should be able to boot it up and customize to your heart’s content. Once you have a configuration you like, right-click on the instance in the AWS web console and select “Create Image (EBS AMI)” to save to a new AMI.

Note: As of right now, I don’t think startup packages are working with the EBS AMI. I don’t know if they’re supposed to or not.