Ian doing a Site Survey

“We want wi-fi. Now what?”

I’ve been spending the past week at the annual Wireless LAN Professionals Conference in Phoenix. This is one of my favorite conferences along with the Church IT Network conference, because I get to spend a couple of days geeking out hard with a whole bunch of REALLY smart people. The amount of information I’ve stuffed into my brain since last Friday is a little bit, well, mind-blowing…

I spent the first 3 days getting my Ekahau Certified Survey Engineer credential. For those who are not familiar with the Wi-Fi side of my consulting practice, Ekahau Site Survey is a fantastic tool for developing predictive RF designs for wireless networks, allowing me to optimize the design before I ever pull any new cable or hang access points. One of the key points that’s been touched on frequently throughout the training and the conference is what was termed by one attendee as the “Sacred Ritual of the Gathering of Requirements”. It sounds silly, but this one step is probably the single most important part of the entire process of designing a wireless network.

In the church world (and in the business world), your mission statement is what informs everything you do. Every dollar you spend, every person you hire, every program you offer, should in some way support that mission focus in a clearly defined and measurable manner. A former boss (and current client) defines his IT department’s mission like this: “Our users’ mission is our mission.” This clearly laid out that in IT, we existed to help everyone else accomplish their mission, which in turn accomplished the organization’s mission.

I’ve had more than a few clients say initially that their requirement is “we want wi-fi”. My job as a consultant and an engineer is to flesh out just what exactly “wi-fi” means in your particular context, so that I can deliver a design and a network that will make you happy to write the check at the end of the process. I can’t expect a client to know what they want in terms of specific engineering elements relating to the design. If they did, I’d already be redundant.

Whiteboard

Photo: Mitch Dickey/@Badger_Fi

During the conference, someone put up a whiteboard with the following question:

“What are the top key questions to ask a client in order to develop a WLAN design or remediation?”

The board quickly filled up, and I’ll touch on a few really important ones here:

“What do you expect wi-fi to do for you? What problem does it solve?”

It was also stated as:

“What is your desired outcome? How does it support your business?”

This is one of the fundamental questions. It goes back to your mission statement. Another way of putting it is “How do you hope to use the wi-fi to support your mission?” What you hope to do with wi-fi will drive every single other design decision. The immediate follow-up should be a series of “why?” questions to get to the root cause of why these outcomes are important to the business goals. You can learn an awful lot by asking “why?” over and over like a 4-year-old child trying to understand the world. This is critical for managing expectations and delivering what the client is paying you a large sum of money to do.

“What is your most critical device/application?”

“What is your least capable and most important device?”

“What other types of devices require wi-fi?”

“What type of devices do your guests typically have?”

It’s nice to have shiny new devices with the latest and greatest technology, but if the wi-fi has to work for everyone, your design has to account for the least capable device that still matters, and build around it. If you use a bunch of “vintage” Samsung Galaxy phones for barcode scanning or checking in children, then we need to make sure that the coverage will be adequate everywhere you need to use them, and that you select the proper spectrum to support those devices. For the guest network, having at least a rough idea of what mix of iOS and Android devices the guests bring into the facility can inform several design choices.

“What regulatory/policy constraints are there on the network?”

This is hugely important. Another mantra I’ve heard repeated often is, “‘Because you can’ is NOT a strategy!” If your network has specific regulatory requirements such as PCI-DSS or HIPAA, any number of industry-specific policies, or even just organizational practices about guest hospitality, network access, etc., these also need to factor into the design and planning process.

I have one client whose organization is a church that is focused on a 5-star guest experience. In terms of Wi-Fi, this meant they did not want to name their SSIDs with the common “Guest” and “Staff” monikers. The reasoning was that merely naming the private LAN SSID “Staff” would plant in a guest’s mind the idea that there are two classes of people, one of which may get better network performance because its members are among the elect. It’s also a challenge when you have a lot of volunteers who perform staff-like functions and who need access to the LAN. Ultimately, we simply called this network “LAN”. It’s meaningful to the IT staff, and once the staff is connected to it, they no longer think about it. Something as simple as the SSID list presented by a wi-fi beacon is an important consideration in the overall guest experience.

“What is your budget?”

This one is so obvious it’s often overlooked. As engineers, we like to put shiny stuff into our designs. The reality is, most customers don’t have a bottomless pit of money, especially when they’re non-profits relying on donated funds. While I’d love to design a big fancy Ruckus or Aruba system everywhere I go, that’s probably overkill for a lot of places, when a Ubiquiti or EnGenius system will meet all the requirements.

“What are the installation constraints?”

“Which of those constraints are negotiable? Which aren’t?”

Another obvious one that is often overlooked. You need to know when the installation can happen (or can’t happen), whether there are rooms that are off-limits or potential mounting locations that are inaccessible, areas that can’t support a lift, or areas that you simply can’t get cable to without major work. Aesthetics can be a significant factor for both AP selection and placement, wiring, and even configuration (such as turning off the LEDs). While one particular AP may be technically suited to a particular location, how it looks in the room may dictate the choice of something else.

“What is your relationship with your landlord/neighbors/facility manager like?”

I kid you not, this is a bigger factor than you might think. In an office building, being a good wifi neighbor is an important consideration. If the landlord is very picky about where and how communications infrastructure is installed outside the leased space (such as fiber runs through hallways, roof access, antennas outside the building, extra lease charges for technology access), you may encounter some challenges. If your facility manager is particular about damage, you need to factor that into the process as well. This likely also will come into play when you’re doing your site surveys and need access to some parts of the building.

There are a whole host of follow-up questions beyond these that focus on the more technical aspects of the requirements gathering, and your client may or may not have an answer:

“How many people does this need to support at one time?”

“Where are all these people located?”

“When are they in the building?”

“Where do you need coverage?”

“Where do you NOT need coverage?”

“What is your tolerance level for outages/downtime?”

… and many more that you will develop during this sacred requirements gathering ritual. Many of the technical aspects of the environment (existing RF, channel usage, airtime usage, interference sources, etc.) don’t need to be asked of the client, as you will find them during your initial site survey.

If you’re a wifi engineer, having these questions in your mind will help you develop a better design. If you’re the client, having answers to these questions available will help you get a better design.

What questions are important to your network? Sound off below!

Video Control Room

Automating Video Workflows With PowerShell

Linking today to some great content from another Ian (ProTip: get to know an Ian, we’re full of useful knowledge). Ian Morrish posts about a variety of methods of automating A/V equipment using PowerShell. Lots of useful stuff in here.

No Windows? No worries, you can install PowerShell on MacOS and Linux too.

I’ve put some feelers out to some of my streaming equipment vendors to find out what kind of automation hooks and APIs they support.

Meanwhile, Wowza has a REST API for both its Streaming Engine and Cloud products. Integrating this into PowerShell should be relatively straightforward. Any PowerShell wizards wanna take a stab at it?
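In the meantime, here’s a minimal sketch of what that might look like, assuming a local Streaming Engine instance with the REST API enabled on its default port of 8087 and no authentication configured (production servers typically use digest auth, so you’d add -Credential; the field names here are from the 4.x API and may vary):

# List the applications configured on a local Wowza Streaming Engine server.
# Assumes the REST API is enabled on the default port (8087) with auth disabled.
$engine = "http://localhost:8087"
$uri = "$engine/v2/servers/_defaultServer_/vhosts/_defaultVHost_/applications"
$apps = Invoke-RestMethod -Uri $uri -Headers @{ Accept = "application/json" }
$apps.applications | Select-Object id, appType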

Stay tuned.

 


Going Serverless: Office 365

I recently completed a project for a small church in Kansas. Several months ago, the senior pastor asked me for a quote on a Windows server to provide authentication as well as file and print services. During the conversation, a few things became clear:

  1. Their desktop infrastructure was completely on Windows 10. Files were being kept locally or in a shared OneDrive account.
  2. The budget they had for this project was not going to allow for a proper server infrastructure with data protection, etc.
  3. This church already uses a web-based Church Management System, so they’re somewhat used to “the cloud” already as part of their workflows.

One of the key features of Windows 10 is the ability to log in to your desktop with an Office 365 account (Windows 8 allowed this against a Microsoft Live account). Another is that for churches and other nonprofits, Office 365 is free of charge on the E2 plan.

I set about seeing how we could go completely serverless and provide access not only to the staff for shared documents, but also give access to key volunteer teams and church committees.

The first step was to make sure everybody was on Windows 10 Pro (we found a couple of machines running Windows 10 Home). Tech Soup gave us inexpensive access to licenses to get everyone up to Pro.

Then we needed to make sure the internet connection and internal networking at the site were sufficient to take their data to the cloud. We bumped up the internet speed and overhauled the internal network, replacing a couple of consumer-grade unmanaged switches and access points with a Ubiquiti UniFi solution for the firewall/router, network switch, and access points. This allows me and key church staff to remotely manage the network, as the UniFi controller runs on an Amazon Web Services EC2 instance (t2.micro). This new network also gave the church the ability to offer guest wifi access without compromising their office systems.

The next step was to join everyone to the Azure domain provided by Office 365. At this point, all e-mail was still on Google Apps, until we made the cutover.

Once we had login authentication in place, I set about building the file sharing infrastructure. OneDrive seemed to be the obvious solution, as they were already using a shared OneDrive For Business account.

One of OneDrive’s biggest challenges is that, like FedEx, it is actually several different products trying to behave as a single, seamless product. At this, OneDrive still misses the mark. The OneDrive brand consists of the following:

  • OneDrive Personal
  • OneDrive for Business
  • OneDrive for Business in Office 365 (a product formerly known as Groove)
  • Sharepoint Online

All the OneDrive for Business stuff is Sharepoint/Groove under the hood. If you’re not on Office 2016, you’ll want to make the upgrade, because getting the right ODB client in previous versions of Office is a nightmare. Once you get it sorted, it generally works. If you’ve got to pay full price for O365, I would recommend DropBox for Business as an alternative. But it’s hard to beat the price of Office 365 when you’re a small business.

It is very important to understand some of the limitations of OneDrive for Business versus other products like DropBox for Business. Your “personal” OneDrive for Business files can be shared with others by sending them a link, and they can download the file, but you can’t give other users permission to modify them and collaborate on a document. For this, you need to go back to the concept of shared folders, and ODB just doesn’t do this. This is where Sharepoint Online comes in to play.

Naturally, this being Sharepoint, it’s not the easiest thing in the world to set up. It’s powerful once you get it going, but I wasn’t able to simply drop all the shared files into a single Sharepoint document library: the software imposes a 5,000-item limit. Because the church’s shared files included a photo archive, there were WAY more than 5,000 files in it.

Sharepoint is very picky about getting the right information architecture (IA) set up to begin with. Some things you can’t change after the fact, if you decide you got them wrong. Careful planning is a must.

What I ended up doing for this church is creating a single site collection for the whole organization, and several sites within that collection for each ministry/volunteer team. Each site in Sharepoint has 3 main security groups for objects within a site collection:

  • Visitors (Read-Only)
  • Members (Read/Write)
  • Owners (Read/Write/Admin)

In Office 365, much as with on-premises Active Directory, you’re much better off creating your security groups outside of Sharepoint and then adding those groups to the security groups that are created within Sharepoint. So in this case, I created a “Worship Production” team, added the team members to the group, and then added that group to the Worship Site Owners group in Sharepoint. The Staff group was added to all the Owners groups, and the Visitors groups were left empty in most cases. This makes group membership administration substantially easier for the on-site admin who will be handling user accounts most of the time. It’s tedious to set up, but once it’s going, it’s smooth sailing.
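If you’d rather script that setup, the MSOnline PowerShell module can create the groups and populate them. A rough sketch with hypothetical names, assuming you’ve already run Connect-MsolService (adding the resulting group to the Sharepoint site’s Owners group is still done in the Sharepoint UI):

# Create the team's security group in Azure AD / Office 365
$group = New-MsolGroup -DisplayName "Worship Production" -Description "Worship production team"

# Add a member; repeat for each person on the team
$user = Get-MsolUser -UserPrincipalName "volunteer@example.org"
Add-MsolGroupMember -GroupObjectId $group.ObjectId -GroupMemberType User -GroupMemberObjectId $user.ObjectId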

Once the security permissions were set up for the various team sites, I went into the existing flat document repository and began moving files to the Sharepoint document libraries. The easiest way to do this is to go to the library in Sharepoint, and click the “Sync” button, which then syncs them to a local folder on the computer, much like OneDrive (although it’s listed as Sharepoint). There is no limit to how many folders you can sync to the local machine (well, there probably is, but for all practical purposes, there isn’t). From there it’s a matter of drag and drop. For the photos repository, I created a separate document library in the main site, and told Sharepoint it was a photo library. This gives the user some basic Digital Asset Management capabilities such as adding tags and other metadata to each picture in the library.

So far, it’s going well, and the staff enjoys having access to their Sharepoint libraries as well as Microsoft Office on their mobile devices (iOS and Android). Being able to work from anywhere also gives this church some easy business continuity should a disaster befall the facility — all they have to do is relocate to the local café that has net access, and they can continue their ministry work. Their data has now been decoupled from their facility. I have encountered dozens of churches over the years whose idea of data backup is either “what backup?” or a hard drive sitting next to the computer 24×7, which is of no use if the building burns to the ground or is spontaneously relocated to adjacent counties by a tornado. The staff doesn’t have to worry about the intricacies of running Exchange or Sharepoint on Windows Small Business Server/Essentials. Everything is a web-based administrative panel, and support from Microsoft is excellent in case there’s trouble.

If you’re interested in how to take your church or small business serverless, contact me and I’ll come up with a custom solution.


Controlling Audio With ProPresenter

Our church is a small one, so it’s not always easy to fully staff our tech booth, and sometimes one must fly solo. That adds to the workload, and sometimes stuff gets forgotten, like unmuting microphones for the choir or the person reading the scripture.

Fortunately, there is some tech that can help us in this regard. We use ProPresenter for our graphics presentation, and an Allen & Heath Qu-24 console for our audio. The Qu-24 is connected to the Mac that runs ProPresenter via a USB cable, and shows up on the Mac as a 32-in/32-out audio device, as well as a MIDI device. This is primarily so the console can act as a multitrack and DAW interface, but it also lets us play back audio from ProPresenter media cues without ever leaving the digital domain, saving us a couple of inputs on the board (although there’s no shortage of those). And because it’s also a MIDI device, this gives us some options with ProPresenter’s $99 MIDI module add-on. The Qu series boards can also do MIDI over IP (in fact, the Qu-Pad remote control app for iPad uses MIDI over IP to work its magic). If you’re using MIDI over IP with a Mac, you’ll need a special driver; no driver is needed for USB.

First, a few resources we’ll need:

In the Qu series, mutes and mute groups are controlled by a Note On/Off message sequence. The specific note determines the channel or mute group being controlled, and the velocity value determines whether it’s being turned on (muted) or off (unmuted): velocity values below 64 turn the mute off, and 64 and above turn it on. On the wire that’s a three-byte message; assuming the console is listening on MIDI channel 1, 0x90 0x50 0x7F (Note On, note 80, velocity 127) mutes whatever is assigned to note 80.

Meanwhile, over in ProPresenter, since Version 6, we have the ability to add MIDI Note On/Off cues to a slide. See where this is going? Unfortunately, ProPresenter doesn’t have the ability to do anything other than MIDI notes in a slide at the moment, so we can’t get really crazy with starting recordings or anything else requiring non-note MIDI messages.

So how do we know what notes emulate button presses? The documentation provides this handy method:

OK, this requires thinking and math. Not so helpful. This is where the MIDI monitor comes in. Download it and run it, and it shows everything coming across the MIDI interface. Push the button you’re interested in, and lo, MIDI Monitor helpfully shows you the note it sends:

In this case, G#4 is the mute group for our choir. A4 is the mute group for the speaking mics on the chancel. A1 is the lectern mic.

So now, to be able to add a cue at the beginning of a song the choir is singing, I simply have to add two cues to the first slide to turn on the choir microphones:

  • NOTE ON, G#4(80), 63
  • NOTE OFF, G#4(80)

Then I can add a slide at the end of the playlist entry that then turns them back off, or add these to the beginning of the next playlist entry:

  • NOTE ON, G#4(80), 127
  • NOTE OFF, G#4(80)

Likewise, when someone is at the lectern reading scripture, I can unmute that channel automatically using the corresponding note number, and mute them again when they’re done.

On the flip side, you can also use note on/off commands to control ProPresenter. So you *could* also use the Mute, SEL, and PAFL buttons on unused channels to trigger things in ProPresenter (you also want to make sure that you don’t overlap these with the mutes and mute groups that you are actively using so as not to inadvertently advance a slide when hurriedly muting a channel). ProPresenter also conveniently tells you what the last note sent was, so you can actively push the button you want to use, make a note of its number, and put it in the action you wish.

 

Another approach you can take is to create a presentation in ProPresenter containing blank slides with the various functions you wish to use. Then you can copy these slides into presentations and add a Go To Next timer to them to automatically advance to the next slide. I would also recommend using slide labels and colors to clearly identify what each slide is doing:


 

If you have controllable lighting and your lighting console also has MIDI capability, this comes in handy as well. And if you’re really a one-man band, and like to do things like pads underneath certain worship elements, you can use this to trigger those as well. But if you get to that point, you may want to look into QLab to control all of them at the same time.

So there you have it: a quick and easy way to automate some of your workload with the Qu series boards. If you’ve got another board that you use, let me know in the comments if you do (or would like to do) something like this. I’d also love to hear if anyone is using hardware MIDI controllers like the Novation Launchpad and how you have it set up.

Additional Info:

Summary of MIDI Messages (midi.org)

Microsoft Azure

Nonprofit Tech Deals: Microsoft Azure

Last week while I was at the Church IT Network National Conference in Anderson, SC, a colleague pointed me to a fantastic donation from Microsoft via TechSoup: $5000/year in Azure credit. At a hair over $400/month, this means you can run a pretty substantial amount of stuff. Microsoft just announced this program at the end of September, so it’s still very new. And very cool. Credits are good any time within the 12-month period, so you don’t have to split them up month by month. They do not, however, roll over to the following year.

The context of the conversation was hosting the open-source RockRMS Church/Relationship Management System, but Wowza Streaming Engine is also available ready to go on Azure. And many other things. (And for those of us in the Midwest, Microsoft’s biggest Azure datacenter is the “Central US” region in the Des Moines area, as Iowa is currently a very business-friendly place to put a huge datacenter.)

If you’re a registered 501c3 non-profit (or your local country’s equivalent if you’re outside the US), head on over to Tech Soup to take advantage of this fantastic deal.

As an added bonus, if you have Windows Server Datacenter licenses from TechSoup, or licenses your organization purchased with Software Assurance, each 2-socket license can be run on up to two Azure compute instances with up to 8 virtual cores each, reducing the cost of your instances even further (standard Windows instances include the cost of the Windows license, even at nonprofit prices). This also applies to SQL Server.

Here’s the process:

  1. Read the FAQ.
  2. Register your organization with TechSoup if you haven’t already done so.
  3. Head over to Microsoft’s Azure Product Donations page and hit “Get Started”
  4. At some point in the process you’ll also want to create an Azure account to associate the credits with. If you’re already using Office 365 for nonprofits, it’s best to tie an account to your O365 domain.

Wowza Stream Scheduler Hacks: Google Calendar

One of Wowza’s most underutilized yet most powerful features is the stream scheduler. I’ve blogged about it extensively in the past, and I’ll return from a long hiatus to do it again.

To recap some of the things you can do with this add-on:

  • Create a virtual stream that plays a loop of server-side content
  • Play a sequence of video content (think TV programming)
  • A combination of both
  • Play portions of a video file (in/out points)
  • In combination with the LoopUntilLive module, do all that and then interrupt with a live stream

This gives you the ability to have a continuous 24/7 stream of programming including advertising. The output of this schedule is then treated by Wowza like any other stream, meaning it can be used as input to a transcoder, nDVR, or sent somewhere with Stream Targets.

The challenge we run into is that building the schedule in XML is not the most obvious thing in the world as there is not currently any integration of the module into the Wowza Streaming Engine Manager’s GUI.

As the schedule is written as a SMIL file (a specific XML schema) in an application’s content directory, updating it requires either logging in to the server and manipulating files with a text editor, or uploading into the content directory.

The other way is to build the schedule programmatically. Command-line PHP is an easy way to do this, as PHP has some excellent XML processing tools.

If you want to peek at the Java code for the scheduler module, Wowza has it up on GitHub.

A quick recap of the structure of the stream scheduler’s XML Schema:

  • The entire file is wrapped in <SMIL></SMIL> tags to indicate that this is in fact a SMIL file.
  • an empty <HEAD/> block – Wowza doesn’t currently make use of anything in here, but it’s a good place to put comments, and it makes for good XML.
  • The meat of the file, a <BODY></BODY> block that contains all the good stuff.
  • Within the body block, there are two key element types:
    1. One or more <STREAM> blocks that define the names of the virtual streams created by the schedule.
    2. One or more <PLAYLIST> blocks that define the content and timing of what gets published. Each playlist tag specifies the following attributes:
      • name: the name of the playlist. This is arbitrary but should be unique within the file.
      • playOnStream: which of the streams created in the <STREAM> block this playlist’s content will go to.
      • repeat: a boolean (true/false) value that specifies whether this playlist loops until something else happens. If a non-repeating playlist runs out of content, the virtual stream will stop.
      • scheduled: the date and time (based on the server timezone) this playlist will be published to the stream, in ISO 8601 format without the T delimiter (YYYY-MM-DD HH:MM:SS).
  • Within each <PLAYLIST> block are one or more <VIDEO> tags with the following attributes:
      • src: the path and filename (relative to the application’s content directory) of the video file to play. This should be prefixed with mp4: as you would any other video file within Wowza. You can also put in the name of a live stream published within the same application.
      • start: the offset (in seconds) from the beginning of the file where playback is to begin.
      • length: play duration (in seconds) from the start point. A value of -1 plays to the end of the file. A value of -2 indicates that this is a live stream.
  • When the end of an item is reached, playback moves to the next element in the playlist. If there is no more content, the playlist either loops (if repeat is set to true) or stops, and if nothing further is on the schedule, the stream will unpublish and stop. For a non-repeating playlist, it’s generally a good idea to put a buffer video (a few minutes of black video or a logo works just fine) at the end to fill any gaps until the next playlist.
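Putting that all together, here’s what a minimal schedule might look like (a sketch; the stream and file names are hypothetical). It creates one virtual stream with a repeating fallback loop that starts when the server does, plus a single scheduled playlist:

<smil>
  <head/>
  <body>
    <stream name="mystream"/>
    <playlist name="fallback" playOnStream="mystream" repeat="true" scheduled="2016-01-01 00:00:01">
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
    <playlist name="sunday-service" playOnStream="mystream" repeat="false" scheduled="2016-10-23 11:00:00">
      <video src="mp4:service.mp4" start="0" length="-1"/>
    </playlist>
  </body>
</smil>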

So, the schedule is pretty straightforward, but it can get tedious to build. I previously posted about a way to generate this with a spreadsheet in Excel. This is clunky, but can save a lot of typing, and is good for repeating events.

But this lacked a good visual interface. As I was working on a project for a client to translate a schedule generated from their video content management system into the Wowza Stream Scheduler’s XML, it occurred to me that there was another structured schedule format that could be translated easily into XML: iCal. This calendar format is defined in RFC 2445 and is widely used by many calendaring systems.

Unfortunately, iCal is not XML to begin with (its vCalendar roots predate XML), which would be WAY too easy. Here is a sample of iCal data out of Google Calendar that contains two events (Google used to make their calendar shares available in XML, but it seems that is no longer the case):

BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Wowza Event Scheduler Calendar
X-WR-TIMEZONE:America/Chicago
X-WR-CALDESC:
BEGIN:VEVENT
DTSTART:20161017T160000Z
DTEND:20161017T170000Z
DTSTAMP:20161016T164145Z
UID:xxxxxx@google.com
CREATED:20161012T212924Z
DESCRIPTION:mp4:video1.mp4\,0\,-1
LAST-MODIFIED:20161016T164138Z
LOCATION:teststream
SEQUENCE:3
STATUS:CONFIRMED
SUMMARY:11am Broadcast
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
DTSTART:20161017T170000Z
DTEND:20161017T180000Z
DTSTAMP:20161016T164145Z
UID:xxxxxx@google.com
CREATED:20161016T164116Z
DESCRIPTION:mp4:video2.mp4\,0\,1800\nmp4:video3.mp4\,0\,1800
LAST-MODIFIED:20161016T164118Z
LOCATION:teststream
SEQUENCE:1
STATUS:CONFIRMED
SUMMARY:Noon Broadcast
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR

As you can see, this has some hints of XML: Opening and closing tags, attributes, and the like. Fortunately, Evert Pot wrote a handy little PHP function to make the conversion to XML.

One of the really nice things about JSON and XML in PHP is that the objects that contain them work just like any other nested arrays, so extracting specific items is ridiculously easy. There’s a lot of data within the VEVENT block that we just aren’t interested in. We really only care about the start and stop times, and a few other fields like DESCRIPTION, LOCATION and SUMMARY, which we can hack to contain the names of the streams and content. In this example, I use DESCRIPTION to hold the video file names, one per line (with additional comma-separated data for start and end points), and LOCATION to specify what stream the playlist should be published on. SUMMARY is used as the playlist name attribute. There are a number of other iCal fields that could be used for this as well.

In order to use this data, we need to do the following:

  • Use the start/end times to calculate a duration
  • Make a list of the streams to publish to
  • Figure out what video to play when
  • Convert datestamps to the local server time

For starters, we’re going to need to set a few defaults:

ini_set("allow_url_fopen", 1);           // let file_get_contents() fetch the remote calendar URL
error_reporting(0);                      // suppress notices when optional attributes are missing
date_default_timezone_set("US/Central"); // match the Wowza server's timezone

Using Evert’s conversion function, we get the schedule into an XML object:

$calUrl = "https://calendar.google.com/calendar/ical/xxxxxxxxxxxxx8%40group.calendar.google.com/private-xxxxx/basic.ics";
// get your private calendar URL from the calendar settings. 
$CalData=file_get_contents($calUrl);
$xmlString=iCalendarToXML($CalData);
$xmlObj = simplexml_load_string($xmlString);

The object now looks like this:

SimpleXMLElement Object
(
    [PRODID] => -//Google Inc//Google Calendar 70.9054//EN
    [VERSION] => 2.0
    [CALSCALE] => GREGORIAN
    [METHOD] => PUBLISH
    [X-WR-CALNAME] => Wowza Event Scheduler Calendar
    [X-WR-TIMEZONE] => America/Chicago
    [X-WR-CALDESC] => SimpleXMLElement Object
        (
        )

    [VEVENT] => Array
        (
            [0] => SimpleXMLElement Object
                (
                    [DTSTART] => 20161017T160000Z
                    [DTEND] => 20161017T170000Z
                    [DTSTAMP] => 20161016T182755Z
                    [UID] => 72klt8s5ssrbjp9ofdk8ucovoo@google.com
                    [CREATED] => 20161012T212924Z
                    [DESCRIPTION] => mp4:video1.mp4\,0\,-1
                    [LAST-MODIFIED] => 20161016T164138Z
                    [LOCATION] => teststream
                    [SEQUENCE] => 3
                    [STATUS] => CONFIRMED
                    [SUMMARY] => 11am Broadcast
                    [TRANSP] => OPAQUE
                )

            [1] => SimpleXMLElement Object
                (
                    [DTSTART] => 20161017T170000Z
                    [DTEND] => 20161017T180000Z
                    [DTSTAMP] => 20161016T182755Z
                    [UID] => ac3lgjmjmijj2910au0fnv5vig@google.com
                    [CREATED] => 20161016T164116Z
                    [DESCRIPTION] => mp4:video2.mp4\,0\,1800\nmp4:video3.mp4\,0\,1800
                    [LAST-MODIFIED] => 20161016T164118Z
                    [LOCATION] => teststream
                    [SEQUENCE] => 1
                    [STATUS] => CONFIRMED
                    [SUMMARY] => Noon Broadcast
                    [TRANSP] => OPAQUE
                )

        )

)

So now we need to create another XML object for our schedule and give it the basic structure:

$smilXml = new SimpleXMLElement('<smil/>');
$smilHead = $smilXml->addChild('head');
$smilBody = $smilXml->addChild('body');

Now we need to iterate once through the VEVENT objects to get stream names:

$playOnStream = [];

foreach ($xmlObj->VEVENT as $event) {
        $loc = $event->LOCATION;
        $playOnStream["$loc"]=true;
        // We don't really care about the value of this array element, as long as it exists.
        // This way we only get one array element for each unique stream name
}

// Iterate through the list of streams and create them in the SMIL
foreach ($playOnStream as $key => $value) {
        $smilStream = $smilBody->addChild('stream');
        $smilStream->addAttribute('name',$key);
}

So now we have the beginnings of a schedule:

<?xml version="1.0"?>
<smil>
  <head/>
  <body>
    <stream name="teststream"/>
  </body>
</smil>

We now need to iterate through the list again to add a fallback item for each stream, which starts playing when the stream starts (this is done as a separate loop to keep the output XML cleaner):

// Add in default fallback entries
foreach ($playOnStream as $key => $value) {
        $defaultPl=$smilBody->addChild('playlist');
        $defaultPl->addAttribute('name',"default-$key");
        $defaultPl->addAttribute('playOnStream',$key);
        $defaultPl->addAttribute('repeat','true');
        $defaultPl->addAttribute('scheduled',"2016-01-01 00:00:01");
        $contentItem = $defaultPl->addChild('video');
        $contentItem->addAttribute('src','mp4:padding.mp4');
        $contentItem->addAttribute('start','0');
        $contentItem->addAttribute('length','-1');

}

Which then gives us these new items:

<smil>
  <head/>
  <body>
    <stream name="teststream"/>
    <playlist name="default-teststream" playOnStream="teststream" repeat="true" scheduled="2016-01-01 00:00:01">
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
  </body>
</smil>

And then we need to iterate again through the VEVENTS to create the actual schedule items:

foreach ($xmlObj->VEVENT as $event) {

        //parse the times into Unix time stamps using the ever-useful strtotime() function;
        $eventStart = strtotime($event->DTSTART);
        $eventEnd = strtotime($event->DTEND);

        //format them into the ISO 8601 format for use in the schedule
        //Note that we're using H:i:s rather than h:i:s because 24-hour time is important here
        $start = date("Y-m-d H:i:s", $eventStart);
        $end = date("Y-m-d H:i:s", $eventEnd);

        //extract summary for playlist name
        $plName = $event->SUMMARY;
        $plLoc = $event->LOCATION;

        //extract description for content
        $description = $event->DESCRIPTION;
        $videos=preg_split('/\\\\n/',$description);
        // add on a padding video at the end of this list
        $videos[]="mp4:padding.mp4";

        //create playlist
        $playlist = $smilBody->addChild('playlist');
        $playlist->addAttribute('name',$plName);
        $playlist->addAttribute('playOnStream',$plLoc);
        $playlist->addAttribute('repeat','false');
        $playlist->addAttribute('scheduled',$start);

        //iterate through playlist items
        foreach($videos as $plItem) {
                echo "$plItem\n"; // debug output: show each item as it's processed
                $attrs=preg_split('/\\\\,/',$plItem);
                // set defaults for stream start/duration if not specified
                // assume start at beginning and play all the way through
                if(!$attrs[1]) { $attrs[1] = 0; }
                if(!$attrs[2]) { $attrs[2] = -1; }

                $contentItem = $playlist->addChild('video');
                $contentItem->addAttribute('src',$attrs[0]);
                $contentItem->addAttribute('start',$attrs[1]);
                $contentItem->addAttribute('length',$attrs[2]);

        } // end of playlist loop

} // end of event loop

And, finally, we need to add a little bit of code to format the XML object for use with Wowza:

$dom = dom_import_simplexml($smilXml)->ownerDocument;
$dom->formatOutput = true;
$output=$dom->saveXML();
echo "$output\n"; // outputs to STDOUT
$dom->save('streamschedule.smil'); // save to file
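Save all of the above as a single script and run it from the command line (the filename here is hypothetical). The script echoes its progress and the finished XML to STDOUT, and writes the schedule to streamschedule.smil in the current directory, which then goes into the application’s content directory:

php wowza-ical-schedule.php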

For the purposes of this last section, I’ve created some additional events to add a secondary stream:

Schedule Overview

11am Broadcast Event

11am Alternate Broadcast Event

Noon Broadcast Event

Event Broadcast

The iCal looks like this:

BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Wowza Event Scheduler Calendar
X-WR-TIMEZONE:America/Chicago
X-WR-CALDESC:
BEGIN:VEVENT
DTSTART:20161017T160000Z
DTEND:20161017T170000Z
DTSTAMP:20161016T182604Z
UID:8m4hcp98fmuosuoe48o20drl7k@google.com
CREATED:20161016T180813Z
DESCRIPTION:mp4:video5.mp4\nmp4:video6.mp4
LAST-MODIFIED:20161016T180813Z
LOCATION:altstream
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:11am alternate broadcast
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
DTSTART:20161017T180000Z
DTEND:20161017T190000Z
DTSTAMP:20161016T182604Z
UID:c1ijl48srikc22vkkvrgvpb718@google.com
CREATED:20161016T180725Z
DESCRIPTION:mp4:video4.mp4
LAST-MODIFIED:20161016T180725Z
LOCATION:teststream
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:Event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
DTSTART:20161017T160000Z
DTEND:20161017T170000Z
DTSTAMP:20161016T182604Z
UID:72klt8s5ssrbjp9ofdk8ucovoo@google.com
CREATED:20161012T212924Z
DESCRIPTION:mp4:video1.mp4\,0\,-1
LAST-MODIFIED:20161016T164138Z
LOCATION:teststream
SEQUENCE:3
STATUS:CONFIRMED
SUMMARY:11am Broadcast
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
DTSTART:20161017T170000Z
DTEND:20161017T180000Z
DTSTAMP:20161016T182604Z
UID:ac3lgjmjmijj2910au0fnv5vig@google.com
CREATED:20161016T164116Z
DESCRIPTION:mp4:video2.mp4\,0\,1800\nmp4:video3.mp4\,0\,1800
LAST-MODIFIED:20161016T164118Z
LOCATION:teststream
SEQUENCE:1
STATUS:CONFIRMED
SUMMARY:Noon Broadcast
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR

And when we run the process, we get this spiffy code coming out:

<smil>
  <head/>
  <body>
    <stream name="altstream"/>
    <stream name="teststream"/>
    <playlist name="default-altstream" playOnStream="altstream" repeat="true" scheduled="2016-01-01 00:00:01">
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
    <playlist name="default-teststream" playOnStream="teststream" repeat="true" scheduled="2016-01-01 00:00:01">
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
    <playlist name="11am alternate broadcast" playOnStream="altstream" repeat="false" scheduled="2016-10-17 11:00:00">
      <video src="mp4:video5.mp4" start="0" length="-1"/>
      <video src="mp4:video6.mp4" start="0" length="-1"/>
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
    <playlist name="Event" playOnStream="teststream" repeat="false" scheduled="2016-10-17 13:00:00">
      <video src="mp4:video4.mp4" start="0" length="-1"/>
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
    <playlist name="11am Broadcast" playOnStream="teststream" repeat="false" scheduled="2016-10-17 11:00:00">
      <video src="mp4:video1.mp4" start="0" length="-1"/>
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
    <playlist name="Noon Broadcast" playOnStream="teststream" repeat="false" scheduled="2016-10-17 12:00:00">
      <video src="mp4:video2.mp4" start="0" length="1800"/>
      <video src="mp4:video3.mp4" start="0" length="1800"/>
      <video src="mp4:padding.mp4" start="0" length="-1"/>
    </playlist>
  </body>
</smil>

So there you have a relatively simple one-way hack to spit Google Calendar/iCal events out into a Wowza Schedule. You would still need to manually run this every time you wanted to update the broadcast schedule (and reload the Wowza server), and this does not send any confirmation back to your iCal that the event has been scheduled.
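If you want to semi-automate that part too, a cron entry can regenerate the schedule and bounce the server off-hours. A sketch, assuming a standard Linux Engine install, the hypothetical script name above, and a tolerance for the brief outage a restart causes:

0 4 * * * cd /usr/local/WowzaStreamingEngine/content && php wowza-ical-schedule.php && service WowzaStreamingEngine restart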

Stay tuned for a variation on this code that uses the Google Calendar API (a much more elegant approach).

Multi-tenant Virtual Hosting with Wowza on EC2

That’s a mouthful, isn’t it?

I recently needed to migrate a couple of Wowza Streaming Engine tenants off a bare-metal server that was getting long in the tooth and rather expensive. These tenants were low-volume DVR or HTTP transmuxing customers, plus one transcoding customer that required some more CPU power. But this box was idle most of the time, so I decided to move it over to AWS and fire up the box only when necessary. Doing this used to be a cumbersome process with the old Java-based AWS command-line tools. The current incarnation of the tools is quite intuitive and runs in Python, so there’s not a lot of insane configuration and scripting to do.

You may recall my post from a few years back about multi-tenant virtual hosting. I’m going to expand on this and describe how to do it within the Amazon EC2 environment, which has historically limited you to a single IP address on a system.

The first step to getting multiple network interfaces on EC2 is to create a Virtual Private Cloud (VPC) and start your EC2 instances within your VPC. “Classic” EC2 does not support multiple network interfaces.

Once you’ve started your Wowza instance within your VPC (for purposes of transcoding a single stream, I’m using a c4.2xlarge instance), you then go to the EC2 console, and on the left-hand toolbar, under “network and security” is a link labeled “Network Interfaces”. When you click on that, you have a page listing all your active interfaces.

To add an interface to an instance, simply create a network interface, select the VPC subnet it’s on, and optionally set its IP (the VPC subnet is all yours, in dedicated RFC1918 space, so you can select your IP). Once it’s created, you can then assign that interface to any running instance. It shows up immediately within the instance without needing to reboot.
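If you’d rather do it from the AWS CLI, the same steps look roughly like this (the IDs and address are placeholders; --device-index 1 attaches it as the instance’s second interface):

aws ec2 create-network-interface --subnet-id subnet-XXXXXXXX --private-ip-address 10.0.0.11 --description "wowza-tenant-2"
aws ec2 attach-network-interface --network-interface-id eni-XXXXXXXX --instance-id i-XXXXXXXX --device-index 1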

Since this interface is within the VPC, it doesn’t get an external IP address by default, so you’ll want to assign an ElasticIP to it if you wish to have it available externally (in most cases, that’s the whole point of this exercise)

Once you have the new interface assigned, simply configure the VHosts.xml and associated VHost.xml files to listen to those specific internal IP addresses, and you’re in business.
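For reference, the per-tenant binding itself happens in each vhost’s VHost.xml. A trimmed sketch of the relevant HostPort element (the IP is an example, and the other children from the stock file stay as they are):

<HostPort>
    <Name>Default Streaming</Name>
    <Type>Streaming</Type>
    <IpAddress>10.0.0.11</IpAddress>
    <Port>1935</Port>
</HostPort>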
As for scheduling the instance? On another machine that IS running 24/7 (if you want to stick to the AWS universe, you can do this in a free tier micro instance), set up the AWS command line tools and then make a crontab entry like this:

30 12 * * 1-5 aws ec2 start-instances --instance-ids i-XXXXXXXX
35 12 * * 1-5 aws ec2 associate-address --network-interface-id eni-XXXXXXXX --allocation-id eipalloc-XXXXXXXX
35 12 * * 1-5 aws ec2 associate-address --network-interface-id eni-XXXXXXXX --allocation-id eipalloc-XXXXXXXX
30 15 * * 1-5 aws ec2 stop-instances --instance-ids i-XXXXXXXX 

This fires up the instance at 12:30pm on weekdays, assigns the elastic IPs to the interfaces, and then shuts it all down 3 hours later (because this is an EBS-backed instance in a VPC, stopping the instance doesn’t nuke it like terminating does, so any configuration you make on the system is persistent)

Another way you can use this is to put multiple interfaces on an instance with high networking performance and gain the additional bandwidth of the multiple interfaces (due to Java limitations, there’s no point in going past 4 interfaces in this use case), and then put the IP addresses in either a round-robin DNS or a load balancer, and simply have Wowza bind to all IPs (which it does by default).

Haiti Internet

Mobile Internet In Haiti, Part 2

A while back, I posted about getting mobile Internet in Haiti. As technology changes rapidly, especially when it comes to Haitian internet access, I figured I’d post an update, having just returned from there in late February.

If you have a GSM-capable US phone (most Samsung Galaxy devices use software-defined radios and can speak CDMA or GSM fluently, simply by switching an option in the software), you’ll need to unlock it for international use:

Sprint: Contact Sprint Customer Service while still in the US and ask them for an international unlock. As long as your account has been active for more than 60 days, this should be no problem. They’ll walk you through the UICC unlock process. It helps to be on the Sprint network while this unlock happens, but it can also happen over Wi-Fi if you’re already out of the country.

Verizon: Verizon generally does not lock their phones. You may want to check with Verizon to make sure yours is unlocked. See item #18 in their Global Roaming FAQ.

AT&T: If your phone is under contract with AT&T or is an iPhone, you’re pretty much out of luck. AT&T is so terrified of losing their customers that they will only unlock the phone if you buy out your installment contract or pay an ETF. The good news is that most cell phone repair shops know the unlock codes and will unlock them for you for a small fee. (This is a tip I got from the manager of a local AT&T store who thinks corporate policy on unlocking for international use is dumb). If your phone is out of contract, simply go to https://www.att.com/deviceunlock and fill out the form. There is nobody at AT&T you can talk to about this, nor can the store personnel help you. If the process fails, then you’re simply out of luck, and should consider choosing a more customer-friendly carrier next time.

T-Mobile: No idea. I don’t know anyone who has a T-Mobile device. I expect their policy is probably very similar to AT&T.

Once you get to Haiti, you can stop at either the Digicel or Natcom shops just outside customs at the airport in Port-Au-Prince. (I would expect that there’s a similar setup at Cap-Haitien.) Natcom will load you up with 5GB of data and some voice minutes for 1000 Gdes ($25 US). I don’t know what Digicel’s current pricing is, but I expect it’s comparable. If you’re going to be out in the provinces, Natcom seems to have a better network than Digicel. If you’re staying in and around Port-Au-Prince, either network should work fine for you as both carriers have HSPA+ networks. I don’t know what the Natcom coverage situation is like on La Gonâve, but Digicel has EDGE coverage on most of the island, and HSPA/+ around Anse-a-Galets.

The staff at the Natcom shop had no trouble setting up my Galaxy S4, and in 15 minutes I walked out of there on the Haitian network. Using it as a hotspot was merely a matter of turning it on, and didn’t require any further configuration. Internet speeds in PAP average in the 2-3Mbps range.

It should be noted here that with both carriers, all Facebook traffic is free and doesn’t count toward your data plan usage. This is a pretty cool deal. My understanding is that Facebook located an edge node within Haiti to reduce transit off-island, and free access to the growing smartphone population in Haiti was part of the deal.

In a similar vein, Google also seems to be getting better presence in Haiti, and I’m told they too have edge nodes located in-country. Their maps product actually has pretty good data in PAP, although directions are still iffy as the addressing system there is a little tricky, and there aren’t necessarily names attached to many of the minor streets. It’s pretty good at figuring out where you are, though. I wonder how soon they’ll get a Street View rig down there.

When you leave, your SIM will still be usable for 90 days, after which it will expire and no longer function on the network. There is currently excellent public wifi at the PAP airport, so handing your SIM off to one of your Haitian hosts is probably your best bet, as they can get some additional usage out of whatever unused data/minutes are left on it.

(I also discovered that on my Galaxy S4, GPS didn’t work unless there was a SIM in the slot)

 

Streaming on the go

Over the past several months/years, I’ve been accumulating various pieces of gear that, when put together, give me a solid kit to take on the road for doing onsite streaming or demonstration events. It currently consists of:

I still probably should add an SDI Distribution Amp to the kit, but I haven’t had need for it… yet.

The Canon and GoPro each have their own Pelican 1200 cases, and don’t travel with me unless I need to provide cameras (usually I’m getting a feed from video world and streaming it from there). The SD cards travel in a Pelican 0915 case, which is along with the rest of the gear in a Pelican 1510.

I love the Pelican 1510 – It’s legal carry-on size, so when traveling, all that expensive gear is never out of reach, never at the whims of a sticky-fingered TSA agent or baggage handler inside the bowels of the luggage system where nobody can see them. When flying, I’ll take the Pelican and my laptop bag with me, my clothes go as checked luggage (yay for airlines that give me free checked bags!). I modified the 1510 to include a mesh organizer in the lid instead of the egg-crate foam that it normally comes with, which lets me keep track of the various small bits that go with all that gear.

(because the foam inserts are removable, the 1510 along with a borrowed 1610 came in very handy this past summer when I was on vacation and traveling on a float plane – in case my luggage got dunked in the drink, the cases would float and my clothes would stay nice and dry. Pelican also makes a luggage version of the 1510. I love Pelican cases.)

Here’s the problem with all that gear though: Except for 1 or 2 devices, every single one of them requires a “wall wart” power adapter. There’s no room in that case for the several power strips that I’d need to do this in a self-contained manner, where all I need from the venue is an outlet and (optionally) an ethernet drop. Additionally, all those adapters in the lid make for a huge jumbled mess on the TSA’s x-ray machines, so more often than not, they want to take a look inside, and swab it for residues. I got to looking at the gear and realized that every single piece of it that used external power would accept a 12VDC input, and they all even shared the same polarity.


Another thing I discovered along the way is that manufacturers rarely specify the details of the DC connector beyond the voltage and only occasionally the current draw. Trying to get connector information from vendor specs is a pain in the rear. This sucks if you have to order a replacement power supply because yours broke or got lost. With the help of a pair of calipers and some trial and error, I was able to figure out what each one was.

I started hunting around for 2 items: A distribution bus, and a compact 6A (or bigger) DC power supply.

The DC bus proved to be problematic, until I hit upon the right combination of keywords that revealed what I needed on Amazon: an 8-way fanout meant for use on security cameras, which had the 5.5×2.1mm connector that I’m discovering is nearly ubiquitous. Bonus: I didn’t have to make my own splitter.

On the power supply front, I found several meant for A/V use, but all of them were large and not well suited to portability. I found my solution on eBay: There is an endless variety of  OEM laptop power supplies that put out 12V and 6A. Many of them are sold as an “LED Power Supply”, and run about 10-15 bucks. I found one that had the same 5.5×2.1mm connector that all my gear needed. Due to difficulties in getting the calipers down inside the connectors, I initially thought the BMD converters were 5.5×2.1mm, but they’re 5.5×2.5mm, and the center pin is too fat – but 5.5×2.5mm female connectors will also accommodate the smaller 2.1mm pins just fine. I should have ordered a 5.5×2.5mm fanout instead. Lesson learned. In order to adapt the 5.5×2.1mm splitter to the various devices, I dug around amazon to find the various adapters I’d need. The only problem is the Lemo connector used by the Teradek Cube: Those locking connectors are $100 each. Ouch.

By a happy coincidence, my wife has a battery booster pack in her van that is float-charged by a 12V connection, which also happens to be 5.5mmx2.1mm. I recently had to order a replacement CLA adapter for it, and picked up an extra one, which would allow me to run this whole streaming rig from automotive, solar or battery power if needed. The whole setup draws about 70W at full load if all of it is running.

I also ordered (but haven’t yet received) a female 5.5×2.1mm to CLA socket, so that I can pop in a CLA USB charger to power my iPad, charge the GoPro, and other USB devices so I don’t eat up a port on the computer just for power, as I’ve only got two.

(As a side note, Ruckus/XClaim and AirTight access points also use 12V 5.5×2.5mm connectors as an alternative to PoE, but if I need wifi the AeroHive unit will do the job. Aruba APs use a smaller connector, whose dimensions I am presently unsure of)

Now my whole rig can be run off two AC outlets (plus a third until I can somehow find a cheaper Lemo connector!). I think the next step is to find some sort of way of putting a battery inline, effectively giving me a UPS for the whole stack (although the laptop, iPad, and Teradek units all have internal batteries as well). Edit: I have since acquired an Anker Astro Pro2 external battery, which has not only the ever-convenient 5.5×2.1mm 12V input socket, but also a DC output (with an adapter that goes from the battery pack to a 5.5×2.1mm output plug) that effectively turns this into a 12V UPS. It can deliver up to 22W on the USB ports and 18W on the DC port (selectable between 9V and 12V), meaning a 10-hour runtime at full load. The unit is only slightly bigger than a small tablet. I can’t run ALL the gear on it at once, but I can at least put the really critical stuff on it. The first-generation model of that charger has a beefier 48W DC output that can go to 16V and 19V to power laptops.

The completed kit, with far fewer wall warts!

Here’s the DC parts list, with links to Amazon:



Instant Replays on Wowza

One of the useful features of Wowza is its ability to record a stream to disk and then be able to use that recording for a replay. In version 3.5, it would simply take the stream name, slap an MP4 extension on the end, and version any previous ones with _0, _1, etc. In 3.6, the default naming scheme for these recordings was a timestamp, with a configuration option to use the legacy naming convention. In Version 4, it appears this legacy naming convention option has disappeared altogether, meaning you can’t set up a player to just play back “streamname.mp4” and it would always grab the most recent one. EDIT: It appears that this loss of functionality was unintentional and has been classified as a bug, which should be fixed very soon.

This became a problem for one of my clients after their Wowza server got updated to V4. It wasn’t practical to re-code the player every week, or to go into the server and manually rename the file. Since it’s on a Windows server, PowerShell to the rescue:

$basepath = "C:\Program Files (x86)\Wowza Media Systems\Wowza Streaming Engine 4.0.3\content"
# Grab the most recent recording matching this stream name
$replayfile = gci $basepath\streamname*.mp4 | sort LastWriteTime | select -last 1
$link = $replayfile.Name

# Re-point replay.mp4 at it (mklink is a cmd built-in, so shell out to cmd)
cmd /c del $basepath\replay.mp4
cmd /c mklink $basepath\replay.mp4 $basepath\$link

I then put this into a scheduled task, with time-based triggers. PowerShell is a little tricky to get into a scheduled task, but I finally got the syntax right:

Action: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Arguments: -nologo -file "C:\Program Files (x86)\Wowza Media Systems\Wowza Streaming Engine 4.0.3\content\replay.ps1"

If you’re on Linux or OSX, you can do this in bash instead:

#!/bin/bash
# Point replay.mp4 at the newest recording matching this stream name

basepath='/usr/local/WowzaStreamingEngine/content'
unset -v replayfile
for file in "$basepath"/streamname*.mp4
 do
  [[ -z $replayfile || $file -nt $replayfile ]] && replayfile=$file
 done
rm -f "$basepath/replay.mp4"
ln -s "$replayfile" "$basepath/replay.mp4"

and put it in your crontab (this example runs every Sunday at 11:30am):

30 11 * * 0 /bin/bash /usr/local/WowzaStreamingEngine/content/replay.sh