Mission Control

EC2 Monitoring with Raspberry Pi

I’ve been doing a little Raspberry Pi hacking lately, and put together a neat way to have physical status LEDs on your desk for things like EC2 instances.

The Hardware

In its most basic form, you can simply hook up an LED and a current-limiting resistor between a ground line and a GPIO line on the Pi, but that doesn’t scale especially well – you can run out of GPIO lines pretty quickly, especially if you’re doing a different color for each status. Plus, it’s not overly elegant.

The solution? Unicorns!

No, really. The fine folks at Pimoroni in Sheffield, UK have made a lovely little HAT device for the Pi called a Unicorn. Its primary purpose is lots of blinky lights to make pretty rainbows and stuff, hence the name. However, this HAT is a 4×8 (or an 8×8) array of RGB LEDs, addressable serially through a single GPIO pin, which doesn’t eat up a line per LED (a good thing, otherwise the 4×8 version alone would need 96 lines). The unicornhat library is available for both Python 2 and Python 3 in the Raspbian repo (python-unicornhat and python3-unicornhat). When installed onto the Pi, the Unicorn will fit within a standard Raspberry Pi case.

The Code

This is my first foray into Python, so there was a bit of a learning curve. If you’re familiar with object-oriented code concepts, this should be easy for you. Python is much more parsimonious with punctuation than PHP or Perl.

For accessing the EC2 data, we’ll need Amazon’s boto3 library, also available in the Raspbian repo (python3-boto3). One area where boto3 is really nice is that the data is returned directly as a dict object (what users of other languages would call an associative array, or a hash for you old-timers who use Perl), so you don’t have to mess with converting JSON or XML into an object structure, and you can manipulate it as you would any other dict. AWS returns a fairly complex object, so you have to dig into it via a few iterative loops to extract the data you’re after.
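To give you an idea of what you’re digging through, here’s the rough shape of a describe_instances() response, abridged to just the keys this project cares about:

# Abridged describe_instances() response -- real responses carry
# many more keys at every level
response = {
    'Reservations': [
        {
            'Instances': [
                {
                    'InstanceId': 'i-0fa4ea2560aa17ffd',
                    'State': {'Code': 16, 'Name': 'running'},
                },
            ],
        },
    ],
}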

From there, it’s a matter of assigning different RGB values to the states. I chose these ones:

  • stopped: red
  • pending: green
  • running: blue
  • stopping: yellow(ish)

I also discovered that I needed to assign a specific pixel to each instance ID, otherwise they tended to move around a bit depending on what order AWS returned them on a particular request.

Here’s what the second iteration looks like in action:

import boto3 as aws
import unicornhat as unicorn
import time

# Initialize the Unicorn
unicorn.clear()
unicorn.show()
unicorn.brightness(0.5)

# Create an EC2 client
ec2 = aws.client('ec2')

# Map each instance state to an RGB color
color = {}
color['stopped'] = {'red': 255, 'green': 0, 'blue': 0}
color['pending'] = {'red': 64, 'green': 255, 'blue': 0}
color['running'] = {'red': 32, 'green': 32, 'blue': 255}
color['stopping'] = {'red': 192, 'green': 128, 'blue': 32}

# Pin each instance ID to a fixed pixel so the display doesn't
# reshuffle when AWS returns the instances in a different order
pixel = {}
pixel['i-0fa4ea2560aa17ffd'] = {'x': 0, 'y': 0}
pixel['i-06b95cd864acb1a8c'] = {'x': 0, 'y': 1}
pixel['i-0661da0f50ffb604c'] = {'x': 0, 'y': 2}
pixel['i-063ec151e0f44ef9b'] = {'x': 0, 'y': 3}
pixel['i-02c514ca567d8a033'] = {'x': 0, 'y': 4}

# Loop until forever
while True:

    # Fetch the current state of every instance
    response = ec2.describe_instances()

    statetable = {}
    for res in response['Reservations']:
        for inst in res['Instances']:
            statetable[inst['InstanceId']] = inst['State']['Name']

    # Paint the assigned pixel for each instance
    for ec2inst in statetable:
        # Skip instances without a pixel and states without a color
        # (e.g. shutting-down/terminated), which would otherwise KeyError
        if ec2inst not in pixel or statetable[ec2inst] not in color:
            continue
        x = pixel[ec2inst]['x']
        y = pixel[ec2inst]['y']
        r = color[statetable[ec2inst]]['red']
        g = color[statetable[ec2inst]]['green']
        b = color[statetable[ec2inst]]['blue']
        unicorn.set_pixel(x, y, r, g, b)

    unicorn.show()
    time.sleep(1)

For the moment, this is just monitoring EC2 status, but I’m going to be adding checks in the near future to do things like ping tests, HTTP checks, etc. Stay tuned.
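In the meantime, here’s a rough sketch of how an HTTP check could hang off the same pixel-mapping scheme, using the requests library. The URL and its pixel assignment are hypothetical placeholders:

import requests  # additional import; not used in the script above

# Hypothetical URL-to-pixel map, using the Unicorn's second column
httpcheck = {'https://example.com/': {'x': 1, 'y': 0}}

for url, px in httpcheck.items():
    try:
        # Treat any 2xx/3xx answer within 5 seconds as healthy
        ok = requests.get(url, timeout=5).status_code < 400
    except requests.RequestException:
        ok = False
    # Borrow the EC2 state colors: blue ('running') for up, red ('stopped') for down
    state = 'running' if ok else 'stopped'
    unicorn.set_pixel(px['x'], px['y'],
                      color[state]['red'],
                      color[state]['green'],
                      color[state]['blue'])
unicorn.show()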


Streaming to multiple simultaneous destinations

Live streaming has been a “thing” for some time. I work with many churches to help them solve their streaming challenges and develop their technology strategy for streaming. One of the most frequent questions I hear is, “can I stream to Facebook Live and still keep my other stream?” Fortunately, this is a lot easier than it used to be. There are variations on this question, but they all boil down to wanting to know how to send one stream to multiple outlets to expand audience reach.

Method 1:

Multiple outputs from your encoder

Several software encoder platforms support multiple outputs. The easiest among these is probably Telestream’s Wirecast. (The free/open-source Open Broadcaster Software (OBS) does this as well, but I don’t have much experience with it, and I prefer the Wirecast interface, which is much more polished.) With Wirecast, it’s merely a matter of adding additional outputs for the various streaming services that are supported. The downside to this approach is that you’ll need more upstream bandwidth, as you are sending the same stream multiple times.


Method 2:

The Cloud

1. Teradek Core

This is a vendor-specific approach that integrates with Teradek’s pro-grade encoders (Cube, Bond, Slice, and T-Rax). It provides a single pane of glass that lets you manage your entire fleet of encoder devices (and control/configure them remotely), and then virtually patch the output of those encoders to one or more outputs. You can also use their Live::Air apps for iOS as an input (stay tuned for a post about using Live::Air). If you are using a Bond product, the input comes in via their Sputnik server, which allows you to spread the stream across multiple connections for extra bandwidth and redundancy; the stream is then reassembled before being sent on to the next step.

In this example, I’m taking an input stream from the Live::Air Solo app on my iPhone and sending it to both Wowza Streaming Cloud and Facebook Live, all while recording the incoming stream:


This is a simple drag-and-drop operation: drag a source on the left into the workspace, and then drag in one or more destinations, which can be:

  • Teradek decoders (great for a multisite church scenario)
  • Channels (external stream destinations)
  • Groups (a combination of the above)

If you click the “Auto” box on the outputs, it will start that output automatically when the stream is available from the input.

When you create stream destinations for social sites, it will authenticate you against that site and keep that authentication.

You can manage a lot of inputs and outputs this way. This example from Teradek’s marketing department shows the scale:


2. Wowza Streaming Engine/Streaming Cloud

Similar to Core, but not tied to a specific encoder vendor, Wowza Streaming Engine has provided Stream Targets since version 4.4. (The functionality has actually been in the software since sometime in version 2 as the PushPublish module; Stream Targets integrates it into the UI.) Facebook Live support has been an option almost from the very beginning of Facebook Live. YouTube Live is supported as well, but as a standard RTMP destination.

Similarly, Wowza Streaming Cloud also offers this capability under the “Advanced Menu”:


From there, you can create a stream target:


Once that target is created, simply go into a transcoder output and add it (you can also create a target directly from there):


As with Core, you can add multiple destinations to a transcoder output. Generally speaking, you’ll want to send your best rendition to services like Facebook Live and YouTube, as they do their own internal transcoding.


Method 3:

Multiple Encoders

This is the obvious one, but also the least efficient both in terms of hardware and bandwidth. Each encoder goes to its own destination. This generally requires signal distribution amplifiers and other extra hardware.



Using Bitmovin Player with Church Online Platform

Today’s post will be a brief tutorial on using Bitmovin‘s excellent HTML5 video player with Church Online Platform.

If you’re a church wanting to go live and you haven’t discovered COP, it’s a marvelous product from the fine folks on the life.church Digerati Team (the same crew who created the Bible App and made it available on just about every platform known to mankind). It’s a free hosted platform that lets you deliver church online. All you have to do is bring your own streaming provider and provide an embed code. You can use your provider’s player, or you can use your own. The Digerati team is also a client of mine, and I really enjoy working with them – they’re talented, nerdy, and very good at what they do. (Most recently, I helped them build out their Wowza Streaming Engine capability for automating the scheduling and delivery of simulated live events.)

One of my favorite video players out there right now is from Bitmovin. They provide a CDN-hosted player with excellent analytics (complete with API access for the especially nerdy), and usage is free for the first 5,000 impressions (pricing is quite reasonable as you scale up from there). For that reason alone, it’s an excellent choice for churches getting started with streaming. Its other major benefit is that because it’s written in HTML5 and JavaScript, it will work on just about anything you can throw at it (for really archaic devices, there’s still a Flash component). It’s also designed from the ground up to support the new MPEG-DASH standard, but if you’re using a streaming CDN or service that doesn’t provide DASH, no big deal: the player also supports HLS, even over Flash, for those 3 devices that still haven’t discovered modern streaming technology or are running a particularly ancient version of Android. Added bonus: Bitmovin’s player also supports VR and 360° streaming (as does Wowza Streaming Cloud).

For starters, you’ll need to sign up for an account, which will give you player information. One thing you’ll want to make sure you do is add your churchonline.org domain to the allowed domains for your license key. This is under Player/Overview:

Bitmovin Player Domain Config

If you forget to do this, the player will simply show an error telling you that you need to do it. This keeps someone else from using your player key on their site, so be sure to use yourdomain.churchonline.org, not just churchonline.org.

To put this in your COP page, go to the event where you wish to use the player, and go to the Video tab:

ChurchOnline Event Settings

When you go to the Embed menu, you will see the code to put the player on the page (under “Default video embed code”). This is a little more involved than your standard embed code.

Bitmovin Embed Controls

A couple of key things to note here with regards to COP:

  1. In order to put the <script> stuff in your <head> section, you’d need to create a custom theme in COP. This is not necessary (in fact, putting the script statement in the head that way doesn’t work). Instead, simply put the <script> piece just above the rest of the code in the default embed code section.
  2. You’ll need to edit the source section in that code. If all you’re doing is HLS, you can remove the dash and progressive entries. Leave the HLS entry in place and put in the HLS URL provided by your streaming platform. In the case of Wowza Streaming Cloud, this is located at the bottom of the Overview tab of your streaming application under “Playback URLs”.
  3. The “poster” entry is the image the player shows when you’re not streaming any video.

So, for my test stream, the embed code looks like this:


<script type="text/javascript" src="https://bitmovin-a.akamaihd.net/bitmovin-player/stable/7/bitmovinplayer.js"></script>

<div id="player"></div>
<script type="text/javascript">
  var conf = {
    key: "d8XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX2e",
    source: {
      hls: "http://wowzaprod103-i.akamaihd.net/hls/live/######/########/playlist.m3u8",
      poster: "http://dontfenceme.in/wp-content/uploads/2013/09/g-global-background.jpg"
    }
  };
  var player = bitmovin.player("player");
  player.setup(conf).then(function(value) {
    // Success
    console.log("Successfully created bitmovin player instance");
  }, function(reason) {
    // Error!
    console.log("Error while creating bitmovin player instance");
  });
</script>

The console.log lines aren’t necessary, but potentially useful when trying to debug why it can’t instantiate the player.

If you want to run a separate video when the doors aren’t open, put that under the offline video embed code section. You can leave the mobile and low sections empty, as your stream is probably already adaptive from your streaming provider.

Save it, and this is what you get:

BitMovin in ChurchOnline Platform

In order to remove the Bitmovin logo, edit the theme’s CSS and add the following lines:


/* Remove Bitmovin Logo on player */
.bmpui-ui-watermark {
display: none;
}

ChurchOnline Platform CSS Edit

Puzzle The Cat

A Story of Cats

This is the internet, so at some point we’ve got to talk about cats. It’s in the rule book.  The Internet runs on cats. Cat pictures, cat videos, and… cat cables.

Those of you not familiar with the intricacies of the first layer of the OSI “7-layer Burrito” (Internet old-timers will remember this) are probably blissfully unaware of the gory details of the wiring that makes everything (including wireLESS) work.

Dilbert (April 24, 2010)

So who are all these cats, anyway?

Simply put, it’s an abbreviation for “Category”. The Telecommunications Industry Association (TIA) has adopted a series of specifications over the years defining cable performance to transport various types of networks.

Here’s a quick rundown. We’re gonna get a tech lesson AND a history lesson all rolled into one.

Category 1 (pre-1980)

An IBM “Type 1” Token-Ring connector. Known colloquially as a “Boy George Connector” due to its ambiguous gender. Photo: Computer History Museum

This never officially existed, and was a retroactive term used to define “Level 1” cable offered by a major distributor. It is considered “voice grade copper”, sufficient to run signals up to 1MHz, and not suitable for data of any sort (except telephone modems). You could probably meet category 1 requirements with a barbed wire fence. You laugh, but it’s been done. Extensively.

Category 2 (mid-1980s)

Like Category 1, Category 2 never officially existed; it was a name retroactively given to Level 2 cable from that same distributor. Cat2 brought voice into the digital age. It could support 4MHz of bandwidth, and was used extensively for early Token-Ring networks that operated at 4Mbps, as well as ARCNet, which operated at 2.5Mbps on twisted pair (it had previously used coaxial cable).

Category 3 (1991)

This is the first of the cable categories officially recognized by TIA. It is capable of carrying 10Mbps Ethernet over twisted pair (like ARCNet, Ethernet also ran on coaxial cable in the very early days). Category 3 wire was deployed extensively in the early 1990s as a much better alternative to running Ethernet over coax. This is where the now nearly ubiquitous 8P8C connector (often incorrectly referred to as “RJ45”) came into usage for Ethernet, and it’s still in use nearly three decades later. Both the connector pinout and the cable performance are defined in TIA standard 568. Since Token-Ring networks still operated at 4Mbps, they ran quite happily over this new spec. In 2017, one can still occasionally find Cat3 in use for analog and digital phone lines. The 802.3af Power over Ethernet specification is compatible with this type of wire.

Category 4 (early 1990s)

This stuff existed only for a very brief period of time. In the late 1980s, IBM standardized a newer version of Token Ring that ran at 16Mbps, which required more cable bandwidth than what Category 3 could offer. Category 4 offered 20MHz to work with (which may sound familiar to the wifi folks, who use 20MHz channels a lot). But Category 5 came along pretty quickly, and Category 4 was relegated to history and is no longer recognized in the current TIA-568 standard.

Category 5 (1995)

TIA revised their 568 standard in 1995 to include a new category of cable, supporting 100MHz of bandwidth. This enabled the use of new 100Mbps ethernet (a 100Mbps version of Token Ring soon followed, which also used the same 8P8C connector as Ethernet).

An 8P8C connector, commonly (but incorrectly) referred to as “RJ45”. This has been the standard twisted-pair Ethernet connector for the last quarter century.

Category 5e (2001)

TIA refined the Category 5 spec to improve its performance and support the new gigabit Ethernet standard. It is still a 100MHz cable, but new coding schemes and the use of all four pairs allowed the gigabit rate. IBM and the 802.5 working group even approved a gigabit standard for Token Ring in 2001, but no products ever made it to market, as Ethernet had taken over completely by that point.

Category 6 (2002)

Not long after Category 5e came to be, along came Category 6, with 250MHz of bandwidth. This was accomplished partly with better cable geometry and by going from 24AWG to 23AWG conductors. The increased bandwidth allows 10Gbps Ethernet to operate on cables up to 55 meters in length.

Category 6a (2009)

This refinement to Category 6 increased cable bandwidth to 500MHz in order to allow 10Gbps Ethernet to operate at the full 100m length limit for Ethernet. Categories 6 and 6a will support the new 802.3bt Power over Ethernet Level 3 (60W) and Level 4 (90W) standards (expected in 2018), provided that cable bundles do not exceed 24 cables, for thermal reasons.

Category 7/7a

Category 7 cable. Who would want to terminate that? What a pain!

This one never existed in the eyes of the TIA. It still lives as an ISO standard defining several different types of shielded cable whose performance is comparable to Category 6 (bandwidth up to 600MHz for Cat7, 1GHz for Cat7a). Both these specs were rendered moot by 10Gbps Ethernet operating on Category 6a with standard 8P8C connectors. This cat was so ugly, TIA left it at the shelter.

Category 8 (2017)

The latest and greatest, this cable exists to run 40Gbps Ethernet. It comes in two flavors: 8.1, which keeps the familiar 8P8C connector and is backward-compatible with Category 6a, and 8.2, which supplants the Category 7 specs and their connectors. The cable provides a bandwidth of 1600MHz for 8.1 and 2000MHz for 8.2.


So there you have it. The cats that put the WORK in “Network”. And because this is the internet, I leave you with gratuitous kittens.

Gratuitous Kittens



Enhancing the public Wi-Fi experience

Recently, there was an excellent blog post from WLAN Pros about “Rules for successful hotel wi-fi“. While it is aimed primarily at Wi-Fi in the hotel business (where there is an overabundance of Bad-Fi), many of the tips presented also apply to a wide variety of large-scale public venue wifi installations. Lots of great information in the post, and well worth a read.

At the 2016 WLPC there was an interesting TENTalk from Mike Liebovitz at Extreme Networks about the pop-up wifi at Super Bowl City in San Francisco, where analytics pointed to a significant portion of the traffic being headed to Apple.

Meanwhile, a few months later at the 2016 National Church IT Network conference, I heard a TENTalk about Apple’s MacOS Server, where I first heard about this incredibly useful feature (sadly, it wasn’t recorded, that I know of, so I can’t give credit…)

With most of the large public venue (LPV) installations I’ve worked on, I’ve found the typical client mix includes about 60% Apple devices (mostly iOS). For example, this is from a large church whose wireless network I installed. (Note that Windows machines make up less than 10% of the client mix on wifi!)

Client mix from Ruckus ZoneDirector

OK, So what?

This provides an opportunity to make the wifi experience even better for your (Apple-toting) guests. Whenever possible, as part of the “WiFi System” I will install an Apple Mac Mini loaded with MacOS Server. This allows me to turn on caching. This is not just plain old web caching like you would get with a proxy server such as Squid, but rather a cache for all things Apple. What does this do for your fruited guests? It speeds up the download of software distributed by Apple through the Internet. It caches all software and app updates, App Store purchases, iBook downloads, iTunes U downloads (apps and books purchases only), and Internet Recovery software that local Mac and iOS devices download.

Why is this of interest and importance? Let me give you an example: A few years ago, we were hosting a national Church IT Round Table conference at Resurrection on a day when Apple released major updates to MacOS, iOS, and their iWork suite. In addition to the 50 or so staff Mac machines on the network, there were another hundred or two Mac laptops and iThings among the conference attendees. The 200Mbps internet pipe melted almost instantly under the load of 250 devices each requesting 3-5GB of updates. That would have melted even a gigabit pipe, and probably given a 10Gbps pipe a solid run for its money (not to mention bogging down some of the uplinks on the internal network!). It didn’t do great things to the access points in the conference venue either, all of which were struggling not only for airtime, but also for backhaul. Having a caching server would have mitigated all of this.
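To put rough numbers on it: 250 devices pulling 4GB each is about a terabyte of downloads, and a 200Mbps pipe moves at best roughly 25MB per second, so draining that queue takes on the order of 40,000 seconds, or better than 11 hours, of a completely saturated connection.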

Just by way of an example: Facebook updates their app every two weeks, and its current incarnation (86.0, March 30, 2017) weighs in at 320MB (the previous one was about half that!), while its close pal Messenger clocks in at 261MB. Almost everyone has those two apps, so they’re going to find themselves in your cache almost instantly, along with numerous other popular apps. Apple’s iWork suite apps and Microsoft Office apps all weigh in around 300-500MB apiece as well. This has the potential to murder your network when you least expect it.

In any case, check out the network usage analytics from either your wireless controller or your firewall. If Apple.com is anywhere near the top of the list (or on it at all), you owe it to yourself and your guests to implement this type of solution.

Network Statistics from Ubiquiti UniFi

The Technical Mumbo-Jumbo

Hardware

As mentioned previously, a Mac Mini will do the job nicely. If you’re looking to do this on the cheap, it will happily run on a 2011-vintage Mini (you can find used Mac Minis on Craigslist or eBay all day long for cheap); just make sure you add some extra RAM and a storage drive that doesn’t suck (the stock 5400rpm spinning disks in the pre-2012 Mac Minis and iMacs were terrible). Fortunately, 2.5″ SSDs are pretty cheap these days. Newer Minis have SSDs baked in already.

If you’re wanting to put the Mac Mini in the datacenter, you might want to consider the Sonnet RackMac Mini (available on Amazon for about $139), which can hold one or two machines.

Sonnet RackMac Mini

You can also happily run this off of one of the 2008-era “cheese grater” Mac Pros that has beefier processing and storage (and also fits in a rack, albeit not in the svelte 1U space the Sonnet box uses). If you have money to burn, then by all means use the “trash can” Mac Pro (Sonnet also makes a rack chassis for that model!).

This is a great opportunity to re-purpose some of those Macs sitting on the shelf after your users have upgraded to something faster and shinier.

Naturally, if you’re running a REALLY big guest network, you’ll want to look at something beefier, or a small farm of Minis with SSD storage (the MacOS Server caching system makes it quite easy to deploy multiple machines to support the caching).

The Software

MacOS Server (Mac App Store, $19.99)

Since most of your iOS guests will have updates turned on, one of the first things an iOS device does when it sees a big fat internet pipe that isn’t from a cell tower is check for app updates. If you have lots of guests, you will need to fortify your network against the onslaught of app update requests that will inevitably hit whenever you have lots of guests in the building.

The way it works is this: When an Apple device makes a request to the CDN, Apple looks at the IP you’re coming from and says, “You have a local server on your LAN, get your content from there, here’s its IP.” The result being that your Apple users will get their updates and whatnot at LAN speeds without thrashing your WAN pipe every time anyone pushes out a fat update to an app or the OS, which is then consumed by several hundred people using your guest wifi over the course of a week. You’ve effectively just added an edge node to Apple’s CDN within your network.

Content will get cached the first time a client requests it, and it does not need to completely download to the cache before starting to send it to the client. For that first request, it will perform just as if they were downloading it directly from Apple’s servers. If your server starts running low on disk space, the cache server will purge older content that hasn’t been used recently in order to maintain at least 25GB of free disk space.

MacOS Caching Server Configuration

The configuration

If you have multiple subnets and multiple external IPs that you want to do this for, you can either do multiple caching servers (they can share cache between them), or you can configure the Mini to listen on multiple VLANs:

Mac OS network preferences panel

Once you have the machine listening on multiple VLANs, you can tell the caching server which ones to pay attention to, and which public IPs. The Mac itself only needs Internet access from one of those subnets.

MacOS Server Caching Preferences

The first dropdown will give you the option of “All Networks”, “Only Local Subnets”, and “Only Some Networks”. Choosing the last one opens an additional properties box that allows you to define those networks:

Mac OS Server Cache Network Settings

The second one gives you the options of “Matching this server’s network” or “On other networks”. As with the first options, an additional properties box is displayed.

In both cases, hit the plus sign to create a network object:

Mac OS Server Create a New Network

It should be noted here that this only tells the server about existing networks, but it won’t actually create them on the network interface. You’ll still need to do that through the system network preferences mentioned previously. If you don’t want to have the server listen on multiple VLANs, you can just make sure its address is routable from the subnets you wish to have the cache server available, define the external and internal networks it provides service to, and you should be off to the races. This will provide caching for subnet A that NATs to the internet via public IP A, and B to B, and so on. Defining a range of external IPs also has you covered if you use NAT pooling.

There’s also some DNS SRV trickery that may need to happen depending on your environment. There are some additional caveats if your DNS servers are Active Directory read-only domain controllers. This post elaborates on it.


Is it working?

Click the stats link near the top left of the server management window. At the bottom is a dropdown where you can see your cache stats. The red bar shows bytes served from the origin, and green shows from the cache. If you only have one server doing this, you won’t see any blue bars, which are for cache from peer servers. Downside is that you can only go back 7 days.

On this graph, 3/28 was when there were both a major MacOS and iOS update released, hence the huge spike from the origin servers on Apple’s CDN. Nobody has updated from the network yet… But guest traffic at this site is pretty light during the week. I’ll update the image early next week.

MacOS Server Cache Stats

Other useful features

A side benefit of this is that you can also use this to provide a network recovery boot image on the network, in case someone’s OS install ate itself – on the newer Macs with no optical drive, this boots a recovery image from the internet by default. This requires some additional configuration, and the instructions to set up NetInstall are readily available with a quick Google search.

If you want, you can also make this machine the DHCP and local DNS server for your guest network. With some third-party applications, you can also serve up AirPrint to your wireless guests if they need it.

Conclusion

From a guest experience perspective, your guests see their updates downloading really fast and think your WiFi is awesome, and it’s shockingly easy to set up (the longest and most difficult part is probably the actual acquisition of the Mac Mini). It will even cache iCloud data (and encrypts it in the cache storage so nobody’s data is exposed). Even if you have a fat internet pipe, you should really consider doing this, as transfers at LAN speed will reduce the amount of airtime consumed on the wireless and the overall load on your wireless network. (Side note: if you’re a wireless ISP, this sort of setup is just the thing to put between your customer edge network and your IP transit.)

Of course, you could also firewall off Apple iCloud and Updates instead, but why would you do that to your guests? Are you punishing them for something?


Android/Windows users: so sad, Google and Microsoft don’t give you this option. (Microsoft sort of does in a corporate environment with WSUS, but it’s not nearly as easy to pull off, nor is it set up for casual and transient users.) I would love it if Google would set up something like this for the Play Store, Chromebooks, etc., as about half of the client mix that isn’t from Apple is running on Android.

Now, if only we could do the same for Netflix’s CDN. The bandwidth savings would be immense.

Ian doing a Site Survey

“We want wi-fi. Now what?”

I’ve been spending the past week at the annual Wireless LAN Professionals Conference in Phoenix. This is one of my favorite conferences along with the Church IT Network conference, because I get to spend a couple of days geeking out hard with a whole bunch of REALLY smart people. The amount of information I’ve stuffed into my brain since last Friday is a little bit, well, mind-blowing…

I spent the first 3 days getting my Ekahau Certified Survey Engineer credential. For those who are not familiar with the Wi-Fi side of my consulting practice, Ekahau Site Survey is a fantastic tool for developing predictive RF designs for wireless networks, allowing me to optimize the design before I ever pull any new cable or hang access points. One of the key points that’s been touched on frequently throughout the training and the conference is what was termed by one attendee as the “Sacred Ritual of the Gathering of Requirements”. It sounds silly, but this one step is probably the single most important part of the entire process of designing a wireless network.

In the church world (and in the business world), your mission statement is what informs everything you do. Every dollar you spend, every person you hire, every program you offer, should in some way support that mission focus in a clearly defined and measurable manner. A former boss (and current client) defines his IT department’s mission like this: “Our users’ mission is our mission.” This clearly laid out that in IT, we existed to help everyone else accomplish their mission, which in turn accomplished the organization’s mission.

I’ve had more than a few clients say initially that their requirement is “we want wi-fi”. My job as a consultant and an engineer is to flesh out just what exactly “wi-fi” means in your particular context, so that I can deliver a design and a network that will make you happy to write the check at the end of the process. I can’t expect a client to know what they want in terms of specific engineering elements relating to the design. If they did, I’m already redundant.

Whiteboard

Photo: Mitch Dickey/@Badger_Fi

During the conference someone put up a whiteboard, with the following question:

“What are the top key questions to ask a client in order to develop a WLAN design or remediation?”

The board quickly filled up, and I’ll touch on a few really important ones here:

“What do you expect wi-fi to do for you? What problem does it solve?”

It was also stated as:

“What is your desired outcome? How does it support your business?”

This is one of the fundamental questions. It goes back to your mission statement. Another way of putting it is, “How do you hope to use the wi-fi to support your mission?” What you hope to do with wi-fi will drive every single other design decision. The immediate follow-up should be a series of “why?” questions to get to the root of why these outcomes are important to the business goals. You can learn an awful lot by asking “why?” over and over like a 4-year-old child trying to understand the world. This is critical for managing expectations and delivering what the client is paying you a large sum of money to do.

“What is your most critical device/application?”

“What is your least capable and most important device?”

“What other types of devices require wi-fi?”

“What type of devices do your guests typically have?”

It’s nice to have shiny new devices with the latest and greatest technology, but if the wi-fi has to work for everyone, you have to identify the least capable device that’s important and design for that. If you use a bunch of “vintage” Samsung Galaxy phones for barcode scanning or checking in children, then we need to make sure that the coverage will be adequate everywhere you need to use them, and that you select the proper spectrum to support those devices. For the guest network, having at least a rough idea of the mix of iOS and Android devices the guests bring into the facility can inform several design choices.

“What regulatory/policy constraints are there on the network?”

This is hugely important. Another mantra I’ve heard repeated often is, “‘Because you can’ is NOT a strategy!” If your network has specific privacy requirements such as PCI-DSS, HIPAA, any number of industry-specific policies, or even just organizational practices about guest hospitality, network access, etc., these also need to factor into the design and planning process.

I have one client whose organization is a church focused on a 5-star guest experience. What this translated to in terms of Wi-Fi is that they did not want to name the SSIDs with the standard “Guest” and “Staff” monikers that are common. The reasoning was that merely naming the private LAN SSID “Staff” would plant the idea in a guest’s mind that there are two classes of people, one of which may get better network performance because they’re among the elect. It’s also a challenge when you have a lot of volunteers who perform staff-like functions and need access to the LAN. Ultimately, we simply called this network “LAN”: meaningful to the IT staff, and once the staff is connected to it, they no longer think about it. Something as simple as the SSID list presented by a wifi beacon is an important consideration in the overall guest experience.

“What is your budget?”

This one is so obvious it’s often overlooked. As engineers, we like to put shiny stuff into our designs. The reality is, most customers don’t have a bottomless pit of money, especially when they’re non-profits relying on donated funds. While I’d love to design a big fancy Ruckus or Aruba system everywhere I go, the reality is that it’s probably overkill for a lot of places, when a Ubiquiti or EnGenius system will meet all the requirements.

“What are the installation constraints?”

“Which of those constraints are negotiable? Which aren’t?”

Another obvious one that is often overlooked. You need to know when the installation can happen (or can’t), whether any rooms are off-limits, and which potential mounting locations are inaccessible: areas that can’t support a lift, or places you simply can’t get cable to without major work. Aesthetics can be a significant factor for AP selection and placement, wiring, and even configuration (such as turning off the LEDs). While one particular AP may be technically suited to a particular location, how it looks in the room may dictate the choice of something else.

“What is your relationship with your landlord/neighbors/facility manager like?”

I kid you not, this is a bigger factor than you might think. In an office building, being a good wifi neighbor is an important consideration. If the landlord is very picky about where and how communications infrastructure is installed outside the leased space (such as fiber runs through hallways, roof access, antennas outside the building, extra lease charges for technology access), you may encounter some challenges. If your facility manager is particular about damage, you need to factor that into the process as well. This likely also will come into play when you’re doing your site surveys and need access to some parts of the building.

There are a whole host of followup questions beyond these that focus on the more technical aspects of the requirements gathering, and your client may or may not have an answer:

“How many people does this need to support at one time?”

“Where are all these people located?”

“When are they in the building?”

“Where do you need coverage?”

“Where do you NOT need coverage?”

“What is your tolerance level for outages/downtime?”

… and many more that you will develop during this sacred requirements gathering ritual. Many of the technical aspects of the environment (existing RF, channel usage, airtime usage, interference sources, etc.) don’t need to be asked of the client, as you will find them during your initial site survey.

If you’re a wifi engineer, having these questions in your mind will help you develop a better design. If you’re the client, having answers to these questions available will help you get a better design.

What questions are important to your network? Sound off below!

If you need a wireless network designed, overhauled, or expanded, please contact me and we can work on making it work for your organization.

Video Control Room

Automating Video Workflows With PowerShell

Linking today to some great content from another Ian (ProTip: get to know an Ian, we’re full of useful knowledge). Ian Morrish posts about a variety of methods of automating A/V equipment using PowerShell. Lots of useful stuff in here.

No Windows? No worries, you can install PowerShell on MacOS and Linux too.

I’ve put some feelers out to some of my streaming equipment vendors to find out what kind of automation hooks and APIs they support.

Meanwhile, Wowza has a REST API for both its Streaming Engine and Cloud products. Integrating this into PowerShell should be relatively straightforward. Any PowerShell wizards wanna take a stab at it?
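Until one of those wizards steps up, here’s the rough shape of a Streaming Engine call sketched in Python (the same request translates almost directly to PowerShell’s Invoke-RestMethod). Assumptions here: the default REST port of 8087, digest authentication, placeholder credentials, and a response shape I’m reading defensively:

import requests
from requests.auth import HTTPDigestAuth

# Placeholder host and credentials; Streaming Engine's REST API
# listens on port 8087 by default
BASE = 'http://localhost:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_'
AUTH = HTTPDigestAuth('apiuser', 'changeme')
HEADERS = {'Accept': 'application/json'}

# List the applications configured on this server
resp = requests.get(BASE + '/applications', auth=AUTH, headers=HEADERS)
resp.raise_for_status()
for app in resp.json().get('applications', []):
    print(app.get('id'))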

Stay tuned.



Going Serverless: Office 365

I recently completed a project for a small church in Kansas. Several months ago, the senior pastor asked me for a quote on a Windows server to provide authentication as well as file and print services. During the conversation, a few things became clear:

  1. Their desktop infrastructure was completely on Windows 10. Files were being kept locally or in a shared OneDrive account.
  2. The budget they had for this project was not going to allow for a proper server infrastructure with data protection, etc.
  3. This church already uses a web-based Church Management System, so they’re somewhat used to “the cloud” already as part of their workflows.

One of the key features of Windows 10 is the ability to log in to your desktop with an Office 365 account (Windows 8 allowed this against a Microsoft Live account). Another is that for churches and other nonprofits, Office 365’s E2 plan is free of charge.

I set about seeing how we could go completely serverless and provide access not only to the staff for shared documents, but also give access to key volunteer teams and church committees.

The first step was to make sure everybody was on Windows 10 Pro (we found a couple of machines running Windows 10 Home). Tech Soup gave us inexpensive access to licenses to get everyone up to Pro.

Then we needed to make sure the internet connection and internal networking at the site were sufficient to take their data to the cloud. We bumped up the internet speed and overhauled the internal network, replacing a couple of consumer-grade unmanaged switches and access points with a Ubiquiti UniFi solution for the firewall/router, network switch, and access points. This lets me and key church staff manage the network remotely, as the UniFi controller runs on an Amazon Web Services EC2 instance (t2.micro). The new network also gave the church the ability to offer guest wifi access without compromising their office systems.

The next step was to join everyone to the Azure domain provided by Office 365. At this point, all e-mail was still on Google Apps, until we made the cutover.

Once we had login authentication in place, I set about building the file sharing infrastructure. OneDrive seemed to be the obvious solution, as they were already using a shared OneDrive For Business account.

One of OneDrive’s biggest challenges is that, like FedEx, it is actually several different products trying to behave as a single, seamless product. At this, OneDrive still misses the mark. The OneDrive brand consists of the following:

  • OneDrive Personal
  • OneDrive for Business
  • OneDrive for Business in Office 365 (a product formerly known as Groove)
  • Sharepoint Online

All the OneDrive for Business stuff is Sharepoint/Groove under the hood. If you’re not on Office 2016, you’ll want to make the upgrade, because getting the right ODB client in previous versions of Office is a nightmare. Once you get it sorted, it generally works. If you’ve got to pay full price for O365, I would recommend DropBox for Business as an alternative. But it’s hard to beat the price of Office 365 when you’re a small business.

It is very important to understand some of the limitations of OneDrive for Business versus other products like DropBox for Business. Your “personal” OneDrive for Business files can be shared with others by sending them a link, and they can download the file, but you can’t give other users permission to modify them and collaborate on a document. For this, you need to go back to the concept of shared folders, and ODB just doesn’t do this. This is where Sharepoint Online comes in to play.

Naturally, this being Sharepoint, it’s not the easiest thing in the world to set up. It’s powerful once you get it going, but I wasn’t able to simply drop all the shared files into a Sharepoint document library – there’s a 5000-file limit imposed by the software. Because the church’s shared files included a photo archive, there were WAY more than 5000 files in it.

Sharepoint is very picky about getting the right information architecture (IA) set up to begin with. Some things you can’t change after the fact, if you decide you got them wrong. Careful planning is a must.

What I ended up doing for this church is creating a single site collection for the whole organization, and several sites within that collection for each ministry/volunteer team. Each site in Sharepoint has 3 main security groups for objects within a site collection:

  • Visitors (Read-Only)
  • Members (Read/Write)
  • Owners (Read/Write/Admin)

In Office 365, much as it is with on-premises, you’re much better off creating your security groups outside of Sharepoint and then adding those groups to the security groups that are created within Sharepoint. So in this case, I created a “Worship Production” team, added the team members to the group, and then added that group to the Worship Site Owners group in Sharepoint. The Staff group was added to all the Owners groups, and the visitors group was left empty in most cases. This makes group membership administration substantially easier for the on-site admin who will be handling user accounts most of the time. It’s tedious to set up, but once it’s going, it’s smooth sailing.

Once the security permissions were set up for the various team sites, I went into the existing flat document repository and began moving files to the Sharepoint document libraries. The easiest way to do this is to go to the library in Sharepoint, and click the “Sync” button, which then syncs them to a local folder on the computer, much like OneDrive (although it’s listed as Sharepoint). There is no limit to how many folders you can sync to the local machine (well, there probably is, but for all practical purposes, there isn’t). From there it’s a matter of drag and drop. For the photos repository, I created a separate document library in the main site, and told Sharepoint it was a photo library. This gives the user some basic Digital Asset Management capabilities such as adding tags and other metadata to each picture in the library.

So far, it’s going well, and the staff enjoys having access to their Sharepoint libraries as well as Microsoft Office on their mobile devices (iOS and Android). Being able to work from anywhere also gives this church some easy business continuity should a disaster befall the facility — all they have to do is relocate to the local café that has net access, and they can continue their ministry work. Their data has now been decoupled from their facility. I have encountered dozens of churches over the years whose idea of data backup is either “what backup?” or a hard drive sitting next to the computer 24×7, which is of no use if the building burns to the ground or is spontaneously relocated to adjacent counties by a tornado. The staff doesn’t have to worry about the intricacies of running Exchange or Sharepoint on Windows Small Business Server/Essentials. Everything is a web-based administrative panel, and support from Microsoft is excellent in case there’s trouble.

If you’re interested in how to take your church or small business serverless, contact me and I’ll come up with a custom solution.

ProQu24

Controlling Audio With ProPresenter

Our church is a small one, so it’s not always easy to fully staff our tech booth; sometimes one must fly solo. That adds to the workload, and sometimes stuff gets forgotten, like unmuting microphones for the choir or for the person reading the scripture.

Fortunately, there is some tech that can help us in this regard. We use ProPresenter for our graphics presentation, and an Allen & Heath Qu-24 console for our audio. The Qu-24 is connected to the Mac that runs ProPresenter via a USB cable, and shows up on the Mac as a 32-in/32-out audio device, as well as a MIDI device. This is primarily so the console can act as a multitrack and DAW interface, but it also lets us play back audio from ProPresenter media cues without ever leaving the digital domain, saving us a couple of inputs on the board (although there’s no shortage of those). But because it’s also a MIDI device, it gives us some options with ProPresenter’s $99 MIDI module add-on. The Qu series boards can also do MIDI over IP (in fact, the Qu-Pad remote control app for iPad uses MIDI over IP to work its magic). If you’re using MIDI over IP with a Mac, you’ll need a special driver; no driver is needed for USB.

First, a few resources we’ll need:

  • Allen & Heath’s MIDI protocol documentation for the Qu series (available from their website)
  • A MIDI monitoring utility, such as the free MIDI Monitor app for the Mac

In the Qu series, mutes and mute groups are controlled by a Note On/Off message sequence. The specific note determines the channel or mute group being controlled, and the velocity value determines whether it’s being turned on (muted) or off (unmuted). Velocity values below 64 turn the mute off, and values above turn it on.
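If you want to poke at the protocol from code while you experiment (outside of ProPresenter), here’s a minimal sketch using the mido library. The port name and note number are from my setup, so treat them as placeholders:

import mido  # assumes the mido and python-rtmidi packages are installed

CHOIR_MUTE_GROUP = 80  # G#4 on our Qu-24; find yours with a MIDI monitor

# Placeholder port name; list the real ones with mido.get_output_names()
port = mido.open_output('QU-24 MIDI Out')

def set_mute(note, muted):
    # Per the Qu MIDI protocol: velocity above 64 mutes, below 64 unmutes
    velocity = 127 if muted else 63
    port.send(mido.Message('note_on', note=note, velocity=velocity))
    port.send(mido.Message('note_off', note=note, velocity=0))

set_mute(CHOIR_MUTE_GROUP, False)  # unmute the choir mute group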

Meanwhile, over in ProPresenter, since Version 6, we have the ability to add MIDI Note On/Off cues to a slide. See where this is going? Unfortunately, ProPresenter doesn’t have the ability to do anything other than MIDI notes in a slide at the moment, so we can’t get really crazy with starting recordings or anything else requiring non-note MIDI messages.

So how do we know what notes emulate button presses? The documentation provides this handy method:

OK, this requires thinking and math. Not so helpful. This is where the MIDI monitor comes in. Download it and run it, and it shows everything coming across the MIDI interface. Push the button you’re interested in, and lo, MIDI Monitor helpfully shows you what note you’re interested in:

In this case, G#4 is the mute group for our choir. A4 is the mute group for the speaking mics on the chancel. A1 is the lectern mic.

So now, to add a cue at the beginning of a song the choir is singing, I simply have to add two cues to the first slide to turn on the choir microphones:

  • NOTE ON, G#4(80), 63
  • NOTE OFF, G#4(80)

Then I can add a slide at the end of the playlist entry that then turns them back off, or add these to the beginning of the next playlist entry:

  • NOTE ON, G#4(80), 127
  • NOTE OFF, G#4(80)

Likewise, when someone is at the lectern reading scripture, I can unmute that channel automatically using the corresponding note number, and mute them again when they’re done.

On the flip side, you can also use note on/off commands to control ProPresenter. So you *could* also use the Mute, SEL, and PAFL buttons on unused channels to trigger things in ProPresenter (you also want to make sure that you don’t overlap these with the mutes and mute groups that you are actively using so as not to inadvertently advance a slide when hurriedly muting a channel). ProPresenter also conveniently tells you what the last note sent was, so you can actively push the button you want to use, make a note of its number, and put it in the action you wish.


Another approach you can take is to create a presentation in ProPresenter containing blank slides with the various functions you wish to use. Then you can copy these slides into presentations and add a Go To Next timer to them to automatically advance to the next slide. I would also recommend using slide labels and colors to clearly identify what each slide is doing:



If you have controllable lighting and your lighting console also has MIDI capability, this comes in handy as well. And if you’re really a one-man band and like to run pads underneath certain worship elements, you can use this to trigger those too. But if you get to that point, you may want to look into QLab to control all of them at the same time.

So there you have it: a quick and easy way to automate some of your workload with the Qu series boards. If you’ve got another board that you use, let me know in the comments if you do (or would like to do) something like this. Would also love to hear if anyone is using hardware MIDI controllers like the Novation LaunchPad and how you have it set up.

Additional Info:

Summary of MIDI Messages (midi.org)


Nonprofit Tech Deals: Microsoft Azure

Last week while I was at the Church IT Network National Conference in Anderson, SC, a colleague pointed me to a fantastic donation from Microsoft via TechSoup: $5000/year in Azure credit. At a hair over $400/month, this means you can run a pretty substantial amount of stuff. Microsoft just announced this program at the end of September, so it’s still very new. And very cool. Credits are good any time within the 12-month period, so you don’t have to split them up month by month. They do not, however, roll over to the following year.

The context of the conversation was for hosting the open-source RockRMS Church/Relationship Management System, but Wowza Streaming Engine is also available ready to go on Azure. And many other things. (and for those of us in the midwest, Microsoft’s biggest Azure datacenter is “US Central” located in Des Moines, as Iowa is currently a very business-friendly place to put a huge datacenter)

If you’re a registered 501c3 non-profit (or your local country’s equivalent if you’re outside the US), head on over to Tech Soup to take advantage of this fantastic deal.

As an added bonus, if you have Windows Server Datacenter licenses from TechSoup, or licenses your organization purchased with Software Assurance, each 2-socket license can be run on up to two Azure compute instances of up to 8 virtual cores each, reducing the cost of your instances even further (standard Windows instance pricing includes the cost of the Windows license, even at nonprofit rates). This also applies to SQL Server.

Here’s the process:

  1. Read the FAQ.
  2. Register your organization with TechSoup if you haven’t already done so.
  3. Head over to Microsoft’s Azure Product Donations page and hit “Get Started”
  4. At some point in the process you’ll also want to create an Azure account to associate the credits with. If you’re already using Office 365 for nonprofits, it’s best to tie an account to your O365 domain.