On Network Models

One of the most fundamental concepts underlying modern data networking is that of the network “stack”, which consists of individual “layers” that allow one to describe a network without getting lost in the weeds of specific underlying technologies. There are two models in common usage (there are several others as well, but they’re less common):

  • the seven-layer OSI Model (which is largely theoretical), published in 1984 as ISO standard 7498 and officially known as the “Open Systems Interconnection Reference Model” (Kansas connection: the OSI model’s designer, Charles Bachman III, was born in Manhattan, the son of the head football coach at K-State at the time)
  • the four-layer TCP/IP Model (which is a more practical model owing to the widespread use of the internet). The TCP/IP model predates the OSI model and can trace some of its roots to BBN’s early work on internetworking in the late 1960s.

One of the key principles of the model is that each layer is carried by the layer below it. The layers each have their own methods and protocols, which are (for the most part) independent of the layers below that are carrying them from A to B. In the TCP/IP column, I’ve also indicated what type of system operates at that layer.

Network Model

OSI layer | TCP/IP layer (typical device) | OSI protocol data unit (PDU) | Function

Host layers:
  7. Application | Application (Computer) | Data | High-level APIs, including resource sharing and remote file access
  6. Presentation | Application (Computer) | Data | Translation of data between a networking service and an application, including character encoding, data compression, and encryption/decryption
  5. Session | Application (Computer) | Data | Managing communication sessions, i.e. continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes
  4. Transport | Transport (ISP) | Segment | Reliable transmission of data segments between points on a network, including segmentation, acknowledgement, and multiplexing

Media layers:
  3. Network | Internetwork (Router) | Packet | Structuring and managing a multi-node network, including addressing, routing, and traffic control
  2. Data link | Link (Switch) | Frame | Reliable transmission of data frames between two nodes connected by a physical layer
  1. Physical | Link | Bit | Transmission and reception of raw bit streams over a physical medium
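
To make the idea that each layer simply carries the one above it a bit more concrete, here’s a minimal, illustrative sketch in Python (the header fields and addresses are invented for the example; real protocol headers are binary structures, not text):

# Toy encapsulation: each layer prepends its own "header" and treats
# everything above it as opaque payload. All field values are made up.

def encapsulate(app_data: bytes) -> bytes:
    segment = b"TCP|sport=50000|dport=443|" + app_data                    # Layer 4: segment
    packet = b"IP|src=192.0.2.10|dst=198.51.100.7|" + segment             # Layer 3: packet
    frame = b"ETH|dst=aa:bb:cc:dd:ee:ff|src=11:22:33:44:55:66|" + packet  # Layer 2: frame
    return frame                                                          # Layer 1 carries these raw bits

print(encapsulate(b"GET / HTTP/1.1"))  # Layer 7 payload from the application

A router reads and rewrites only the layer 3 portion, a switch looks only at layer 2, and neither one cares what the application data actually is – that independence is the whole point of the model.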

The importance of understanding these network models comes into play when you are designing or troubleshooting a network. Understanding at what level your problem is happening is a major step towards solving it. I’ve seen and answered countless questions on Quora about “why doesn’t X work”, or “can someone on the internet trace me by my MAC address?” and various other questions that can be enlightened by an understanding of the network models. As a general rule, the lower you are in the model, the more physically localised the things you’re dealing with are.

It’s probably difficult to wrap your head around if you’re not used to this kind of stuff. So let me offer up an example of how this same network model manifests itself in the real world, completely unrelated to computer networking. You’ve almost certainly seen it in action. You’ve benefited from it in your life. I give you: Container Shipping.

Container shipping relies on a standardized set of steel containers (also defined by the ISO) that can be used to haul goods efficiently around the world.

Here’s what the Transport Layer looks like:

Notice that the container is itself full of smaller containers (plain cardboard boxes: the session layer), which themselves may contain additional boxes for retail (the presentation layer), which contain an actual product (the application layer). When the container is closed and sealed, its contents go wherever it goes.

But how does it get there? It makes use of the Network Layer. This is where it goes through one or more shipping companies (like ISPs) that get the contents from the factory in China (the server) to the buyer (the client). As in computer networking, the transportation company can use multiple modes of transport to get it there: trucks, trains, ships, and airplanes. These are Layer 2, the Link Layer, and all of them are capable of carrying these containers.

Each of these Layer 2 conveyances rides on a different physical medium: roads (land), rails (land), shipping routes (sea), and flight paths (air).

It’s also worth noting that two of these physical media are bounded media (roads and rails) which constrain the path the vehicle takes. This is akin to a wire or fiber optic cable.  The other two (sea and air) are unbounded, which means the vehicles can take any path they choose. This is akin to free-space optical transmission and wireless RF transmission – it also means those are more susceptible to interception (hijacking/piracy).

I mentioned earlier that a layer 3 device is a router. Its job is to get the data from one network to another. What does this look like in container shipping? One of those giant cranes that removes containers from one conveyance to be loaded onto another. Sometimes it will buffer them in a container yard before sending them on to their destination.

 

Once the container gets to its destination, it is signed for (an ACKnowledgement in the network world), and the signature is sent back to the shipper to confirm that it arrived at its destination – this is a transport layer function, as it is the shipper’s responsibility to make sure stuff gets there on time. The buyer and shipper (Layer 7) don’t really care *how* it got there, just that it did.

 

EC2 Monitoring with Raspberry Pi

I’ve been doing a little Raspberry Pi hacking lately, and put together a neat way to have physical status LEDs on your desk for things like EC2 instances.

The Hardware

In its most basic form, you can simply hook up an LED and a current-limiting resistor between a ground line and a GPIO line on the Pi, but that doesn’t scale especially well – you can run out of GPIO lines pretty quickly, especially if you’re doing different colors for each status. Plus, it’s not overly elegant.
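
(For reference, the “most basic form” looks something like this sketch using the RPi.GPIO library – the pin number and wiring are assumptions for illustration: one LED per GPIO pin, through a current-limiting resistor to ground.)

# Minimal single-LED sketch: assumes BCM pin 17 -> ~330 ohm resistor -> LED -> GND
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)

try:
    while True:
        GPIO.output(17, GPIO.HIGH)  # LED on: e.g. "instance is running"
        time.sleep(1)
        GPIO.output(17, GPIO.LOW)   # LED off
        time.sleep(1)
finally:
    GPIO.cleanup()  # release the pin on exit

One pin per LED, no color, and no easy way to scale – which is exactly the problem.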

The solution? Unicorns!

No, really. The fine folks at Pimoroni in Sheffield, UK have made a lovely little HAT device for the Pi called the Unicorn HAT. Its primary purpose is lots of blinky lights to make pretty rainbows and stuff, hence the name. However, this HAT is a 4×8 (or an 8×8) array of individually addressable RGB LEDs driven from a single data line, so it doesn’t eat up a GPIO line per LED (a good thing, otherwise it would require 96 separate lines). The unicornhat library is available for Python 2 and Python 3 in the Raspbian repo (python3-unicornhat for Python 3). When installed onto the Pi, the Unicorn will fit within a standard Raspberry Pi case.

The Code

This is my first foray into Python, so there was a bit of a learning curve. If you’re familiar with object-oriented code concepts, this should be easy for you. Python is much more parsimonious with punctuation than PHP or perl are.

For accessing the EC2 data, we’ll need Amazon’s boto3 library, also available in the Raspbian repo (python3-boto3). One area where boto3 is really nice is that the data is returned directly as a dict object (what users of other languages would call an associative array), so you don’t have to mess with converting JSON or XML into an object structure, and it can be manipulated as you would any other associative array (or a hash, for you old-timers that use perl). AWS returns a fairly complex object, so you kind of have to dig into it via a few iterative loops to extract the data you’re after.

From there, it’s a matter of assigning different RGB values to the states. I chose these ones:

  • stopped: red
  • pending: green
  • running: blue
  • stopping: yellow(ish)

I also discovered that I needed to assign a specific pixel to each instance ID, otherwise they tended to move around a bit depending on what order AWS returned them on a particular request.

Here’s what the second iteration looks like in action:

import boto3 as aws
import unicornhat as unicorn
import time

# Initialize the Unicorn
unicorn.clear()
unicorn.show()
unicorn.brightness(0.5)

# Create an EC2 client
ec2 = aws.client('ec2')

# Define a color for each instance state
color = {}
color['stopped'] = {'red': 255, 'green': 0, 'blue': 0}
color['pending'] = {'red': 64, 'green': 255, 'blue': 0}
color['running'] = {'red': 32, 'green': 32, 'blue': 255}
color['stopping'] = {'red': 192, 'green': 128, 'blue': 32}

# Assign a fixed pixel to each instance ID so they don't move around
pixel = {}
pixel['i-0fa4ea2560aa17ffd'] = {'x': 0, 'y': 0}
pixel['i-06b95cd864acb1a8c'] = {'x': 0, 'y': 1}
pixel['i-0661da0f50ffb604c'] = {'x': 0, 'y': 2}
pixel['i-063ec151e0f44ef9b'] = {'x': 0, 'y': 3}
pixel['i-02c514ca567d8a033'] = {'x': 0, 'y': 4}

# Loop until forever
while True:

    # Ask AWS for all instances, then build a table of instance ID -> state
    response = ec2.describe_instances()

    statetable = {}
    for res in response['Reservations']:
        for inst in res['Instances']:
            iid = inst['InstanceId']
            state = inst['State']['Name']
            statetable[iid] = state

    # Light the pixel assigned to each instance with its state color
    for ec2inst in statetable:
        if ec2inst not in pixel or statetable[ec2inst] not in color:
            continue  # skip instances or states we haven't mapped
        x = pixel[ec2inst]['x']
        y = pixel[ec2inst]['y']
        r = color[statetable[ec2inst]]['red']
        g = color[statetable[ec2inst]]['green']
        b = color[statetable[ec2inst]]['blue']
        unicorn.set_pixel(x, y, r, g, b)

    unicorn.show()
    time.sleep(1)

For the moment, this is just monitoring EC2 status, but I’m going to be adding checks in the near future to do things like ping tests, HTTP checks, etc. Stay tuned.
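
As a rough preview of what one of those checks could look like (this isn’t part of the script above, and the URL and pixel coordinates are placeholders), an HTTP check can drive a pixel the same way the EC2 states do:

# Sketch of an HTTP health check lighting a pixel; URL and pixel position are placeholders
import requests
import unicornhat as unicorn

def http_check(url, x, y):
    # Green if we get a 200 back, red for any error or timeout
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    if ok:
        unicorn.set_pixel(x, y, 0, 255, 0)
    else:
        unicorn.set_pixel(x, y, 255, 0, 0)
    unicorn.show()

http_check("https://example.com/health", 7, 0)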

Streaming to multiple simultaneous destinations

Live streaming has been a “thing” for some time. I work with many churches to help them solve their streaming challenges and develop their technology strategy for streaming. One of the most frequent questions I hear is, “can I stream to Facebook Live and still keep my other stream?” Fortunately, this is a lot easier than it used to be. There are variations on this question, but they all boil down to wanting to know how to send one stream to multiple outlets to expand audience reach.

Method 1:

Multiple outputs from your encoder

Several software encoder platforms support multiple outputs. The easiest among these is probably Telestream’s Wirecast software. (The free/open-source Open Broadcaster Software (OBS Studio) does this as well, but I don’t have much experience with it, and I prefer the Wirecast interface, which is much more polished.) With Wirecast, it’s merely a matter of adding the additional outputs for the various streaming services that are supported. The downside to this approach is that you’ll need more bandwidth, as you are sending the same stream multiple times.


Method 2:

The Cloud

1. Teradek Core

This is a vendor-specific approach that integrates with Teradek’s pro-grade encoders (Cube, Bond, Slice, and T-Rax). It provides a single pane of glass that lets you manage your entire fleet of encoder devices (and control/configure them remotely), and then virtually patch the output of those encoders to one or more outputs. You can also use their Live::Air apps for iOS as an input (stay tuned for a post about using Live::Air). If you are using a Bond product, the input is via their Sputnik server, which allows you to spread the stream across multiple connections for extra bandwidth and redundancy; the stream is then reassembled before being sent on to the next step.

In this example, I’m taking an input stream from the Live::Air Solo app on my iPhone and sending it to Wowza Streaming Cloud and Facebook Live, all while recording the incoming stream:


This is a simple drag and drop operation: drag a source on the left into the workspace, and then drag one or more destinations from the left – these can be:

Teradek decoders (this is great for a multisite church scenario)

Channels (which are external stream destinations):


Groups (a combination of the above):


If you click the “Auto” box on the outputs, it will start that output automatically when the stream is available from the input.

When you create stream destinations for social sites, it will authenticate you against that site and keep that authentication.

You can manage a lot of inputs and outputs this way. This example from Teradek’s marketing department shows the scale:


 

2. Wowza Streaming Engine/Streaming Cloud

Similar to Core, but not tied to a specific vendor, Wowza Streaming Engine provides Stream Targets as of version 4.4 (the underlying functionality has been in the software since sometime in version 2 as the PushPublish module; Stream Targets integrates it into the UI). Facebook Live support has been an option almost since the very beginning of Facebook Live. YouTube Live support is there as well, but as a standard RTMP destination.

Similarly, Wowza Streaming Cloud also offers this capability under the “Advanced Menu”:


From there, you can create a stream target:


 

Once that target is created, simply go into a transcoder output and add it (you can also create a target directly from there):


 

As with Core, you can add multiple destinations to a transcoder output – generally speaking, you’ll want to send your best output to places like FB Live, YouTube, etc., as they do their own internal transcoding.


 

Method 3:

Multiple Encoders

This is the obvious one, but also the least efficient both in terms of hardware and bandwidth. Each encoder goes to its own destination. This generally requires signal distribution amplifiers and other extra hardware.

 

 

Using Bitmovin Player with Church Online Platform

Today’s post will be a brief tutorial on using Bitmovin‘s excellent HTML5 video player with Church Online Platform.

If you’re a church that wants to go live and you haven’t discovered COP, it’s a marvelous product from the fine folks on the life.church Digerati Team (the same people who created the Bible App and made it available on just about every platform known to mankind). It’s a free hosted platform that lets you deliver church online. All you have to do is bring your own streaming provider and provide an embed code. You can use your provider’s player, or you can use your own player. The Digerati team are also a client of mine, and I really enjoy working with them – they’re talented, nerdy, and very good at what they do. (Most recently, I helped them build out their Wowza Streaming Engine capability for automating the scheduling and delivery of simulated live events.)

One of my favorite video players out there right now is from Bitmovin. They provide a CDN-hosted player with excellent analytics (complete with API access for the especially nerdy), and usage is free for the first 5000 impressions (pricing is quite reasonable as you scale up from there). For this reason alone, it’s an excellent choice for churches getting started with streaming. Its other major benefit is that because it is written in HTML5 and Javascript, it will work on just about anything you can throw at it (for the really archaic devices, it still has a Flash component). It’s also designed from the ground up to support the new MPEG-DASH standard, but if you’re using a streaming CDN or service that doesn’t provide DASH, no big deal: the player also supports HLS, even for Flash delivery, for those 3 devices that still haven’t discovered modern streaming technology or are running a particularly ancient version of Android. As an added bonus, Bitmovin’s player also supports VR and 360 streaming (as does Wowza Streaming Cloud).

For starters, you’ll need to sign up for an account, which will give you player information. One thing you’ll want to make sure you do is add your churchonline.org domain to the allowed domains for your license key. This is under Player/Overview:

Bitmovin Player Domain Config

If you forget to do this, the player will simply show an error telling you you need to do it.  This keeps someone from using your player key on their site, so be sure to use yourdomain.churchonline.org, not just churchonline.org.

To put this in your COP page, go to the event where you wish to use the player, and go to the Video tab:

ChurchOnline Event Settings

When you go to the Embed menu, you will see code to put it on the page (under Default video embed code). This is a little more involved than your standard embed code.

Bitmovin Embed Controls

A couple of key things to note here with regards to COP:

  1. In order to put the <script> stuff in your <head> section, you’d need to create a custom theme in COP. This is not necessary (in fact, putting that script statement in the head that way doesn’t work). What you’ll need to do is simply put the <script> piece just above the rest of it in the default embed code section.
  2. You’ll need to edit the source section in that code. If all you’re doing is HLS, you can remove the dash and progressive entries. Leave the HLS entry in place and put in the HLS URL provided by your streaming platform. In the case of Wowza Streaming Cloud, this is located at the bottom of the Overview tab of your streaming application under “Playback URLs”.
  3. The “poster” entry is the image the player shows when you’re not streaming any video.

So, for my test stream, the embed code looks like this:


<script type="text/javascript" src="https://bitmovin-a.akamaihd.net/bitmovin-player/stable/7/bitmovinplayer.js"></script>

<div id="player"></div>
<script type="text/javascript">
  var conf = {
    key: "d8XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX2e",
    source: {
      hls: "http://wowzaprod103-i.akamaihd.net/hls/live/######/########/playlist.m3u8",
      poster: "http://dontfenceme.in/wp-content/uploads/2013/09/g-global-background.jpg"
    }
  };
  var player = bitmovin.player("player");
  player.setup(conf).then(function(value) {
    // Success
    console.log("Successfully created bitmovin player instance");
  }, function(reason) {
    // Error!
    console.log("Error while creating bitmovin player instance");
  });
</script>

The console.log lines aren’t necessary, but potentially useful when trying to debug why it can’t instantiate the player.

If you want to run a separate video when the doors aren’t open, put that under the offline video embed code section. You can leave the mobile and low sections empty, as your stream is probably already adaptive from your streaming provider.

Save it, and this is what you get:

BitMovin in ChurchOnline Platform

In order to remove the Bitmovin logo, edit the theme’s CSS and add the following lines:


/* Remove Bitmovin Logo on player */
.bmpui-ui-watermark {
  display: none;
}

ChurchOnline Platform CSS Edit

A Story of Cats

This is the internet, so at some point we’ve got to talk about cats. It’s in the rule book.  The Internet runs on cats. Cat pictures, cat videos, and… cat cables.

Those of you not familiar with the intricacies of the first layer of the OSI “7-layer Burrito” (Internet old-timers will remember this) are probably blissfully unaware of the gory details of the wiring that makes everything (including wireLESS) work.

Dilbert (April 24, 2010)


So who are all these cats, anyway?

Simply put, it’s an abbreviation for “Category”. The Telecommunications Industry Association (TIA) has adopted a series of specifications over the years defining cable performance to transport various types of networks.

Here’s a quick rundown. We’re gonna get a tech lesson AND a history lesson all rolled into one.

Category 1 (pre-1980)


An IBM “Type 1” Token-Ring connector. Known colloquially as a “Boy George Connector” due to its ambiguous gender. Photo: Computer History Museum

This never officially existed, and was a retroactive term used to define “Level 1” cable offered by a major distributor. It is considered “voice grade copper”, sufficient to run signals up to 1MHz, and not suitable for data of any sort (except telephone modems). You could probably meet category 1 requirements with a barbed wire fence. You laugh, but it’s been done. Extensively.

Category 2 (mid-1980s)

Like Category 1, Category 2 never officially existed; it was a name retroactively given to Level 2 cable from that same distributor. Cat2 brought voice into the digital age. It could support 4MHz of bandwidth, and was used extensively for early Token-Ring networks that operated at 4Mbps, as well as ARCNet, which operated at 2.5Mbps on twisted pair (it had previously used coaxial cable).

Category 3 (1991)

This is the first of the cable categories officially recognized by TIA. It is capable of carrying 10Mbps Ethernet over twisted pair (like ARCNet, Ethernet also ran on coaxial cable in the very early days). Category 3 wire was deployed extensively in the early 1990s as it was a much better alternative to running Ethernet over coax. This is where the now nearly ubiquitous 8P8C connector (often incorrectly referred to as “RJ45”) came into usage for Ethernet, and it’s still in use nearly three decades later. Both the connector pinout and the cable performance are defined in TIA standard 568. Since token-ring networks still operated at 4Mbps, they ran quite happily over this new spec. In 2017, one can still occasionally find Cat3 in use for analog and digital phone lines. The 802.3af Power over Ethernet specification is compatible with this type of wire.

Category 4 (early 1990s)

This stuff existed only for a very brief period of time. In the late 1980s, IBM standardized a newer version of Token Ring that ran at 16Mbps, which required more cable bandwidth than what Category 3 could offer. Category 4 offered 20MHz to work with (which may sound familiar to the wifi folks, who use 20MHz channels a lot). But Category 5 came along pretty quickly, and Category 4 was relegated to history and is no longer recognized in the current TIA-568 standard.

Category 5 (1995)

TIA revised their 568 standard in 1995 to include a new category of cable, supporting 100MHz of bandwidth. This enabled the use of new 100Mbps ethernet (a 100Mbps version of Token Ring soon followed, which also used the same 8P8C connector as Ethernet).


An 8P8C connector, commonly (but incorrectly) referred to as “RJ45”. This has been the standard twisted-pair Ethernet connector for the last quarter century.

Category 5e (2001)

TIA refined the Category 5 spec to improve its performance and support the new gigabit Ethernet standard. It is still a 100MHz cable, but new coding schemes and the use of all four pairs allowed the gigabit rate. IBM and the 802.5 working group even approved a gigabit standard for Token Ring in 2001, but no products ever made it to market, as Ethernet had taken over completely by that point.

Category 6 (2002)

Not long after Category 5e came to be, along came Category 6, with 250MHz of bandwidth. This was accomplished partly with better cable geometry and by going from 24AWG conductors to 23AWG. This increased bandwidth allows 10Gbps Ethernet to operate on cables up to 55 meters in length.

Category 6a (2009)

This refinement to Category 6 increased cable bandwidth to 500MHz in order to allow 10Gbps Ethernet to operate at the full 100m length limit for Ethernet. Categories 6 and 6a will support the new 802.3bt Power over Ethernet Type 3 (60W) and Type 4 (90W) standards (expected 2018), provided that cable bundles do not exceed 24 cables for thermal reasons.

Category 7/7a


Category 7 cable. Who would want to terminate that? What a pain!

This one never existed in the eyes of the TIA. It lives on as an ISO standard defining several different types of shielded cable whose performance is comparable to Category 6a (bandwidth up to 600MHz for Cat7, 1GHz for Cat7a). Both of these specs were rendered moot by 10Gbps Ethernet operating on Category 6a with standard 8P8C connectors. This cat was so ugly, TIA left it at the shelter.

Category 8 (2017)

The latest and greatest, this cable exists to run 40Gbps ethernet. It comes in two flavors: 8.1, which uses the familiar 8P8C-style connectors, and 8.2, which supplants the shielded Category 7 specs and their connectors. Depending on the variant, its bandwidth runs from 1600MHz to 2000MHz.

 

So there you have it. The cats that put the WORK in “Network”. And because this is the internet, I leave you with gratuitous kittens.

Gratuitous Kittens

Enhancing the public Wi-Fi experience

Recently, there was an excellent blog post from WLAN Pros about “Rules for successful hotel wi-fi”. While it is aimed primarily at Wi-Fi in the hotel business (where there is an overabundance of Bad-Fi), many of the tips presented also apply to a wide variety of large-scale public venue wifi installations. There’s lots of great information in the post, and it’s well worth a read.

At the 2016 WLPC there was an interesting TENTalk from Mike Liebovitz at Extreme Networks about the pop-up wifi at Super Bowl City in San Francisco, where analytics pointed to a significant portion of the traffic being headed to Apple.

Meanwhile, a few months later at the 2016 National Church IT Network conference, a TENTalk about Apple’s MacOS Server introduced me to this incredibly useful feature (sadly, it wasn’t recorded that I know of, so I can’t give credit…)

With most of the LPV installations I’ve worked on, I’ve found the typical client mix includes about 60% Apple devices (mostly iOS). For example, this is at a large church whose wireless network I installed. (Note that Windows machines make up less than 10% of the client mix on wifi!)

Client mix from Ruckus ZoneDirector

OK, So what?

This provides an opportunity to make the wifi experience even better for your (Apple-toting) guests. Whenever possible, as part of the “WiFi System” I will install an Apple Mac Mini loaded with MacOS Server. This allows me to turn on caching. This is not just plain old web caching like you would get with a proxy server such as Squid, but rather a cache for all things Apple. What does this do for your fruited guests? It speeds up the download of software distributed by Apple through the Internet. It caches all software and app updates, App Store purchases, iBook downloads, iTunes U downloads (apps and books purchases only), and Internet Recovery software that local Mac and iOS devices download.

Why is this of interest and importance? Let me give you an example: a few years ago, we were hosting a national Church IT Round Table conference at Resurrection on a day when Apple released major updates to MacOS, iOS, and their iWork suite. In addition to the 50 or so staff Mac machines on the network, there were another hundred or two Mac laptops and iThings among the conference attendees. The 200Mbps internet pipe melted almost instantly under the load of 250 devices each requesting 3-5GB of updates. That would have melted even a gigabit pipe, and probably given a 10Gbps pipe a solid run for its money (not to mention bogging down some of the uplinks on the internal network!). It didn’t do great things to the access points in the conference venue either, all of which were struggling not only for airtime but also for backhaul. Having a caching server would have mitigated this.
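
To put rough numbers on that: 250 devices pulling roughly 4GB apiece is about 1TB of downloads. Even if that 200Mbps pipe were fully saturated (about 25MB/s), that works out to something like 11 hours of continuous downloading. With a local cache, only the first copy of each update has to cross the WAN; every subsequent device gets it at LAN speed.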

Just by way of an example, Facebook updates their app every two weeks, and its current incarnation (86.0, March 30, 2017) weighs in at 320MB (the previous one was about half that!), while its close pal Messenger clocks in at 261MB. Almost everyone has those two apps, so they’re going to find themselves in your cache almost instantly, along with numerous other popular apps. Apple’s iWork suite apps and Microsoft Office apps all weigh in around 300-500MB apiece as well. This has the potential to murder your network when you least expect it. (A few years back, the church where I was working hosted the national Church IT conference that happened to coincide with Apple’s release of OSX Mavericks and a major iWork update for both iOS and MacOS. The conference Wi-Fi and the church’s 200Mbps WAN pipe melted under the onslaught of a couple hundred Apple devices belonging to the guest nerds and media staff dutifully downloading the updates.)

In any case, check out the network usage analytics from either your wireless controller or your firewall. If Apple.com is anywhere near the top of the list (or on it at all), you owe it to yourself and your guests to implement this type of solution.

Network Statistics from Ubiquiti UniFi

The Technical Mumbo-Jumbo

Hardware

As mentioned previously, a Mac Mini will do the job nicely. If you’re looking to do this on the cheap, it will happily run on a 2011-vintage Mini (you can find used Mac Minis on Craigslist or eBay all day long for cheap), just make sure you add some extra RAM and a storage drive that doesn’t suck (the stock 5400rpm spinning disks on the pre-2012 era Mac Mini and iMacs were terrible.) Fortunately, 2.5″ SSDs are pretty cheap these days. Newer Minis will have SSD baked in already.

If you’re wanting to put the Mac Mini in the datacenter, you might want to consider using a Sonnet RackMac Mini (available on Amazon for about $139), which can hold one or two machines.

Sonnet RackMac Mini

You can also happily run this off of one of the 2008-era “cheese grater” Mac Pros that has beefier processing and storage (and also fits in a rack, albeit not in the svelte 1U space the Sonnet box uses). If you have money to burn, then by all means use the “trash can” Mac Pro (Sonnet also makes a rack chassis for that model!).

This is a great opportunity to re-purpose some of those Macs sitting on the shelf after your users have upgraded to something faster and shinier.

Naturally, if you’re running a REALLY big guest network, you’ll want to look at something beefy, or a small farm of Minis with SSD storage (the MacOS Server caching system makes it quite easy to deploy multiple machines to support the caching).

The Software

MacOS Server (Mac App Store, $19.99)

Since most of your iOS guests will have updates turned on, one of the first things an iOS device does when it sees a big fat internet pipe that isn’t from a cell tower is check for app updates. If you regularly host lots of guests, you will need to fortify your network against the onslaught of app update requests that will inevitably hit whenever the building is full.

The way it works is this: When an Apple device makes a request to the CDN, Apple looks at the IP you’re coming from and says, “You have a local server on your LAN, get your content from there, here’s its IP.” The result being that your Apple users will get their updates and whatnot at LAN speeds without thrashing your WAN pipe every time anyone pushes out a fat update to an app or the OS, which is then consumed by several hundred people using your guest wifi over the course of a week. You’ve effectively just added an edge node to Apple’s CDN within your network.

Content will get cached the first time a client requests it, and it does not need to completely download to the cache before starting to send it to the client. For that first request, it will perform just as if they were downloading it directly from Apple’s servers. If your server starts running low on disk space, the cache server will purge older content that hasn’t been used recently in order to maintain at least 25GB of free disk space.

MacOS Caching Server Configuration

The configuration

If you have multiple subnets and multiple external IPs that you want to do this for, you can either do multiple caching servers (they can share cache between them), or you can configure the Mini to listen on multiple VLANs:

Mac OS network preferences panel

Once you have the machine listening on multiple VLANs, you can tell the caching server which ones to pay attention to, and which public IPs. The Mac itself only needs Internet access from one of those subnets.

MacOS Server Caching Preferences

The first dropdown will give you the option of “All Networks”, “Only Local Subnets”, and “Only Some Networks”. Choosing the last one opens an additional properties box that allows you to define those networks:

Mac OS Server Cache Network Settings

The second one gives you the options of “Matching this server’s network” or “On other networks”. As with the first options, an additional properties box is displayed.

In both cases, hit the plus sign to create a network object:

Mac OS Server Create a New Network

It should be noted here that this only tells the server about existing networks, but it won’t actually create them on the network interface. You’ll still need to do that through the system network preferences mentioned previously. If you don’t want to have the server listen on multiple VLANs, you can just make sure its address is routable from the subnets you wish to have the cache server available, define the external and internal networks it provides service to, and you should be off to the races. This will provide caching for subnet A that NATs to the internet via public IP A, and B to B, and so on. Defining a range of external IPs also has you covered if you use NAT pooling.

There’s also some DNS SRV trickery that may need to happen depending on your environment. There are some additional caveats if your DNS servers are Active Directory read-only domain controllers. This post elaborates on it.

 

Is it working?

Click the stats link near the top left of the server management window. At the bottom is a dropdown where you can see your cache stats. The red bar shows bytes served from the origin, and green shows from the cache. If you only have one server doing this, you won’t see any blue bars, which are for cache from peer servers. Downside is that you can only go back 7 days.

On this graph, 3/28 was when there were both a major MacOS and iOS update released, hence the huge spike from the origin servers on Apple’s CDN. Nobody has updated from the network yet… But guest traffic at this site is pretty light during the week. I’ll update the image early next week.

MacOS Server Cache Stats

Other useful features

A side benefit of this is that you can also use this to provide a network recovery boot image on the network, in case someone’s OS install ate itself – on the newer Macs with no optical drive, this boots a recovery image from the internet by default. This requires some additional configuration, and the instructions to set up NetInstall are readily available with a quick Google search.

If you want, you can also make this machine the DHCP and local DNS server for your guest network. With some third-party applications, you can also serve up AirPrint to your wireless guests if they need it.

Conclusion

From a guest experience perspective, your guests see their updates downloading really fast and think your WiFi is awesome, and it’s shockingly easy to set up (the longest and most difficult part is probably the actual acquisition of the Mac Mini). It will even cache iCloud data (and encrypts it in the cache storage so nobody’s data is exposed). Even if you have a fat internet pipe, you should really consider doing this, as the transfers at LAN speed will reduce the amount of airtime consumed on the wireless and the overall load on your wireless network. (Side note: if you’re a Wireless ISP, this sort of setup is just the thing you ought to put between your customer edge network and your IP transit.)

Of course, you could also firewall off Apple iCloud and Updates instead, but why would you do that to your guests? Are you punishing them for something?

Android/Windows users: so sad, Google and Microsoft don’t give you this option (although Microsoft sort of does in a corporate environment with WSUS, but it’s not nearly as easy to pull off, nor is it set up for casual and transient users). I would love it if Google would set up something like this for the Play Store, Chromebooks, etc., as about half of the client mix that isn’t from Apple is running on Android. You can sort of do it by installing a transparent proxy like Squid.

Now, if only we could do the same for Netflix’s CDN. The bandwidth savings would be immense.

Update

(Added November 16, 2017)

As of the release of MacOS High Sierra and MacOS Server 5.4 (release notes), the caching service is now integrated into the core of MacOS, so any Mac on the network can do it, without even needing to install Server. The new settings are under System Preferences > Sharing:

 

 

Ian doing a Site Survey

“We want wi-fi. Now what?”

I’ve been spending the past week at the annual Wireless LAN Professionals Conference in Phoenix. This is one of my favorite conferences along with the Church IT Network conference, because I get to spend a couple of days geeking out hard with a whole bunch of REALLY smart people. The amount of information I’ve stuffed into my brain since last Friday is a little bit, well, mind-blowing…

I spent the first 3 days getting my Ekahau Certified Survey Engineer credential. For those who are not familiar with the Wi-Fi side of my consulting practice, Ekahau Site Survey is a fantastic tool for developing predictive RF designs for wireless networks, allowing me to optimize the design before I ever pull any new cable or hang access points. One of the key points that’s been touched on frequently throughout the training and the conference is what was termed by one attendee as the “Sacred Ritual of the Gathering of Requirements”. It sounds silly, but this one step is probably the single most important part of the entire process of designing a wireless network.

In the church world (and in the business world), your mission statement is what informs everything you do. Every dollar you spend, every person you hire, every program you offer, should in some way support that mission focus in a clearly defined and measurable manner. A former boss (and current client) defines his IT department’s mission like this: “Our users’ mission is our mission.” This clearly laid out that in IT, we existed to help everyone else accomplish their mission, which in turn accomplished the organization’s mission.

I’ve had more than a few clients say initially that their requirement is “we want wi-fi”. My job as a consultant and an engineer is to flesh out just what exactly “wi-fi” means in your particular context, so that I can deliver a design and a network that will make you happy to write the check at the end of the process. I can’t expect a client to know what they want in terms of specific engineering elements relating to the design. If they did, I’d already be redundant.

Whiteboard

Photo: Mitch Dickey/@Badger_Fi

During the conference someone put up a whiteboard, with the following question:

“What are the top key questions to ask a client in order to develop a WLAN design or remediation?”

The board quickly filled up, and I’ll touch on a few really important ones here:

“What do you expect wi-fi to do for you? What problem does it solve?”

It was also stated as:

“What is your desired outcome? How does it support your business?”

This is one of the fundamental questions. It goes back to your mission statement. Another way of putting it is “How do you hope to use the wi-fi to support your mission?” What you hope to do with wi-fi will drive every single other design decision. The immediate follow-up question should be a series of “why?” questions to get to the root cause of why these outcomes are important to the business goals. You can learn an awful lot by asking “why?” over and over like a 4-year-old child trying to understand the world. This is critical for managing expectations and delivering what the client is paying you a large sum of money to do.

“What is your most critical device/application?”

“What is your least capable and most important device?”

“What other types of devices require wi-fi?”

“What type of devices do your guests typically have?”

It’s nice to have shiny new devices with the latest and greatest technology, but if the wi-fi has to work for everyone, your design has to assume the least capable device that’s important, and design for that. If you use a bunch of “vintage” Samsung Galaxy phones for barcode scanning or checking in children, then we need to make sure that the coverage will be adequate everywhere you need to use them, and that you select the proper spectrum to support those devices. For the guest network, having at least a rough idea of what mix of iOS and Android devices the guests bring into the facility can inform several design choices.

“What regulatory/policy constraints are there on the network?”

This is hugely important. Another mantra I’ve heard repeated often is, “‘Because you can’ is NOT a strategy!” If your network has specific privacy requirements such as PCI-DSS, HIPAA, any number of industry-specific policies, or even just organizational practices about guest hospitality, network access, etc., these also need to factor into the design and planning process.

I have one client whose organization is a church focused on a 5-star guest experience. What this translated to in terms of Wi-Fi is that they did not want to name the SSIDs with the standard “Guest” and “Staff” monikers that are common. The reasoning was that merely naming the private LAN SSID “Staff” would create in a guest’s mind the idea that there are two classes of people, one of which may get better network performance because they’re one of the elect. It’s also a challenge when you have a lot of volunteers who perform staff-like functions and who need access to the LAN. Ultimately, we simply called this network “LAN”: meaningful to the IT staff, and once the staff is connected to it, they no longer think about it. Something as simple as the SSID list presented by a wifi beacon is an important consideration in the overall guest experience.

“What is your budget?”

This one is so obvious it’s often overlooked. As engineers, we like to put shiny stuff into our designs. The reality is, most customers don’t have a bottomless pit of money, especially when they’re non-profits relying on donated funds. While I’d love to design a big fancy Ruckus or Aruba system everywhere I go, the reality is that it’s probably overkill for a lot of places, when a Ubiquiti or EnGenius system will meet all the requirements.

“What are the installation constraints?”

“Which of those constraints are negotiable? Which aren’t?”

Another obvious one that is often overlooked. You need to know when the installation can happen (or can’t happen), whether there are rooms that are off-limits, potential mounting locations that are inaccessible, areas that can’t support a lift, or areas that you simply can’t get cable to without major work. Aesthetics can be a significant factor for AP selection, placement, wiring, and even configuration (such as turning off the LEDs). While one particular AP may be technically suited to a particular location, how it looks in the room may dictate the choice of something else.

“What is your relationship with your landlord/neighbors/facility manager like?”

I kid you not, this is a bigger factor than you might think. In an office building, being a good wifi neighbor is an important consideration. If the landlord is very picky about where and how communications infrastructure is installed outside the leased space (such as fiber runs through hallways, roof access, antennas outside the building, extra lease charges for technology access), you may encounter some challenges. If your facility manager is particular about damage, you need to factor that into the process as well. This likely also will come into play when you’re doing your site surveys and need access to some parts of the building.

There are a whole host of followup questions beyond these that focus on the more technical aspects of the requirements gathering, and your client may or may not have an answer:

“How many people does this need to support at one time?”

“Where are all these people located?”

“When are they in the building?”

“Where do you need coverage?”

“Where do you NOT need coverage?”

“What is your tolerance level for outages/downtime?”

… and many more that you will develop during this sacred requirements gathering ritual. Many of the technical aspects of the environment (existing RF, channel usage, airtime usage, interference source, etc) don’t need to be asked of the client, as you will find them during your initial site survey.

If you’re a wifi engineer, having these questions in your mind will help you develop a better design. If you’re the client, having answers to these questions available will help you get a better design.

What questions are important to your network? Sound off below!

If you need a wireless network designed, overhauled, or expanded, please contact me and we can work on making it work for your organization.

Automating Video Workflows With PowerShell

Linking today to some great content from another Ian (ProTip: get to know an Ian, we’re full of useful knowledge). Ian Morrish posts about a variety of methods of automating A/V equipment using PowerShell. Lots of useful stuff in here.

No Windows? No worries, you can install PowerShell on MacOS and Linux too.

I’ve put some feelers out to some of my streaming equipment vendors to find out what kind of automation hooks and APIs they support.

Meanwhile, Wowza has a REST API for both its Streaming Engine and Cloud products. Integrating this into PowerShell should be relatively straightforward. Any PowerShell wizards wanna take a stab at it?

Stay tuned.

 

Streaming Church to Facebook Live

Note: Somehow this got stuck in the publishing queue and never got the green light… So here it is, a few months after writing, but still relevant…

This past weekend saw much of the upper midwest plunged into an arctic deep freeze, leading many churches in the region to cancel services (we woke up Sunday morning to temperatures near -10°F and a stiff wind). I saw many pastors on my Facebook feed wondering if there was a way they could do church using Facebook’s relatively new live video feature.

Short answer: Absolutely.

But there are a few caveats in order to make it a good experience. With a little bit of advance planning, you can be prepared at very little cost. I’ll go over a few of the ways you can do the Facebook Live “thing” in increasing order of complexity.

Getting the video signal to Facebook

Using your smartphone and its onboard camera

This is the basic method that Facebook has in mind for its streaming service – people sharing live video on the fly. Whether you use an Android phone or an iPhone, these apply (mostly) equally.

  • Remember that your phone’s camera has a wide-angle lens. These are designed for those great landscape and sunset shots. All fine and good, but if you’re going for a tight shot, you have to get REALLY close. (The iPhone 7 Plus also has a 2X camera that works very well at longer distances)
  • Keep the phone steady. Ideally, some sort of tripod mount. These can be had on Amazon for under 10 bucks. My personal favourite is the Ztylus Z-Grip (Amazon, $10) which has a cold-shoe adapter (more on that in a bit). I also really like the Reticam Smartphone Tripod Mount (Amazon, $25) as it is an all-metal mount and is very durable. These will support a phone on even one of those little tabletop tripods.
  • Audio. Let’s face it, the onboard microphones on smartphones are terrible. They’re designed to capture sound close up.
    • If you’re doing a tight shot while preaching from home in your pajamas (I won’t tell!), a simple lapel mic such as the Audio-Technica ATR3350 (Amazon, $30) will do wonders for your sound quality (They also offer a “Smartphone” version of the ATR3350 that comes bundled with a mic/headphone splitter).
    • If you want to use an existing microphone, you’ll need to get a splitter for your headphone jack that breaks it out to separate headphone and mic jacks (Amazon, $7) and use a 1/8″ to XLR cable (available just about anywhere).
    • You can also use a shotgun microphone designed for a DSLR that has a 1/8″ jack on it. I like Røde mics for this (and these mount to the cold shoe on the Ztylus grip) such as the Røde Video Mic Go (Amazon, $100) or any other shotgun. If you have an iPhone 7, there are a few out there with a direct lightning interface.
    • If you wish to interface your phone to your church’s existing sound board, you have a few options. If your board offers a mic-level output, you can bring it straight in. If it offers line-level output (like most), you can use a DI box to convert it down to mic level, or use a device like the BeachTek DXA-SLR-ULTRA (Amazon, $300). I also have a used one of these for sale; contact me if you’re interested. If you’re coming off your sound board, it’s good to have a separate mix that gives online viewers a better audible context of the room. This is especially important if you’re using acoustic instruments that don’t necessarily need to be amplified.
    • Lastly, if you’re preaching from home, try to minimize external noise.
  • Lighting. Most camera phones have very small apertures, which means they don’t collect as much light as a bigger camera, so you need to have your subject well-lit for good video. This is a good time to familiarize yourself with the basics of three-point lighting.
  • Power. Make sure your phone is plugged into power before you do this. Video and live encoding is murder on a battery.
  • Bandwidth. Unless you really love sending your cell carrier lots of money or have an unlimited plan with really good LTE coverage, do this over wifi. Make sure your outbound bandwidth is sufficient (Facebook app typically streams at 2Mbps).

Using a tablet

Much of the smartphone discussion applies here as well, but consider that most tablet cameras simply aren’t as good as their smartphone brethren. Naturally, you’ll need a bigger tripod mount (and a tabletop tripod likely won’t cut it anymore).

Using an iPad opens up an additional production option with Teradek’s Live:Air application which allows you to add titles and such to your stream, as well as bring in additional camera shots from other devices including other iPhones. The Live:Air Solo app for iPhones does not allow streaming to Facebook because of an obscure clause in the Facebook Terms Of Service that prohibits streaming to FBL via third-party phone apps (but not tablet apps).

Using a DSLR or other video camera

If you already have a “good” camera such as a DSLR or a Semi-pro/Pro grade video camera, you can take the SDI or HDMI output from the camera into an encoding appliance such as the Teradek VidiU Pro (Amazon, $999), which will support streaming to Facebook directly without the need for a smartphone or a laptop (although you will need one to set it up).

If you prefer to use a computer, you can use a capture card (Mac: BlackMagic Design UltraStudio Mini Recorder, $140; Windows: BlackMagic Design Intensity Shuttle for USB 3.0, $190) and then use Wirecast software to publish to Facebook. You can also do this with a USB webcam, but the results won’t be great.

Using your existing video system

If your church is a little more sophisticated and already has a video switching system, it’s relatively easy to use an encoding appliance or software as previously mentioned.

But I’m already streaming!

Great! You’ve mastered most of the technical stuff already; you just want to add Facebook as an additional outlet. This can be accomplished in Wirecast simply by adding another publishing destination. If your encoding software doesn’t let you do that (or you’re using an appliance with a single destination), you can use Wowza Streaming Engine or Wowza Streaming Cloud as an initial publish point and then use it to send your stream to multiple destinations. That’s a little beyond this blog post, but it’s not especially complex.

 

OK, that’s the easy technical part. Now comes the fun stuff:

Legal Considerations

If all you’re doing is preaching over Facebook, you’re in the clear – unless you’re showing pre-recorded video illustrations that you didn’t create. If you’re performing music in church, you’ll need a streaming license. If you’re using pre-recorded music, that music needs to be licensed with a “sync license”. The good news is that the sync license is the responsibility of the site where the stream is published, so in the case of Facebook or YouTube Live, Facebook and YouTube need to get those licenses (and they have them, since they’re the ones monetizing your content).

If all you have is the standard CCLI license, this does NOT cover streaming. This is only a “mechanical license” that allows you to reproduce the song lyrics, whether in the bulletin or on your screen. CCLI and CCS both offer blanket streaming licenses that cover you.

Also bear in mind that if you are using a smartphone camera, Facebook’s TOS do not allow live streaming from any applications other than Facebook’s own app. Tablets and computers are another matter entirely. Check into Teradek’s Live::Air suite of applications (think Wirecast, for your iPad, using iPhones as remote cameras)

Analytics

One of the great benefits to video streaming on Facebook is the analytics you get from it. For more details, check out this page from Facebook about live video analytics.