Morning Fog along the Flint Hills National Scenic Byway

Mist Deployment, Part Deux

Second in a series about our first deployment of a Mist Systems wireless network. 

In my last post, I gave you an overview of the various components of the Mist Wireless system. This post will go into some of the design considerations pertaining to this particular project.

Because we’re now designing for more than just Wi-Fi, there are a few additional things to factor in when planning the network.

Floor Plans

It’s not uncommon for your floor plans to have a “Plan North” that doesn’t line up with “Geographic North”. Usually this isn’t a factor, but in hindsight, I would strongly encourage you to build your floor plans aimed at geographic north from the start: the Mist AI also uses the floor plan for direction/wayfinding, and the compass in mobile devices will be offset if you just go with straight plan north. You can also design on plan north, but then output a second floor plan file that is oriented to true north. Feature request to Mist: let us specify the angle offset of the plan from true north, and correct for it in what the SDK displays to users.
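In the meantime, the correction is simple enough to do yourself in-app. Here’s a minimal Python sketch of the math; PLAN_OFFSET_DEG is a made-up example value, whatever angle your plan north is rotated from geographic north:

```python
# Rotate a heading measured against "plan north" back to true north.
# PLAN_OFFSET_DEG is hypothetical - measure your own plan's rotation
# from geographic north.
PLAN_OFFSET_DEG = 17.0  # example: plan north is 17 degrees east of true north

def plan_to_true_heading(plan_heading_deg: float) -> float:
    """Convert a plan-north heading to a true-north heading."""
    return (plan_heading_deg + PLAN_OFFSET_DEG) % 360.0

print(plan_to_true_heading(350.0))  # -> 7.0
```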

For this project, I had access to layered AutoCAD files for the entire facility, which (sort of) makes things easier in Ekahau Site Survey, but sort of doesn’t – the import can get a little overzealous with things like door frames. I had to go do a fair bit of cleanup afterwards, and might have been better off just drawing the walls in the first place. This was partly due to the general lack of any good CAD tools on MacOS that would have allowed me to look at the data in detail and massage it before attempting the import into Ekahau. The other challenge is that ESS imported the ENTIRE sheet as its view window, which made good reporting impossible as the images had wide swaths of white space. Having the ability to crop the CAD file would have been nice.

Density Considerations

View from the rear of the main sanctuary at College Park Church in Indianapolis.

Since one of the areas being covered is a large auditorium, we had to plan on multiple small cells within the space. We needed to put the APs in the catwalks, as we did not have the option of mounting the units on the floor because the sanctuary is built on-slab (and while the cloud controller allows you to specify AP height and rotation from plan north, there is no provision to tell it the AP is facing *up* and located on or near the floor). This posed a few challenges: the first was that we were well above the recommended 4-5m (the APs were at 10m from the floor), and the other was that we needed to create smaller cells. For this, we used the AP41E with an AccelTex 60-degree patch antenna.

AccelTex 8/10 dBi 60° 4-element patch antenna
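A bit of trigonometry shows why the 60-degree patch works from that height. This sketch assumes the AP fires straight down and that the beamwidth edge defines the cell, which is a simplification; real cells are fuzzier, and ours were aimed at the seating:

```python
import math

height_m = 10.0       # AP height above the floor (our catwalk height)
beamwidth_deg = 60.0  # nominal beamwidth of the AccelTex patch

# Half the beamwidth sweeps out the cell radius at floor level
radius_m = height_m * math.tan(math.radians(beamwidth_deg / 2))
print(f"Cell diameter at floor level: ~{2 * radius_m:.1f} m")  # ~11.5 m
```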

 We also needed to either run a whole lot of cables up to the theatrical catwalks, or place a couple of small managed PoE switches – we unsurprisingly opted for the latter, using two 8-port Meraki switches, and uplinked them using the existing data cabling that was feeding the two UniFi APs that were up there.

As an added bonus, the sanctuary area was built with tilt-up precast concrete panels, which allowed us to use that heavy attenuation to our benefit and flood the sanctuary space with APs and not worry about spilling out too much.

Capacity-wise, we used 10 APs in the space, which seats 1700. Over the course of several church designs, I’ve found that a ratio of one active user for every three seats usually works out pretty well – in most church sanctuaries, the space feels packed when 2/3 of the seats are occupied, which means that we’re actually planning for one active client for every two occupied seats. Note that we’re talking active clients here, not associated clients. An access point can handle far more associations than it can active clients. As a general rule, I try to keep it to about 40 or 50 active clients per AP before airtime starts becoming a significant factor.
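Here’s that capacity math as a quick sketch. Note that the naive division lands a touch above my 40-50 active clients per AP rule of thumb; the final count of 10 was a judgment call driven by the antenna plan and the available catwalk locations:

```python
import math

seats = 1700
active_clients = seats / 3    # one active user per three seats: ~567
per_ap_target = 50            # upper end of the 40-50 guideline

naive_aps = math.ceil(active_clients / per_ap_target)  # 12
per_ap_actual = active_clients / 10                    # ~57 with the 10 we deployed

print(f"~{active_clients:.0f} active clients; naive plan: {naive_aps} APs; "
      f"deployed: 10 APs at ~{per_ap_actual:.0f} clients each")
```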

In an environment like this, you want as many client devices in the room as possible to associate to your APs, even if they’re not actively using the network – when they’re not associated, they’re sitting out there banging away with probe requests (especially if you have any hidden SSIDs), chewing up airtime (kind of like that scene from Family Guy where Stewie is hounding Lois just to say “Hi.”). Once they associate, they quiet down a whole bunch.

In addition to the main sanctuary, there are also a couple of other smaller but dense spaces: the chapel (seats 300) and the East Room (a large classroom that can seat up to 250). In these areas, the design focused on capacity rather than coverage.

Structural Considerations

As is often the case with church facilities, College Park Church is an amalgamation of several different buildings built over a span of many years to accommodate church growth. What this ends up meaning is that the original building is surrounded on multiple sides by additions, and you end up with a lot of exterior walls in the middle of the building, as well as many different types of construction. Some parts of the building were wood frame, others steel frame, and others cast concrete. The initial planning on this building was done without an onsite visit, but the drawings made it pretty obvious where those exterior (brick!) walls were. Naturally, this also makes ancillary tasks like cabling a little interesting.

Fortunately, the church had a display wall that showed the growth of the church which included several construction pictures of the building, which was almost as good as having x-ray vision.

Aesthetic Considerations

Because this is a public space, the visual appearance of the APs is also a key factor – sometimes putting an AP out of sight takes precedence over placing it for optimal Wi-Fi or BLE performance.

Placement Considerations

Coverage Area

Mist specifies that the BLE array can cover about 2500 square feet. The wifi can cover a little more, but it doesn’t hurt to keep your wifi cells that size as well, since you’ll get more capacity out of it. In most public areas of the building, we’re planning for capacity, not coverage. With Mist, if you need to fill some BLE coverage holes where your wifi is sufficient, you can use the BT11 as a Bluetooth-only AP.
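If you want to turn that 2500-square-foot figure into something you can plan with, the arithmetic is straightforward. The floor area below is a made-up example; real AP counts follow walls and capacity needs, not this ideal open-floor math:

```python
import math

ble_cell_sqft = 2500
radius_ft = math.sqrt(ble_cell_sqft / math.pi)    # ~28 ft cell radius

floor_sqft = 20_000                               # hypothetical open floor
ap_minimum = math.ceil(floor_sqft / ble_cell_sqft)  # 8 APs, coverage only

print(f"~{radius_ft:.0f} ft radius per AP; "
      f"{ap_minimum} APs minimum for {floor_sqft} sq ft")
```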

AP Height

Mist recommends placing the APs at a height of 4-5m above the floor, in order to provide optimal BLE coverage. The cloud controller has a field in the AP record where you can specify the actual height above the floor.

AP Orientation

Because the BLE array is directional, you can’t just mount the APs facing any direction you please. These APs are really designed to be mounted horizontally, with the “front” of the AP aimed consistently toward plan north, though the controller does have the ability to specify a rotation from plan north in case mounting it that way isn’t practical. The AP’s location, orientation, and height are all critical to accurate calculation of location information.

AP Location

Several of the existing APs in older sections of the building were mounted to hard ceiling areas, and we had to not only reuse the data cable that was there, but also the location. Fortunately, the previous system (Ubiquiti UniFi) was reasonably well-placed to begin with, and we were able to keep good coverage and reuse those locations without any trouble.

There were also some co-existence issues in the sanctuary, where we had to make sure we stayed clear of theatrical lighting and fixtures that could pose physical or RF interference problems. In the sanctuary, we also had to consider the safety factor of the APs, keeping them from falling onto congregants like an Australian Drop-Bear.

Planning for BLE

Since starting this project, I’ve begun working with Ekahau on testing BLE coverage modeling as part of the overall wifi coverage, and it’s looking very promising. I was able to go back to the CPC design and replan it with BLE radios, and it’s awesome. Those guys in Helsinki keep coming up with great ideas. As far as Ekahau is concerned, multi-radio APs are nothing too difficult – they’ve been doing this for Xirrus arrays for some time now, as well as for the newer dual-5GHz APs.

Stay tuned for a post about BLE in Ekahau when Jussi says I’m allowed to talk about it.

Up Next: The Installation

 

Cover Image: Explore Kansas: The Flint Hills National Scenic Byway (Kansas Highway 177)

Misty valley landscape with a tree on an island

Mist Deployment (Part The First)

First in a series about our first deployment of a Mist Systems wireless network.

Over the course of the past few months, I’ve been working with the IT staff at College Park Church in Indianapolis to overhaul their aging Ubiquiti UniFi wireless system. They were initially looking at a Ruckus system, owing to its widespread use among other churches involved with the Church IT Network and its national conference (where I gave a presentation on Wi-Fi last fall). We had recently signed on as a partner with industry newcomer Mist Systems, and had prepared a few designs of similar size and scope for other churches in the Indianapolis area using the Mist system. We proposed one design with Ruckus and another with Mist, and the church selected Mist for its magic sauce: Bluetooth Low Energy (BLE) capability for location engagement and analytics.

Fundamentally, the AP count, coverage, and capacity were not significantly different with Ruckus vs. Mist, and Mist offered a few advantages over the Ruckus in terms of the ability to add external antennas for creating smaller cells in the sanctuary from the APs mounted on the catwalks, as floor mounting was not an option.

About Mist

Mist is a young company that’s been around for about two or three years, and they have developed a few cool things in their platform – the first is what they call their AI cloud, the second is their BLE subsystem, and the last is their API.

Their AI component is a cloud management dashboard (similar to what you would see with Ruckus Cloud or Meraki — many of the engineers that started with Mist came over from Meraki), where the APs constantly analyze AP and client performance through frame capture and analysis, and report it back to the cloud controller. The philosophy here is that a large majority of the issues users have with Wi-Fi performance are actually related to performance on the wired side of the network (“It’s always DNS.” Not always, but DNS — and DHCP — are major sources of Wi-Fi pain). The machine learning AI backend looks at the stream of frames to detect problems, and uses that to generate Wi-Fi SLA metrics that can help determine where problems lie within the infrastructure, along with some analysis of root causes. An example of this is monitoring the entire station/AP conversation during and shortly following the association process: how long association took, how long DHCP took (and whether it succeeded), whether the 4-way handshake completed, and so on. It will also keep a frame capture of that conversation for further manual troubleshooting, and it keeps a log of AP-level events such as reboots and code changes so that client errors can be correlated on a timeline with those events. There’s a lot more it can do, and I’m just giving a brief summary here. Mist has lots of informational material on their website (and admittedly, there’s a goodly amount of marketing fluff in it, but that’s what you’d expect on a vendor website).

Graphs of connection metrics from the Mist system


Next, we have their BLE array. This is what really sets Mist apart from the others, and is one of the more interesting pieces of tech to show up in wifi hardware since Ruckus came on the scene with their adaptive antenna technology. Each AP has not one, but *eight* BLE radios in it, coupled with a 16-element antenna array (8 TX, 8 RX). Each antenna provides an approximately 45° beam, and together the beams cover a full circle. Mist is able to use this in two key ways. One is the ability to get ridiculously precise BLE location information from their mobile SDK (and, by extension, to locate a BLE transponder for asset visibility/tracking), and the other is the ability to use multiple APs to place a virtual BLE beacon anywhere you want without having to go physically install a battery-powered beacon. There are myriad uses for this in retail environments, and the possibilities for engagement and asset tracking are very interesting in the church world as well.

Lastly, we have their API. According to Mist, their cloud controller’s web UI only exposes about 40% of what their system can do. The remainder is available via a REST API that will allow you to do all kinds of neat tricks. I haven’t had a chance to dig into this much yet, but there’s a tremendous amount of potential there. Jake Snyder has taught a 3-day boot camp on using Python in network administration to leverage the power of APIs like the one from Mist (Ruckus also has an API on their Cloud and SmartZone controllers).
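If you want a feel for what that looks like, here’s a minimal sketch in Python using the requests library. The token auth header and the /self endpoint match my reading of Mist’s API docs, but treat the exact paths as assumptions and check the current API reference before building on them:

```python
# Minimal smoke test against the Mist cloud REST API. The base URL,
# token scheme, and /self endpoint are my understanding of the docs;
# verify against the current API reference.
import requests

API_BASE = "https://api.mist.com/api/v1"
TOKEN = "your-api-token-here"  # generated from the Mist dashboard

headers = {"Authorization": f"Token {TOKEN}"}

# "Who am I?" - a quick sanity check that the token works
me = requests.get(f"{API_BASE}/self", headers=headers, timeout=10)
me.raise_for_status()
print(me.json())
```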

Mist is also updating their feature set on a weekly basis – rather than one big update every 6 months that may or may not break stuff, small weekly releases allow them to deploy features in a more controlled manner, making it easy to track down any potential show-stopper bugs, preferably before they get released into the wild. You can select whether your APs get the early-release updates, or use a more extensively tested stable channel.

Much like Meraki, having all your AP data in the cloud is tremendously useful when contacting support, as they have access to your controller data without you having to ship it to them. They can also take database snapshots and develop/test new features based on real data from the field rather than simulated data. No actual upper-layer traffic is captured.

The Hardware

Note: all prices are US list – specific pricing will be up to your partner and geography.

There are four APs in the Mist line. The flagship 4×4 AP41 ($1385), the lower-end AP21 ($845), the outdoor AP61 ($?) , and the BLE-only BT11 ($?). The AP41 also comes in a connectorized version called the AP41E, at the same price as the AP41 with the internal antenna.

The AP41/41E is built on a cast aluminum heat sink, making the AP noticeably heavy. It offers an Ethernet output port, a USB port, a console port, and what they call an “IoT port” that provides for some analog sensor inputs, Arduino-style. It requires 802.3at (PoE+) power, or can use an external 12V supply with a standard 5.5×2.5mm coaxial connector. In addition to the 4-chain Wifi radio and the BLE array, the AP41 also has a scanning radio for reading the RF environment. On the AP41E, the antenna connectors are located on the downward face of the AP.

The AP21 is an all-plastic unit that uses the same mounting spacing as the AP41, and has an Ethernet pass-through port with PoE (presumably to power downstream BT11 units or cameras). Like the AP41, it also has the external 12V supply option.

This install didn’t make use of BT11 or AP61 units, so I don’t have much hands-on info about them.

It’s also important to note that none of these APs ship with a mounting bracket, nor does the AP have any kind of integrated mounting like you would find on a Ruckus AP. Mist currently offers 3 mounting brackets: a T-Rail bracket ($25), a drywall bracket ($25) and a threaded rod bracket ($40). The AP attaches to these brackets via four T10 metric shoulder screws (Drywall, Rod), or four metric Phillips screws (T-Rail). More on these later.

The Software

Each AP must be licensed, and there are three possibilities: Wifi-only, BLE Engagement, and BLE Asset tracking. Each subscription is nominally $150/year per AP, although there are bundles available with either two services or all three. Again, your pricing will depend on your location and your specific partner. Mist recently did away with multi-year pricing, so there’s no longer a cost advantage in pre-buying multiple years of subscriptions.

When the subscription expires, Mist won’t shut off the AP the way Meraki does; however, the APs will no longer have warranty coverage, and once a subscription has been expired for two months, Mist will not reactivate the AP. The APs will continue to operate with their last configuration, but there will no longer be access to the cloud dashboard for them.

Links:

Mist Systems

Jake Snyder on Clear To Send podcast #114: Automate or Die

Mist Product Information

Up Next: The Design

A Story of Cats

This is the internet, so at some point we’ve got to talk about cats. It’s in the rule book.  The Internet runs on cats. Cat pictures, cat videos, and… cat cables.

Those of you not familiar with the intricacies of the first layer of the OSI “7-layer Burrito” (Internet old-timers will remember this) are probably blissfully unaware of the gory details of the wiring that makes everything (including wireLESS) work.

Dilbert (April 24, 2010)

So who are all these cats, anyway?

Simply put, it’s an abbreviation for “Category”. The Telecommunications Industry Association (TIA) has adopted a series of specifications over the years defining cable performance to transport various types of networks.

Here’s a quick rundown. We’re gonna get a tech lesson AND a history lesson all rolled into one.

Category 1 (pre-1980)

An IBM "Type 1" Token-Ring connector. Known colloquially as a "Boy George Connector" due to its ambiguous gender.

An IBM “Type 1” Token-Ring connector. Known colloquially as a “Boy George Connector” due to its ambiguous gender. Photo: Computer History Museum

This never officially existed, and was a retroactive term used to define “Level 1” cable offered by a major distributor. It is considered “voice grade copper”, sufficient to run signals up to 1MHz, and not suitable for data of any sort (except telephone modems). You could probably meet category 1 requirements with a barbed wire fence. You laugh, but it’s been done. Extensively.

Category 2 (mid-1980s)

Like Category 1, never officially existed, and was a name retroactively given to Level 2 cable from said same distributor. Cat2 brought voice into the digital age. It could support 4MHz of bandwidth, and was used extensively for early Token-Ring networks that operated at 4Mbps, as well as ARCNet, which operated at 2.5Mbps on twisted pair (it had previously used coaxial cable).

Category 3 (1991)

This is the first of the cable categories officially recognized by TIA. It is capable of carrying 10Mbps Ethernet over twisted pair (like ARCNet, Ethernet also ran on coaxial cable in the very early days). Category 3 wire was deployed extensively in the early 1990s, as it was a much better alternative to running ethernet over coax. This is where the now nearly ubiquitous 8P8C connector (often incorrectly referred to as “RJ45”) came into usage for Ethernet, and it’s still in use nearly 3 decades later. Both the connector pinout and the cable performance are defined in TIA standard 568. Since token-ring networks still operated at 4Mbps, they ran quite happily over this new spec. In 2017, one can still occasionally find Cat3 in use for analog and digital phone lines. The 802.3af Power over Ethernet specification is compatible with this type of wire.

Category 4 (early 1990s)

This stuff existed only for a very brief period of time. In the late 1980s, IBM standardized a newer version of Token Ring that ran at 16Mbps, which required more cable bandwidth than what Category 3 could offer. Category 4 offered 20MHz to work with (which may sound familiar to the wifi folks, who use 20MHz channels a lot). But Category 5 came along pretty quickly, and Category 4 was relegated to history and is no longer recognized in the current TIA-568 standard.

Category 5 (1995)

TIA revised their 568 standard in 1995 to include a new category of cable, supporting 100MHz of bandwidth. This enabled the use of new 100Mbps ethernet (a 100Mbps version of Token Ring soon followed, which also used the same 8P8C connector as Ethernet).

An 8P8C connector, commonly (but incorrectly) referred to as “RJ45”. This has been the standard twisted-pair Ethernet connector for the last quarter century.

Category 5e (2001)

TIA refined the Category 5 spec to improve its performance and support the new gigabit ethernet standard. It is still a 100MHz cable, but new coding schemes and the use of all four pairs allowed the gigabit rate. IBM and the 802.5 working group even approved a gigabit standard for token ring in 2001, but no products ever made it to market, as Ethernet had taken over completely by that point.

Category 6 (2002)

Not long after Category 5e came to be, along came Category 6, with 250MHz of bandwidth. This was accomplished partly with better cable geometry and partly by going from 24AWG conductors to 23AWG. This increased bandwidth allows 10Gbps ethernet to operate on cables up to 55 meters in length.

Category 6a (2009)

This refinement to Category 6 increased cable bandwidth to 500MHz in order to allow 10Gbps ethernet to operate at the full 100m length limit for Ethernet. Categories 6 and 6a will support the new 802.3bt Power over Ethernet Type 3 (60W) and Type 4 (90W) standards (expected 2018), provided that cable bundles do not exceed 24 cables, for thermal reasons.

Category 7/7a

Category 7 cable. Who would want to terminate that? What a pain!

This one never existed in the eyes of the TIA. It still lives as an ISO standard defining several different types of shielded cable whose performance is comparable to Category 6a (bandwidth up to 600MHz for Cat7, 1GHz for Cat7a). Both these specs were rendered moot by 10Gbps Ethernet operating on Category 6a with standard 8P8C connectors. This cat was so ugly, TIA left it at the shelter.

Category 8 (2017)

The latest and greatest, this cable exists to run 40Gbps ethernet. It comes in two flavors: 8.1, which keeps the familiar 8P8C connector, and 8.2, which supplants the Category 7 specs and their connectors. Both flavors are shielded, with a bandwidth of 1600MHz for 8.1 and 2000MHz for 8.2.

 

So there you have it. The cats that put the WORK in “Network”. And because this is the internet, I leave you with gratuitous kittens.

Gratuitous Kittens


Enhancing the public Wi-Fi experience

Recently, there was an excellent blog post from WLAN Pros about “Rules for successful hotel wi-fi“. While it is aimed primarily at Wi-Fi in the hotel business (where there is an overabundance of Bad-Fi), many of the tips presented also apply to a wide variety of large-scale public venue wifi installations. Lots of great information in the post, and well worth a read.

At the 2016 WLPC there was an interesting TENTalk from Mike Liebovitz at Extreme Networks about the pop-up wifi at Super Bowl City in San Francisco, where analytics pointed to a significant portion of the traffic being headed to Apple.

Meanwhile, a few months later at the 2016 National Church IT Network conference, I heard a TENTalk about Apple’s MacOS Server, where I first heard about this incredibly useful feature (sadly, it wasn’t recorded, that I know of, so I can’t give credit…)

With most of the LPV installations I’ve worked on, I’ve found the typical client mix includes about 60% Apple devices (mostly iOS). For example, this is at a large church whose wireless network I installed. (Note that Windows machines make up less than 10% of the client mix on wifi!)

Client mix from Ruckus ZoneDirector

OK, So what?

This provides an opportunity to make the wifi experience even better for your (Apple-toting) guests. Whenever possible, as part of the “WiFi System” I will install an Apple Mac Mini loaded with MacOS Server. This allows me to turn on caching. This is not just plain old web caching like you would get with a proxy server such as Squid, but rather a cache for all things Apple. What does this do for your fruited guests? It speeds up the download of software distributed by Apple through the Internet. It caches all software and app updates, App Store purchases, iBook downloads, iTunes U downloads (apps and books purchases only), and Internet Recovery software that local Mac and iOS devices download.

Why is this of interest and importance? Let me give you an example: A few years ago, we were hosting a national Church IT Round Table conference at Resurrection on a day when Apple released major updates to MacOS, iOS, and their iWork suite. In addition to the 50 or so staff Mac machines on the network, there were another hundred or two Mac laptops and iThings among the conference attendees. The 200Mbps internet pipe melted almost instantly under the load of 250 devices each requesting 3-5GB of updates. That would have melted even a gigabit pipe, and probably given a 10Gbps pipe a solid run for its money. It didn’t do great things to the access points in the conference venue either, all of which were struggling not only for airtime, but also for backhaul (not to mention bogging down some of the uplinks on the internal network!). Having a caching server would have mitigated this.
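To put numbers on that melt-down, here’s the back-of-napkin arithmetic as a quick Python sketch (the device count and update sizes are the rough figures from the story above):

```python
devices = 250
avg_update_gb = 4    # "3-5GB of updates" per device, call it 4
pipe_mbps = 200

total_gb = devices * avg_update_gb                 # ~1000 GB to pull down
hours = total_gb * 8e9 / (pipe_mbps * 1e6) / 3600  # seconds -> hours

# ~1000 GB over a 200Mbps pipe: roughly 11 hours of full saturation
print(f"{total_gb} GB over a {pipe_mbps}Mbps pipe: ~{hours:.0f} hours")
```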

Just by way of an example, Facebook updates their app every two weeks, and its current incarnation (86.0, March 30, 2017) weighs in at 320MB (the previous one was about half that!), while its close pal Messenger clocks in at 261MB. Almost everyone has those two apps, so they’re going to find themselves in your cache almost instantly, along with numerous other popular apps. Apple’s iWork suite apps and Microsoft Office apps all weigh in around 300-500MB apiece as well. This has the potential to murder your network when you least expect it.

In any case, check out the network usage analytics from either your wireless controller or your firewall. If Apple.com is anywhere near the top of the list (or on it at all), you owe it to yourself and your guests to implement this type of solution.

Network Statistics from Ubiquiti UniFi

The Technical Mumbo-Jumbo

Hardware

As mentioned previously, a Mac Mini will do the job nicely. If you’re looking to do this on the cheap, it will happily run on a 2011-vintage Mini (you can find used Mac Minis on Craigslist or eBay all day long for cheap), just make sure you add some extra RAM and a storage drive that doesn’t suck (the stock 5400rpm spinning disks on the pre-2012 era Mac Mini and iMacs were terrible.) Fortunately, 2.5″ SSDs are pretty cheap these days. Newer Minis will have SSD baked in already.

If you’re wanting to put the Mac Mini in the datacenter, you might want to consider using a Sonnet RackMac Mini (available on Amazon for about $139), which can hold one or two machines.

Sonnet RackMac Mini

You can also happily run this off of one of the 2008-era “cheese grater” Mac Pros that has beefier processing and storage (and also fits in a rack, albeit not in the svelte 1U space the Sonnet box uses). If you have money to burn, then by all means use the “trash can” Mac Pro (Sonnet also makes a rack chassis for that model!).

This is a great opportunity to re-purpose some of those Macs sitting on the shelf after your users have upgraded to something faster and shinier.

Naturally, if you’re running a REALLY big guest network, you’ll want to look at something beefy, or a small farm of Minis with SSD storage (the MacOS Server caching system makes it quite easy to deploy multiple machines to support the caching).

The Software

MacOS Server (Mac App Store, $19.99)

Since most of your iOS guests will have updates turned on, one of the first things an iOS device does when it sees a big fat internet pipe that isn’t from a cell tower is check for app updates. If you have lots of guests, you will need to fortify your network against the inevitable onslaught of update requests.

The way it works is this: When an Apple device makes a request to the CDN, Apple looks at the IP you’re coming from and says, “You have a local server on your LAN, get your content from there, here’s its IP.” The result being that your Apple users will get their updates and whatnot at LAN speeds without thrashing your WAN pipe every time anyone pushes out a fat update to an app or the OS, which is then consumed by several hundred people using your guest wifi over the course of a week. You’ve effectively just added an edge node to Apple’s CDN within your network.

Content will get cached the first time a client requests it, and it does not need to completely download to the cache before starting to send it to the client. For that first request, it will perform just as if they were downloading it directly from Apple’s servers. If your server starts running low on disk space, the cache server will purge older content that hasn’t been used recently in order to maintain at least 25GB of free disk space.
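For the curious, that purge policy works out to a simple least-recently-used eviction. Here’s a generic sketch of the logic in Python — my reading of the documented behavior, not Apple’s actual implementation, and the cache path is from memory:

```python
import os
import shutil

CACHE_DIR = "/Library/Server/Caching/Data"  # default location, if memory serves
MIN_FREE = 25 * 1024**3                     # keep at least 25GB free

def purge_lru(cache_dir: str = CACHE_DIR) -> None:
    """Evict least-recently-used cache files until the free-space floor is met."""
    free = shutil.disk_usage(cache_dir).free
    # Sort cached files by last access time, oldest first
    files = sorted(
        (os.path.join(cache_dir, name) for name in os.listdir(cache_dir)),
        key=os.path.getatime,
    )
    for path in files:
        if free >= MIN_FREE:
            break
        if os.path.isfile(path):
            free += os.path.getsize(path)
            os.remove(path)
```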

MacOS Caching Server Configuration

The configuration

If you have multiple subnets and multiple external IPs that you want to do this for, you can either do multiple caching servers (they can share cache between them), or you can configure the Mini to listen on multiple VLANs:

Mac OS network preferences panel

Once you have the machine listening on multiple VLANs, you can tell the caching server which ones to pay attention to, and which public IPs. The Mac itself only needs Internet access from one of those subnets.

MacOS Server Caching Preferences

The first dropdown will give you the option of “All Networks”, “Only Local Subnets”, and “Only Some Networks”. Choosing the last one opens an additional properties box that allows you to define those networks:

Mac OS Server Cache Network Settings

The second one gives you the options of “Matching this server’s network” or “On other networks”. As with the first options, an additional properties box is displayed.

In both cases, hit the plus sign to create a network object:

Mac OS Server Create a New Network

It should be noted here that this only tells the server about existing networks, but it won’t actually create them on the network interface. You’ll still need to do that through the system network preferences mentioned previously. If you don’t want to have the server listen on multiple VLANs, you can just make sure its address is routable from the subnets you wish to have the cache server available, define the external and internal networks it provides service to, and you should be off to the races. This will provide caching for subnet A that NATs to the internet via public IP A, and B to B, and so on. Defining a range of external IPs also has you covered if you use NAT pooling.

There’s also some DNS SRV trickery that may need to happen depending on your environment. There are some additional caveats if your DNS servers are Active Directory read-only domain controllers. This post elaborates on it.

 

Is it working?

Click the stats link near the top left of the server management window. At the bottom is a dropdown where you can see your cache stats. The red bar shows bytes served from the origin, and green shows from the cache. If you only have one server doing this, you won’t see any blue bars, which are for cache from peer servers. Downside is that you can only go back 7 days.

On this graph, 3/28 was when there were both a major MacOS and iOS update released, hence the huge spike from the origin servers on Apple’s CDN. Nobody has updated from the network yet… But guest traffic at this site is pretty light during the week. I’ll update the image early next week.

MacOS Server Cache Stats

Other useful features

A side benefit of this is that you can also use this to provide a network recovery boot image on the network, in case someone’s OS install ate itself – on the newer Macs with no optical drive, this boots a recovery image from the internet by default. This requires some additional configuration, and the instructions to set up NetInstall are readily available with a quick Google search.

If you want, you can also make this machine the DHCP and local DNS server for your guest network. With some third-party applications, you can also serve up AirPrint to your wireless guests if they need it.

Conclusion

From a guest experience perspective, your guests see their updates downloading really fast and think your WiFi is awesome, and it’s shockingly easy to set up (the longest and most difficult part is probably the actual acquisition of the Mac Mini). It will even cache iCloud data (and encrypts it in the cache storage so nobody’s data is exposed). Even if you have a fat internet pipe, you should really consider doing this, as the transfers at LAN speed will reduce the amount of airtime consumed on the wireless and the overall load on your wireless network. (Side note: if you’re a Wireless ISP, this sort of setup is just the sort of thing you ought to put between your customer edge network and your IP transit.)

Of course, you could also firewall off Apple iCloud and Updates instead, but why would you do that to your guests? Are you punishing them for something?

Android/Windows users: So sad, Google and Microsoft don’t give you this option (although Microsoft sort of does in a corporate environment with WSUS, it’s not nearly as easy to pull off, nor is it set up for casual and transient users). I would love it if Google would set up something like this for the Play Store, Chromebooks, etc., as about half of the client mix that isn’t from Apple is running on Android. You can sort of approximate it by installing a transparent proxy like Squid.

Now, if only we could do the same for Netflix’s CDN. The bandwidth savings would be immense.

Update

(Added November 16, 2017)

As of the release of MacOS High Sierra and MacOS Server 5.4 (release notes), the caching service is now integrated into the core of MacOS, so any Mac on the network can do it, without even needing to install Server. The new settings are under System Preferences > Sharing:


Ian doing a Site Survey

“We want wi-fi. Now what?”

I’ve been spending the past week at the annual Wireless LAN Professionals Conference in Phoenix. This is one of my favorite conferences along with the Church IT Network conference, because I get to spend a couple of days geeking out hard with a whole bunch of REALLY smart people. The amount of information I’ve stuffed into my brain since last Friday is a little bit, well, mind-blowing…

I spent the first 3 days getting my Ekahau Certified Survey Engineer credential. For those who are not familiar with the Wi-Fi side of my consulting practice, Ekahau Site Survey is a fantastic tool for developing predictive RF designs for wireless networks, allowing me to optimize the design before I ever pull any new cable or hang access points. One of the key points that’s been touched on frequently throughout the training and the conference is what was termed by one attendee as the “Sacred Ritual of the Gathering of Requirements”. It sounds silly, but this one step is probably the single most important part of the entire process of designing a wireless network.

In the church world (and in the business world), your mission statement is what informs everything you do. Every dollar you spend, every person you hire, every program you offer, should in some way support that mission focus in a clearly defined and measurable manner. A former boss (and current client) defines his IT department’s mission like this: “Our users’ mission is our mission.” This clearly laid out that in IT, we existed to help everyone else accomplish their mission, which in turn accomplished the organization’s mission.

I’ve had more than a few clients say initially that their requirement is “we want wi-fi”. My job as a consultant and an engineer is to flesh out just what exactly “wi-fi” means in your particular context, so that I can deliver a design and a network that will make you happy to write the check at the end of the process. I can’t expect a client to know what they want in terms of specific engineering elements relating to the design. If they did, I’d already be redundant.

Whiteboard

Photo: Mitch Dickey/@Badger_Fi

During the conference someone put up a whiteboard, with the following question:

“What are the top key questions to ask a client in order to develop a WLAN design or remediation?”

The board quickly filled up, and I’ll touch on a few really important ones here:

“What do you expect wi-fi to do for you? What problem does it solve?”

It was also stated as:

“What is your desired outcome? How does it support your business?”

This is one of the fundamental questions. It goes back to your mission statement. Another way of putting it is “How do you hope to use the wi-fi to support your mission?” What you hope to do with wi-fi will drive every single other design decision. The immediate follow-up question should be a series of “why?” questions to get to the root cause of why these outcomes are important to the business goals. You can learn an awful lot by asking “why?” over and over like a 4-year-old child trying to understand the world. This is critical for managing expectations and delivering what the client is paying you a large sum of money to do.

“What is your most critical device/application?”

“What is your least capable and most important device?”

“What other types of devices require wi-fi?”

“What type of devices do your guests typically have?”

It’s nice to have shiny new devices with the latest and greatest technology, but if the wi-fi has to work for everyone, your design has to assume the least capable device that’s important, and design for that. If you use a bunch of “vintage” Samsung Galaxy phones for barcode scanning or checking in children, then we need to make sure that the coverage will be adequate everywhere you need to use them, and that you select the proper spectrum to support those devices. For the guest network, having at least a rough idea of what mix of iOS and Android devices the guests bring into the facility can inform several design choices.

“What regulatory/policy constraints are there on the network?”

This is hugely important. Another mantra I’ve heard repeated often is, “‘Because you can’ is NOT a strategy!” If your network has specific privacy requirements such as PCI-DSS, HIPAA, any number of industry-specific policies, or even just organizational practices about guest hospitality, network access, etc., these also need to factor into the design and planning process.

I have one client whose organization is a church that is focused on a 5-star guest experience. What this translated to in terms of Wi-Fi is that they did not want to name the SSIDs with the standard “Guest” and “Staff” monikers that are common. The reasoning was that merely naming the private LAN SSID “Staff” would create in a guest’s mind the sense that there are two classes of people, one of which may get better network performance because they’re among the elect. It’s also a challenge when you have a lot of volunteers who perform staff-like functions and who need access to the LAN. Ultimately, we simply called this network “LAN”. It’s meaningful to the IT staff, and once the staff is connected to it, they no longer think about it. Something as simple as the list of SSIDs a guest’s device sees is an important consideration in the overall guest experience.

“What is your budget?”

This one is so obvious it’s often overlooked. As engineers, we like to put shiny stuff into our designs. The reality is, most customers don’t have a bottomless pit of money, especially when they’re non-profits relying on donated funds. While I’d love to design a big fancy Ruckus or Aruba system everywhere I go, the reality is that it’s probably overkill for a lot of places, when a Ubiquiti or EnGenius system will meet all the requirements.

“What are the installation constraints?”

“Which of those constraints are negotiable? Which aren’t?”

Another obvious one that is often overlooked. You need to know when the installation can happen (or can’t happen), whether there are rooms that are off-limits, potential mounting locations that are inaccessible, areas that can’t support a lift, or areas that you simply can’t get cable to without major work. Aesthetics can be a significant factor for AP selection, placement, wiring, and even configuration (such as turning off the LEDs). While one particular AP may be technically suited to a particular location, how it looks in the room may dictate the choice of something else.

“What is your relationship with your landlord/neighbors/facility manager like?”

I kid you not, this is a bigger factor than you might think. In an office building, being a good wifi neighbor is an important consideration. If the landlord is very picky about where and how communications infrastructure is installed outside the leased space (such as fiber runs through hallways, roof access, antennas outside the building, extra lease charges for technology access), you may encounter some challenges. If your facility manager is particular about damage, you need to factor that into the process as well. This likely also will come into play when you’re doing your site surveys and need access to some parts of the building.

There are a whole host of followup questions beyond these that focus on the more technical aspects of the requirements gathering, and your client may or may not have an answer:

“How many people does this need to support at one time?”

“Where are all these people located?”

“When are they in the building?”

“Where do you need coverage?”

“Where do you NOT need coverage?”

“What is your tolerance level for outages/downtime?”

… and many more that you will develop during this sacred requirements gathering ritual. Many of the technical aspects of the environment (existing RF, channel usage, airtime usage, interference sources, etc.) don’t need to be asked of the client, as you will find them during your initial site survey.

If you’re a wifi engineer, having these questions in your mind will help you develop a better design. If you’re the client, having answers to these questions available will help you get a better design.

What questions are important to your network? Sound off below!

If you need a wireless network designed, overhauled, or expanded, please contact me and we can work on making it work for your organization.

Automating Video Workflows With PowerShell

Linking today to some great content from another Ian (ProTip: get to know an Ian, we’re full of useful knowledge). Ian Morrish posts about a variety of methods of automating A/V equipment using PowerShell. Lots of useful stuff in here.

No Windows? No worries, you can install PowerShell on MacOS and Linux too.

I’ve put some feelers out to some of my streaming equipment vendors to find out what kind of automation hooks and APIs they support.

Meanwhile, Wowza has a REST API for both its Streaming Engine and Cloud products. Integrating this into PowerShell should be relatively straightforward. Any PowerShell wizards wanna take a stab at it?
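To give a flavor of it, here’s a rough sketch of listing the applications on a Wowza Streaming Engine server — in Python for brevity, though the same GET is a one-liner with PowerShell’s Invoke-RestMethod. The default port (8087) and the path are what I recall from Wowza’s docs; verify them against your Engine version, and note the API can be configured to require digest authentication:

```python
import requests

# Wowza Streaming Engine's REST service - port and path per my recollection
# of the docs; adjust for your install (and add auth if you've enabled it).
BASE = "http://localhost:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_"

resp = requests.get(
    f"{BASE}/applications",
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Print the id of each configured live/VOD application
for app in resp.json().get("applications", []):
    print(app.get("id"))
```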

Stay tuned.

 

Going Serverless: Office 365

I recently completed a project for a small church in Kansas. Several months ago, the senior pastor asked me for a quote on a Windows server to provide authentication as well as file and print services. During the conversation, a few things became clear:

  1. Their desktop infrastructure was completely on Windows 10. Files were being kept locally or in a shared OneDrive account.
  2. The budget they had for this project was not going to allow for a proper server infrastructure with data protection, etc.
  3. This church already uses a web-based Church Management System, so they’re somewhat used to “the cloud” already as part of their workflows.

One of the key features provided by Windows 10 is the ability to use Office 365 as a login to your desktop (Windows 8 allowed it against a Microsoft Live account). Another is that for churches and other nonprofits, Office 365’s E2 plan is free of charge.

I set about seeing how we could go completely serverless and provide access not only to the staff for shared documents, but also give access to key volunteer teams and church committees.

The first step was to make sure everybody was on Windows 10 Pro (we found a couple of machines running Windows 10 Home). Tech Soup gave us inexpensive access to licenses to get everyone up to Pro.

Then we needed to make sure the internet connection and internal networking at the site were sufficient to take their data to the cloud. We bumped up the internet speed and overhauled the internal network, replacing a couple of consumer-grade unmanaged switches and access points with a Ubiquiti UniFi solution for the firewall/router, network switch, and access points. This allows me and key church staff to remotely manage the network, as the UniFi controller runs on an Amazon Web Services EC2 instance (t2.micro). This new network also gave the church the ability to offer guest wifi access without compromising their office systems.

The next step was to join everyone to the Azure domain provided by Office 365. At this point, all e-mail was still on Google Apps, until we made the cutover.

Once we had login authentication in place, I set about building the file sharing infrastructure. OneDrive seemed to be the obvious solution, as they were already using a shared OneDrive For Business account.

One of OneDrive’s biggest challenges is that, like FedEx, it is actually several different products trying to behave as a single, seamless product. At this, OneDrive still misses the mark. The OneDrive brand consists of the following:

  • OneDrive Personal
  • OneDrive for Business
  • OneDrive for Business in Office 365 (a product formerly known as Groove)
  • Sharepoint Online

All the OneDrive for Business stuff is Sharepoint/Groove under the hood. If you’re not on Office 2016, you’ll want to make the upgrade, because getting the right ODB client in previous versions of Office is a nightmare. Once you get it sorted, it generally works. If you’ve got to pay full price for O365, I would recommend DropBox for Business as an alternative. But it’s hard to beat the price of Office 365 when you’re a small business.

It is very important to understand some of the limitations of OneDrive for Business versus other products like DropBox for Business. Your “personal” OneDrive for Business files can be shared with others by sending them a link, and they can download the file, but you can’t give other users permission to modify them and collaborate on a document. For this, you need to go back to the concept of shared folders, and ODB just doesn’t do this. This is where Sharepoint Online comes into play.

Naturally, this being Sharepoint, it’s not the easiest thing in the world to set up. It’s powerful once you get it going, but I wasn’t able to simply drop all the shared files into a Sharepoint document library — There’s a 5000-file limit imposed by the software. Because the church’s shared files included a photo archive, there were WAY more than 5000 files in it.

Sharepoint is very picky about getting the right information architecture (IA) set up to begin with. Some things you can’t change after the fact, if you decide you got them wrong. Careful planning is a must.

What I ended up doing for this church is creating a single site collection for the whole organization, and several sites within that collection for each ministry/volunteer team. Each site in Sharepoint has 3 main security groups for objects within a site collection:

  • Visitors (Read-Only)
  • Members (Read/Write)
  • Owners (Read/Write/Admin)

In Office 365, much as it is with on-premises, you’re much better off creating your security groups outside of Sharepoint and then adding those groups to the security groups that are created within Sharepoint. So in this case, I created a “Worship Production” team, added the team members to the group, and then added that group to the Worship Site Owners group in Sharepoint. The Staff group was added to all the Owners groups, and the visitors group was left empty in most cases. This makes group membership administration substantially easier for the on-site admin who will be handling user accounts most of the time. It’s tedious to set up, but once it’s going, it’s smooth sailing.

Once the security permissions were set up for the various team sites, I went into the existing flat document repository and began moving files to the Sharepoint document libraries. The easiest way to do this is to go to the library in Sharepoint, and click the “Sync” button, which then syncs them to a local folder on the computer, much like OneDrive (although it’s listed as Sharepoint). There is no limit to how many folders you can sync to the local machine (well, there probably is, but for all practical purposes, there isn’t). From there it’s a matter of drag and drop. For the photos repository, I created a separate document library in the main site, and told Sharepoint it was a photo library. This gives the user some basic Digital Asset Management capabilities such as adding tags and other metadata to each picture in the library.

So far, it’s going well, and the staff enjoys having access to their Sharepoint libraries as well as Microsoft Office on their mobile devices (iOS and Android). Being able to work from anywhere also gives this church some easy business continuity should a disaster befall the facility — all they have to do is relocate to the local café that has net access, and they can continue their ministry work. Their data has now been decoupled from their facility. I have encountered dozens of churches over the years whose idea of data backup is either “what backup?” or a hard drive sitting next to the computer 24×7, which is of no use if the building burns to the ground or is spontaneously relocated to adjacent counties by a tornado. The staff doesn’t have to worry about the intricacies of running Exchange or Sharepoint on Windows Small Business Server/Essentials. Everything is a web-based administrative panel, and support from Microsoft is excellent in case there’s trouble.

If you’re interested in how to take your church or small business serverless, contact me and I’ll come up with a custom solution.

Mobile Internet In Haiti, Part 2

A while back, I posted about getting mobile Internet in Haiti. As technology changes rapidly, especially when it comes to Haitian internet access, I figured I’d post an update, having just returned from there in late February.

If you have a GSM-capable US phone (most Samsung Galaxy devices use software-defined radios and can speak CDMA or GSM fluently, simply by switching an option in the software), you’ll need to unlock it for international use:

Sprint: Contact Sprint Customer Service while still in the US and ask them for an international unlock. As long as your account has been active for more than 60 days, this should be no problem. They’ll walk you through the UICC unlock process. It helps to be on the Sprint network while this unlock happens, but it can also happen over Wi-Fi if you’re already out of the country.

Verizon: Verizon generally does not lock their phones. You may want to check with Verizon to make sure yours is unlocked. See item #18 in their Global Roaming FAQ.

AT&T: If your phone is under contract with AT&T or is an iPhone, you’re pretty much out of luck. AT&T is so terrified of losing their customers that they will only unlock the phone if you buy out your installment contract or pay an ETF. The good news is that most cell phone repair shops know the unlock codes and will unlock them for you for a small fee. (This is a tip I got from the manager of a local AT&T store who thinks corporate policy on unlocking for international use is dumb). If your phone is out of contract, simply go to https://www.att.com/deviceunlock and fill out the form. There is nobody at AT&T you can talk to about this, nor can the store personnel help you. If the process fails, then you’re simply out of luck, and should consider choosing a more customer-friendly carrier next time.

T-Mobile: No idea. I don’t know anyone who has a T-Mobile device. I expect their policy is probably very similar to AT&T.

Once you get to Haiti, you can stop at either the Digicel or Natcom shops just outside customs at the airport in Port-Au-Prince. (I would expect that there’s a similar setup at Cap-Haitien.) Natcom will load you up with 5GB of data and some voice minutes for 1000 Gdes ($25 US). I don’t know what Digicel’s current pricing is, but I expect it’s comparable. If you’re going to be out in the provinces, Natcom seems to have a better network than Digicel. If you’re staying in and around Port-Au-Prince, either network should work fine for you as both carriers have HSPA+ networks. I don’t know what the Natcom coverage situation is like on La Gonâve, but Digicel has EDGE coverage on most of the island, and HSPA/+ around Anse-a-Galets.

The staff at the Natcom shop had no trouble setting up my Galaxy S4, and in 15 minutes I walked out of there on the Haitian network. Using it as a hotspot was merely a matter of turning it on, and didn’t require any further configuration. Internet speeds in PAP average in the 2-3Mbps range.

It should be noted here that with both carriers, all Facebook traffic is free and doesn’t count toward your data plan usage. This is a pretty cool deal. My understanding is that Facebook located an edge node within Haiti to reduce transit off-island, and free access to the growing smartphone population in Haiti was part of the deal.

In a similar vein, Google also seems to be getting better presence in Haiti, and I’m told they too have edge nodes located in-country. Their maps product actually has pretty good data in PAP, although directions are still iffy, as the addressing system there is a little tricky and there aren’t necessarily names attached to many of the minor streets. It’s pretty good at figuring out where you are, though. I wonder how soon they’ll get a Street View rig down there.

When you leave, your SIM will still be usable for 90 days, after which it will expire and no longer function on the network. There is currently excellent public wifi at the PAP airport, so handing your SIM off to one of your Haitian hosts is probably your best bet, as they can get some additional usage out of whatever unused data/minutes are left on it.

(I also discovered that on my Galaxy S4, GPS didn’t work unless there was a SIM in the slot)

 

Mobile Voice in Haiti

As a follow-on to my previous post about getting mobile internet, here’s one about getting voice service on your US phone (at least if you have a Sprint phone).

I have a Samsung Galaxy S4 on Sprint. Sprint’s CDMA voice network is incompatible with the GSM networks in most of the rest of the world, but recent Samsung Galaxy devices (at least the S3 and S4, and other devices of the same generation/platform) use a software-defined radio that can be made to speak GSM or CDMA at will, with a simple settings change. CDMA doesn’t require a SIM but LTE and GSM do, so the Galaxy is a de facto international phone.

Sprint lets you do international roaming calls for $2/min, which is absurdly high. It’s much better to get a SIM from a local carrier and use that. Making it do this is relatively simple. If your account is in good standing, a simple phone call to Sprint will unlock your phone for using other SIMs (and before you try to do this for a GSM carrier in the US, it explicitly does NOT work on AT&T or T-Mobile). This unlock process does require a data connection (mobile or Wi-Fi) for the phone to receive the unlock signal. After doing that, there’s a simple process that the Sprint rep will give you over the phone to complete the process.

Once that’s done (took me about 5 minutes on the phone – which I did via Skype from Haiti!), all you have to do is go find a local SIM (and in the case of the Galaxy, trim it down to size), pop it in the phone, switch it over to GSM in the Mobile Networks settings, pick your carrier, and off you go.

I’ll add screenshots just as soon as I can make the phone do them. The normal S4 tricks aren’t working.


Mobile Internet in Haiti

Note: Be sure to read my March 2015 update about this…

I’m back down in Haiti, as some of you already know, working on some of the wireless networks linking the different sites of the Église Méthodiste d’Haïti (EMH), which is the Haitian Methodist Church. Knowing that I was coming into an environment where the internet connection was not functioning properly, and that I was likely going to need internet access for troubleshooting, I armed myself with a 3G GSM hotspot that I picked up on eBay.

After parting with about 50 bucks (plus another 15 for a charger and 2 spare batteries), the Huawei E583C unit showed up via USPS on my doorstep 4 days later bearing a postmark from Hong Kong (color me impressed, I can’t even get postcards from Toronto that quickly!).

I opened it up, and inside was a “T-Mobile Wireless Pointer” from the UK division of T-Mobile. I popped on down to the local T-Mobile store, got a SIM for testing, and fired it up. After much futzing around trying to get it to speak 3G to the network without any success, I went back to T-Mobile and picked a tech’s brains. It turns out this unit operates on the 800/1800/1900 bands, on which T-Mobile has phased out 3G to make room for more LTE. Meanwhile, Jay was in Haiti, so I asked him to pick up a NatCom SIM and bring it home with him.

I’ll pause briefly here to talk a bit about mobile in Haiti. There are two major players: Digicel (which has a thing for island nations all over the world) and NatCom, which was formed out of what was left of the national telephone company (Teleco) and the Vietnamese national telecom (Viettel), which bought up a 70% interest in Teleco not long after the earthquake. What little copper telecom infrastructure existed in the country has long since been destroyed by a number of different means, both natural and human. Since the earthquake, NatCom has been building out a LOT of fiber. Digicel operates the only direct fiber link out of the country, to Columbus Networks‘ Fibralink network that links the Caribbean up to the rest of the world. The other way out of Haiti to the internet is via microwave backhaul to the Dominican Republic, which has two landings of the ARCOS fiber ring.

In the nearly 4 years since the quake, mobile internet in Haiti has gone nuts. It’s now quite reliable, and surprisingly cheap if you know how to do it. Monthly postpaid data plans cost about a quarter of what they do in the US: a 10GB plan on Digicel will set you back 1000 HTG (about 25 bucks), while the same plan on Verizon in the US is about $100! Digicel offers current-generation Android phones like the S4 (but be prepared to part with the full unsubsidized price for one), and Apple recently started making unlocked SIM-less iPhones available on its own store. The smartphone revolution is coming to Haiti, and it’s going to be interesting to watch. There was someone at church on Sunday using an iPad, and it wasn’t someone from our team.

When I got down to Haiti, I put the SIM Jay had obtained for me into the hotspot (erm, “Pointer”… can any Brits enlighten me as to the origin of that term?) and got no joy. Realizing that the zillion config changes I’d made while trying to get it to work on T-Mobile’s network were probably interfering, I hit the factory reset button, and as soon as it rebooted, it was speaking 3G on NatCom’s network. It was that easy.

The next step was to load some funds onto the card, since it was a basic card that came empty. Normally you can do this from the phone, but since this was a hotspot, I didn’t have the ability to dial numbers (although the Huawei firmware does allow you to send SMS, which turned out to be a critical capability). NatCom partners with a third party called EzeTop, which lets you reload phone cards online (yours or anyone else’s). So I dropped 10 bucks onto it, which translated to 392 HTG (a fairly lousy exchange rate) plus a penny per 10 Gourdes as a transaction fee, and off I went. There was no sign anywhere of what the per-MB cost was, and NatCom’s website isn’t particularly helpful in that regard (I later found out that it’s 1.9 HTG/MB, about 4 cents).
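Out of curiosity, here’s the back-of-the-envelope math on that reload, as a quick Python sketch. The 392 HTG per $10 and 1.9 HTG/MB figures are the ones quoted above; they’re late-2013 numbers and have surely drifted since.

```python
# Rough prepaid-data math using the rates quoted above (late-2013 figures).
TOPUP_USD = 10.00
HTG_CREDITED = 392        # what EzeTop credited for a $10 reload
RATE_HTG_PER_MB = 1.9     # NatCom's pay-as-you-go data rate

mb_available = HTG_CREDITED / RATE_HTG_PER_MB
usd_per_mb = TOPUP_USD / mb_available

print(f"{HTG_CREDITED} HTG buys about {mb_available:.0f} MB of data")
print(f"Effective rate: ${usd_per_mb:.3f}/MB, or roughly ${usd_per_mb * 1024:.0f}/GB")
```

That works out to about 200 MB per $10 reload, or roughly $50/GB at pay-as-you-go rates. Keep that number in mind for what happened next.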

Now that I had mobile internet, I fired up the iPad and did some testing on the drive to Petit-Goâve, and was getting quite reasonable speeds of around 1.5-2 Mbps in both directions, very much capable of posting pictures to Facebook and whatnot.

Once we got to the guest house where we were staying, we discovered that the wifi there was indeed out of service. I put the hotspot to good use downloading the information I was going to need to fix it. In very short order, net access ceased, and I got a screen from NatCom saying that my card was empty, along with a helpful list of plans and how to activate them. I went and found our hostess, borrowed her laptop and internet access to load some more funds onto the card, and then tried to activate one of the listed plans. It told me I couldn’t do that because I had the wrong type of card.

Then, disaster. In little more than an hour, 20 bucks’ worth of data on the card had vanished. After some digging, I discovered that my good buddy CrashPlan had stabbed me in the back and decided to start a big backup. I killed CrashPlan and reloaded the card (this was getting expensive, and I still wasn’t entirely sure how much data I was burning through, especially now that the team was sharing in the internet joy, and the cost!)
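To put some numbers on how fast that can happen, here’s a sketch of the hourly burn rate of a continuous upload at pay-as-you-go rates, using the same assumed figures as above (1.9 HTG/MB, roughly 39.2 HTG to the dollar):

```python
# How fast a continuous upload drains a prepaid balance at the
# pay-as-you-go rate (1.9 HTG/MB, ~39.2 HTG/USD, assumed from above).
RATE_HTG_PER_MB = 1.9
HTG_PER_USD = 39.2

def usd_per_hour(sustained_mbps: float) -> float:
    """Hourly cost of an upload sustained at the given throughput."""
    mb_per_hour = sustained_mbps / 8 * 3600   # megabits/s -> megabytes/hour
    return mb_per_hour * RATE_HTG_PER_MB / HTG_PER_USD

for mbps in (0.5, 1.0, 2.0):
    print(f"{mbps} Mbps sustained burns about ${usd_per_hour(mbps):.0f}/hour")
```

Even at a sustained 1 Mbps, a backup job can empty $20 of credit in under an hour, which lines up with what I saw.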

Now that I was back online, I started digging around the NatCom site again to figure out what plans I could access through the SIM I already had. It turns out they have slightly different SIMs and plans for laptop/USB modems and for mobile phones. I had the latter, a “Nat-Mango” card, which can be had from any street vendor for 25 HTG. I finally found the list of mobile internet plans for phones, and the correct number to SMS the plan change to. So I sent off the text, only to get back “You don’t have enough funds for this plan”. I kept moving down the list until even the cheapest one kicked back that message… uh-oh, I was running on fumes again. Just as I went to top it up again, it shut off. Fortunately, one of our Haitian team members had data on his Digicel phone, and I was able to get the account charged up and switched over to the “Unlimited” plan. Unlimited in this case means 3.5GB at max HSPA+ speeds, after which you’re rate-limited to 3.5 Mbps. Given that I never saw even 3 Mbps anywhere, this isn’t really a huge hindrance (and that may be a factor of the device more than the network). By the time the week was out, our team had gobbled up nearly 25 gigabytes of data through the device.
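For perspective, here’s what that week of usage would have cost at the straight pay-as-you-go rate (same assumed figures as the sketches above):

```python
# The counterfactual: ~25 GB of usage billed at the pay-as-you-go rate
# (1.9 HTG/MB, ~39.2 HTG/USD, figures assumed from earlier in the post).
RATE_HTG_PER_MB = 1.9
HTG_PER_USD = 39.2

gb_used = 25
htg_cost = gb_used * 1024 * RATE_HTG_PER_MB
print(f"{gb_used} GB pay-as-you-go: {htg_cost:,.0f} HTG (about ${htg_cost / HTG_PER_USD:,.0f})")
```

Call it north of $1,200 without a plan. Hence the trick.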

So, in short, mobile internet from local carriers in Haiti is reliable and cheap (if you know the trick to not paying through the nose per MB), and can be done on a fairly inexpensive piece of hardware. If you’re so inclined, you can also get USB sticks from NatCom for about 1500 HTG. My next step is going to be to see if a Cradlepoint router can handle the NatCom USB sticks, since the Cradlepoint doesn’t have the hotspot’s tight limit on the number of connected clients.