

At Oak City Labs, we love our continuous integration (CI). In our world, CI means that we have a trusty assistant sitting in the shadows that watches for new additions to our code repository.  Any updates get compiled, tested, packaged and shipped off for user consumption. If something goes wrong, the team is alerted immediately so we can get the train back on the tracks.

Let’s dive a little deeper into the toolset we use for CI. For iOS and Mac development, it might seem like a natural choice to use Xcode Server, and we did, for a time. However, as our project load grew and our need for reliable automation increased, we found that Xcode Server wasn’t meeting our needs. We switched to TeamCity with very good results.

Xcode Server, after several years of evolution, has become a solid CI server and has some advanced features like integrated unit testing, performance testing and reporting. The great thing about Xcode Server is the integration right into Xcode. You don’t have to bounce out to a website to see the build status and any errors or failing tests link directly to your code. Unfortunately, that’s where Xcode Server runs out of steam. It doesn’t go beyond the immediate build/test cycle to handle things like provisioning profile management, git tagging, or delivery to the App Store.

Enter Fastlane. When we first adopted Xcode Server, Fastlane was in its infancy, only partially able to cover the iOS build cycle. In the years since, Fastlane has grown into a full and robust set of automation tools that blanket the iOS and Mac build cycle, reaching far beyond the basic build/test routine. As Fastlane developed, we pulled more and more features into our CI setup. We built Python scripts to integrate various Fastlane pieces with Xcode Server. Eventually, we were spending a good deal of time maintaining these scripts. Fastlane, on the other hand, would handle all of that maintenance internally if we embraced it fully. There were also some pieces we had built by hand (Slack integration, git tagging) that Fastlane included out of the box. It was clear that it was time to wholeheartedly jump on the Fastlane bandwagon to drive our automated build system.

One hiccup — Fastlane really wants to drive the whole build process. That’s a great feature, but it means we can’t realistically run it from Xcode Server. We were already using TeamCity for CI with our other projects (Python, Angular, Android) and it seemed like a good fit. TeamCity is great at running and monitoring command line tools, and with Fastlane, our iOS and Mac builds are easily driven from the command line. Fastlane also produces TeamCity-compatible output for tests, so our unit test reports are displayed nicely in the TeamCity dashboard.

Now that our build system is fully Fastlane-ed, we benefit from their rich library of plugins and utilities. It’s simple to compute a build number for each build and push that as a git tag. Successes and errors are reported to the team via Slack. We can easily publish beta builds to Crashlytics and send production builds right to Apple’s App Store. Fastlane’s ‘match’ tool keeps our provisioning profiles synced across machines. There are even utilities to sync our dSYM files from iTunes Connect to our crash reporting service.
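To give a sense of what this looks like in practice, here’s a simplified, illustrative Fastfile lane (Fastlane configuration is a Ruby DSL; the scheme name, tag format and messages below are placeholders, and credentials are omitted):

```ruby
# Fastfile — an illustrative beta lane, not our exact configuration.
default_platform(:ios)

platform :ios do
  lane :beta do
    match(type: "adhoc")                          # sync provisioning profiles across machines
    build_number = number_of_commits              # derive a build number from the git history
    increment_build_number(build_number: build_number)

    scan(scheme: "MyApp")                         # run unit tests (TeamCity-friendly output)
    gym(scheme: "MyApp")                          # build and sign the app

    crashlytics                                   # publish the beta (API token and build secret omitted)
    add_git_tag(tag: "beta/#{build_number}")      # record the build in git
    slack(message: "Beta build #{build_number} is on its way to testers")
  end
end
```

A TeamCity build step can then be as simple as running `bundle exec fastlane beta` from the command line.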

Having the CI for all our projects under the TeamCity roof also comes with some nice benefits. There’s a single dashboard that shows the status of every project, and only one login system to manage. The TeamCity server also queues all the builds, so if an Android project is building when an iOS project updates, the iOS build waits until the Android build finishes. Before, with separate CI servers on a single machine, projects might build in parallel, pushing the memory and CPU limits of the build machine, and the artificially elongated build times could confuse the server’s monitoring of how long builds should take.

Our fully automated iOS and Mac build configurations have been running in the TeamCity / Fastlane environment for almost a year now and we’re delighted with the results. The Fastlane team does such a great job keeping up with changes in Apple’s environment. On the few occasions that things have broken, usually due to changes on Apple’s end, Fastlane’s team has a fix almost immediately and a simple ‘gem update’ on our end sets everything right. Going forward, Fastlane and TeamCity are our tools of choice for continuous integration.

At Oak City Labs, we enjoy solving all kinds of problems. Our projects span subject areas from IoT, to mining data from social media, to integrating video capture hardware. One of my favorite projects we’ve worked on recently involves computer vision and real-time video analysis of data from a medical device.

Our client, Altaravision, “has developed the most portable, high-definition endoscopic imaging system on the market today”, called NDŌʜᴅ. A Fiberoptic Endoscopic Evaluation of Swallowing (FEES) system like this allows a medical professional to observe and record a patient swallowing food. The NDŌʜᴅ system is portable and uses an application running on a MacBook to display the endoscope feed in real time and record the swallowing test to a video file.

After the test is completed on the patient, the video is reviewed to evaluate the efficiency of swallowing. Ideally, the patient will swallow all of the food, but a range of conditions can result in the patient being unable to adequately swallow all the material. Particles that aren’t swallowed may be aspirated and cause pneumonia. When reviewing the test footage, the test administrator has traditionally had to carefully estimate the amount of residual material after swallowing. Not only is this extremely time-consuming, but it also introduces human error and compromises the reproducibility of results.

Oak City Labs has been working with Altaravision to tackle this problem. How can we remove the tedious aspect from the FEES test and make the results available faster and with better consistency? As with all our automation projects, we’d like a computer to handle the boring, repetitive parts of the process. Using computer vision techniques, we’d like the NDŌʜᴅ application to process each frame of the FEES test footage, categorize pixels by color and produce a single numerical value representing the residual food material left in the throat after swallowing. We should give the user this feedback in real-time as the test is being performed.

The NDŌʜᴅ application runs on macOS, so we can leverage Core Image (CI) as the basis for our computer vision solution. CI provides an assortment of image processing filters, but the real power lies in the ability to write custom filters. A pair of these custom filters will solve the core of our problem.

Our first task is to remove the very dark and the very bright portions of our image. We’ll ignore the dark portions because we just can’t see them very well, so we can’t classify their color. Very bright portions of the image are just overlit by our camera and we can’t really see the color there either. Our first custom filter looks at each pixel in the image and evaluates its position in color space with respect to the line from absolute black to absolute white. Anything close enough to this grey line should be ignored, so we set it to be transparent. After some testing, it turned out that it was difficult to pick a colorspace distance threshold that worked well at the light end and the dark end, so we use a different value at each end of the grey spectrum and linearly interpolate between the two.
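Here’s a sketch of what this first pass can look like as a custom `CIColorKernel` (the class name and threshold values are illustrative, not the production ones):

```swift
import CoreImage

// Illustrative filter: make near-grey (too dark or too bright) pixels transparent.
class GreyMaskFilter: CIFilter {
    var inputImage: CIImage?
    var darkThreshold: CGFloat = 0.10    // distance-from-grey cutoff at the black end (made-up value)
    var lightThreshold: CGFloat = 0.25   // distance-from-grey cutoff at the white end (made-up value)

    private static let kernel = CIColorKernel(source:
        """
        kernel vec4 maskGrey(__sample s, float darkThreshold, float lightThreshold) {
            // Luminance locates the pixel along the black-to-white grey line.
            float luma = (s.r + s.g + s.b) / 3.0;
            // Distance from that grey line in RGB space.
            float distanceFromGrey = distance(s.rgb, vec3(luma));
            // Interpolate the cutoff between the dark and light ends.
            float threshold = mix(darkThreshold, lightThreshold, luma);
            // Too close to grey: drop the pixel by making it transparent.
            return distanceFromGrey < threshold ? vec4(0.0) : vec4(s.rgb, s.a);
        }
        """
    )

    override var outputImage: CIImage? {
        guard let input = inputImage, let kernel = GreyMaskFilter.kernel else { return nil }
        return kernel.apply(extent: input.extent,
                            arguments: [input, darkThreshold, lightThreshold])
    }
}
```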

[Images: the original throat frame (top) and the same frame with the bright and dark areas removed (bottom)]

The top image is the original image data. The lower image is the same frame after the bright and dark areas have been removed. In particular, the dark area deeper down the throat, in the bottom center, has been filtered out, as has the camera light’s bright reflection in the top right corner.

Now that we have an image with only the interesting colors remaining, we can classify each pixel by color. In a FEES test, the food is dyed blue or green to help distinguish it from the throat. We need our second-pass filter to separate the reddish pixels from the blueish and greenish ones. In our second custom CI filter, we examine each pixel and classify it as red, green or blue based on its color-space distance from the absolute red, green and blue corners of the color cube. We then convert each pixel to its nearest absolute color.
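A similar sketch of the second pass (again, the details are illustrative rather than the shipping code):

```swift
import CoreImage

// Illustrative second pass: snap every remaining pixel to pure red, green or blue,
// whichever corner of the RGB cube it is closest to. Transparent pixels stay transparent.
private let classifyKernel = CIColorKernel(source:
    """
    kernel vec4 classifyColor(__sample s) {
        if (s.a == 0.0) { return vec4(0.0); }   // already removed by the first pass

        float dRed   = distance(s.rgb, vec3(1.0, 0.0, 0.0));
        float dGreen = distance(s.rgb, vec3(0.0, 1.0, 0.0));
        float dBlue  = distance(s.rgb, vec3(0.0, 0.0, 1.0));

        // Convert the pixel to its nearest absolute color.
        if (dRed < dGreen && dRed < dBlue) { return vec4(1.0, 0.0, 0.0, 1.0); }
        if (dGreen < dBlue)                { return vec4(0.0, 1.0, 0.0, 1.0); }
        return vec4(0.0, 0.0, 1.0, 1.0);
    }
    """
)

func classifyPixels(in image: CIImage) -> CIImage? {
    return classifyKernel?.apply(extent: image.extent, arguments: [image])
}
```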

[Images: the original throat frame (top) and the fully classified red/green frame (bottom)]

The top image is the original image. The bottom image is the fully processed image, sorted into red and green (no blue pixels in this example). Note how the green areas visually match up against the residual material in the original image.

Finally, our image has been fully processed. Transparent pixels are ignored and every remaining pixel is either absolute blue, red or green. Now we use vImage from Apple’s very powerful Accelerate Framework to build a histogram of color values. Using this histogram data, we can easily compute our residual percentage as simply the sum of the green and blue pixel counts over the total number of non-transparent pixels (red + green + blue). This residual value is our single numerical representation of the swallowing efficiency for this frame of data.
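Here’s an illustrative sketch of that computation (it assumes the frame has already been reduced to transparent, pure red, pure green or pure blue pixels by the two filters above):

```swift
import Accelerate
import CoreImage

// Sketch, not the shipping NDŌʜᴅ code: turn a processed frame into a residual percentage.
func residualPercentage(of processedImage: CIImage, context: CIContext) -> Double? {
    let width = Int(processedImage.extent.width)
    let height = Int(processedImage.extent.height)
    let bytesPerRow = width * 4

    // Render the CIImage into an RGBA8 bitmap that vImage can read.
    var pixelData = [UInt8](repeating: 0, count: bytesPerRow * height)
    context.render(processedImage,
                   toBitmap: &pixelData,
                   rowBytes: bytesPerRow,
                   bounds: processedImage.extent,
                   format: .RGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())

    // One 256-bin histogram per channel (R, G, B, A), stored back to back.
    var histogram = [vImagePixelCount](repeating: 0, count: 4 * 256)
    let error: vImage_Error = pixelData.withUnsafeMutableBytes { pixels in
        histogram.withUnsafeMutableBufferPointer { bins -> vImage_Error in
            var buffer = vImage_Buffer(data: pixels.baseAddress,
                                       height: vImagePixelCount(height),
                                       width: vImagePixelCount(width),
                                       rowBytes: bytesPerRow)
            var channels: [UnsafeMutablePointer<vImagePixelCount>?] = [
                bins.baseAddress! + 0 * 256,   // red
                bins.baseAddress! + 1 * 256,   // green
                bins.baseAddress! + 2 * 256,   // blue
                bins.baseAddress! + 3 * 256    // alpha
            ]
            return vImageHistogramCalculation_ARGB8888(&buffer, &channels, vImage_Flags(kvImageNoFlags))
        }
    }
    guard error == kvImageNoError else { return nil }

    // Every classified pixel has exactly one channel at full intensity, so the
    // top bin of each channel is the pixel count for that color.
    let red = Double(histogram[0 * 256 + 255])
    let green = Double(histogram[1 * 256 + 255])
    let blue = Double(histogram[2 * 256 + 255])
    let total = red + green + blue
    guard total > 0 else { return nil }

    // Residual = dyed (green + blue) pixels over all non-transparent pixels.
    return (green + blue) / total
}
```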

In this process, we’ve been very careful to use high-performance, highly optimized tools to ensure our solution can run in real time. The Core Image framework, including our custom filters, takes advantage of graphics hardware to run very, very quickly. Likewise, vImage is heavily optimized for image operations. We also use a bit of the Metal API to display our CI images on screen, which is very speedy as well. And while we’re enhancing NDŌʜᴅ on macOS, these tools are just as fast on iOS.

At Oak City Labs, we love challenging problems. Working with real-time video processing for a medical imaging device has been particularly fun. As Altaravision continues to push NDŌʜᴅ forward, we look forward to discovering new challenges and innovating new solutions.

TL;DR — Using an empty app delegate for unit testing is great for iOS developers. With a little modification, Mac developers can do the same.


App Delegate — Not in Charge Anymore

At Oak City Labs, we’re big believers in the power of unit testing which is vital to the health and reliability of our build automation process. Jon Reid runs a great blog called Quality Coding, focusing on Test Driven Development (aka TDD) and unit testing. Jon’s blog is one of our secret weapons for keeping up with new ideas and techniques in testing.

A few months ago, I read Jon’s article, “How to Easily Switch Your App Delegate for Testing”. It’s a quick read detailing a neat trick for speeding up execution of your unit tests. The gist is that you switch app delegate classes at startup, before the bulk of your app has bootstrapped. By switching to a testing app delegate that does absolutely nothing, you bypass the app’s startup routine that slows down test execution. Faster tests mean less time waiting and less pain associated with testing.

There’s also another benefit that Jon doesn’t really mention. Because you skip the normal startup routine, the only code executed must be called by your test. Say I’m writing a test for my DataController without using this technique. The test is failing and I drop a breakpoint in the initialization routine. When I run the test, the debugger stops at the breakpoint twice — once because the app is bootstrapping itself and once for the unit test that creates its own DataController. Now there are two DataController instances running around. Comedy hijinks ensue!

On the other hand, if we switch to an empty app delegate for testing, we eliminate the bootstrap altogether, meaning only one instance of DataController is created, and that’s part of our unit test. No more confusion about whether an object in the system is under test. By dynamically choosing a testing app delegate, our tests run faster, there’s less confusion and, as Jon points out, it becomes easy to test our production app delegate too.

Back to the Mac

Hopefully you’re convinced at this point that choosing an app delegate at runtime is a Very Good Idea. Setting all this up for an iOS project is thoroughly discussed in the original article, including variants for Objective-C and both Swift 2 and 3. At Oak City Labs, we write Mac apps too, so how does this translate from mobile to desktop?

For reference, here’s an implementation of main.swift I’m using in an iOS Swift 2 project.
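A minimal version looks something like this (Swift 2 syntax; it assumes the empty TestingAppDelegate class is compiled into the app target alongside the usual AppDelegate):

```swift
// main.swift (Swift 2) — choose the app delegate at launch.
import UIKit

// XCTest is only loaded when a test bundle has been injected.
let isRunningTests = NSClassFromString("XCTestCase") != nil
let appDelegateClass: AnyClass = isRunningTests ? TestingAppDelegate.self : AppDelegate.self

UIApplicationMain(Process.argc, Process.unsafeArgv, nil, NSStringFromClass(appDelegateClass))
```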

This is pretty straightforward, since `UIApplicationMain` takes the name of the app delegate class as one of its parameters. Unfortunately, when we move to the Mac, that’s not how `NSApplicationMain` works. Showing its C roots, `NSApplicationMain` just takes `argc` and `argv`. So, in order to make this work on the desktop, we need to do a little extra jiggery pokery.

Running normally, we just call `NSApplicationMain` like always. Running in unit test mode, we manually create our empty testing app delegate, explicitly set it as the delegate property of the global shared application instance, and then kick off the run loop with `[NSApp run]`.
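Here’s roughly what that `main()` looks like in Objective-C (a sketch rather than our exact code; both classes are looked up by name so the file compiles regardless of which targets they live in):

```objc
// main.m — swap in the empty app delegate when unit tests are running.
#import <Cocoa/Cocoa.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        BOOL isRunningTests = (NSClassFromString(@"XCTestCase") != nil);
        if (!isRunningTests) {
            // Normal launch: NSApplicationMain wires up the real app delegate
            // from the main storyboard or nib as usual.
            return NSApplicationMain(argc, argv);
        }

        // Test launch: create the shared application and attach the empty delegate.
        // NSApplication holds its delegate weakly, so keep a strong reference
        // around for the life of the process.
        static id<NSApplicationDelegate> testingDelegate = nil;
        testingDelegate = [[NSClassFromString(@"TestingAppDelegate") alloc] init];

        NSApplication *application = [NSApplication sharedApplication];
        application.delegate = testingDelegate;
        [application run];   // i.e. [NSApp run]
    }
    return 0;
}
```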

Side note — I started writing this in Swift 3 since the rest of the project is Swift 3, but I’m still new to Swift 3 and I couldn’t manage to get the instantiation from a class name bit working.  Luckily, I realized I could still write my `main()` routine in trusty, old Objective-C and it would play nicely with my type-safe Swift application.

Just for reference, here’s my TestingAppDelegate class. Like school on Saturday, this class is empty.
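It looks something like this (the explicit `@objc` name lets the Objective-C `main()` above find the class with `NSClassFromString`):

```swift
import Cocoa

// The entire testing delegate: it conforms to NSApplicationDelegate but implements
// nothing, so none of the app's normal bootstrapping runs during unit tests.
@objc(TestingAppDelegate)
class TestingAppDelegate: NSObject, NSApplicationDelegate {
}
```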

We want unit tests to provide fast, clear feedback, and the empty TestingAppDelegate approach improves both. Now, with a little runtime hocus-pocus, we can employ delegate switching on the Mac as well as on iOS.