As you are nearing the end of beta testing your iOS app and preparing for its submission to the App Store (and subsequent deployment out into the real world!), it’s easy to forget about a crucial part of your app’s success: the product page. Nothing has a greater impact on driving downloads and acquiring users than your App Store product page. Done correctly, your product page really can set your app up for success in the market. As you’re planning for the launch of your iOS app, make sure you’ve thought through the following details.

Your App Store Product Page Details

  • App Name: Your app’s name is pretty straightforward, and by the time you are planning for its release, the name is likely already established. Besides your keywords, the name of your app has the single biggest impact on discoverability within the App Store. The name should be simple, easy to understand and descriptive of the service you are offering. Apple recommends limiting names to 23 characters or fewer (and caps them at 30 characters).
  • App Icon: This is the first visual a user will see of your app in the store, as well as the long-term visual users will search for on their device to launch your app. The icon should be simple, focused and recognizable. More on Apple’s icon requirements.
  • Category: You can assign your app two categories in the App Store, a primary and secondary. Though most apps have an obvious primary category from the get-go, others have the potential to fall into a few different categories. Your primary category is what affects search rankings and discoverability, so choose wisely and think about where your targeted users are most likely to be exploring. More on Apple’s categories, including special cases.
  • Demo/Previews: A video demo or preview of your app’s core functionality is a great way to make a lasting impression on potential users and drive downloads. Videos should be short, 15 to 30 seconds in length, and show your app in action. More on Apple’s Preview recommendations here and here.
  • Screenshots: You can (and should) add up to five screenshots of your app to your product page. The first two screenshots are shown automatically in search results if a demo is not present, so it’s important that they capture users’ attention and draw them into your product page for more information. More on Apple’s screenshot properties. First time creating screenshots? We’ve had a lot of success using Launch Kit.
  • Description: The first two to three lines of text are key real estate when describing your app’s functionality and features. Apple only displays this limited amount of text before appending a “more” link, which users are forced to click in order to reveal the entire text. When crafting your description text, bear that in mind and put your most compelling details first. As a whole, your messaging should provide an overview of your app’s functionality as well as a list of key features.
  • Keywords: Besides your app’s name, keywords play the most critical role in search result rankings. Apple limits your keywords to 100 characters total, including commas to separate words. It’s important to be strategic when choosing your keywords. Think through what search terms your target audience will be using when looking for an app like yours. Be specific and focused.
  • “What’s New” section: While not necessarily important for your launch, it’s worth noting that the “What’s New” section will be valuable real estate beginning with your first update. Here you should not only describe the changes, fixes and added features being released, but you should also use the space to strategically communicate with users.
  • Privacy Policy / Terms and Conditions: Apple, and the law, require your app to have a published Privacy Policy and/or Terms and Conditions page if a user’s personal information is being “accessed, collected and transmitted” within and/or from your app. You must also gain a user’s permission before “accessing, collecting and transmitting” personal information. It is your responsibility to consult with legal representation to determine when a Privacy Policy is needed and what it should contain. More about Privacy Policies, including examples.

Come back next week as I continue this series on launching your iOS app with part two: Marketing Your App. We’ll be discussing creating a marketing website, social media, press kits and more! Update: the series continues here.

Well, that was exciting.

WWDC 2017 wound down last week. It’s been a firehose of information, mostly delightful, a little disappointing and largely overwhelming. I wasn’t a lottery winner this year, so I’m observing from the other coast and I’ve still got about a bajillion hours of video queued up to watch. It’s going to be a fun summer! But the first session I was sure to watch was the venerable “What’s New in HomeKit” to learn the fate of my Hopes and Dreams.

The news is mostly good. Here’s a quick rundown of my wishlist and whether Santa delivered this DubMas.

  • Better automation rules
    • Offset from sunset
      • Yes! Now you can create triggers for “significant events” with an offset, so you can turn on lights 45 minutes before sunset. “Significant events” seems to mostly be sunrise and sunset in the current release.
    • Follow up events
      • Yes! HomeKit now includes “duration events” which can act as bookends on something like a motion trigger. When there’s motion in the hallway, turn on the hall light and turn it off in five minutes. I’m not clear if you can make this a little more sophisticated and turn the light off if there is no motion for five minutes or if you’re limited to a fixed interval. But even so, definitely an improvement.
    • Limit rules to time ranges or scenes engaged
      • Yes! The specific example from the presentation restricts a scene triggered from a motion sensor to only fire after sunset and before sunrise. You can connect these time limits to either “significant events” (with offset) or to specific times. I believe you can also gate them based on an active scene, but my notes are slightly sketchy on that point.
  • Beyond devices
    • Interaction with iOS apps, devices
      • Nope. There was no mention of triggering apps or controlling your AppleTV from Siri on your phone. But all is not lost! More on that later.
    • Workflow integration
      • Nope. No mention of Workflow at all. I’m still optimistic about Workflow. They could easily integrate HomeKit support in an upcoming release which wouldn’t rate a pre-announcement at DubDub.

Other HomeKit Improvements

There were lots of tantalizing tidbits included in the presentation, all of which bode well for the future of HomeKit. In a lot of ways, they’ve been building a foundation for several years and we’re finally getting to the point of critical mass where we can go beyond a fancier X10 system.

Highlights of the other announcements:

  • Other new events
    • Presence-based events
      • First person comes home
      • Last person leaves home
      • House is occupied or unoccupied
    • Threshold range events
      • Temperature is above 80 degrees
      • Temperature is below 60 degrees
      • Temperature is between 60 and 80 degrees
    • Events which recur on a schedule
      • Execute “Good Morning” scene at 7 am, but only on weekdays.
  • Protocol enhancements for Bluetooth accessories which improve latency from several seconds to under a second
    • A bluetooth motion sensor used to take three seconds to turn on a light. Now it can do it in less than a second.
    • Should be available as a software upgrade to existing devices — no new hardware needed.
  • Enhanced setup
    • If you’re scanning (or typing) the string of numbers printed on the device, you can now do so before plugging in the device. Personally, I’m not limber enough for the gymnastics required to scan the smart plug after it’s plugged in and powered up.
    • New devices can use QR codes instead of printed numbers. The big benefit of QR codes is that they can be physically much smaller, as small as 10mm x 10mm.
    • And the pièce de résistance — setup via NFC tags. Yup, finally. Tap the device and that’s it. I honestly don’t know why any new device wouldn’t use NFC “tap to configure”.
  • New categories
    • ⛲️Sprinklers! I’m surprised sprinklers are just now getting to the party. They’ve been low hanging fruit for automation geeks for 20 years.
    • Faucets! A quick confab at the office and everyone agrees on the killer use case — cooking chicken. Nobody wants to smear salmonella all over the sink.
  • Authentication / Certification
    • Software authentication. Previously, all HomeKit devices needed hardware authentication which meant older devices couldn’t be upgraded and new devices had to include an extra chip, adding cost and complexity. For example, I suspect Wemo bailed on HomeKit support because of the hardware authentication requirement. A few weeks ago, they were back on board. Coincidence?
    • Non-commercial products can be certified for free! Hobbyists, students and the like can now access the technical documents and tools to build HomeKit controllable devices for free. Now (in my copious free time), I can build a garage door monitor out of a Raspberry Pi and control it with Siri! This is super exciting because it lowers the bar significantly for building and testing prototypes before taking them to market. I expect this to feed a niche corner of KickStarter very, very soon.

Was there anything on the wish list that we didn’t get? Yes, HomeKit for the Mac. All the discussions were aimed at watchOS, tvOS and iOS, but the Macintosh is left out in the cold. It’s probably a question of resource management. Mac users already mostly have an iPhone in their pocket or a watch strapped to their wrist, so it’s not a desperately necessary feature. And soon, there’ll be a Siri Speaker listening for any request in the house too.

That brings us to the Siri Speaker, now with its official given name — HomePod. (I actually love HomePod, but it seems a little more “space cadet” than I’d expect out of Apple Marketing.) The intro at DubDub very much focused on it as a spiritual successor to the iPod HiFi, although they didn’t call that ghost by name. This week, at least, they’re positioning HomePod firmly as a music device that you can talk to. Almost as an aside, they confirmed that it would act as a gateway to HomeKit too. HomePod does usher in the next wave of AirPlay — AirPlay 2 — which supports whole home audio streamed from any device to HomePods and/or AppleTVs.

AppleTV and tvOS didn’t get much attention during the keynote, only that Amazon Video is coming this summer (finally). But among the other WWDC technology announcements, Apple signaled a major change from H.264 to H.265 (AKA HEVC) which includes better support for 4K video, a feature currently missing from my beloved AppleTV. In the fall, we’ll see a hardware update for AppleTV taking it into the world of 4K video, along with a 4K upgrade to iTunes Store video content. I think we’ll see tighter integration at that point between the AppleTV and HomePod, which only seems natural. (Just watch the demo of Google Home and ChromeCast from I/O this year.) If HomePod can control the AppleTV, this will be the big reveal moment.

Overall, I’m really happy with the HomeKit announcements at WWDC. Apple is pushing forward and seems committed to the platform across (almost) all the platforms. The rules/triggers/scenes system has become more sophisticated and shouldn’t feel like a hindrance in iOS 11. We’ll have to wait another six months for HomePod to land, but at least we know it’s coming. I’m cautiously optimistic that a solid AppleTV update before the holiday shopping season will reinforce the appeal of the whole ecosystem. In the meantime, I guess I’ll start saving up for HomePods. Maybe they’ll come in a six pack.

The first weeks of June are upon us and that means one thing — DubDub, Apple’s Worldwide Developers Conference — is days away. This has always been Mac Geek Christmas. Back in the “Good Old Days,” DubDub was the High Holiday of summertime. The other was MacWorld, which ushered in the new year in January. But years ago, Steve Jobs withdrew from MacWorld and left us with only DubDub, although the Festival of iPhone has become a traditional event, bookending the summer.

But I digress. It’s DubDub time and the rumors, analyses and readings of tea leaves are dialed up to eleven. We will likely see a nice preview of the new operating systems slated for the fall release. (Remember, there are four now — iOS, macOS, tvOS and watchOS.) Keep in mind that this is a developer conference which means Apple will showcase new features and new technologies needing developer support, like enabling new SiriKit domains, building extensions for Mail or enabling person-to-person ApplePay transfers in your app. Apple will hold back the flashy new stuff that doesn’t need developer buy-in for the iPhone hardware release in the fall. That might include things like multi-user FaceTime chats, snoozing messages in the email app, or sleep tracking for watch.

There’s lots to talk about that might happen at WWDC, but I’m going to focus on one near and dear to my heart — HomeKit. Three years ago, with the release of iOS 8, Apple introduced HomeKit, a framework to integrate home automation gizmos under one roof. Back then (and still very much today), when you bought a gadget to control your lights, you had to use a gadget specific app. Buy another thermostat doohickey, get another app. HomeKit promised to unify all that in a single interface, but because of long hardware cycles and stubborn manufacturers it has taken time to gain real traction. HomeKit didn’t come with a consumer app, although it did allow some plucky, independent developers to build such a thing. (Results ranged from “not bad” to “OMG, my eyes!”) With iOS 10, Apple finally introduced their own “Home” app to provide that universal control panel for every device in your home. Finally, with a first party app and three releases of HomeKit improvements, things are starting to come together. But it could be so much better.


In the Home app now, you can create automation rules, but they’re not quite smart enough to be useful. For example, you can set the Home app to turn on the lights at sundown, but in my house you need lights 45 minutes before sundown. To become really useful, we need modifiers on sunrise/sunset.

Rules can be tagged to only happen after sunset. For instance, if a motion detector downstairs fires, turn on the downstairs lamp. Sounds okay on paper, but between 6 pm and 11 pm, the downstairs lights are already on. What I really need is for a rule to only be active between 11 pm and 7 am, when I go downstairs in the middle of the night. That’s when I need the smart house to light my way. Perhaps an even better option is a scene based conditional — use this rule only when the BedTime scene is set.

And for these motion sensitive events, I usually want to follow that up with “turn the light off” if there’s no motion for five minutes. I don’t have any motion sensors yet (only a few are available at the moment), but my research indicates that the “light off” companion event isn’t available yet.

Interactions Beyond Devices

With each version of iOS, HomeKit supports more types of devices — ceiling fans, window shades, humidifiers, door locks, cameras and so on. We now have the ability to build fairly complex scenes. At bedtime, one request to Siri can turn off the lights, turn down the thermostat, lock the doors, close the garage, draw the blinds and enable the security system. (Dads across the world will be in search of new hobbies.) With iOS 11, HomeKit needs to come out of its shell and start interacting outside its comfort zone.

What if HomeKit could interact with apps on your phone? On your TV? I like to sleep with the sound of crashing waves in the background (mostly to drown out the loud gulping of a bunny rabbit who needs a drink of water at 2 am). A couple of years ago, I switched from a dedicated sound machine (R.I.P. Squeezebox) to an app. When I tell Siri “BedTime” to shut down the house for the night, she should fire up the crashing waves as well, but today I have to do it manually. Likewise, when I hit the “Movie Mode” scene that sets the lights just right, why doesn’t the AppleTV automatically fire up and switch to Netflix? I’d be fully prepped for Gilmore binging in one tap. (Seriously, what will Rory do?)

One way this could come to pass is via Workflow. Apple recently acquired the iOS automation app and everyone is anxiously waiting to see what Workflow can do now that they’re behind the curtain. The ability for HomeKit to activate a Workflow “workflow” would open the door to a cornucopia of possibilities. How about a “yoga” scene that dims the lights, brings up the practice video on the TV, plays some new age tunes on the stereo and starts a workout session on your watch? That’s Apple’s big pitch anyway — the magic of a comprehensive and integrated ecosystem.

Implied in that last bit is that HomeKit acquires the ability to interact across your devices. If this arrives, it’ll come wrapped up with Siri being able to do the same. This is an absolute must for the fabled Siri Speaker. Google Home’s killer feature is the ability to show information and entertainment on your TV via Chromecast. The Siri Speaker will need to do at least that, letting you use voice to play content on your AppleTV (or start a phone call on your iPhone, etc). Hopefully HomeKit can ride Siri’s coattails and automations can control not only “smart devices” like lights and switches, but iOS devices and Macs too. Picture walking into your office and a motion sensor triggers the “Work” scene. Lights turn on, the Mac wakes up, unlocks via your watch and fires up the morning workflow.

There’s so much more that I want to see from DubDub next week. These wishes for HomeKit are just a deep dive into one narrow area. The keynote is three days away and the rumors are few and far between. Apple has dropped a couple of press releases about material that would normally take up the first quarter of the keynote. We’re basically going in blind to two hours of complete surprise. And. I. Can. Not. Wait.

Waiting is the hardest part…

This year at their annual developer’s conference, I/O, Google announced that Android now supports Kotlin. For those that don’t know, Kotlin is a cleaner, more expressive and more modern language than Java. Our Android team is excited to begin learning and implementing Kotlin. Of its numerous features and benefits, here are the top four I’m excited to dig into.

Nullable types

Touted as the solution to NullPointerExceptions, nullable types in Kotlin make surprise null values in your code much less likely. There is no need for constant null checks, as functions and fields on nullable types can be accessed safely with the ?. safe-call operator.

Reduced to:
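A sketch of the idea (the Car and Owner classes here are hypothetical, purely for illustration):

```kotlin
// Hypothetical model classes used only for illustration.
class Owner(val name: String?)
class Car(val owner: Owner?)

fun ownerName(car: Car?): String? {
    // The Java-style equivalent needs nested checks:
    //   if (car != null && car.getOwner() != null) return car.getOwner().getName();
    //   return null;
    return car?.owner?.name  // null if car, owner, or name is null
}
```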

Kotlin properties and data classes

With properties in Kotlin, getters and setters are no longer necessary. Getters and setters are created for properties automatically and implicitly called when the property is accessed. This will significantly reduce clutter in data model classes. Additionally, the data keyword in Kotlin can be used to create a data model class with automatic implementations of methods like equals, hashCode, toString, and others.

Reduced to:
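Something along these lines (the field names are my own invention): one line replaces a Java class full of boilerplate.

```kotlin
// One line replaces a Java class with a constructor, getters/setters,
// equals(), hashCode(), toString(), and copy().
data class Car(val make: String, val model: String, val mileage: Int)
```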

Extension methods

In Kotlin, classes can be extended to include custom functionality right where you need it as opposed to creating an entirely new class that extends the one you need custom functionality for. Behind the scenes, Kotlin actually creates static methods for the extension methods where the first parameter is an instance of the extended class.

The milesAfterTrip method used earlier could be added to the Car object as follows:
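A sketch of that extension (assuming, hypothetically, that Car exposes a mileage property):

```kotlin
// Extension function: adds milesAfterTrip to Car without subclassing it.
// Behind the scenes this compiles to a static method whose first
// parameter is the Car instance.
fun Car.milesAfterTrip(tripMiles: Int): Int = this.mileage + tripMiles

// Usage:
// val total = myCar.milesAfterTrip(150)
```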

Lambda functions and Inline functions

Kotlin has better support for functional programming than Java with proper function types.

Whereas the Java code above requires the Predicate<T> interface, the same code in Kotlin could be:
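As a sketch (printOldCars and the Car year cutoff are assumptions on my part), a proper function type stands in for Java’s Predicate&lt;T&gt;:

```kotlin
// A Kotlin function type, (Car) -> Boolean, replaces Java's Predicate<T>.
fun printOldCars(cars: List<Car>, oldCarTester: (Car) -> Boolean) {
    cars.filter(oldCarTester).forEach { println(it) }
}

// Call site: the lambda is passed directly, no anonymous class needed.
// printOldCars(cars) { it.year < 1990 }
```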

Inline functions eliminate the overhead of lambda functions, which typically cause the creation of an anonymous class and the capture of a closure. Functions marked inline are compiled directly into their call sites, along with the code of the lambda functions passed to them.

If we make the printOldCars function used earlier inline like so:

Then the result of calling printOldCars:

Is equivalent to
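A sketch of the whole sequence, with the same hypothetical Car and an arbitrary year cutoff:

```kotlin
// 1. Mark the function inline.
inline fun printOldCars(cars: List<Car>, oldCarTester: (Car) -> Boolean) {
    for (car in cars) {
        if (oldCarTester(car)) println(car)
    }
}

// 2. Calling it:
// printOldCars(cars) { it.year < 1990 }

// 3. ...compiles as if the lambda body were pasted in place:
// for (car in cars) {
//     if (car.year < 1990) println(car)
// }
```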

Functionally, this is the same as before, but now the oldCarTester lambda that is passed in does not require additional memory allocations. By the way, the let function used above is also an inline function. You can read more about inline functions here.

These are just a few of the features available in Kotlin. You can explore more Kotlin here.

Interested in an Android app? Or need help maintaining an existing one? Drop us a line. We’d love to chat with you!

Today I’m offering some tips on how to set up Google Maps in your Android application. This is not meant to be a start to finish tutorial on the process, but instead a few tips to move past some of the stumbling blocks I’ve run into.

Finding your app’s SHA-1 key

To set up a Google API key, you will need your app’s SHA-1 key. The easiest way to get both your debug and release SHA-1 keys (assuming you have a signing config set up for one of your build variants) is to run the Gradle signingReport task in Android Studio (Gradle panel → app → Tasks → android → signingReport).

Alternatively, the debug SHA-1 key can be found via command line by navigating to your ~/.android directory and running the following command (on Mac):

  • keytool -list -v -keystore debug.keystore
  • The password should be “android”.
  • Similarly, the release SHA-1 key can be found by running keytool -list -v -keystore YOUR_KEY_STORE_FILENAME.jks in whatever directory your keystore is located. The password will be the keystore password.

Getting a Google API key

  • Head to the Google API Manager console here:
  • Enable the Google Maps Android API
  • Go to the Credentials page and click Create credentials, choose API key.
  • When prompted with your new API key, click Restrict Key.
  • Name your key if you would like, then under the Key restriction section click “Android apps”
  • Click “Add package name and fingerprint” and enter your app’s package name (found in the android project’s AndroidManifest.xml) and the correct SHA-1 key. You will likely want at least two of these package name / fingerprint entries, one with the debug SHA-1 key and one with the release SHA-1 key. If you have build types that alter the package name, you will want to create additional package name / fingerprint entries for them. For example, say I append “.beta” to a build type that I upload to Crashlytics Beta. The package name “io.oakcity.project.beta” will need its own entry with a release SHA-1 key.

Setting Google API credentials in your app

Add the following meta-data tag to your AndroidManifest.xml file:
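The tag goes inside the &lt;application&gt; element; @string/google_maps_key assumes you’ve stored the key in a string resource, as recommended below:

```xml
<meta-data
    android:name="com.google.android.geo.API_KEY"
    android:value="@string/google_maps_key" />
```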

It is recommended that you store your Google API key in a string resource and reference it from this meta-data tag. From here, you should be able to add a SupportMapFragment to an activity and get started developing your Google Maps application. Refer to this for how to set up a basic Google Maps activity.

Customizing the Google Map

Here are a few brief tips on customizing your app’s Google Map.

Setting size of a Google Map Marker

Markers are covered extensively in the documentation. Setting a custom marker image is easy, but there doesn’t appear to be a way to set the marker size directly in the MarkerOptions. Marker size can instead be set by loading and sizing a bitmap of the marker image first.
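A sketch of that approach (the drawable resource name and pixel sizes are placeholders):

```java
// Decode the marker drawable and scale it to an explicit pixel size,
// since MarkerOptions has no size setter of its own.
private Bitmap sizedMarkerBitmap(Context context, int widthPx, int heightPx) {
    Bitmap source = BitmapFactory.decodeResource(
            context.getResources(), R.drawable.marker_icon); // placeholder drawable
    return Bitmap.createScaledBitmap(source, widthPx, heightPx, false);
}
```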

The bitmap returned by this method can be handed to the MarkerOptions as follows:
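For example (coordinates are placeholders, and sizedBitmap stands in for the scaled bitmap from the sizing step):

```java
// Wrap the pre-sized bitmap in a BitmapDescriptor and hand it to MarkerOptions.
googleMap.addMarker(new MarkerOptions()
        .position(new LatLng(35.78, -78.64))  // placeholder coordinates
        .icon(BitmapDescriptorFactory.fromBitmap(sizedBitmap)));
```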

Moving the toolbar

The Google Maps toolbar contains buttons for opening a selected location in a navigation app or in Google Maps. If your user interface covers the toolbar in its default location, you can reposition the toolbar by setting the GoogleMap object’s padding.

Setting the bottom padding will move the toolbar.
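For instance (the 200px value is arbitrary):

```java
// setPadding(left, top, right, bottom), in pixels.
// Bottom padding lifts the toolbar (and other map controls) above
// whatever UI is covering the bottom edge of the map.
googleMap.setPadding(0, 0, 0, 200);
```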

Getting GPS bounds

To get the GPS bounds of the map that you see on screen, you can use the GoogleMap’s projection as follows:
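A minimal sketch:

```java
// LatLngBounds of the region currently visible on screen.
LatLngBounds bounds = googleMap.getProjection().getVisibleRegion().latLngBounds;
LatLng center = bounds.getCenter();  // center of the viewing area
```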

The LatLngBounds object can be used to get coordinates of the center of the viewing area as well as the bounds. This can be used in conjunction with GoogleMap.OnCameraMoveStartedListener and GoogleMap.OnCameraIdleListener (implemented by the Activity) to update markers if the center of the viewing area has moved a certain amount (or even a certain percentage of the view area’s width).

TL;DR — Using an empty app delegate for unit testing is great for iOS developers. With a little modification, Mac developers can do the same.

App Delegate — Not in Charge Anymore

At Oak City Labs, we’re big believers in the power of unit testing which is vital to the health and reliability of our build automation process. Jon Reid runs a great blog called Quality Coding, focusing on Test Driven Development (aka TDD) and unit testing. Jon’s blog is one of our secret weapons for keeping up with new ideas and techniques in testing.

A few months ago, I read Jon’s article, “How to Easily Switch Your App Delegate for Testing”. It’s a quick read detailing a neat trick for speeding up execution of your unit tests. The gist is that you switch UIAppDelegate classes at startup, before the bulk of your app has bootstrapped. By switching to a UIAppDelegate just for testing, which does absolutely nothing, you bypass the app’s startup routine that slows down test execution. Faster tests mean less time waiting and less pain associated with testing.

There’s also another benefit that Jon doesn’t really mention. Because you skip the normal startup routine, the only code executed must be called by your test. Say I’m writing a test for my DataController without using this technique. The test is failing and I drop a breakpoint in the initialization routine. When I run the test, the debugger stops at the breakpoint twice — once because the app is bootstrapping itself and once for the unit test that creates its own DataController. Now there are two DataController instances running around. Comedy hijinks ensue!

On the other hand, if you switch to an empty UIAppDelegate for testing, we can eliminate the bootstrap altogether, meaning only one instance of DataController is created and that’s part of our unit test. No more confusion about whether an object in the system is under test. By dynamically choosing a testing UIAppDelegate, our tests run faster, there is less confusion and, as Jon points out, it becomes easy to test our production UIAppDelegate too.

Back to the Mac

Hopefully you’re convinced at this point that choosing an UIAppDelegate at runtime is a Very Good Idea. Setting all this up for an iOS project is thoroughly discussed in the original article, including variants for Objective-C and both Swift 2 and 3. At Oak City Labs, we write Mac Apps too, so how does this translate from mobile to desktop?

For reference, here’s an implementation of main.swift I’m using in an iOS Swift 2 project.
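A sketch of that file (Swift 2 era APIs; checking for XCTestCase is one way to detect a test run, and TestingAppDelegate exists only in the test bundle):

```swift
// main.swift (Swift 2): pick the app delegate class at launch.
// If XCTest is loaded, we're running unit tests, so use the empty
// TestingAppDelegate instead of the real AppDelegate.
import UIKit

let isTesting = NSClassFromString("XCTestCase") != nil
let delegateClassName = isTesting
    ? "TestingAppDelegate"
    : NSStringFromClass(AppDelegate.self)

UIApplicationMain(Process.argc, Process.unsafeArgv, nil, delegateClassName)
```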

This is pretty straightforward, since `UIApplicationMain` takes the name of the UIAppDelegate class as one of its parameters. Unfortunately, when we move to the Mac, that’s not how `NSApplicationMain` works. Showing its C roots, `NSApplicationMain` just takes `argc` and `argv`. So, in order to make this work on the desktop, we need to do a little extra jiggery pokery.

Running normally, we just call NSApplicationMain like always. Running in unit test mode, we need to manually create our empty NSAppDelegate and explicitly set it as the delegate property of the global shared app instance. Then we’re ready to kick off the run loop with `[NSApp run]`.
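Here’s a sketch of what that main() can look like (again, the XCTestCase check is just one way to detect a test run):

```objc
// main.m — sketch: swap in the empty delegate when the test bundle is loaded.
#import <Cocoa/Cocoa.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        BOOL isTesting = (NSClassFromString(@"XCTestCase") != nil);
        if (!isTesting) {
            // Normal launch: business as usual.
            return NSApplicationMain(argc, argv);
        }
        // Test launch: build the app by hand with the empty delegate,
        // then start the run loop ourselves.
        NSApplication *app = [NSApplication sharedApplication];
        Class delegateClass = NSClassFromString(@"TestingAppDelegate");
        id<NSApplicationDelegate> delegate = [[delegateClass alloc] init];
        app.delegate = delegate;
        [app run];
    }
    return 0;
}
```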

Side note — I started writing this in Swift 3 since the rest of the project is Swift 3, but I’m still new to Swift 3 and I couldn’t manage to get the instantiation from a class name bit working.  Luckily, I realized I could still write my `main()` routine in trusty, old Objective-C and it would play nicely with my type-safe Swift application.

Just for reference, here’s my TestingAppDelegate class. Like school on Saturday, this class is empty.
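It’s little more than a declaration (shown here as a single file for brevity):

```objc
// TestingAppDelegate — an app delegate that does absolutely nothing,
// so nothing bootstraps during unit test runs.
#import <Cocoa/Cocoa.h>

@interface TestingAppDelegate : NSObject <NSApplicationDelegate>
@end

@implementation TestingAppDelegate
@end
```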

We want unit tests to provide fast feedback, and clear feedback. Using the empty TestingAppDelegate approach makes our testing better at both. Now, with a little runtime hocus-pocus, we can employ delegate switching on the Mac as well as iOS.

Realm is a great alternative to SQLite for data persistence on Android. Today I’m sharing three things to be aware of when using Realm for Android.

No. 1: If you are using a library like Retrofit to return your data model Java objects directly from network calls to your API and then writing those Java objects to the client’s local Realm database, any of the fields that were not included in the server response will be reset to their default values in your local Realm. In other words, let’s say you have an object stored locally in the Realm database, then you query your API to get an update for a part of the object and you receive a partial JSON representation of your object. If you let Retrofit parse that JSON into your Java object data model and then write that object to Realm with a method like copyToRealmOrUpdate, the properties of the local Realm object that were not updated in the JSON will be reset to default.

No. 2: It’s worth noting that to partially update the local Realm objects, your options are rather limited. If your data model Java object has variable names that directly match the names of the properties in the response JSON object, you can hand an InputStream or JSONObject instance directly to Realm, and Realm will only update the properties that are present in the JSON, leaving all other properties the same. Realm does not currently support any way to specify which variable names differ from their corresponding JSON properties (like the SerializedName annotation that Gson offers), so your JSON property names must map directly to your Java object’s property names (the Realm team appears to be addressing the issue). If such a direct mapping is not possible or convenient, you can manually parse your response JSON and update the fields of the Realm object directly by querying Realm for the object with that id.
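For the JSON route, the call looks something like this sketch (Car is a hypothetical RealmObject):

```java
// Only keys present in the JSON are written; all other fields are untouched.
// The JSON must include the @PrimaryKey field so Realm can find the object.
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        realm.createOrUpdateObjectFromJson(Car.class, jsonObject);
    }
});
```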

No. 3: Unit tests with Realm can be tricky to get up and running. The realm-java GitHub repo has an example unit test, but I found that I had to make some minor changes. The unit test uses Robolectric and PowerMock in addition to JUnit 4 and Mockito. The Robolectric version used in the example is outdated, so I used 3.3.1, the newest Robolectric version at the time of this blog post. The RobolectricGradleTestRunner is deprecated as of 3.1.1 and should be changed to RobolectricTestRunner. In addition, the sdk can be bumped up to 25, or whatever version you want to target. Here are the annotations I used for my unit test class and the relevant build.gradle dependencies.


@RunWith(RobolectricTestRunner.class)
@Config(constants = BuildConfig.class, sdk = 25, application = TestApplication.class)
@PowerMockIgnore({"org.mockito.*", "android.*", "org.robolectric.*"})

build.gradle dependencies:

Unit Tests
testCompile 'junit:junit:4.12'
testCompile 'org.mockito:mockito-core:1.10.19'

More Unit Testing, specifically for Realm

testCompile 'io.reactivex:rxjava:1.2.5'
testCompile 'org.robolectric:robolectric:3.3.1'
testCompile 'org.robolectric:shadows-support-v4:3.3.1'
testCompile 'org.powermock:powermock-module-junit4:1.6.6'
testCompile 'org.powermock:powermock-module-junit4-rule:1.6.6'
testCompile 'org.powermock:powermock-api-mockito:1.6.6'
testCompile 'org.powermock:powermock-classloading-xstream:1.6.6'

Since joining the team at Oak City Labs last year, I have had many conversations with friends, family and prospective clients about mobile app development. When I show them an app we’ve built, like CurEat, they often tell me that it looks great, and then ask me to show them which parts of the app I made. While seemingly an innocuous comment intended to allow me to showcase my hard work and reap some easy praise, this particular request typically leads me to release one of my patented <heavy sighs>.

But Cory, why do you heavy sigh at such a polite, ego-boosting request, you ask? Well, fellow human, I release the air from my lungs at a much faster rate than usual in an exasperated manner because users of our apps don’t actually see anything I have made. They see the work of the design team in the visuals of the app. They see the buttons, lists, login screens and settings menus that Jay (iOS) and Taylor (Android) have coded in. They even see the legalese a lawyer somewhere put together. So when I tell people that I spent over a year working on an app after explaining that there is no visual evidence of my work at all, they get this sort of half-confused, half-amused look on their face.

Of course, I could simply tell people that I am a backend developer who helped make the Python Flask server, PostgreSQL database structure, RESTful API and Angular webapp that allow the mobile app to be more than a sad, functionless, empty shell. This jargon only helps those with a technical background, and usually leads most others to say something along the lines of “Oh, OK…” when in reality, they’re thinking “Was he even speaking English?”

And so, through a long, arduous process of trial, error, confused looks and copious amounts of feedback, I have concocted what I believe to be the ultimate formula in explaining what my actual role is in the app development process.

To aid in understanding my role as a behind-the-scenes magician, I will refer to the Snapchat app as an example.

For those who aren’t familiar with Snapchat, let’s run through a typical Snapchat use case. This is assuming you already have an account and are logged in.

Step 1: Open the app (Snapchat opens straight to the camera)

Step 2: Snap a super artsy picture of your keyboard and wrist rest

Step 3: Send it to your friends (Some names below may or may not be based on the many nicknames of Burton Guster)

Aaand we’re done! I have now sent a poignant photo of my keyboard to two of my closest imaginary friends. Soon, they will receive a notification of the photo and will be able to open and view it for themselves.

This is the process for using the app, but to understand how we are able to get to this point and do such a thing, we need to step back to the very beginning.

Here is the terminology I will use in the following examples:

  • Mobile Developer – Somebody who builds the actual app that a user sees on their mobile device
  • Backend Developer – Someone (like myself) who works on the behind-the-scenes details such as the server, API and web framework. If this doesn’t make sense, that’s OK – we’ll get there!
  • Snap – A snap is an image that you exchange with friends on Snapchat

When the app loads, you have the option to log in or sign up. So far, this is all the work of the mobile developer. They created this screen, pulled in images from the designer, added some text onto buttons, and bam. This screen is born:

Now let’s say you click Log In. You are then taken to this screen:

This screen lets you log in with your account information and takes you into the part of the app where you can actually send and receive snaps. Every “usable” thing on this screen was added by a mobile developer: the input areas for username and password, the “Forgot your password?” link and the “Log In” button. So far, the backend developer hasn’t played a role in either of the previous screens.

Now, what happens when you press the “Log In” button? Does the phone magically know whether your account info is valid or not? If that information were only stored on your phone, how would you be able to log into Snapchat on a new phone with the same account?

The answer: You wouldn’t. Your account information is not even touched by the mobile developers!

*My Time to Shine!*

There are a lot of behind the scenes details happening here that the backend developer is responsible for, so let’s break down the differences between the work of the mobile and backend developers in the above scenario.

Technical Breakdown:

  • Mobile Developer:

    • Create app screen with inputs
    • Send username/password to server
    • Receive output from server telling the app if the login was successful
    • Continue on to the next app screen if the user logged in successfully
  • Backend Developer:

    • Tell server how to handle a request to log in a user
    • Tell server how to check with the database to confirm/deny a login request
    • Tell server how to send a request back to the app if the user logged in

Non-Technical Breakdown:

The mobile developer creates the code for the login screen; the backend developer creates the code for all of the actual “logging in” and for checking whether or not you are a valid user.
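For the technically curious, the backend half of that login exchange can be sketched as a tiny Flask endpoint (the same framework we use for our servers). Everything here is made up for illustration: the route, the field names, and the pretend user table. A real server would check hashed passwords against a database, never plaintext in memory.

```python
# A minimal sketch of the backend side of logging in, using Flask.
# All names here are hypothetical, not Snapchat's real API.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Stand-in for a real user database (in production, a PostgreSQL table
# holding hashed passwords, queried through an ORM).
USERS = {"shawn_spencer": "pineapple"}

@app.route("/login", methods=["POST"])
def login():
    # 1. The mobile app sends the username/password it collected.
    data = request.get_json()
    username = data.get("username")
    password = data.get("password")

    # 2. The server checks the credentials against the database.
    if USERS.get(username) == password:
        # 3. Success: the app continues to the next screen.
        return jsonify({"logged_in": True}), 200

    # 3b. Failure: the app shows an error instead.
    return jsonify({"logged_in": False}), 401
```

The mobile app never sees any of this code; it only sees the small “logged_in” answer that comes back.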

Let’s go back to the scenario of sending a snap to your friends.

Starting with my super duper artsy photo again, we can examine this screen. All the interactions on this screen are completely up to the mobile developer. A few examples are the ability to save this image or add text to it. Now, let’s say the user clicks the button in the bottom right to choose which of their friends to send the snap to.

We’re taken to this screen. The app displays a list of names which are all of the potential friends you could send your snap to. You select the ones you want, then click the send arrow in the bottom right to send your snap to these friends. Everything visible is presented with code from the mobile developer still. Except…

Let’s back it up.

Where did your friends list come from? Does your phone just magically know which of your friends have Snapchat and also have you as a friend? How would you add or remove a friend? And what are those numbers and emojis next to the stars on the right side of each name row?

That’s a lot of questions, so let’s do a breakdown like we did before.

Technical Breakdown:

  • Mobile Developer:

    • Request list of friends and friend information from server
    • Display list of friends
    • Display information about each friend (such as the numbers and emojis)
    • Allow the user to select which friends will receive their snap
    • Tell the server who the user wanted to receive their snap
  • Backend Developer:

    • Build a list of friends and friend information for the given user and return the information to the app
    • Perform the actual sending of the snap to all of the selected friends.
    • Notify all the friends that they have a new snap

Non-Technical Breakdown:

The mobile developer creates the code that displays all the friends and lets the user choose who to send the snap to, but the backend developer creates the code that tells the app who the user’s friends are and the code that actually sends the snap to those friends.
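The backend side of that friends list can be sketched the same way: a hypothetical Flask endpoint that hands the app each friend’s name and the extra details (like the numbers and emojis) as structured data. The route, field names and sample friends below are all invented for illustration.

```python
# A sketch of where the friends list comes from, again using Flask.
# Data and field names (snap_count, emoji) are made up for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in data; a real server would query this per user from the database.
FRIENDS = {
    "shawn_spencer": [
        {"name": "Lavender Gooms", "snap_count": 12, "emoji": "🔥"},
        {"name": "Chocolate Einstein", "snap_count": 3, "emoji": "😎"},
    ]
}

@app.route("/users/<username>/friends")
def friends(username):
    # The mobile app requests this list, then renders each row on screen:
    # the name, the count, the emoji all come from the server.
    return jsonify(FRIENDS.get(username, []))
```

The mobile developer’s code draws the rows; every value inside them arrives from an endpoint like this one.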

At the end of the day, there’s a lot of communication that needs to happen between mobile and backend developers, because as you can see, they depend on each other very much. Without a mobile developer, all the backend logic and data and snap sending would be pointless. Similarly, without a backend developer, Snapchat wouldn’t actually be able to do anything. It might sit there looking pretty but you wouldn’t be able to send your snaps to anybody, which would be quite sad.

I hope my examples helped, but if not, here are a few analogies to provide more clarity:

Non-Technical Analogies:

  • A mobile developer is like a news anchor. Without the backend developer (teleprompter), the news anchor would just sit there looking pretty without knowing what news to tell viewers about.
  • Alternatively, think of a mobile developer like a grocery store, choosing how to display all of the food that the backend developer delivers. Without the shelves and organization from the mobile developer, all of the food from the backend developer would be too chaotic and unstructured to give to the shoppers. But, without the backend developer, there would be no food at all to give to the shoppers.

Technical Analogies:

  • A mobile developer creates your Gmail app, but without the backend developer, your email would never show up in the app.
  • A mobile (or web, in this case) developer would create the Netflix app or website, but without the backend developer, no shows would show up or be watchable.

When I think about automation, the first thing that pops into my head is a giant warehouse teeming with robots that scurry around, filling orders and shipping them off to fulfill the whims of internet shoppers. Something like this:

It’s not exactly The Jetsons yet, but they’re getting there. And robots are cool.

But automation really applies to anything that can save us time, reduce errors and make us more efficient (which is code for saving money). The best targets for automation are tasks that are well defined, repetitive and common. In other words, chores that are boring, tedious and error prone. Fortunately for developers, we have plenty of these targets.

Our primary goal here is to automate the journey from code repository to finished product. A developer should be able to check code into the git repo and the system will build an installable product with no human intervention.

Benefits of Automation — Fast, Consistent, Correct

Let’s look more specifically at what automation can do for developers. At Oak City Labs, we build mobile apps and their backend servers that live in The Cloud. What sort of concrete things do we get from automation? We’re able to save time, ensure dependability and have confidence in every build. Automation creates builds that are fast, consistent and correct.

Saving time is the most obvious benefit of the whole process. When I have a bug to fix, I track it down and fix the errors on my laptop. Once I check that fix into git, I’m done. The automation kicks in, notices the change, runs through the build process and uploads the fixed app to a beta distribution service. Our QA folks get an email that the new app is ready to install and test. That whole process takes 30 minutes, but I’m free and clear as soon as I check into git. That’s 30 minutes of waiting for builds, running tests, and waiting for uploads that I don’t have to worry about or monitor. I’m on to tracking down the next bug. Saving half an hour a couple of times a week adds up, and sometimes it’s more like a couple of times a day. With those extra hours, I can fix more bugs and write more tests!

Less obvious than the time savings is consistency. Automation is codified process, so these builds happen exactly the same way each time. The automation always takes the same steps, in the same order, in the same context every time. Doing builds manually, I might use my laptop or my desktop, which are mostly the same, but not quite. Because I’m in a hurry, I might forget one of those simple steps, like tagging the repo with the build number, which won’t matter until I try to backtrack a buggy build later. With automation, we just don’t have those worries. Even better, I can go on vacation. Any of the developers on our team can build the app correctly. If the client needs a trivial fix, like changing a copyright date, it’s simple. Anyone on the team can update the text in the code repository and a few minutes later, a build is ready to test. Not only does the automation reduce the chances of human error, but it makes sure we no longer have to rely on a particular human to operate the controls.

Along with consistency, we also have confidence in every build. Consistency builds confidence, but so does testing and regimen. We build our software with a healthy dose of testing. As part of our automated builds, the testing ensures that the code behaves as we expect and that a change in one part of the code hasn’t inadvertently caused an error elsewhere. Of course we don’t catch every bug, but when there is a bug, we add a test to make sure we don’t make that mistake again.

Our automation is a tool we use every day as part of our development habits. It’s not a special task that’s only run at the full moon to bless our new release. It’s our daily driver, that reliable Toyota Camry we drive every day that always runs. It might need a little maintenance now and then, but it’s not your crazy uncle’s antique Mustang that only works a third of the time when he tries to take it out on a Saturday afternoon. This is really important when crisis mode comes around. Imagine if the app has some critical bug that needs to be patched ASAP. Because we have confidence in our consistent and correct automation, we can focus on fixing the bug and know that once it’s fixed, releasing the new version to users will be a smooth standard procedure.

Automation has been something we’ve grown to rely on in our development process because it makes us more efficient and saves us time. For our clients, saving time means saving them money. We can get more work done because we can focus on the real challenge of writing apps and leave the boring tedium to our trusted automation. With the consistency and confidence that automation adds to our workflow, we can always be proud to deliver a top notch product to our clients. Fast, consistent and correct — automation delivers all three!

How the Magic Happens

So… automation is great and wonderful and makes the grass greener and the sun shine brighter, but how does it work? We’ve been talking in very vague terms so far, so let’s get down to the bits and bytes of how to put it all together.

For the last several years, we’ve been using a git branching strategy based on the excellent git-flow model. You can read all the brilliant details in the link, but the short story is that you have two long lived branches, master and dev. The master branch contains production level code that is always ready to release. The dev branch is the main development version that gets new features and maintenance fixes. Once a new feature set is ready in dev, it gets merged into master. This maps very nicely onto our concrete goals for automation.

Our projects have two deployment modes: beta and production. Beta is code from the dev branch. This is code ready for internal testing. For mobile apps, beta builds are distributed to QA devices for testing against the staging server. For server apps, beta builds are deployed to the staging server for testing before rolling out to production. Production mode is the real deal. Production mobile apps go to the app stores and to real users. Production server apps are rolled out to public servers ready to support users.

The automation workflow maps from git-flow into our deployment environments with the dev branch always feeding the beta environment and the master branch feeding the production environment.

The engine we use for our automation is a continuous integration server called TeamCity from JetBrains. It’s a commercial product, but it’s free for small installations. TeamCity coordinates the automation workflow by

  1. monitoring both master and dev branches in git
  2. downloading new code
  3. building the new version of the app
  4. running all the tests
  5. deploying to beta or production

If any of those steps fail, the process stops and the alarms go off. Our dev team is alerted via email and Slack and we’re on the problem like minions on a banana.
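To make the flow concrete, here is an illustrative Python sketch of those five steps. The real work is done by TeamCity, so the step functions below are just stand-ins; the point is the shape of the logic: run each step in order, and stop and alert the team on the first failure.

```python
# An illustrative sketch of the CI workflow above. In reality TeamCity runs
# this; each step here is a hypothetical stand-in returning True on success.
def run_pipeline(steps, alert):
    """Run steps in order; stop and sound the alarm on the first failure."""
    for name, step in steps:
        if not step():
            alert(f"Step failed: {name}")  # email + Slack in the real setup
            return False
    return True

# Hypothetical stand-ins for the real build steps.
steps = [
    ("check out new code", lambda: True),
    ("build the app",      lambda: True),
    ("run all tests",      lambda: False),  # one failing test stops everything
    ("deploy",             lambda: True),   # never reached in this example
]

alerts = []
run_pipeline(steps, alerts.append)
```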

The last three steps are specific to the product type. For iOS, we use the wonderful Fastlane tools to orchestrate building the app, running tests, and uploading to either Fabric’s Crashlytics beta distribution service or to Apple’s App Store for production release.

We use Fastlane for Android, as well, so the flow is very similar. Beta releases go to Crashlytics, but production releases are shipped to the Google Play Store.

Our servers are written in Python and our web applications are in Angular. Testing for both uses the respective native testing tools, driven from TeamCity. For deployment, we use Ansible to push new versions to our beta and production clusters. We love Ansible because it’s simple and only requires an ssh connection to a node for complete management. Also, since Ansible is Python, it’s easy to extend when we need to do something special.

Since all of our build paths go through TeamCity, the TC dashboard is a great place to get a quick rundown of the build status across all projects.

With TeamCity coordinating our workflow automation and using tools like Ansible and Fastlane to enable builds and deployments, we’ve been able to build a system that is fast, consistent and correct, relieving us of the tedium of builds and letting us focus on the hard problems of building great apps for ourselves and our awesome clients.


It’s a three letter acronym that’s thrown around all the time, yet most non-technical founders and businesses gloss over the impact of an API, the value it can add to your business, and how it can help you in the future. We’ll start by covering what an API is and how it works.

What is an API?

API stands for application programming interface, and most explanations on the web want to explain a whole bunch of other things like clients, servers, JSON, etc. We won’t go into that in this article because oftentimes it’s kind of boring for folks who don’t work with it every day.

You probably have a ton of apps on your phone, so for this scenario let’s use Google Maps. When you open Google Maps and search for “grocery”, the app needs to get that information from somewhere.

While most new phones have a ton of processing power, the amount of data behind something like Google Maps is huge. Since your phone can’t handle all of the data for every possible thing people search for, the data needs to live somewhere else. Additionally, when Google needs to change or update the search results, they don’t want to push constant updates to your phone. So it makes sense for all that data, and all the processing, to sit and take place somewhere else. Usually this is in the cloud, often a service like Amazon Web Services, Google Cloud, Microsoft Azure, etc.

OK, so we know your phone needs to contact the cloud for some information but how does it know which cloud, where in that cloud and what info to get? That’s all a part of the API! The API has a URL just like any other web page. Therefore, when you search for “grocery” in Google Maps, your phone hits that URL in a special way and automagically serves up specific information about grocery stores to your phone.
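To make that a bit more concrete, here is a small Python sketch of what “hitting that URL in a special way” means. The endpoint and parameter names are hypothetical (not Google’s real Maps API): the app builds a URL that encodes the search, and the server replies with structured JSON the app knows how to read.

```python
# A sketch of an app calling an API. The endpoint and parameter names
# (api.example.com, q, lat, lng) are hypothetical.
import json
from urllib.parse import urlencode

def build_search_url(query, lat, lng):
    """Build the URL the phone would request for a search."""
    params = urlencode({"q": query, "lat": lat, "lng": lng})
    return f"https://api.example.com/v1/places/search?{params}"

# The server's reply is structured data (JSON) that the app parses
# and turns into pins and list rows on screen.
sample_response = json.loads(
    '{"results": [{"name": "Corner Grocery", "distance_m": 420}]}'
)

url = build_search_url("grocery", 35.78, -78.64)
```

The “special way” is just a well-defined format for the question and the answer; that shared format is the API.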

As one of my former economics professors used to say in class, “So what?” The phone talks to a server (aka the cloud) and the server returns some information that the phone not-so-magically understands. How does that really help?

What does that mean for you or your project?

It means that your project’s API (the thing that magically serves up content so the app on the phone can operate) needs to be super well documented, well laid out and secure. It also means that time and attention should be devoted to correctly building it out. The API is basically a map for the app on your phone. If you also have a web app that matches, it will use the same API. If the API is poorly documented, you can expect some real problems both during development and afterward – making troubleshooting or sharing information a nightmare.

What does this have to do with business?

A large number of companies sell the data behind an API to other parties. Google, again, is a prime example. As developers, we use the Google Maps API quite a bit.

As a business you may not be thinking about selling the data, but what if you have integration partners down the road? For example if you’re building a CRM, what if people want to sync data between their app and yours? Or if you have a product in the healthcare space, maybe you’ll need to integrate with Epic at some point. You’ll need to not only understand your API but understand their API in order to be successful.

Other examples include the Facebook Graph API and Twitter APIs.  Both are open to developers to use for new apps. ProgrammableWeb has an online directory of APIs that are both free and paid. It’s worth a little searching if you’re building an app to see if you can buy versus build from scratch.

The API market itself is continuing to grow, including tools and servers around APIs. Take a look at two major acquisitions by Oracle and Google for API design, documentation tools and management services.

If you’re building an app, definitely consider the market potential for the API. Often, there is more long term viability behind the data and what others can do with the data versus the actual app.