by Jay Lyerly, CEO of Oak City Labs

According to new research, Americans check their phones every 12 minutes. And what are they doing on their phones? Well, 90% of their mobile time is spent on apps.

Pretty good news for someone like you wanting to build an app, right? But, if you want to make some headway in this multi-billion dollar industry, two things need to happen:

  1. Your app needs to work.
  2. People need to enjoy using your app. 

Sounds simple and obvious, but knowing is easier than executing. If you want to compete and succeed, you need to have a better understanding of how to build a functional app that can grow with you and will get you 5-star reviews all day long in the app store. 

And it all starts with unit testing. Keep reading to learn the reasons why it’s important to test your app early during development and what questions to ask a developer who’s building your application.

Why Mobile Application Testing is Important

Reason 1: It helps you manage risks

At Oak City Labs, we see applications come in all the time with absolutely no history of testing.

Most of the time, it’s due to working with an inexperienced developer, someone who’s early on in their career and doesn’t understand the value of testing. And sometimes, it’s the organization that doesn’t understand how necessary testing is, and slices it out of the scope to save some coin.

However, of all the steps you take during your app development process, and all the services you invest in to build your app, testing is one of the most important.

Testing helps you manage risks. That’s really what it’s all about.

As your product grows and develops new features, it becomes more complex. If you have solid unit testing, you can rest assured that as you add new features, you’re not going to break existing functionality.

Testing also lets you know that your app will continue to work as expected as your features and product evolve.

Reason 2: It fixes your bug problems

Much like glue traps and pest-repellent spray help you monitor your home for pests, testing helps your app developer monitor your app for bugs and address them.

For example, when we encounter a bug in an app we’re building, we write a test that reproduces it. That test is designed to fail as long as the bug exists, which lets us demonstrate to ourselves and our clients exactly what went wrong.

After we fix the bug, we run that test again. It now passes, demonstrating that the fix we chose was correct, and it stays in the suite to ensure the bug doesn’t reoccur in the future.
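As a hypothetical illustration in Swift (the function and the bug are invented for this example, not taken from a real client project), the workflow looks like this:

```swift
// Hypothetical bug report: a dose entered as "2.5" was being truncated to 2.
// Step 1: write a test that reproduces the bug. With the old, buggy
// implementation (which parsed only the integer part), the assertion below fails.
// Step 2: fix the code; the same test now passes and guards against regression.

func parseDose(_ text: String) -> Double? {
    // Fixed implementation: parse the full decimal value.
    return Double(text)
}

assert(parseDose("2.5") == 2.5, "dose must not be truncated")
assert(parseDose("not a number") == nil, "invalid input should return nil")
```

The test costs a few minutes to write once, then silently re-verifies the fix on every build thereafter.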

Reason 3: It helps you stay in compliance

With a health and wellness app, you’re likely in an environment where you need to stay in compliance with HIPAA and other regulations. And by codifying some of those compliance workflows and rules in your unit testing, you can make sure that as you grow, and your app becomes more complex, you stay in compliance. 

Questions to Ask Your Developer About Unit Testing

As you can see, it’s important to start testing early.

As soon as you start writing code that’s not tested, you begin to dig a hole of technical debt. The more code you write, the bigger that hole gets. Later on, you’re going to spend a lot of time (and money) filling that hole back in.

Many codebases without tests fall into this trap. The code has gotten too big, and the developer can’t fill the hole back in. So, it’s important to stay ahead of the game with your testing.

When you’re talking to a developer who might work with you on building your application, here are some questions you can ask to learn more about how much value they place on unit testing and the process for doing so:
• Do you test?
• What kind of testing do you do?
• When do you test? 

Testing is an investment in the future of your health & wellness app.

Unit testing takes work, but it’s important to ensure you have a stable and reliable app that can grow with you.

A quality software development team will embrace testing as a tool to make sure that your product can make an impact, and customers will love using it — now and as you grow. 

Building a health & wellness app? Read our free guide, “The Impact of the World’s Largest Work-From-Home Experiment,” to learn how you can be on the front lines of COVID-19 innovation. Download it here.

You want an app. Seems simple, right? But much like purchasing a car, there is no one-size-fits-all solution when it comes to mobile apps. Among all of the decisions you’ll need to make when building your app, the most important from a technical perspective is what type of app it will be. And I’m not talking about iOS or Android (though those are also important decisions to make!). I’m talking about how your app will be built. Native? Hybrid? Web?

Read on to find out about the three different ways your app could take shape.

Native Apps

At Oak City Labs, we consider native apps to be our bread and butter. Native apps are built with a specific platform in mind – like iOS or Android. Users download these apps from the Apple App Store or Google Play Store. Native apps are capable of taking advantage of device features – like the camera, GPS, contacts, etc. – for use within the app. They can also employ push notifications and work with or without access to the internet.

From a technical perspective, these apps have a codebase of Swift or Objective-C for iOS and Java or Kotlin for Android, and are built according to the standards set forth by Apple and Google (who also offer SDKs).

Side note: There is also such a thing as a cross-platform native app. You can read more about that here.

Web

A web app is a completely different approach. The easiest way to differentiate a web app from a native app is that web apps aren’t downloaded from app stores. Web apps are built like a website would be – with HTML, CSS and JavaScript – and can be accessed with your phone’s mobile browser. They are quick and simple to develop, but they don’t allow for the wider range of functionality that native apps do, like push notifications and integration with the device camera, contacts, GPS and more.

Side note: There are also such things as progressive web apps. You can read more about those here.

Hybrid

So what’s left? A combination of the two types we’ve discussed already: hybrid apps.

As their name suggests, hybrid apps are part native app, part web app. You download these apps from an app store, but they are essentially just a native wrapper (called a WebView) around a web app. That approach is appealing if you want to spin up a quick minimum viable product (MVP), which is often simpler to do as a web app, while still having users download your app from the app store. The pros: you’ll have access to analytics on app downloads and usage. The cons: performance is inferior to a native app, and you’ll likely have to scrap everything and start fresh if you choose to expand the MVP into a full-fledged native app.

Our Recommendation

We weren’t kidding when we said there is no one-size-fits-all solution when it comes to building a mobile app. If you’re just beginning the app development process, our best recommendation is to partner with someone who can walk you through the specifics of each approach and guide you toward the solution that makes the most sense for both your short-term and long-term goals. Sound like something you’d like more information on? We’d love to chat with you!

Like many tools, the Git version control system, through a handful of commands, provides the core functionality that most users need on a daily basis. In today’s post, I will touch on one that doesn’t seem to be used as often or, for many users, hasn’t been used at all.

Making One of Two

One of Git’s primary features, and a huge draw for me when I was first exposed to it, is branching. Unlike many other systems, branching in Git is simple, fast, and the expected procedure for introducing new functionality, fixing bugs, and more.

If you are working alone in a repository, there usually isn’t much need for more than simple branch and merge operations. When you add more contributors, however, managing the work can become a bit more involved, though it can still be handled with the branch and merge commands.

A Common Occurrence

Let’s say you’ve created a new feature branch off the dev branch, done some work (with commits) locally in that branch, then see that someone else working in a different feature branch has merged their work into the dev branch.


You might continue work in your feature branch because the new code doesn’t affect what you’re implementing and vice versa. But often, you will find yourself wanting to incorporate changes and fixes into your current working branch.

At this point, most will reach for the merge command which will definitely do the job it’s intended to: incorporate changes from another branch into the current one. But if you find yourself (or other team members) doing this often, you can wind up with quite a few merge commits which, among other things, can make it a bit more difficult to understand the history of the project. Merge commits don’t go away when you eventually fold your feature branch back into dev.

Rebase

While merging is a common workflow for handling these kinds of scenarios, another option is the rebase command.

The rebase command performs the same work as a merge operation, but in a way that results in a different historical view.

Using the scenario from above, how would things look if you reached for rebase instead of merge?

When you choose to rebase the changes in your feature branch onto those committed in another branch (such as the dev branch your feature branch is based on), the command goes back to the common ancestor snapshot and saves your feature branch commits to temporary files. It then switches to the branch you’re rebasing onto (dev), resets HEAD to its most recent commit, replays the stored commits from your feature branch, and reinstates the feature branch.

Now, a look at the history will appear to show that the work you’ve done in the feature branch came after the commits (done in parallel) in the dev branch. No merge commits will “litter” the history of either branch.
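The whole sequence can be replayed in a throwaway repository (branch names, file names and commit messages here are invented for illustration):

```shell
# Replaying the scenario above in a scratch repo.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

git checkout -qb dev
echo base > app.txt && git add app.txt && git commit -qm "base"

git checkout -qb feature                  # feature branches off dev
echo feature >> app.txt && git commit -qam "feature work"

git checkout -q dev                       # meanwhile, dev moves ahead
echo hotfix > fix.txt && git add fix.txt && git commit -qm "hotfix on dev"

git checkout -q feature
git rebase -q dev       # replay "feature work" on top of the hotfix
git log --format=%s     # linear history, no merge commits
```

After the rebase, the log from the feature branch reads “feature work”, “hotfix on dev”, “base” – as if the feature work had happened after the hotfix all along.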


While this workflow may seem useful only for producing clean commit histories, Scott Chacon (Pro Git) makes a good point that it also provides value when working in a repository that you don’t maintain. By rebasing your work on origin/dev, for example, the maintainer does not need to do any integration work to incorporate your changes: just a fast-forward merge.

The Golden Rule

There is at least one situation in which you will not want to employ the rebase command: when your commits have been published outside of your repository.

If you’ve pushed local commits to a remote, do not use the rebase command. It can cause lots of pain and suffering for your teammates (even though there are ways to work around it).

To rebase or not to rebase…

Most people are comfortable with just using the merge command for combining branch work. It’s a completely serviceable practice and, as they will often argue, shows the “true” history of the repository compared to rebasing.

I often reach for rebase when hot fixes or other commits that contain changes I’d like to have while working in a feature branch are made before I’m done. The feature branch will be clear of those merge commits letting me see a clear history of the work and will maintain that clarity when it is merged back into the destination branch.

At Oak City Labs, we’re working more and more with computer vision (CV) and image analysis, so it’s exciting to see how others are using CV to solve problems. FaceID from Apple has garnered a ton of recognition in the past few months as Apple attempts to use CV to solve the problem of mobile authentication.

FaceID is a technology that Apple shipped in the fall of 2017 as part of the iPhone X. The phone projects a constellation of 30,000 infrared dots onto your face, and the user-facing camera reads the location of those dots. The phone can use that positional information to create a 3D map with enough detail that it’s unique (with a one-in-a-million chance of a false positive).

FaceID replaces TouchID, a fingerprint-based authentication technology that is fast and relatively mature. I’ve spoken with some folks who lament the loss of TouchID in favor of FaceID. They miss being able to unlock a phone by touch alone, without having to ‘frame’ their face so the phone gets a good look at it. Others say FaceID is too slow, or doesn’t work consistently enough, falling back to manual passcode entry. While FaceID might have some rough edges in this initial release, in the long term, FaceID will win out over TouchID.

TouchID was a great stepping stone, enabling much better security with relatively low friction. But it didn’t work for everyone. I know several people who aren’t able to consistently unlock a phone with TouchID. In my experience, they all have small hands and slim fingers that don’t seem to adequately cover the TouchID sensor. The other group with TouchID issues all have “end of the world” type cases on their phones. These big, bulky, indestructible cases promise to save your phone from the harsh reality of a concrete world. While they do an admirable job, they often make it physically difficult to place a finger fully on the TouchID sensor, rendering it useless. These are the worst kinds of experiences for a technology like TouchID, because they train the user that it’s difficult to use and unreliable.

FaceID solves a lot of these issues. By not requiring physical contact, having the “right size” fingers isn’t an issue. Neither is encasing your phone in drop-proof armor, as long as the camera can still see you. Other issues with FaceID, like taking too long or having to ‘frame’ your face for the phone, are just growing pains associated with an initial release. FaceID is usually fast enough to go unnoticed now, and in a few years’ time it will be faster still. I also expect the cameras to continue to improve their field of view so FaceID is effective at wider angles. As the software is refined and the hardware evolves, FaceID will only improve.

FaceID brings something to the table that TouchID simply can’t — continuous authentication. That’s the idea that authentication isn’t something that happens once when you start a session, but something that happens continuously as long as you’re using the device. You see a bit of this on the iPhone X with notifications. When a new notification pops up on the screen, it doesn’t have real content. The phone shows a “New Email” pop up, but no content or who it’s from. When you, the phone’s owner, pick up the phone and look at it, FaceID verifies you and the notification changes to show who the email is from and the first bit of text. Imagine extending this to third party apps like 1Password. When you’re looking at the screen, the passwords might be automatically displayed, but when you look away or put the phone down, they’re obscured again. You could also imagine an online testing service that could use continuous authentication to ensure that you’re the one taking the test and not your buddy who’s much better at calculus. We’re scratching the surface of all new use cases for security and convenience with continuous authentication and I’m very excited to see where we’ll go next.

As FaceID becomes commonplace, we’ll see it adopted in many devices beyond phones and tablets. Because it doesn’t rely on physical contact like TouchID does, it’s easier to see it adapted to devices like Apple TV. In the large-screen, 10-foot interface, FaceID could be the key to finally having a seamless multi-user experience. As I approach the TV, I’m identified and verified, and I have access to all my content. Kids in the house might be identified in the same way and presented with only their kid-friendly content. And if we really want to jump ahead, imagine FaceID for your car, which unlocks and starts because it recognizes you as its owner.

TouchID was an incredible innovation in making devices more secure with less burden on the user. FaceID is the next step in the evolution of strong security coupled with ease of use. As this technology becomes a staple of our digital world, we’ll see it applied to more and more niches. As we develop computer vision solutions at Oak City Labs, we’ll be considering how we can incorporate this kind of ingenuity as we solve problems for our clients. If you have a computer vision problem that you need help with, let us know! We’d love to speak to you!

 

During some recent discussions with clients, I noticed we tend to throw around the SDK acronym quite a bit. Today we’re going to simplify what an SDK is, share examples of SDKs and talk through how you might think about an SDK as a potential software product or product extension in the future.

What is an SDK exactly?

SDK stands for software development kit. An SDK is typically a third-party chunk of pre-written code. For example, in the majority of mobile apps, a user will need to log in, and most apps use social logins like Facebook, Twitter and Google. All three companies provide an SDK which a developer “drops” into the mobile application they’re building. That SDK allows the developer to quickly and easily make the right calls (via code) to, say, Facebook to authenticate the user (make sure the user is who they say they are).

Examples of SDKs you might hear used in mobile or even web applications:

• Analytics and crash reporting
• User Login/Authentication
• Notifications, Engagement and Messaging
• Advertising
• Payments

There are also SDKs available for news feeds, weather data, restaurant reservations, and more.

Potential Downside of SDK usage in mobile apps

SDKs are pretty powerful in terms of speeding up app development. They allow a development team to quickly put a new feature in place without building it from scratch. As always, with great power comes great responsibility. Occasionally (and it’s probably not happening to your mobile app), developers or product owners start to integrate multiple ad networks, authentication methods and analytics services. All of these integration points can slow the app’s performance and introduce complexity into the troubleshooting process.

An SDK also needs to be updated whenever the provider releases a new version in order to continue receiving support, and any update can introduce or fix bugs. SDKs likewise add a layer to your application that can make debugging more complicated. Often the bug will be in the SDK itself, but the app developer’s hands are tied while waiting for the SDK owner to release an update. Ask yourself these questions before integrating a third-party SDK:

  1. Does it provide value to the user of the mobile or software application?
  2. What if the owner of the SDK goes out of business or is acquired? (Ahem… anyone remember Parse?)
  3. Does it impact the performance and user experience of the app?
  4. Is there clear documentation and support around the SDK?
  5. What security risks will the SDK introduce?
  6. Is the SDK a critical component? If so, are you OK depending on a third party for bug fixes and support?

Building your own SDK

It might make sense for your business to build an SDK depending on your product and market. For example, one of our clients came to us with a connected device for motorcycles (think IoT – Internet of Things). We’ve built an SDK that handles the communication between the mobile application, an IoT platform and the hardware device itself. The SDK is a part of the product and necessary for device operations.

In the case of Facebook or Twitter, the SDK is a free extension of their product that allows the social network to grow and also acquire data on the users. Whenever a Facebook or Twitter SDK is integrated into an app, it potentially allows the app developer to access data about the user and also share that data back to the social network. Yes, if you have an app on your mobile phone, you are likely being tracked for advertising purposes.

Here are a few more examples of potential market opportunities for an SDK:

  1. Connected devices (IoT) where you might want other developers to also integrate your product. Something generic like Bluetooth beacons is a great example.
  2. Products that provide a general utility. Analytics services, data services, or “platform” products that house data from your application. For example, you might build a platform product that handles user reviews for consumer products. You could provide an SDK that allows a developer to easily pass user reviews from their application to your platform, which centrally stores them and provides analytics on the review data. In that scenario, you might want to gather as many reviews as possible for analysis, so you would consider giving the SDK to developers for free and then selling the data to product companies.
  3. Unique app features that would take a developer extensive time and resources to build on their own – for example, unique maps or even making an app screenshot-proof!

SDKs present opportunities and challenges for mobile and software applications. A discerning developer will help you choose wisely and also help you understand whether there are potential market opportunities to expand your product. Along with APIs, they can be a pretty powerful tool in the software developer’s arsenal. If you’d like to chat more about SDKs or any software development project, drop us a note!

Often, our daily challenges are solved with simple solutions instead of overly complex or “clever” ones that even the authors don’t understand a month later.

Not long ago (in a galaxy quite close to our own; okay, the same one), we ran into an issue where we needed to add activity indicators to some tableviews when fetching information from a remote data source. At first, this seemed like a simple case of showing the indicator when the call for data begins, then hiding it in the completion handler when the data is returned.

We soon noticed that there were times when the activity indicator was being hidden while a search was still executing. The only code to hide the indicator was in the completion handler so, at first, it was puzzling.

The application’s data access layer often allows a request for data to be cancelled. There are times when a request might be in flight (e.g., an initial call for data when the tableview appears) and a user may tap into the search bar and start filtering by keying characters before the previous search has completed. When that happens, the previous search is cancelled, but the completion handler from that search still gets called.

As hiding of the activity indicator takes place in the completion block, this would occur even though another request was still being executed. We needed to prevent the indicator from being hidden until the last request was finished and ignore the other callbacks that would have hidden it prematurely.

Our first thought was to have an ivar that could be incremented when searches were made then decremented in the completion handler. When it reached zero, that would indicate that the last executing search had finished and then, and only then, would hiding of the indicator take place. It was a simple solution and worked as expected.

After considering the number of places this would be useful, we decided to create a struct that exposes two methods: one that increments an internal counter, and one that decrements it and reports whether work is still in flight.

An internal counter goes up and down with calls to the corresponding method, and false is only ever returned if the value is zero.

Now, when a search is started, the increment method is invoked on the counter, and the flag indicating that an update is occurring is set just prior to the call for data.

When the completion handler is called, the decrementWithValue() method is called on our counter and its result is assigned to the flag (which, once false, also hides the activity indicator).
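A minimal sketch of that approach in Swift (the type and variable names here are illustrative, not necessarily the original implementation):

```swift
struct RequestCounter {
    private var count = 0

    // Called just before each request for data goes out.
    mutating func increment() {
        count += 1
    }

    // Called from each completion handler. Returns true while other
    // requests are still in flight; false only when the count reaches zero.
    mutating func decrementWithValue() -> Bool {
        count = max(0, count - 1)
        return count > 0
    }
}

// Two overlapping searches; the activity indicator is hidden only when
// the flag goes false, i.e. after the last completion handler fires.
var counter = RequestCounter()
counter.increment()                             // search 1 starts
counter.increment()                             // search 2 starts, search 1 cancelled
var isUpdating = counter.decrementWithValue()   // search 1's handler: true, keep spinner
isUpdating = counter.decrementWithValue()       // search 2's handler: false, hide spinner
```

The struct stays ignorant of views and networking; it only answers “is anything still running?”, which is what keeps it reusable.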

Now, if a search is cancelled for whatever reason and another search is still executing, the completion handler is still called, but the activity indicator remains visible.

This solution eliminated the need to go deeper into the data access layer and attempt a modification that may have resulted in unexpected behaviors and side effects.

At times, it’s tempting to create “clever” or “elegant” solutions to problems we’re dealing with. And sometimes, those are the things to do. But it’s also valuable to consider the “keep things simple” principle when you’re writing code. Your team (and maybe, your future self!) will appreciate it.

Today I’m going to discuss some pros and cons of Realm and Room for Android data persistence. Room was introduced at Google I/O 2017 as part of the Android Architecture Components. Realm is a mobile database solution that launched for Android in May 2014 and has become a feature-rich choice for data persistence. While both serve a similar purpose, they are very different in implementation, and their effectiveness may vary depending on your project’s needs.

Room is just a layer over the native SQLite database that comes stock with Android. As such, there is a large amount of customizability in the queries (you write your queries in SQL, and they are validated at compile time). However, Room also requires that relationships be created using foreign keys and the like, so complicated object graphs can be a bit of a pain to implement. Realm, on the other hand, requires no SQL knowledge. You do not have to write any SQL statements, and object relationships are incredibly simple to implement: referencing one object (or a list of them) from another object creates the relationship automatically.

Realm is a much larger library than Room because it includes a separate database engine. It adds somewhere around 3–4 MB to your app’s APK, whereas Room, being just a layer on top of SQLite, only adds a few dozen KB. Room also contains far fewer methods, if you are concerned about the dex method limit.

Realm requires that objects not be passed between threads. Realm data objects are live views into the data that respond to database changes, so they are tied to the thread of the Realm instance they were retrieved from (if that Realm instance is closed, any objects retrieved from it become invalid). In my experience this isn’t generally much of an issue if you are careful, but if you find yourself switching threads a lot, you’ll have to create new Realm instances and re-query to get your objects. No such thread limitations exist for Room.

Room is officially supported by Google, so it should remain well supported and will likely have good community support. On the other hand, Realm has been around for a while (officially released about 4 years ago for Android) and has undergone tons of bug fixes and improvements and has an active community. Additionally, Realm supports iOS as well as Android, so developing for both platforms with virtually the same data persistence layer can allow for similar app architectures.

Both libraries support reactive queries, allowing you to subscribe to updates on a view of your data. Room achieves this using LiveData, another part of the Android Architecture Components, which can be linked to an app component (Activity, Fragment, etc.) and update intelligently based on the lifecycle of the component (i.e. not causing UI updates when an Activity is in the background). This is a nice feature to have out of the box and allows you to avoid keeping track of unsubscribing listeners in backgrounded app components. Realm objects, lists, and query results can all be directly subscribed to in order to monitor for changes, convenient features not entirely present in Room. Realm also has an additional library for an auto-updating Recyclerview adapter. While something similar isn’t too complicated to implement with LiveData, Realm’s library comes for free and works well.

Depending on your app’s data model complexity, APK size concerns, and personal experience/preference, both Realm and Room are viable options for data persistence. Let us know in the comments which one you prefer.

At Oak City Labs, we rely heavily on unit testing in our quality software process. Unit testing is the safety net that lets us reliably improve existing code. Our testing suite double checks that modified code still behaves the way we expect. I’ve written before (here and here) about how we use dependency injection (DI) which makes unit testing easier. DI helps us wrap code we need to test in a special environment. By controlling the environment, we can make sure that the code being tested gives the correct output for a given set of conditions. This works very well for pieces of the application that we can encase in a layer that we control. Unfortunately, we can’t wrap everything.

Consider our API layer. This is the part of our application that talks to the server over the internet. It makes network calls, processes replies and handles errors like slow network responses or no network at all. In testing this code, we want it to behave as normally as possible, so we still want it to make API requests and interpret the results. At the same time, these are unit tests, so they should be fast and not depend on external resources. We don’t want to make requests to a real server on the internet. If such a test failed, it wouldn’t be obvious whether our code was broken, the server was down or a network cable was unplugged. It’s important that our unit tests be self-contained, so that when something fails, we know a specific section of code has failed and we can fix it ASAP.

Back in the old days, before Swift, we wrote in Objective-C. Swift is a strongly typed language where Objective-C is weakly typed. While weak typing in Objective-C often gave a developer enough rope to hang themselves, it was flexible enough to do interesting things like easily mock pieces of software. Using mocks, fakes and stubs, you could (with some effort) replace pieces of the system software with substitute code that behaved differently.  We could use this to test our API code by changing the way the system network routines worked. Instead of actually contacting a server on the internet, a certain call just might always return a canned response. Our API code wouldn’t change, but when it asked the system to contact the server for new data, our mocked code would run instead and just return the contents of a local file. By building a library of expected calls and prepared responses, we could create a controlled environment to test all our API code.

Swift, on the other hand, brought us strong typing, which wiped out whole classes of bugs and ensured that objects were always the type we expected. We paid for this safety with the loss of the ability to easily mock, fake or stub things. Someday, Swift might gain new features that make this possible (maybe even easy), but for now, it is difficult to do with any efficiency. So, we need a new approach for testing Swift API code.

Like we said earlier, we don’t want to use external network resources to test our code because too many things are out of our control. But what if the test server were running on the development machine? In fact, what if the test server were running inside our application? Enter two open source projects — Ambassador and Embassy from the fine folks at Envoy.com.  Embassy is a lightweight, dependency free web server written in Swift. Ambassador is a web framework that sits on top of Embassy and makes it easy to write a mock API server.  

In this new approach to testing our API layer, we’ll write some Ambassador code to define our mock endpoints. We’ll declare what URL path they’ll respond to and what response each endpoint will return. Then, inside our unit test, we’ll fire up the mock server and run it on a background thread. Since you’re controlling both the client and server side of the request, you can make asserts on both ends of the exchange. On the server side, you can validate the data submitted by the client. On the client side, you can ensure that the response from the server was interpreted correctly. Ambassador also has some nice conveniences for working with JSON data, introducing delayed responses from the server and other common testing needs.

To use our freshly built API mock server, all we need to do is change the server name used by the API layer. This is important because we don’t want to make significant changes to our code in order to test it; we want to test code that is as close to production as possible. By switching our base URL from “https://api.ourserver.com” to “http://localhost:8080”, we can test all our network requests with no external dependencies. Since we’re using dependency injection, this change is very simple to make in our unit tests.
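The dependency injection itself can be as simple as passing the base URL into the API client. A minimal sketch (type and property names are illustrative, not from the article):

```swift
import Foundation

// An API client whose base URL is injected, so unit tests can point
// the very same production code at a local mock server.
struct APIClient {
    let baseURL: URL

    // Build a full request URL from a path, relative to the injected base.
    func url(for path: String) -> URL {
        return baseURL.appendingPathComponent(path)
    }
}

// Production injects the real server...
let production = APIClient(baseURL: URL(string: "https://api.ourserver.com")!)
// ...while a unit test injects the in-process mock server instead.
let testing = APIClient(baseURL: URL(string: "http://localhost:8080")!)

print(production.url(for: "users"))  // https://api.ourserver.com/users
print(testing.url(for: "users"))     // http://localhost:8080/users
```

Everything downstream of the injected `baseURL` is identical in production and under test, which is exactly the “test as close to production as possible” property we want.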

The move from Objective-C to Swift has allowed us to write cleaner and safer code, but the price we pay is the loss of the fast-and-loose, über-permissive Objective-C runtime environment. Fast and loose always caused more problems than it solved, so I’m happy to see it go. A few of our existing solutions have gone with it, but with a bit of ingenuity, we can move forward with better and safer solutions.

 

A few weeks ago at the All Things Open conference I was introduced to a term I had heard a few times but had not done much research on: “Progressive Web Apps.” Wikipedia describes Progressive Web Apps (PWAs) as “regular web pages or websites [that] can appear to the user like traditional applications or native mobile applications.” In other words, PWAs are websites that look and behave like mobile apps. Now, isn’t that interesting?

It appears that the main priority of PWAs is to combine the benefits of modern browsers and web development with the benefits of a mobile experience. Several checklists by Google Developers contain the requirements for being considered a baseline PWA as well as an exemplary PWA. They also suggest using the Lighthouse tool for “improving the performance, quality, and correctness of your webapps.”

The baseline PWA requirements are:

  • Site is served over HTTPS
  • Pages are responsive on tablets & mobile devices
  • All app URLs load while offline
  • Metadata provided for Add to Home screen
  • First load fast even on 3G
  • Site works cross-browser
  • Page transitions don’t feel like they block on the network
  • Each page has a URL
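The “Add to Home screen” metadata in the list above lives in a web app manifest that the page links to. A minimal, illustrative `manifest.json` might look like this (the name, colors, and icon paths are invented):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2b6cb0",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The page references it with `<link rel="manifest" href="/manifest.json">`, which is what lets browsers offer the install prompt.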

It is evident from this list of requirements that Progressive Web Apps are really aimed at providing a secure, modern online and offline experience, much like mobile apps. Let’s look at some reasons why people might prefer a PWA to a native mobile app.

Preferring PWAs


Discoverability

PWAs allow developers to leverage existing SEO practices. Because a PWA is still a website, it can be promoted with the same search engine strategies used for any web page, rather than relying on App Store Optimization techniques.

Usability

One of the requirements for classification as a PWA is that it works across different browsers. This rule means that PWAs can be used not only on computers across operating systems, but on mobile web browsers as well.

Additionally, users don’t need to go through the process of installing an app from an app store. Likewise, developers don’t need to submit their apps to Google or Apple for review before releasing updates. This, in turn, means instant updates for developers and end users.

Caching / Offline Usage

One of the typical benefits of mobile apps over web apps is the amount of storage you have access to. With modern Cache APIs, users can install a PWA to their home screen and access the app offline without needing to download any additional data. This functionality mimics that of a mobile app and, unlike most websites, allows users to use the app even without an internet connection.

Push Notifications

The age of notifications is upon us. Hardly an hour goes by without receiving a handful of notifications from various social media networks, emails, messages, etc. PWAs bring this functionality to the web, allowing you to receive notifications straight to your device, whatever it may be.

Hesitations about PWAs

While there are many upsides to the growth of Progressive Web Apps, upsides I am personally excited about, I would be remiss if I didn’t address their potential downsides as well.

Security

Because PWAs don’t receive the same sort of App Store review that Google and Apple require, developers can stick anything they want into their apps. This means that if a developer chooses to be secretly malicious, they could, and there’s no review process stopping them from doing so.

Functionality

Web apps can do a lot, but they can’t do everything. A lot of functionality is still available only to native mobile apps. PWAs are gaining ground every day, and the number of previously native-only features they offer keeps growing. Despite those gains, however, native apps still have many capabilities that PWAs simply can’t match yet. Check a browser-capability reference to see if the functionality you want to add can be done with a PWA!

Platform Limitations

Plain and simple, iOS likes iOS apps. PWAs are only as successful as the platform they run on allows them to be. As stated above, though, PWAs are gaining support every day. Within a few years I fully expect PWAs to have a majority of the functionality normally afforded solely to native apps.

Final Thoughts

Progressive Web Apps are a wave. Whether they are the wave of the future is yet to be seen. The fact that Google is pushing PWAs should be a sign of things to come, as the company is often at the forefront of web development technologies (see: Angular). It will be a while before PWAs gain all the functionality that native apps currently have, but they are on their way. Batten down the hatches and ignore the naysayers – viva la Progressive Web App!

At Oak City Labs, we love our continuous integration (CI). In our world, CI means that we have a trusty assistant sitting in the shadows that watches for new additions to our code repository.  Any updates get compiled, tested, packaged and shipped off for user consumption. If something goes wrong, the team is alerted immediately so we can get the train back on the tracks.

Let’s dive a little deeper into the toolset we use for CI. For iOS and Mac development, Xcode Server might seem like the natural choice, and we did use it for a time. However, as our project load grew and our need for reliable automation increased, we found that Xcode Server wasn’t meeting our needs. We switched to TeamCity with very good results.

Xcode Server, after several years of evolution, has become a solid CI server and has some advanced features like integrated unit testing, performance testing and reporting. The great thing about Xcode Server is the integration right into Xcode. You don’t have to bounce out to a website to see the build status and any errors or failing tests link directly to your code. Unfortunately, that’s where Xcode Server runs out of steam. It doesn’t go beyond the immediate build/test cycle to handle things like provisioning profile management, git tagging, or delivery to the App Store.

Enter Fastlane. When we first adopted Xcode Server, Fastlane was in its infancy, only partially able to cover the iOS build cycle. In the years since, Fastlane has grown into a full and robust set of automation tools that blanket the iOS and Mac build cycle, reaching far beyond the basic build/test routine. As Fastlane developed, we pulled more and more features into our CI setup, building Python scripts to integrate various Fastlane pieces with Xcode Server. Eventually, we were spending a good deal of time maintaining these scripts. Fastlane, on the other hand, would handle all of that maintenance internally if we embraced it fully. There were also some pieces we had built by hand (Slack integration, git tagging) that Fastlane included out of the box. It was clear it was time to jump wholeheartedly on the Fastlane bandwagon to drive our automated build system.

One hiccup — Fastlane really wants to drive the whole build process. That’s a great feature, but it means we can’t realistically run it from Xcode Server. We were already using TeamCity for CI on our other projects (Python, Angular, Android) and it seemed like a good fit. TeamCity is great at running and monitoring command line tools, and with Fastlane, our iOS and Mac builds are easily driven from the command line. Fastlane also produces TeamCity-compatible output for tests, so our unit test reports are displayed nicely in the TeamCity dashboard.

Now that our build system is fully Fastlane-ed, we benefit from its rich library of plugins and utilities. It’s simple to compute a build number for each build and push it as a git tag. Successes and errors are reported to the team via Slack. We can easily publish beta builds to Crashlytics and send production builds right to Apple’s App Store. Fastlane’s ‘match’ tool keeps our provisioning profiles synced across machines. There are even utilities to sync our dSYM files from iTunes Connect to our crash reporting service.
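Wired together, pieces like these end up as lanes in a Fastfile. A hedged sketch follows: the lane name, scheme, and message are invented, while `match`, `increment_build_number`, `gym`, `crashlytics`, `add_git_tag`, and `slack` are standard Fastlane actions.

```ruby
default_platform(:ios)

platform :ios do
  desc "Build and ship a beta build"
  lane :beta do
    match(type: "adhoc")                  # sync provisioning profiles across machines
    increment_build_number                # compute the new build number
    gym(scheme: "MyApp")                  # compile, sign, and package the app
    crashlytics                           # publish the beta build
    add_git_tag                           # tag the commit with the build number
    slack(message: "Beta build shipped")  # notify the team
  end
end
```

A CI server like TeamCity then only needs to run `fastlane beta` as a command line step and collect the output.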

Having the CI for all our projects under the TeamCity roof also comes with some nice benefits. There’s a single dashboard that shows the status of every project, and only one login system to manage. The TeamCity server queues all the builds, so if an Android project is building when an iOS project updates, the iOS build waits until the Android build finishes. With separate CI servers on a single machine, as we had before, projects could build in parallel and push the memory and CPU limits of the build machine. The artificially elongated build times could also confuse any system that monitors how long builds take.

Our fully automated iOS and Mac build configurations have been running in the TeamCity / Fastlane environment for almost a year now and we’re delighted with the results. The Fastlane team does such a great job keeping up with changes in Apple’s environment. On the few occasions that things have broken, usually due to changes on Apple’s end, Fastlane’s team has a fix almost immediately and a simple ‘gem update’ on our end sets everything right. Going forward, Fastlane and TeamCity are our tools of choice for continuous integration.
