Oak City Labs & WWDC 2022

A few weeks ago, we met as a team to ooh and aah over the shiny new tech Apple unveiled at its annual WWDC. We had some pretty high expectations, and our wish lists were chock-full of fancy tech we wanted Apple to ‘wow’ us with this year. If you missed the event and want to hear our team’s thoughts and highlights, this post is for you!

“It’s Christmas in June for developers! This will be the preview for all the new tools and technology that we’ll work with for the next year, so that whole thing is hugely exciting.” – Jay Lyerly, COO and Co-Founder of Oak City Labs

Our Hopes and Dreams

As Apple fanatics and techies, we had hoped for updates on a wide range of both software and hardware. A couple of us wanted to hear the long-awaited announcement of the virtual reality headset. We also held our breath in anticipation of the Apple car that has been in the works behind the scenes.

As some of Apple’s most loyal customers (our credit card statements will confirm), we had hoped to see a Snow Leopard style release of all software lines, from the Apple Watch to the Mac. We thought it would be beneficial to see Apple take a break from launching new products, and instead, focus on perfecting the stability and efficiency of its current hardware.

As software developers, we had hoped for updates to the current storyboard system and the introduction of SwiftUI Previews for both production apps and UIKit. With SwiftUI Previews, developers would have instant access to a screen’s completed look without having to navigate through a running app. If the Preview system were integrated with UIKit in the future, development would be much faster and far less of a hassle.

Our Highlights

We’ll admit, our team didn’t get much on our wish lists this year. BUT, there was still a lot of excitement for what Apple did announce, and we can’t wait to start using all the fancy new tech! Here’s a little overview of our team’s favorite releases for both personal use and development.

Of course we were all geeking out over the M2 chip for the new MacBook Air. With Apple claiming performance gains of up to 40% over the M1 in some workloads, we are eager to see what this baby can do! We were also impressed by the redesigned MacBook Air, which features an updated chassis with a uniform, flat body and a lighter feel than the previous generation. We were all happy to see the new charging options the new laptop offers, including the 35W compact power adapter with two USB-C ports, so users can charge two devices at once. The new MacBook Air finally supports fast charging, reaching 50 percent in just 30 minutes with an optional 67W USB-C power adapter — there’s something we can all get excited about!

The new texting features “edit” and “undo” were also nice to see, and we love that we’ll be able to avoid embarrassing texting mistakes with this new ability! As developers, we look forward to seeing how the newly announced Live Text features will shape the way we build apps in the future. Developers will be able to copy text from photos, easily share map addresses, jump to URLs, and use QR code interactions with Live Text. As we add new features to our apps, these abilities will definitely come in handy.

Another feature that we expect developers to start using is “passkeys”. While we’re not totally convinced of a “password-less” future, we do think this new feature will provide a quick and easy way to sign into websites and apps across multiple platforms, as passkeys are synced through iCloud Keychain!

The last feature that really stuck out to us as developers was WeatherKit. “WeatherKit gives developers the ability to integrate Apple Weather forecast data directly into their apps and Xcode Cloud…it enables developers to integrate the same world-class global weather forecast that powers Apple Weather directly into their apps. Using high-resolution meteorological models combined with machine learning and prediction algorithms, Apple Weather provides current weather, 10-day hourly forecasts, daily forecasts, and historical weather.” As we continue to focus on building apps that make day-to-day activities easier, it will be convenient to have easy access to the valuable weather data WeatherKit provides.

The Overall Score

It’s safe to say that our team’s favorite Apple event is WWDC. Every year brings progress to the industry and exciting new tech for developers across the globe. Overall, our team gave WWDC22 a solid 4-star rating. Even though the event left some new software and hardware to be desired, we enjoyed gathering as a team and seeing the different features that we can use in both our development work and everyday lives!

Software Development + Apple Event 2021

By: Carol Vercellino, CEO & Co-Founder

On September 14th, 2021, Apple held its annual iPhone-centric event, announcing their new iPhones and iPads. If you missed it or want a quick breakdown, watch our update below. We also talk about what features might be a game-changer for companies in the healthcare and life sciences industry.

Jay, what did you think about today’s announcement?

Jay: It was interesting. In general, you’re seeing issues where the iPhone and even the watch are mature products. So, there’s nothing earth-shattering they’re going to add to them. For the iPhone and iPhone Pro, I just wrote down “more better”. It’s the same stuff they do every year.

On the phone, they added the cinematic mode where they track the person, and focus on who’s in the foreground or the background, which is cool, but that’s a super niche issue.

Carol: Yeah, but I could see as a small company or startup, if you want to record a professional video, you now have this device in your hands that could make it look more professional.

Jay: Yeah, it’s crazy that now for $1,000 or $1,500 you get this movie studio in a box.

You mentioned earlier today that there’s still a lack of audio features.

Jay: Yeah, you’re still walking around with that boom mic. I don’t know if that’s a thing that’s feasible to fix from a phone – or if that’s something they care about. Professional people are still connecting external microphones and audio capture equipment. 

Some of the more complicated features that we’re building in the healthcare and life sciences industry that you have to have a server to process, we’re very quickly moving to where we can put that on the phone, right?

Yeah, they continue to update the neural engines on the devices so they’re faster, and they’re running more and more stuff on the device. They mentioned that all your Siri requests are processed on the phone where before it was on the cloud, so, if you had a flaky internet connection, Siri would have gone away. Now all of it is happening locally.

They showed a clip from a tennis training app. They were doing real-time processing, taking those high-quality images off the camera and tracking where the ball lands.

I saw a demo a couple of years ago for basketball, where they were doing body tracking of the shooter, checking whether they released at the top of their jump and registering whether the ball went into the hoop. The camera is set up off to the side, but the player is wearing AirPods, so all that audio comes back in real time, feeding them information.

Carol: That’s interesting because I remember when we first started the company, we talked to a few prospects that had to manufacture their own devices or partner with companies to put sensors all around. So, I wonder if we’ll see consolidation in the future, where you’re just like: here’s the watch, iPhone, and AirPods – go.

Jay: Right, instead of sensors, you’re just using the super fancy phone to do that.

Anything else stand out to you?

There’s a special place in my heart for the iPad Mini because I love tiny computers. The Mini hadn’t seen an update in a couple of years, and this was a big update. It got a second-generation Apple Pencil, Touch ID, USB-C, 5G – it got everything you could want in a pocket iPad.

Why would I buy an iPad Mini over a larger iPad?

The iPad Mini is now more of a mini version of an iPad Air. It has all the features that before you’d only get on the iPad Air. 

There’s the iPad Pro, which has all the bells and whistles. The iPad Air is the middle of the road product, and then there’s the entry level iPad, which has all the basics, but is still very nice now. So, the iPad Mini is more in that middle of the road category.

All the iPads got the center stage feature. It uses the FaceTime camera and has a really wide field of view. Traditionally, we’d have robotic cameras that would face track, but now, instead of physically turning the camera, they pan different parts of that sensor. Coupled with the software with face tracking, it’s an interesting feature for FaceTime and Zoom to follow one person around or focus on the speaker when a group is using it.

If you had to buy an iPad for your Mom and Dad, which one would you get?

Oh, I’d totally buy the cheapest one! For somebody who’s just going to use it for FaceTime, solitaire, and checking emails, the entry one is great. You need to have a solid reason to get one of the other ones. Do you need the pencil? Does it need to be small like the mini? 

**The above interview has been transcribed for clarity and brevity.**


Building an app for the iPhone? Check out our article, 3 Steps to Create a Benefits-Oriented Mobile App Onboarding, before you start.

Stuart Bradley is the founder and CEO of not one, but two companies – Carolina Speech Pathology LLC and Altaravision. We caught up with him on a busy Monday afternoon in between meetings, and he was gracious enough to take some time to talk with us about his experience as founder of Altaravision and the interesting journey of their flagship product, NDŌʜᴅ.

Put simply, NDŌʜᴅ is the most portable, high-definition endoscopic imaging system on the market today and an invaluable tool for speech pathologists. It has been extremely well received by the medical community, but its path from concept to market was not without its obstacles.

Where did the idea for NDŌʜᴅ come from? Because it is a very specific product.

It came from a need. Specifically, the need to be able to do machine vision on a Macintosh. Surprisingly, there really wasn’t any software that addressed it anywhere in the marketplace.

Would you mind just briefly explaining what machine vision is?

Sure. Machine vision is the ability for a computer to view imagery or an object, take that information and then display it. Essentially, it is a computer’s ability to see.

And the capacity to do that wasn’t on a Mac? That’s interesting.

Well, no. There was plenty of software out there, but it was all secondary purpose. The bigger issue was that nothing had the capabilities you would need in a medical setting.

It all comes down to video capture. All of the off-the-shelf software could capture images, but they suffered from significant lag. What you saw on the screen might be a full second behind what was happening in real time. That might not seem like much, but when you are dealing with medical procedures that kind of lag isn’t going to cut it.

I played around with off-the-shelf software for a number of years and finally found something I thought might work, but there were a ton of features that I didn’t want or need. I reached out to the developer to make me a one-off, but he was ultimately unable to deliver a final product. That’s what led me to Oak City Labs.

Once you had your software developer in Oak City Labs, what was the hardest part about going from this idea you had to an actual finished product?

By far, the biggest hurdle was doing it in a way that maintains compliance with FDA regulations. Jay Lyerly, the one who was doing the coding, knew that from the start and was able to work with my FDA consultant in a way that we could survive an FDA audit.

The thing is, FDA audits are worse than IRS audits and you’re guaranteed to get one, whereas IRS audits are random. As a medical device company, we are audited every two years by the FDA. Thanks to Jay and Carol at OCL, we’ve been able to pass every single audit with zero deficiencies, which is nearly unheard of.

Was there a moment when you got NDŌʜᴅ out into the world and thought “ok, we did it.”

Yea, there was. With FDA-regulated software you actually do have to draw that line in the sand. Unlike other software development cycles, where updates are always being pushed out, you can’t do that with medical devices. It has to be the finished product from the day it comes out. If you add features, it has to go back through the FDA approval process, which, as you can imagine, is pretty lengthy.

If you could do it all over again, is there anything that you’d do differently?

We bootstrapped the entire thing, with CNP essentially acting like an angel investor for the product. That was really tough, especially when there are people out there actively looking for good products to invest in. If I had to do it again, I would have taken the time to seek out some outside investment instead of just jumping in and doing it all myself.

When you think about where you are today as a business owner, is there anything that sticks out to you as the thing you are most proud of?

Honestly, being able to take on, create, sell and make an actual viable business out of a medical device when I had no prior experience in that industry. I had owned Carolina Speech Pathology for years, but the journey with Altaravision and NDŌʜᴅ was an entirely new one.

What’s your favorite part about doing what you do?

It has to be the satisfaction I get from solving hard problems, and the fact that it’s never boring.

Finally, whenever you have clients or colleagues that are talking about Altaravision or the NDŌʜᴅ product, what do you want them to say or know about it?

I want them to know two things. First, I want them to know it works, and always works. Second, that it is designed to be incredibly easy to use. If you can use Facebook, you can use NDŌʜᴅ.

For more on Oak City Labs’ work with Stuart Bradley and Altaravision, check out this article Jay wrote on Computer Vision for Medical Devices via Core Image. If you have an idea and need a software development partner, or if you just have some questions about the development process, we’d love to talk to you. Follow the link below to contact us!

Swift materialized four years ago at WWDC in June 2014. At the outset, the language was intriguing, but the tooling was…let’s say minimal. Xcode has steadily improved since then — we can even refactor now! But one big chunk of the toolset hasn’t been Swiftified. Interface Builder (IB) stands stoic and steadfast in the face of change. Before we get into the Swift issues, let’s take a look at the history of IB.  

Back in the ’80s, some Very Smart People built the Objective-C language and its runtime, which became the foundation of the NeXTSTEP operating system. Developers built user interfaces for NeXT computers using Interface Builder. (Yes, IB is 30 years old.) IB creates interface files called NIBs — NeXT Interface Builder. NeXT was acquired by Apple and the NeXTSTEP OS became the basis for Mac OS X, now macOS. macOS begat iOS, and IB and its NIB files hung around, quietly doing their jobs. Today, we create user interfaces in Storyboard files, but under the hood, they’re just a bundle of NIBs and a bit of glue.

So NIBs have been around for a while. They clearly get the job done, but how do they work? When you create an app’s GUI using IB, it snapshots the view hierarchy and writes it to a NIB. This is often referred to as a “freeze-dried” version of your UI. When the NIB loads at run time, the view hierarchy is rebuilt using the serialized info in the NIB along with UIView/NSView’s init(coder:) method. This init(coder:) uses an NSCoder instance to rebuild the view with all the settings from IB. And now comes the important part. Once the view is fully instantiated, the IBOutlets are hooked up via the Objective-C runtime, using setValue:forKey:. Because this happens after the view is created (i.e., after init returns), the IBOutlets must be Swift optionals, since they are still nil when init completes. Also, because we’re using Key-Value Coding (KVC), IBOutlets are stringly (1) typed. If an outlet name changes but the NIB isn’t updated, the app crashes at runtime.

When coding in Swift, we prefer immutability. We’d rather have a let than a var. Unfortunately, IB is built on the idea of on-the-fly mutation at runtime. The Objective-C runtime gives it the power to create objects and then monkey with them, trusting that the attribute names are valid and if they’re not? Crash. Game over. Thanks for playing. Please come again. So we cheat. Devs learn to mark all their IBOutlets with !, to forcibly unwrap the optional. This is an implicit promise to the compiler. “I know this is really an optional, but I swear on my CPU that it will never be nil when I use it.” Everyone has forgotten to hook up an outlet sometime. And it crashes. Force unwrapping an IBOutlet is a hack. It’s a shortcut to make it easier to take IB, a tool that relies on a weakly typed, dynamic runtime, and connect it with Swift, a strongly typed language.  
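In code, that force-unwrap pattern looks something like this. A minimal sketch with illustrative names (this is not from any particular project):

```swift
import UIKit

class ProfileViewController: UIViewController {
    // The `!` is the implicit promise: "this will never be nil when I use it."
    // It only holds if the outlet is actually connected in Interface Builder.
    @IBOutlet weak var nameLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        // If the connection was never made, this line crashes at runtime.
        nameLabel.text = "Hello"
    }
}
```

The compiler is perfectly happy with this; the crash only shows up when the view loads.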

How do we fix this? IB has been around for 30 years, relying on Objective-C’s loosey-goosey ideas about types and KVC. How can Apple engineers Swiftify it? One way is to pass IBOutlet information to the init(coder:) method. Devs would have to hook up outlets explicitly based on deserialized information in an NSCoder instance. That’s not always straightforward. NSCoder isn’t rocket science, but it’s a bit fiddly and verbose. One of the goals of Swift is to give new devs less opportunity to shoot themselves in the foot. Writing this kind of intricate code just to hook up a textfield or a button isn’t very Swifty.

Let’s take a step back and consider similar problems. JSON decoding has highly similar requirements. In parsing JSON, you want to take a serialized format that depends on string identifiers and map it onto the attributes of a newly created Swift object. Early on, this was a common, tedious hurdle that developers had to overcome on their own. With Swift 4, the Codable protocol arrived and made JSON processing much easier, albeit through compiler magic (2). But Codable is a generic mechanism, not just for JSON. Out of the box, Swift comes with a plist encoder/decoder in addition to JSON. You can serialize any Codable object to a property list just as easily as to JSON. And you can write your own encoder/decoder as well. This repo has examples of a dictionary encoder/decoder and several others.
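That flexibility is easy to see with Foundation’s built-in encoders. A quick sketch (the Theme model is made up) encoding one Codable type to two different formats:

```swift
import Foundation

struct Theme: Codable, Equatable {
    let name: String
    let fontSize: Double
}

let theme = Theme(name: "Dark", fontSize: 14)

// One Codable conformance, two serialized formats, no per-format code.
let jsonData = try! JSONEncoder().encode(theme)
let plistData = try! PropertyListEncoder().encode(theme)

// Both round-trip back to an equal value.
let fromJSON = try! JSONDecoder().decode(Theme.self, from: jsonData)
let fromPlist = try! PropertyListDecoder().decode(Theme.self, from: plistData)
assert(fromJSON == theme && fromPlist == theme)
```

The compiler synthesizes all the serialization code; the encoders just decide how to write the bytes.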

What if you wrote a NIBDecoder for Codable and declared your view controller as a Codable object? The compiler generates an init(from:) method that takes the serialized data in the NIB file and instantiates your view object, including connecting the IBOutlets. Now it’s all Swifty! No more force unwrapping of optionals, because the outlets are all hooked up during init(from:). NIBs already load via init(coder:), so this is a direct replacement, but instead of having a developer implement the complicated bits of deserialization, the compiler generates all the code. This would be simple for new developers, while still providing the extensibility to do it yourself in complex situations.
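To make the idea concrete, here’s the shape such a view controller might take. To be clear, this is entirely hypothetical: NIBDecoder does not exist, and UIKit types don’t conform to Decodable today.

```swift
import UIKit

// Hypothetical: outlets become non-optional `let`s, connected during init.
final class ProfileViewController: UIViewController, Decodable {
    let nameLabel: UILabel      // no `var`, no `!`
    let saveButton: UIButton

    // The compiler would synthesize init(from:) using the NIB's archived
    // keys, throwing (instead of crashing) if an outlet name didn't match.
}

// Hypothetical loading call, replacing today's init(coder:) path:
// let vc = try NIBDecoder().decode(ProfileViewController.self, from: nibData)
```

A mismatch between code and NIB would surface as a catchable Swift error rather than a runtime crash.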

The Codable approach also adds Swift style error handling. In the case where names don’t match up for some reason, the NIBDecoder can throw and the application has a chance to handle the error in a controlled way. Recoverable errors are a big upgrade from the existing situation, where any error causes an immediate and unforgiving crash. With non-fatal NIB errors, we can also think about handling these UI bundles in more dynamic ways.

I’m handwaving over some complicated processes, and we haven’t even talked about backward compatibility or mixing old-school NIB loading with this new Codable process. Maybe Codable isn’t the right mechanism for this, but the problems are tremendously similar. Maybe NIBs would get a new internal format (3) to support Codable deserialization. In any event, Swift is clearly the path forward, and it’s time that Interface Builder evolved to work smoothly with Swift. It’s time to say goodbye to force-unwrapped IBOutlets and let Interface Builder embrace the goodness of immutable outlets. WWDC 2018 is days away, so fingers crossed.

1 – Swift is trying to avoid “stringly” typed items, even with KVC. Swift 3.0 brought us #keyPath(), which avoids the need to hard-code strings in KVC code. Instead of using a string corresponding to the attribute name, like “volume”, you can give the dotted address of the attribute within the class, like #keyPath(AudioController.volume). With the hardcoded string, there are no checks until runtime. If it’s wrong, it crashes. With the #keyPath() version, the compiler finds the property at compile time and substitutes its name. If a name changes, compilation will fail and you can fix the issue. No more runtime surprises!
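A minimal sketch of the difference, using the footnote’s AudioController example (this needs the Objective-C runtime, so it assumes an Apple platform):

```swift
import Foundation

class AudioController: NSObject {
    @objc dynamic var volume: Float = 0.5
}

// Stringly typed: a typo like "volune" still compiles and only fails at runtime.
let fragileKey = "volume"

// Compiler-checked: rename `volume` and this line stops compiling.
let checkedKey = #keyPath(AudioController.volume)
assert(checkedKey == "volume")
```

Both produce the same string at runtime; only the second one is verified at compile time.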

2 – The inner workings of Codable can get complex. It’s pretty simple for basic usage, but implementing your own encoder is a bit more intricate. Mike Ash has written a nice rundown of the details. The magic happens when the compiler generates encode(to:) and init(from:) methods for your Codable classes. The compiler knows all about the types of your attributes, so it can generate this code correctly. You could hand-write the code as well, but Swift’s introspection features aren’t yet mature enough to implement a Codable-like thing at runtime on your own. All the bits you need just aren’t there. But the compiler does a great job of implementing Codable at compile time, which’ll suffice for now.

3 – NIBs are all XML these days (XIB files). These are super complex files that are impossible to read, even though their text format invites you to try. If you’ve ever had two or more developers working on a project, it’s inevitable that the XIBs in your git repo will cause a “merge hell” situation. This would be a great time for Apple to implement a new kind of UI description file format. Maybe a well documented, clear, concise JSON format. Imagine how great it would be to be able to read the UI file. Or better yet, be able to programmatically generate one, should the need arise.

At Oak City Labs, we enjoy solving all kinds of problems. Our projects span subject areas from IoT to mining data from social media to integrating video capture hardware. One of my favorite recent projects involves computer vision and real-time video analysis of data from a medical device.

Our client, Altaravision, “has developed the most portable, high-definition endoscopic imaging system on the market today”, called NDŌʜᴅ. A Fiberoptic Endoscopic Evaluation of Swallowing (FEES) system like this allows a medical professional to observe and record a patient swallowing food. The NDŌʜᴅ system is portable and uses an application running on a MacBook to display the endoscope feed in real time and record the swallowing test to a video file.

After the test is completed on the patient, the video is reviewed to evaluate the efficiency of swallowing. Ideally, the patient will swallow all of the food, but a range of conditions can result in the patient being unable to adequately swallow all the material. Particles that aren’t swallowed may be aspirated and cause pneumonia. When reviewing the test footage, the test administrator has traditionally had to carefully estimate the amount of residual material after swallowing. Not only is this extremely time-consuming, but it also introduces human error and compromises the reproducibility of results.

Oak City Labs has been working with Altaravision to tackle this problem. How can we remove the tedious aspect from the FEES test and make the results available faster and with better consistency? As with all our automation projects, we’d like a computer to handle the boring, repetitive parts of the process. Using computer vision techniques, we’d like the NDŌʜᴅ application to process each frame of the FEES test footage, categorize pixels by color and produce a single numerical value representing the residual food material left in the throat after swallowing. We should give the user this feedback in real-time as the test is being performed.

The NDŌʜᴅ application runs on macOS, so we can leverage Core Image (CI) as the basis for our computer vision solution. CI provides an assortment of image processing filters, but the real power lies in the ability to write custom filters. A pair of these custom filters will solve the core of our problem.

Our first task is to remove the very dark and the very bright portions of our image. We’ll ignore the dark portions because we just can’t see them very well, so we can’t classify their color. Very bright portions of the image are just overlit by our camera and we can’t really see the color there either. Our first custom filter looks at each pixel in the image and evaluates its position in color space with respect to the line from absolute black to absolute white. Anything close enough to this grey line should be ignored, so we set it to be transparent. After some testing, it turned out that it was difficult to pick a colorspace distance threshold that worked well at the light end and the dark end, so we use a different value at each end of the grey spectrum and linearly interpolate between the two.

[Image: original throat footage, no filter]
[Image: throat footage with bright and dark areas made transparent]

The top image is the original image data. The lower image is the image after the bright and dark areas have been removed. In particular, the dark area, deeper down the throat, in the bottom center has been filtered out as well as the camera light’s bright reflection in the top right corner.
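The per-pixel test in that first filter can be sketched in plain Swift. The threshold values here are illustrative, not the ones NDŌʜᴅ actually uses, and the real filter runs as a Core Image kernel on the GPU:

```swift
import Foundation

// Decide whether a pixel (r, g, b in 0...1) is close enough to the
// black-to-white grey line to be discarded (made transparent).
func isNearGreyLine(r: Double, g: Double, b: Double,
                    darkThreshold: Double = 0.20,
                    lightThreshold: Double = 0.10) -> Bool {
    // The closest point on the grey line to (r, g, b) is (t, t, t),
    // where t is the mean of the channels.
    let t = (r + g + b) / 3.0
    let distance = ((r - t) * (r - t) + (g - t) * (g - t) + (b - t) * (b - t)).squareRoot()
    // Interpolate the cutoff between the dark end (t = 0) and light end (t = 1).
    let threshold = darkThreshold + (lightThreshold - darkThreshold) * t
    return distance < threshold   // near-grey pixels become transparent
}

assert(isNearGreyLine(r: 0.5, g: 0.5, b: 0.5))    // pure grey: filtered out
assert(!isNearGreyLine(r: 0.9, g: 0.1, b: 0.1))   // saturated red: kept
```

The linear interpolation is the key detail: a single fixed threshold didn’t work well at both ends of the grey spectrum.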

Now that we have an image with only the interesting colors remaining, we can classify each pixel based on color. In a FEES test, the food is dyed blue or green to help distinguish it from the throat. We need our second-pass filter to separate the reddish pixels from the blueish and greenish pixels. In our second custom CI filter, we examine each pixel and classify it as red, green or blue by looking at its colorspace distance from the absolute red, green and blue tips of the color cube. We convert each pixel to its nearest absolute color.

[Image: original throat footage, no filter]
[Image: fully processed footage, pixels sorted into absolute red and green]

The top image is the original image. The bottom image is the fully processed image, sorted into red and green (no blue pixels in this example). Note how the green areas visually match up against the residual material in the original image.
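The second-pass classification can be sketched the same way, with plain Swift standing in for the actual CI kernel:

```swift
enum PixelClass { case red, green, blue }

// Classify a pixel by squared distance to the red, green, and blue corners
// of the RGB color cube; the nearest corner wins.
func classify(r: Double, g: Double, b: Double) -> PixelClass {
    func distSq(_ cr: Double, _ cg: Double, _ cb: Double) -> Double {
        (r - cr) * (r - cr) + (g - cg) * (g - cg) + (b - cb) * (b - cb)
    }
    let dR = distSq(1, 0, 0), dG = distSq(0, 1, 0), dB = distSq(0, 0, 1)
    if dR <= dG && dR <= dB { return .red }
    return dG <= dB ? .green : .blue
}

assert(classify(r: 0.8, g: 0.2, b: 0.2) == .red)    // throat tissue
assert(classify(r: 0.2, g: 0.7, b: 0.3) == .green)  // dyed food
```

Squared distances are enough here since we only compare magnitudes, which also saves a square root per pixel.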

Finally, our image has been fully processed. Transparent pixels are ignored and every remaining pixel is either absolute blue, red or green. Now we use vImage from Apple’s very powerful Accelerate Framework to build a histogram of color values. Using this histogram data, we can easily compute our residual percentage as simply the sum of the green and blue pixel counts over the total number of non-transparent pixels (red + green + blue). This residual value is our single numerical representation of the swallowing efficiency for this frame of data.
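The final arithmetic is simple. A sketch with made-up pixel counts (the real counts come from the vImage histogram):

```swift
// Residual = (green + blue) food material over all classified,
// non-transparent pixels, expressed as a percentage.
func residualPercentage(red: Int, green: Int, blue: Int) -> Double {
    let total = red + green + blue
    guard total > 0 else { return 0 }
    return Double(green + blue) / Double(total) * 100.0
}

// e.g. 900 red (throat) pixels and 80 green + 20 blue (food) pixels:
assert(residualPercentage(red: 900, green: 80, blue: 20) == 10.0)
```

Because transparent pixels were dropped in the first filter pass, they never enter the denominator.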

In this process, we’ve been careful to use high-performance, highly optimized tools to ensure our solution runs in real time. The Core Image framework, including our custom filters, takes advantage of graphics hardware to run very, very quickly. Likewise, vImage is heavily optimized for graphics operations. We also use a little bit of the Metal API to display our CI images on screen, which is also very speedy. And while we’re enhancing NDŌʜᴅ on macOS, these tools are just as fast on iOS.

At Oak City Labs, we love challenging problems. Working with real-time video processing for a medical imaging device has been particularly fun. As Altaravision continues to push NDŌʜᴅ forward, we look forward to discovering new challenges and innovating new solutions.