Stuart Bradley is the founder and CEO of not one, but two companies – Carolina Speech Pathology LLC and Altaravision. We caught up with him on a busy Monday afternoon in between meetings, and he was gracious enough to take some time to talk with us about his experience as founder of Altaravision and the interesting journey of their flagship product, NDŌʜᴅ.

Put simply, NDŌʜᴅ is the most portable, high-definition endoscopic imaging system on the market today and an invaluable tool for speech pathologists. It has been extremely well received by the medical community, but its path from concept to market was not without its obstacles.

Where did the idea for NDŌʜᴅ come from? Because it is a very specific product.

It came from a need. Specifically, the need to be able to do machine vision on a Macintosh. Surprisingly, there really wasn’t any software that addressed it anywhere in the marketplace.

Would you mind just briefly explaining what machine vision is?

Sure. Machine vision is the ability for a computer to view imagery or an object, take that information and then display it. Essentially, it is a computer’s ability to see.

And the capacity to do that wasn’t on a Mac? That’s interesting.

Well, no. There was plenty of software out there, but machine vision was always a secondary purpose for it. The bigger issue was that nothing had the capabilities you would need in a medical setting.

It all comes down to video capture. All of the off-the-shelf software could capture images, but they suffered from significant lag. What you saw on the screen might be a full second behind what was happening in real time. That might not seem like much, but when you are dealing with medical procedures that kind of lag isn’t going to cut it.

I played around with off-the-shelf software for a number of years and finally found something I thought might work, but there were a ton of features that I didn’t want or need. I reached out to the developer to make me a one-off, but he was ultimately unable to deliver a final product. That’s what led me to Oak City Labs.

Once you had your software developer in Oak City Labs, what was the hardest part about going from this idea you had to an actual finished product?

By far, the biggest hurdle was doing it in a way that maintained compliance with FDA regulations. Jay Lyerly, who did the coding, knew that from the start and was able to work with my FDA consultant in a way that would let us survive an FDA audit.

The thing is, FDA audits are worse than IRS audits and you’re guaranteed to get one, whereas IRS audits are random. As a medical device company, we are audited every two years by the FDA. Thanks to Jay and Carol at OCL, we’ve been able to pass every single audit with zero deficiencies, which is nearly unheard of.

Was there a moment when you got NDŌʜᴅ out into the world and thought, “OK, we did it”?

Yeah, there was. With FDA-regulated software you actually do have to draw that line in the sand. Unlike other software development cycles, where updates are always being pushed out, you can’t do that with medical devices. It has to be the finished product from the day it comes out. If you add features, it has to go back through the FDA approval process, which, as you can imagine, is pretty lengthy.

If you could do it all over again, is there anything that you’d do differently?

We bootstrapped the entire thing, with Carolina Speech Pathology essentially acting like an angel investor for the product. That was really tough, especially when there are people out there actively looking for good products to invest in. If I had to do it again, I would have taken the time to seek out some outside investment instead of just jumping in and doing it all myself.

When you think about where you are today as a business owner, is there anything that sticks out to you as the thing you are most proud of?

Honestly, being able to take on, create, sell and make an actual viable business out of a medical device when I had no prior experience in that industry. I had owned Carolina Speech Pathology for years, but the journey with Altaravision and NDŌʜᴅ was an entirely new one.

What’s your favorite part about doing what you do?

It has to be the satisfaction I get from solving hard problems, and the fact that it’s never boring.

Finally, whenever you have clients or colleagues that are talking about Altaravision or the NDŌʜᴅ product, what do you want them to say or know about it?

I want them to know two things. First, I want them to know it works, and always works. Second, that it is designed to be incredibly easy to use. If you can use Facebook, you can use NDŌʜᴅ.

For more on Oak City Labs’ work with Stuart Bradley and Altaravision, check out this article Jay wrote on Computer Vision for Medical Devices via Core Image. If you have an idea and need a software development partner, or if you just have some questions about the development process, we’d love to talk to you. Follow the link below to contact us!

A while back, we introduced you to Amazon Web Services (AWS) for non-technical folks and today we’re continuing the discussion with the AWS cloud-based Relational Database Service (RDS). Understanding basic AWS components like EC2 and RDS is one of the foundation blocks for most software, including mobile applications. Gaining high-level knowledge in these areas can help, whether you’re a new engineer, manager or startup founder. And today’s topic is super important. Why? Because a database typically houses the most important information about your product, customers and business.

A database is, at its most basic, a repository for information. With a large software product you might have multiple databases, but today we’re going to focus on a single database, since most companies start with one. A database can be compared to a fancy spreadsheet. Instead of tabs like in Microsoft Excel, you have tables, and each table contains different bits of information. The way the information in the tables is laid out is called the data model. This is incredibly important as a software product scales, because poorly structured data can be a pain in the you know what later on. So it’s not quite as simple as Excel.
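
To make the “fancy spreadsheet” comparison concrete, here is a tiny sketch using Python’s built-in SQLite engine. The table and rows are invented for illustration; most production apps would use a heavier engine like PostgreSQL:

```python
# The "fancy spreadsheet" idea, made concrete with Python's built-in
# SQLite engine. Table and data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")   # a throwaway, in-memory database

# A table is like a tab in Excel; its columns are part of the data model.
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
db.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Ada", "Raleigh"))
db.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Grace", "Durham"))

# Unlike a spreadsheet tab, the data can be queried:
rows = db.execute("SELECT name FROM customers WHERE city = ?", ("Raleigh",)).fetchall()
print(rows)   # [('Ada',)]
```

The CREATE TABLE line is the data model in miniature: get those columns wrong early, and every later query pays for it.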

The database for your application needs somewhere to live (like AWS or another cloud provider) and an engine that runs it (like Microsoft SQL Server, PostgreSQL, MySQL, Oracle, etc.). For most of our clients, we use AWS and PostgreSQL. In AWS we have a few hosting options: one is to spin up an EC2 instance, install PostgreSQL on that instance and go from there. However, when we do that, we need to worry about making sure it’s highly available (always up), backed up (disaster recovery) and updated (latest version). If that single EC2 instance were to stop working, we would lose access to our data…and with most applications, that’s not a good thing.

That’s why Amazon introduced the Relational Database Service (RDS). RDS is not a specific type of database or database engine. It’s a managed database service running in the AWS cloud that makes it easy to set up, deploy, scale and update a database with the click of a button. No need to worry about high availability or keeping database versions up to date; RDS takes care of that for you. Need to have everything backed up? No problem, it’s built into RDS. Instead of building out everything necessary for a highly available, durable and updated database yourself, it’s already part of the service.

On the cost front, RDS can look expensive, but consider that most companies would otherwise need a dedicated person to set up and manage the infrastructure. And instead of taking days to set up a database, it takes a few minutes. That’s time that can be used elsewhere. RDS has simplified an incredibly complex process and should be considered as part of any scalable software infrastructure.

We’ve covered what a database is, why it’s important and why you should consider RDS as an easy way to set up a highly reliable database running in the AWS cloud. If you have any questions about how this might fit into your project, send us a note.

Amazon recently announced over 100 new cloud services and products at the latest re:Invent conference. While there are tons to be excited about, there are four cloud services that we look forward to using in future projects. Most of these we’ve had to build from scratch at some point, or have had clients ask for them only to be disappointed by the high cost of development. All four services introduce easier ways to integrate machine learning into your cloud-based software or mobile application, without significant added cost.

Amazon Personalize

When shopping on Amazon.com, have you noticed the recommendations based on your latest purchases? That’s all created by a recommendation engine that is based on things you like, and things other people like you have purchased too. Amazon is now making similar recommendation models available to developers. Imagine you have an app, like CurEat, that curates local restaurants and lists.  You can now make more personalized recommendations using Amazon Personalize instead of developing something from the ground up.

Amazon Forecast

Remember using spreadsheets to forecast sales for your company, or maybe for a school project? Ok, so maybe not everyone had to do that, but at some point in your career you’ve likely been asked to forecast sales, inventory, or some sort of business or application metric. With Amazon Forecast, you can now feed your data into deep learning models based on the same algorithms Amazon uses for its own business. Amazon says the forecasts are 50% more accurate and completely automated. Say goodbye to the continuous feeding and care of forecast spreadsheets. As developers, we can use Amazon Forecast in all sorts of features and components, from predicting product usage to in-app features like forecasting production material needs or equipment breakdowns in IoT devices.

Amazon Textract

Amazon Textract is like an OCR service, except it goes a few steps further and can extract data from fields and tables, and do so very affordably. This is great news for startups or really any company needing to integrate some level of OCR into their product. Imagine quickly building a mobile app that scans old school paper copies of insurance claims, medical records or any paper form, then uploading that data to a CRM or EHR system, quickly and easily. That’s just the beginning of what’s possible with Textract, and there are sure to be more complex, more exciting uses for something that is now incredibly inexpensive and accessible. How inexpensive? Try $1.50 per 1,000 pages for the Document Text API. Read more here.

Amazon Comprehend Medical

Finally, for our medical device and healthcare clients, we’re super excited to see Amazon Comprehend Medical, an expansion of Amazon Comprehend. Amazon Comprehend Medical uses Natural Language Processing to process text in documents and files. Say you have years of medical records that weren’t exactly filed away correctly. Now you can use Amazon Comprehend Medical to process those files and look for patterns. For example, maybe you have an archive of unstructured documents, like physician notes, and you want to extract documents pertaining to a particular medical condition. You can use Amazon Comprehend Medical to look for the medical terminology that coincides with that condition, making it possible to comb through archives in a matter of minutes without manual intervention. It also has the ability to detect Protected Health Information (PHI) which could be used for organizing data or in some cases, avoiding parts of data that may not be necessary for a specific use case.

These are just four of the new services that will be available via AWS in the next few months, and we’re excited to help our clients introduce new features that are now more affordable than ever. If you’d like to hear more about what AWS can offer, contact us or read all the latest announcements from re:Invent here.

Today we’re kicking off a series that will tackle the basics of the Cloud – specifically Amazon Cloud. At this point, most everyone knows about the Cloud, but you may not know some of the basics, especially in regards to Amazon Web Services (AWS). The goal is to break down some of the most popular components of AWS in hopes that it becomes less overwhelming, aids in risk management and gives you a more solid understanding of what it means when your software application is running in the Cloud.

What is the “Cloud”?

The Cloud is any publicly available on-demand infrastructure or computing resource. On-demand is key. As an example, Amazon, one of the most well-known technology companies around, offers a service called Amazon Web Services (AWS). For this conversation, I’ll focus on AWS terminology. Why? Because as of Q1 2018, Amazon still leads overall market share among cloud providers, with Microsoft showing strong growth but still far behind.

API & Elastic Compute Cloud (EC2)

Before we go any further, it’s helpful to understand a tiny bit about how cloud based software applications work. In our API 101 article, we mention that your phone makes contact with the Cloud via an API, which is pretty much just a URL that does fancy stuff. All of the work and logic that happens behind the API is hosted on a cloud service provider like AWS, typically using Elastic Compute Cloud (EC2) as the foundation.

As a systems engineer, I worked at several Software as a Service (SaaS) companies and we hosted all of our software in data centers. A data center is often a really huge building that contains a ton of computers, also known as servers. Anytime we wanted to expand or scale our software application we would buy more servers and rack them in the data center. There was much joy in the unboxing of new servers. That new hardware smell…like a new car. One of my teammates was even known to sniff the new servers, that’s right…you know who you are.

Sorry, back to the story. With AWS, the physical server equivalent is an Elastic Compute Cloud (EC2) instance. No need to purchase hardware, cable it and pay for power and space at a data center anymore – simply go to the AWS Console and create one on the fly. You may hear the term virtual machine or virtual server. An EC2 instance is Amazon’s version of a virtual server. A physical server, like those in the picture above, hosts several virtual servers. At a super high level, they’re basically special files that behave very much like a traditional server, except you can orchestrate these files, move them around to different physical servers, copy them, delete them and much more. Instead of looking at racks of servers, you log in to the AWS Console, go to EC2 and see instances running like the one below:

Once an EC2 instance is set up and available, it operates very much like a physical server. You can log in to the server, install software packages, reboot, shut down and so on, just like a normal computer. For more scalable infrastructures, which is a topic for another day, we can take advantage of things like auto-scaling groups and other automation options that don’t require someone to log in.

Amazon also provides a wide variety of services that make up Amazon’s total cloud service offering, “the AWS Cloud.” So when someone says they’re in the Cloud (with Amazon), they really mean their software infrastructure is hosted within one of Amazon’s massive data centers, on Amazon’s physical servers. The same concept applies to Google Cloud, Microsoft Azure and several other cloud providers. It gets complicated because the terminology changes among the providers, as well as how they each approach their service offerings.

Understanding the basics can help you avoid any major issues down the road as your software architecture gets more complicated. Stay tuned to learn about Amazon Relational Database Service (RDS), in our next exciting post about AWS Cloud 101!

If you’re anything like me, you’ve spent an unhealthy amount of time perusing adorable cat GIFs on your phone, tablet, computer, tv and anything else that will let you get your fix of cuteness.

In addition to browsing cat GIFs on every device I own, I can also check my email, listen to my collection of music, chat with my friends and even write blog posts! On every device I own I can access the same emails, the same music, the same chats, and the same text documents.

In a world where seemingly everything is connected, it is almost unheard of for a web app to exist without an associated mobile app, and vice versa.

Build Once, Use Anywhere

To reach so many diverse platforms, it is crucial to initially develop with the “Build Once, Use Anywhere” mentality. What do I mean?

Let’s say you want to create a program that lets you browse adorable cat GIFs (if you do end up doing this, definitely let me know!). You decide you want to build a website and an iPhone app, but also want to be able to add iPad and Android apps in the future. To “build once, use anywhere” in this case means building one API that knows how to talk to all of the apps, current and future, that you create, rather than creating an API for each app.

API

An API allows one program to interact with another. In this case, you would create an API that communicates between your cat GIF storage and the mobile and web apps people are using to view and upload cat GIFs. So, let’s come up with a few API commands you’ll want:

  • Create – Allows user to upload a GIF (Technical term: POST)
  • Delete – Allows user to delete a GIF (Technical term: DELETE)
  • Update – Allows user to change the name of a GIF (Technical term: PATCH)

For reference on the technical terms listed above, check out this article.
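
To make those three commands concrete, here is a minimal, hedged sketch of the logic that might sit behind such an API. Every name is invented for illustration; a real server would wire these methods to POST, PATCH and DELETE requests and store the GIFs in a database:

```python
# A minimal, invented sketch of the server-side logic behind the cat GIF API.
class GifStore:
    def __init__(self):
        self.gifs = {}       # gif id -> gif name
        self.next_id = 1

    def create(self, name):          # the Create command (POST)
        gif_id = self.next_id
        self.next_id += 1
        self.gifs[gif_id] = name
        return gif_id

    def update(self, gif_id, name):  # the Update command (PATCH)
        self.gifs[gif_id] = name

    def delete(self, gif_id):        # the Delete command (DELETE)
        del self.gifs[gif_id]

# Any client (iPhone app, Android app, website) drives the same logic:
store = GifStore()
gif_id = store.create("sleepy-cat.gif")
store.update(gif_id, "very-sleepy-cat.gif")
print(store.gifs)   # {1: 'very-sleepy-cat.gif'}
store.delete(gif_id)
print(store.gifs)   # {}
```

Whichever device the request comes from, the same three operations run on the server. That is the “build once, use anywhere” payoff.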

Now that we’ve defined our API commands, we need to create a way for apps to interact with the API. Both the device running the apps and the device running the API must be connected to the Internet. In this case, the API is part of a program that runs on a computer called a “server” which is connected to the internet.

Server

In computer science jargon, this project structure can be referred to as a “client-server model”, where one server (a program that runs your API) knows how to communicate with many clients (iPhone, Android, desktop, laptop, etc). If you want to learn more about the details of how a client-server model works, check out this article.

In reality, the server is just a computer in “the cloud,” which means it is just a computer connected to the open internet!

Storage

Now that your API connects your server with your apps, you need a database to store the cat GIFs that are uploaded. A database can either be hosted on a server of its own or the same server as the API. To keep it simple for this example, we’ll just say the database is hosted on the same server as the API.

The goal is to keep the database isolated so that only the web server framework running the API knows how to access the data. This keeps the data secure and abstracted: only one machine (the server) has to know how to access it, instead of every device your users own.

Takeaways

  1. If you’re looking to have a mobile app built, make sure the API is being developed with the “build once, use anywhere” mindset. This is one of the easiest money- and time-saving decisions in modern programming.
  2. Whether you’re on a phone, tablet, computer or other device, if you’re using the same service (e.g. Google, Spotify, etc.) then you’re likely interacting with the same API to access that service on all devices!
  3. “Build Once, Use Anywhere” allows you to build the logic behind a service once and allow use of that logic from as many devices as you want (or can afford).
  4. If you build an awesome cat GIF app, make sure to let us know!

At Oak City Labs we develop using a particular stack of technologies we are not only comfortable with, but believe to be the most effective tools in solving whatever challenges come our way. Too often when a developer says “We use X framework or Y library”, it sounds like utter gibberish and has no real meaning to non-technical people. This disconnect is the inspiration for my blog series: Non Technical Overviews of Technical Things.


Hey there!

Welcome back to my series of non technical overviews of technical things. This week I’ll be educating you on the powerful yet soothing nature of Git, a modern version control software. According to the Git docs, version control is “a system that records changes to a file or set of files over time so that you can recall specific versions later.” Sounds useful, right?

You may find version control useful if you have ever found yourself saying any of the following:

  • “Oh #&(@ I ‘accidentally’ deleted EVERYTHING!!!”
  • “Oh #&(@ I accidentally deleted EVERYTHING!!!”
  • “What did version 4.0.5.6.2.10.549.1 of this file look like?”
  • “Who made this change to this file?”
  • “How can my team make changes to the same file without overwriting each other’s work?”
  • “Do we backup our files?”

Intrigued? Great! Proper use of version control is one of the easiest and most efficient ways to keep your projects organized. Now a twist on that old familiar story to help explain things…

You’re A Web Developer, Charlie Brown

Chuck, Patty and Linus are working to build a website together. Chuck is a graphic artist, while Patty and Linus are developers.

The trio decides to use Git to organize and store their project. We will visualize their Git process using a sticky note board, a la realtimeboard.com. For clarity, we’ll match their sticky note colors up with their shirt colors: Green for Patty, Yellow for Chuck and Red for Linus.

Step 1: Set up Git

A nice aspect of Git is that it allows you to store your main project files anywhere. Git has a concept called a ‘repository’, which you can think of as the main storage location. In this example, it will be the whiteboard, but in real life it can be any computer with the ability to store files. The most popular ‘hosting as a service’ platforms are GitHub and Bitbucket. For people who prefer to host the files themselves, GitLab is a trusted option.

With Git, each person working on the project maintains a copy of all the project files at all times. The remote repository is the copy of the project from which all the others pull. When someone finishes a task in their local repository, they push it up to the remote repository, and when they want to make sure they have the latest versions of all files, they pull down from the remote repository.

To start their project, they store one file on their remote repository:

  • index.html – The main website file

Here is a visual representation of Team Peanuts’ Git usage for their project:

Step 2: Assign tasks

Now that the team has set up their remote repository, they choose tasks for each of them to work on.

Step 3: Setup for development

Now that the team has decided what they will work on, it’s time to start developing! Since the project is currently only stored on the remote repository, all three of our favorite characters use git clone to create a clone, or an exact copy, of the remote repository. Now Chuck, Patty and Linus are all set up and ready to go!

Step 4: Development!

Being the efficient, diligent team they are, the Peanuts gang all get to work immediately doing what they do best! Chuck gets to work on the website icon, while Linus and Patty begin coding.

While Chuck is busy creating the website graphics in Adobe Illustrator, Patty and Linus each work on their copy of the index.html file, which was retrieved from the remote repository and acts as the main code file for their website.

Step 5: Sharing Work

Patty finishes her work first. Since she edited index.html, she needs to make sure her teammates get the very latest version of that file. Since the Peanuts team project is structured so that each of the three characters’ local repositories (clones of the remote repository) only exchange files with the remote repository, Patty has to push her changes up to the remote repository for her teammates to see.

To communicate with a Git repository, local or remote, you have to package your code changes in a commit. Think of a commit as an envelope containing all the changes that have been made recently. As soon as a commit is created, any changes made after that will fall into the next commit envelope.

Patty bundles her index.html changes into a commit called “Add boilerplate page layout”. Now, her local repository has been notified of these changes and can communicate to the remote repository whenever Patty is ready.

Since Patty works quickly, she goes ahead and pushes, or shares, her commit with the remote repository. In practice, Patty could send one push with many commits to the remote repository.

Before Patty pushes her files from local to remote, the Git board looks like this:

After pushing, the Git board now looks like:

At this point, Patty has completed her job and it’s up to Chuck and Linus to update their local repositories to mirror the remote repository and remain up-to-date.

Step 6: Sharing Work Pt. 2 // Updating Local Repositories

Right now, Chuck and Linus’s repositories are outdated. They lack the changes that Patty made. That’s alright for Chuck, though. His work does not touch the same files as Patty’s, so he’s not worried. He bundles his website icon in a commit.

Now that his local repository has the commit, Chuck is ready to push his local repository changes to the remote repository.

Chuck cannot do that, though. Why? Because Git would reject his push. Git would say “Chuck, you don’t have the latest files. Your local repository is not in sync with the remote repository. Pull down the latest files before you try to push your changes.” Okay, Git wouldn’t actually say that…but it does reject his push. How does Git know Chuck’s local repository is outdated? Magic. Also, because Git tracks every change in the history of the project, and it sees that the remote repository has a commit that Chuck’s local repository doesn’t. In order to keep the project history consistent, Git requires that a local repository be up-to-date with the remote repository before it can send new commits.

Chuck pulls the latest code from the remote repository. To perform this operation, his local repository pulls down all of the commits that Chuck doesn’t have. Chuck can now push his icon, so he does.

Step 7: Merge Conflict Resolution

So far, all has been fine and dandy. Patty and Chuck were working on separate files, so there were no conflicts when Chuck’s commits were laid on top of Patty’s. But Linus has been working on the same index.html file as Patty, and now Linus is two commits behind the remote repository (one Patty commit, one Chuck commit).

Let’s say Linus finishes his work and bundles it in a commit envelope titled “Website spinning.” We already know he won’t be able to push, as we saw with Chuck, until Linus pulls the latest commits from the remote repository. Linus does so.

!!! Oh no !!! CONFLICTS DETECTED. What happened? Patty and Linus edited the same file. In order for Linus’ work to be applied on top of the commits from remote, any conflicts that arose from editing the same lines in index.html must be fixed before Linus can push his work to remote. Most of the time, Git is able to automagically merge changes without you having to do any extra work. When the very same lines have been edited in both places, however, manual conflict resolution is required: Git will show you which parts of the files are in conflict so that you can go in and fix them yourself.

Luckily for Linus (and for you, blog consumer), Git handled all conflicts that arose with aplomb. Now that Linus’ local repository has the latest commits from remote AND has his latest local commit, Linus can push his code to the remote repository.

Voila! Now Patty and Chuck can pull the latest changes from the remote repository and tada! Everybody’s local repository is synced up with the remote repository, and website development is well on its way.
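
The whole dance above can be replayed with real git commands. Below is a hedged, self-contained sketch that drives git from Python (it assumes git is installed; the repository layout, names and file contents are all invented):

```python
# Replay the Peanuts workflow: a shared remote, two clones, and a push
# that gets rejected until the latest commits are pulled down.
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Run one git command; don't raise on failure so we can inspect
    # the returncode of a rejected push.
    return subprocess.run(["git", *args], cwd=cwd, capture_output=True, text=True)

root = tempfile.mkdtemp()
remote = os.path.join(root, "remote.git")
os.makedirs(remote)
git("init", "--bare", cwd=remote)          # the shared remote repository

def clone(name):
    # Clone the remote and set a commit identity for this team member.
    path = os.path.join(root, name)
    git("clone", remote, path, cwd=root)
    git("config", "user.name", name, cwd=path)
    git("config", "user.email", name + "@example.com", cwd=path)
    return path

# Seed the remote with index.html so everyone shares a starting point.
seed = clone("seed")
with open(os.path.join(seed, "index.html"), "w") as f:
    f.write("<html></html>\n")
git("add", "index.html", cwd=seed)
git("commit", "-m", "Add index.html", cwd=seed)
branch = git("rev-parse", "--abbrev-ref", "HEAD", cwd=seed).stdout.strip()
git("push", "origin", branch, cwd=seed)

patty, chuck = clone("patty"), clone("chuck")

# Patty edits index.html, commits and pushes first.
with open(os.path.join(patty, "index.html"), "w") as f:
    f.write("<html><body></body></html>\n")
git("add", "index.html", cwd=patty)
git("commit", "-m", "Add boilerplate page layout", cwd=patty)
git("push", "origin", branch, cwd=patty)

# Chuck commits his icon, but pushes without pulling Patty's commit...
with open(os.path.join(chuck, "icon.svg"), "w") as f:
    f.write("<svg></svg>\n")
git("add", "icon.svg", cwd=chuck)
git("commit", "-m", "Add website icon", cwd=chuck)
rejected = git("push", "origin", branch, cwd=chuck)
print("push rejected:", rejected.returncode != 0)

# ...so he pulls the latest commits (a clean merge: different files),
# and then his push succeeds.
git("pull", "--no-rebase", "--no-edit", "origin", branch, cwd=chuck)
accepted = git("push", "origin", branch, cwd=chuck)
print("push accepted:", accepted.returncode == 0)
```

The first push fails for exactly the reason Chuck’s did: the remote has a commit his local clone does not. Once the pull merges that commit in, the push goes through.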

Commands

Here are the Git concepts I covered in this post, alongside the actual Git command you would use to perform such an action.

  • Add – git add
  • Commit – git commit
  • Pull – git pull
  • Push – git push

To create a local repository:

  • If remote already exists:
    • git clone <url>
  • If remote doesn’t exist:
    • git init

To create a remote repository:

  • If using a provider such as Github or Bitbucket:
    • Create one via their website
  • If self-hosting:
    • Create one the same as you would create a local repository

What We Learned

Git is an incredibly powerful tool that we only scratched the surface of in this post. There are far more sophisticated aspects like branching and rebasing that weren’t even mentioned.

If you want to get started with git, this simple guide is a great place to begin your journey. Until then, the doctor is out.


Hey there!

Welcome back to my series of non technical overviews of technical things. This week I’ll be discussing the basics of how a website is born. Well, you see, when a mommy website and a daddy website love each other very much…

Just kidding! Modern websites are composed of the following three components:

  • HTML (HyperText Markup Language) – Content and Structure
  • CSS (Cascading Style Sheets) – Appearance
  • JS (JavaScript) – Functionality

If you have heard these terms before but have no idea what they mean or how they affect your website, have no fear! Soon you will have a grasp on the basics of how websites are made so that you can appear hip and knowledgeable to all your colleagues.

House Analogy

Take this house:

Pretty swanky house, right?

Right. Now, let’s divide this house into its structural, aesthetic, and functional components.

  • Structure and Content – Structurally we have the foundation the house is laid on, the walls, floors, ceilings, supports, frames, etc. For content, we have the potted plants, the couches, chairs, tables, TVs, beds, appliances, nerf guns, and anything else that gets placed inside the house.
  • Aesthetic – For appearance, we think of the way the house looks. This component spans everything from the type of wood on the floor to the color of the walls to the way the furniture is arranged. Anything that changes the way the existing pieces of the house look is lumped into this category. This includes changing the width and height of structural elements, as well as how much space exists between structural components like the couches and TV.
  • Functionality – Think electrical, gas, water, internet. Anything that makes the house work like a house should as compared to a hollow, functionless shell.

Structurally, a few rooms in our house might look something like this html file:

Aesthetically, we can add some color, borders, and widths of certain things like in this css file:

Functionally, we can specify interactions with elements from the html in a js file:

Now, this code is far from complete, but I want to highlight a few things about it by using the door as an example:

  • An element gets defined in the html file:

  • A style gets defined in css so that anytime that element appears in the html file, it has the same style:

  • Finally, functionality for the door gets defined in the js file we saw above:

The website a user sees is essentially the structural, aesthetic, and functional components working together to create an experience.

Finally, I want to give a quick real-world example of these three components.

Navigate to Google in Google Chrome browser:

Right-click and hit ‘Inspect’:

The developer console will open at the bottom of your browser.

Search for and highlight the line that starts with <body ……>

On the right side, in the Styles panel, you’ll see the CSS rules applied to the page.

Highlight the background-color value #fff, change it to #000, and hit enter.

Boom! Google is now black. You just used hex color codes and CSS to change the color of Google!
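Under the hood, the change you made corresponds to a CSS rule along these lines (simplified; Google’s actual stylesheet is more involved):

```css
/* #fff is shorthand for white; #000 is black */
body {
  background-color: #000;
}
```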

Now, you might be wondering whether other people can see the changes you made. The quick answer is nope, nobody else can see what you did. Why? Because when you access a website, the HTML, CSS, and JS files for that website get copied from the site’s server to your computer. Any changes you make to the files that were downloaded to be viewed in your web browser exist ONLY on your computer. You can make Google look black, white, rainbow-colored, filled with unicorns, or even covered with custom poetry, but in the end only you can see your changes! And if you refresh your page, all your custom changes disappear and the files look exactly the same as the ones from the server.

I bet you’re wondering now how the files from the website you’re accessing get to your computer in the first place. The quick answer is that websites on the internet are all hosted on servers. A server is a fancy word for a computer that hosts publicly accessible information via the internet. Last time I described Flask, the web server framework we use at Oak City Labs. Flask helps us manage all the different pieces that go into creating our mobile and web apps, but in this instance, it is largely responsible for allowing the HTML, CSS, and JS files of our websites to be downloaded and accessed by users.

What We Learned

  • Websites are composed of three components: structural, aesthetic, and functional
  • These components are built with HTML, CSS, and JS, respectively
  • These files are copied from servers to users’ computers when websites are accessed.

The internet is an exciting place where you can watch cat videos, promote your business, or brag about your sock collection. Or all three, at the same time. You could make a website with a video of your business that sells hand-knitted cat socks. That’s a free business idea for you right there. Don’t forget about me when you’re famous!

I hope you found this post informative. Fun fact, building websites is one of our specialties here at Oak City Labs! Drop us a line if you’re interested in chatting about having a website made for you or your business.

At Oak City Labs we develop using a particular stack of technologies we are not only comfortable with, but believe to be the most effective tools in solving whatever challenges come our way. Too often when a developer says “We use X framework or Y library”, it sounds like utter gibberish and has no real meaning to non-technical people. This disconnect is the inspiration for my blog series: Non Technical Overviews of Technical Things.

In this post, I will cover Flask, the web framework we use. You can think of a web framework as a set of building blocks that developers use to write a web application. It’s important to note that in this article when I say “web framework,” I’m talking about a web server framework. In other words, Flask lives on a server in the cloud rather than running on a user’s computer. Flask is an “unopinionated micro web framework” written in Python.

Let’s break that down:

Micro

Why “micro”? Because instead of being an all-in-one Swiss Army knife-style solution, Flask is a barebones, add-as-you-go solution from the start. This low barrier to entry allows developers who want to build a small program or a quick prototype to just download it and get going with a few simple lines of code. According to the Flask docs, “The ‘micro’ in microframework means Flask aims to keep the core simple but extensible.”
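Those few simple lines look essentially like the hello-world example from the Flask quickstart:

```python
from flask import Flask

# Create the application object.
app = Flask(__name__)

# Map the root URL to a function that returns the response body.
@app.route("/")
def hello():
    return "Hello, World!"
```

Run it with `flask run` and visiting the root URL in a browser returns “Hello, World!” — that really is the whole program.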

Unopinionated

Why “unopinionated”? Again, the Flask docs explain it best, “Flask won’t make many decisions for you, such as what database to use. Those decisions that it does make … are easy to change. Everything else is up to you, so that Flask can be everything you need and nothing you don’t.”

Web Framework

A web application could be anything from a website to a mobile app to an online document editor to an email program. In programming, a framework is a tool that aims to prevent developers from having to rewrite code each time they want to accomplish a task. Thus, web frameworks fill the role of facilitator and middleman by allowing developers to quickly create web services, resources and APIs that people can interact with through user-facing programs like web browsers and mobile apps.

Okay, that’s cool and all, but what does a web framework mean to me, an average user?

A lot of tasks in web development are very repetitive. One example is as simple as a user visiting a webpage. Take Google as our example: you open up your favorite web browser, type “Google.com”, hit enter, and the familiar search page appears.

In this scenario, the web browser is in charge of:

  • Displaying what you see, from the search box to the Google logo
  • Allowing you to interact with the page

And the web framework is in charge of:

  • Providing your web browser the code for the page, the images, and any scripts that run on the page.
  • Performing any searches you enter
  • Logging you into your Google account and ensuring you stay logged in
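As a sketch of those framework responsibilities (the routes and logic here are illustrative assumptions, not Google’s actual implementation), a Flask app covering the first two bullets might look like:

```python
from flask import Flask, request

app = Flask(__name__)

# Provide the browser the code for the page.
@app.route("/")
def home_page():
    return "<html><body><h1>A search box would render here</h1></body></html>"

# Perform any searches the user enters.
@app.route("/search")
def search():
    query = request.args.get("q", "")
    # A real app would query a search index or database here.
    return "Results for: " + query
```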

Why do we use Flask?

As a services company, we often begin working with clients by building small prototypes to test the viability of certain ideas for them. As such, we have found that choosing tools that allow us to easily and quickly build prototypes for potential long-term projects is essential to success.

Extensibility

In the software world, prototypes are often scrapped once a project is given the green light. With Flask, we can quickly build up a prototype and just as easily build on top of it as we go. Think of Flask as a LEGO kit whose main building block has studs on every side to build on. You can customize it any way you want and combine it with other technologies any way you want, and it doesn’t tell you that you should do anything a certain way.

Durability

While Flask is small in size, it is designed in a way that allows it to scale incredibly well. The Flask docs have an entire chapter on “becoming big.” Even Pinterest uses Flask for its API! For CurEat, we built the original prototype using Flask, and now in production, with thousands of users and about a dozen pieces of technology all working together, Flask is still running like a well-oiled machine, showing no signs of slowing down.

Additionally, Flask’s small overall size means there are fewer places for security vulnerabilities to occur. If a vulnerability were ever discovered in the code, Flask’s large, active open source community would handle it swiftly and promptly release an update.

Here’s a Quora post showing some large web applications that use Flask.

Python

We really like Python. It is a well-documented and incredibly popular programming language, used increasingly for web server development due to its modularity and ease of use. The other popular web server programming languages are PHP, Ruby, and Java; Sinatra is the Ruby equivalent of Flask. PHP and Java show their age and are well known for not being quick prototyping languages. We find Python to be an easier programming language to work with overall, and because more developers are familiar with it, there is typically more available in terms of third-party libraries and documentation.

Flexibility

This is perhaps the most important point of all. As an app development shop, we always have to be ready to adapt to rapidly changing requirements. Flask allows us to easily change direction without having to backtrack and lose valuable time. It also allows us to change the tools we use alongside it without having to change the application logic itself. For instance, we could go from using a simple file for data storage during prototyping to a fully redundant PostgreSQL database in production without any change in logic. This ability to change at a moment’s notice is very helpful when we are deciding how we want to structure a project.
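A minimal sketch of that idea (FileStore and get_store are hypothetical names of mine, not part of Flask): the application talks to one small save/load interface, so the backend behind it can be swapped without touching the calling code.

```python
import json
import os

class FileStore:
    """Prototype storage backend: a plain JSON file on disk."""

    def __init__(self, path):
        self.path = path

    def save(self, data):
        with open(self.path, "w") as f:
            json.dump(data, f)

    def load(self):
        with open(self.path) as f:
            return json.load(f)

def get_store():
    """Pick a storage backend from configuration; callers never know which."""
    if os.environ.get("APP_ENV") == "production":
        # In production this would return a PostgreSQL-backed store
        # (e.g. built on SQLAlchemy) exposing the same save()/load() interface.
        raise NotImplementedError("database-backed store goes here")
    return FileStore("data.json")
```

Because every caller goes through get_store(), moving from the flat file to a real database is a configuration change, not a rewrite.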

Recap

  • Flask is a web server framework, which is a crucial piece of any app or website that uses the internet.
  • Flask helps us manage all the different pieces that go into creating our mobile and web apps.
  • We like Flask because it is small, flexible, durable, and allows for easy prototyping.

This was the first part of a series of non-technical overviews of technical things. Stay tuned for more!