At the end of next month, I’ll be hanging up my project manager hat and embarking on a new journey. Over the past year, I’ve been balancing my role here at Oak City Labs with graduate school as I pursue my Master of Arts in Teaching degree from Meredith College. Though I’ve spent the better part of the past decade working in the technology industry, I felt a shift in my goals and interests and, with the amazing support of the Oak City Labs leadership team, I decided to make a career change.

Why tell you this?

Because in some ways, I’m not really leaving the technology industry at all. I’ll just be applying my skills in a different way to a different set of “clients” (read: elementary school students).

In my time in graduate school and in field placement positions this past year, it has become increasingly clear to me that there is a greater need for globally-minded, technologically-equipped educators than ever before. The reality is that educators need to be preparing students for jobs that don’t even exist yet. Yes, you read that correctly. According to the World Economic Forum, 65% of students entering elementary school now (aka my future “clients”) will hold jobs that don’t exist today. And that figure is already two years old; the share has likely only grown since.

I think about some of my recent blog posts on artificial intelligence, machine learning, computer vision and machine vision. As cutting-edge as these technologies are, odds are they will have significantly evolved by the time current primary and younger secondary school students graduate high school in 8-10 years. Therefore, instead of preparing students for specific jobs, we are charged with equipping students with skill sets that will grow with them as this world also grows.

Figuratively, that preparation is a multi-layered, interdisciplinary approach to learning that begins in the earliest grades and continues through high school graduation. It looks different for every student and every teacher. Practically, that preparation begins with integrating meaningful technology in the classroom, expanding student learning through social studies and science, and enhancing student understanding through the arts.

The hope is through all of our efforts, we’ll prepare students not for the jobs that artificial intelligence will certainly replace, but for new jobs that work alongside artificial intelligence. While machine vision may eliminate the need for a factory worker to inspect products, machine vision will certainly create the need for software engineers to manage the inspection system. And desirable software engineers will need to possess specific skills in technology, along with soft skills like critical thinking/problem solving, collaboration, communication and creativity/innovation.

So I leave this role, company and industry with a lot of change ahead, but I’m hopeful that my efforts will foster students who are better prepared for those jobs that don’t even exist yet. And maybe even some future employees of Oak City Labs.

PS – Did you hear? We’re currently on the hunt for a project manager and software developer. Check out our Careers page for more information and job details.

Today we’re back again, sharing the basics of two not-so-emergent technology concepts, computer vision and machine vision, and breaking down the differences between them.

It is possible that you’re reading this blog and have never heard of computer vision or machine vision. The concepts are well-known and discussed within the technology world, but the same can’t be said for the general public. Despite that unfamiliarity, the general public is already experiencing computer vision and machine vision in ways that may surprise them. Read on to learn more.

Computer Vision

Computer vision falls under the Artificial Intelligence umbrella, just like machine learning does. The goal is to use computers to acquire, process, analyze and understand digital images or videos. For instance, computer vision is at work when a train station counts the people entering and exiting by analyzing security camera footage instead of tallying them with a turnstile. It’s also at work when driverless cars use a live video feed to make decisions about turning, braking, speed and more.

Have you seen the augmented reality capabilities from IKEA? The company encourages you to use your device to video your living room and then they virtually place sofas, coffee tables and chairs in real time for your consideration before making the big purchase. That’s possible because of computer vision. Summed up, computer vision is attempting to use a computer to emulate the human eye, visual cortex and brain when acquiring, processing, analyzing and understanding images.
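To make those stages concrete, here’s a toy sketch in Python of the acquire → process → analyze pipeline. Everything here is illustrative: the “frame” is a hard-coded 8x8 grid of pixel values rather than real camera input, and counting bright regions stands in for counting people. A production system would use a library like OpenCV on real video, but the stages are the same.

```python
# Toy acquire -> process -> analyze pipeline (illustrative values only).

ACQUIRED = [  # acquire: an 8x8 grayscale frame (0 = dark, 255 = bright)
    [0,   0,   0, 0,   0,   0, 0, 0],
    [0, 200, 210, 0,   0,   0, 0, 0],
    [0, 220, 230, 0,   0,  90, 0, 0],
    [0,   0,   0, 0,   0,   0, 0, 0],
    [0,   0,   0, 0, 180, 190, 0, 0],
    [0,   0,   0, 0, 200,   0, 0, 0],
    [0,   0,   0, 0,   0,   0, 0, 0],
    [0,   0,   0, 0,   0,   0, 0, 0],
]

def process(frame, threshold=128):
    """Process: binarize the frame so bright pixels become 1, the rest 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def analyze(binary):
    """Analyze: count connected bright regions using a flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:  # flood-fill this whole bright region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

print(analyze(process(ACQUIRED)))  # 2 (the lone 90 pixel falls below the threshold)
```

The “understand” step in a real system is where the hard work lives: deciding that a bright region is a person, a sofa, or a stop sign, usually with a trained machine learning model rather than a simple threshold.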

We’ve talked about computer vision before here on our blog.

Machine Vision

When computer vision is put in place in an industrial (and sometimes non-industrial) setting to inform the operations and functions of a machine, we call that machine vision. An inspection task at a manufacturing facility, once performed by humans, can now be performed by machine vision.

Machine vision is at work when a machine at a manufacturing facility scans (read: computer vision) a bottle to ensure the liquid product (like cleaning solution, soda, medicine, etc.) is correct, the fill level is correct, the container is free of flaws, the correct label is placed (and placed straight!), the expiration date is correct, and so on. When one or more of these conditions aren’t met, machine vision has logic in place to tell the production line to reject the item. The beauty of machine vision is that all of the sample analyses above are performed by one machine, with a high degree of accuracy, over and over again.
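As a sketch of what that reject logic might look like in code, here’s a minimal, hypothetical example in Python. The field names and tolerances are invented for illustration and don’t come from any real inspection system; in practice each value would be extracted from the camera image by the vision pipeline.

```python
from dataclasses import dataclass

@dataclass
class BottleScan:
    """Measurements a hypothetical vision system might extract from one bottle."""
    fill_level_ml: float
    label_skew_degrees: float
    has_flaw: bool
    expiration_ok: bool

def inspect(scan: BottleScan) -> list[str]:
    """Return the reasons to reject this bottle (an empty list means accept)."""
    reasons = []
    if not 495.0 <= scan.fill_level_ml <= 505.0:  # illustrative fill tolerance
        reasons.append("fill level out of range")
    if abs(scan.label_skew_degrees) > 2.0:        # label must be nearly straight
        reasons.append("label crooked")
    if scan.has_flaw:
        reasons.append("container flaw detected")
    if not scan.expiration_ok:
        reasons.append("bad expiration date")
    return reasons

# One passing bottle and one failing bottle:
good = BottleScan(fill_level_ml=500.0, label_skew_degrees=0.5, has_flaw=False, expiration_ok=True)
bad = BottleScan(fill_level_ml=480.0, label_skew_degrees=5.0, has_flaw=False, expiration_ok=True)
print(inspect(good))  # []
print(inspect(bad))   # ['fill level out of range', 'label crooked']
```

Every bottle on the line runs through the same checks, which is exactly why one machine can apply them consistently, at speed, over and over again.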

At Oak City Labs, our mission is to help businesses and organizations solve daily problems with technology. Utilizing computer vision and machine vision are excellent ways to accomplish that task. Do you have a problem that you need help solving? If so, let us know! We’d love to chat.