How to build an Edge-optimized computer vision application?

Unravelling Computer Vision

Here’s the first episode of the podcast series “Unravelling Computer Vision and Edge AI for the Real World.” Let’s talk about the software and hardware building blocks required to build successful, real-world deployable computer vision applications.

4 questions this podcast will answer:

01

What are the major components of a computer vision application, and how do you select them?

02

How to discover the key characteristics of a vision sensor?

03

Why is selecting the right edge computing hardware or processor so important?

04

How are Edge-optimized AI/ML algorithms different from algorithms deployed on servers?

Podcast Transcript:

This is Alphonse, and welcome to episode one of my podcast, Unravelling Computer Vision and Edge AI for the Real World.

In today’s podcast, I’ll be talking about a very simple topic: what are the software and hardware building blocks required to build a successful, real-world deployable computer vision application?

I’m looking forward to engaging with you in this podcast.

To give it to you in a nutshell, a successful and practical real-world deployable computer vision application consists of the following components.

It has a camera with good optics, an edge AI computing platform, Edge-optimized AI/ML algorithms, and a front-end application. Do too many components sound confusing?

Well, don’t fret. Over the course of this podcast, I’ll break down each of these components and explain exactly which characteristics to look for when choosing the right-fit components for your computer vision application.

Just remember, before even diving into these components to build your computer vision application, make sure you have done your market research and validated the use case for which you are going to build a practical computer vision application.

That is an important criterion you need to look into before you even dive into the intricacies of computer vision solution development.

Now going into the components, let me first talk about the vision sensor.

The vision sensor is nothing but a camera that captures light to form a digital image, while the optics are nothing but the lens attached to the camera sensor.

Choosing the right vision sensor is the first step to building a successful computer vision application. I basically look at multiple characteristics such as resolution, image quality, white balance, and exposure to choose my vision sensor.
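As a quick illustration (not from the podcast itself), here is a minimal Python/OpenCV sketch of how you might probe a candidate camera for some of these characteristics before committing to it. The camera index and property values are assumptions and will vary by sensor, driver, and OS backend.

```python
# Minimal sketch: probing a candidate camera's characteristics with OpenCV.
# The camera index and property values below are assumptions; support for
# each property varies by sensor, driver, and OS backend.
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
if not cap.isOpened():
    raise RuntimeError("Camera not found")

# Request a resolution and check what the sensor actually delivers.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
print("Resolution:", cap.get(cv2.CAP_PROP_FRAME_WIDTH), "x",
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Check whether white balance and exposure can be controlled manually;
# a sensor that ignores these settings may not suit your application.
cap.set(cv2.CAP_PROP_AUTO_WB, 0)    # try to disable auto white balance
cap.set(cv2.CAP_PROP_EXPOSURE, -5)  # driver-specific exposure units
print("Auto white balance:", cap.get(cv2.CAP_PROP_AUTO_WB))
print("Exposure:", cap.get(cv2.CAP_PROP_EXPOSURE))

# Grab one frame so you can inspect image quality offline.
ok, frame = cap.read()
cap.release()
```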

Now, once you have chosen the right embedded camera for your product, you need to look at the edge computing hardware, or the processor, that will go into your computer vision application. Before we go into the characteristics you need to look for in a processor, there is a larger conversation revolving around whether to use Edge AI when building your computer vision application.

There seems to be a lot of conversation going on around it, and depending on the use case, my recommendation varies.

But what I see nowadays is that many computer vision applications leverage edge AI rather than cloud AI, mainly because of increased privacy and lower bandwidth requirements. But that is a topic for another day. Coming back to the podcast.

With many processors in the market, such as NVIDIA, Coral, and Intel RealSense, you are hard-pressed to choose the right processor for the right use case. Personally, I tend to use multiple parameters to choose my processor.

Remember, all these parameters differ based on my use case. Some of the parameters I tend to use include inference performance, the power consumption of the processor, how well the processor integrates with my image sensor, and, most importantly depending on the use case, whether the processor will operate in the temperature range to which the use case will be subjected.

In addition to these, I also look at the size and fitment of the edge computing hardware and how it fits into my overall computer vision application, because computer vision applications have space constraints, and we need to make sure the processor fits right and sits snugly inside the product.
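To make that trade-off concrete, here is a purely illustrative weighted-scoring sketch in Python for comparing candidate processors on the parameters above. The candidate names, weights, and scores are hypothetical and should come from your own benchmarking.

```python
# Illustrative sketch: weighted scoring of edge processors on the parameters
# discussed above. All names, weights, and scores are hypothetical examples.
WEIGHTS = {
    "inference_performance": 0.30,
    "power_consumption": 0.20,
    "sensor_integration": 0.20,
    "temperature_range": 0.15,
    "size_fitment": 0.15,
}

# Scores on a 1-5 scale from your own evaluation (example values only).
candidates = {
    "candidate_A": {"inference_performance": 5, "power_consumption": 2,
                    "sensor_integration": 4, "temperature_range": 4,
                    "size_fitment": 2},
    "candidate_B": {"inference_performance": 3, "power_consumption": 5,
                    "sensor_integration": 3, "temperature_range": 5,
                    "size_fitment": 5},
}

def weighted_score(scores):
    """Combine per-criterion scores using the weights above."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```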

While these are some of the parameters I look into while choosing a processor, the most important consideration is that I always keep my eye on the bill of materials, because what is the use of a computer vision application if it is too costly for real-world deployment?

The easier thing is to build everything on an NVIDIA Xavier, give it the best of performance, and go to market.

But that is not how it works in the real world. Every application has its own value, and the pricing of the application depends on the value it provides.

So we need to make sure that we provide the right-fit processor, especially based on the end user, so that they will actually deploy it in the real world.

If not, our extremely awesome computer vision application will just be a curiosity.

So far, this conversation has focused on hardware, and now the time has come for us to look at the software side of our application.

There are two components on the software side of your computer vision application development.

One is building your front-end application, which will be exposed to the user, while the other is the Edge-optimized AI/ML algorithms, which we can call the brain behind the whole computer vision application.

I will not be talking much about the front-end application, as most of us have a fairly good idea of it. What I will focus on is how to build these Edge-optimized AI/ML algorithms.

First of all, we need to understand that AI/ML algorithms built for the Edge are different from the algorithms you build and deploy on servers.

Edge AI is not equal to algorithms deployed on servers.

That is something most people have to understand while building an embedded, AI-based computer vision application.

You have to remember that these algorithms run on low-power edge computing platforms with limited processing power.

The embedded world has a separate name for these algorithms: TinyML.

Truth be told, I get really excited when I start talking to people about the software side of embedded hardware, especially how the Edge-optimized ML algorithms space is moving.

A slight digression here, but on the business side of things, the edge computing market alone is expected to grow by around 35 percent over the next five years and is slated to become a fifteen-billion-dollar industry.

So it comes as no surprise that not only are multiple companies looking at the hardware side of edge computing, but companies like Google are also developing models that can be easily deployed on embedded hardware.

Google recently released a solution called TensorFlow Lite, which basically contains edge-optimized ML tooling and models, and they are so optimized that certain models can run not just on processors but on microcontrollers. Amazing, isn’t it!
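To show what “edge-optimized” can look like in practice, here is a minimal sketch of converting a Keras model to TensorFlow Lite with post-training quantization, one common way to shrink a model for the edge. The MobileNetV2 placeholder is an assumption and stands in for whatever model you have actually trained.

```python
# Minimal sketch: converting a Keras model to TensorFlow Lite with
# post-training quantization, one common way to shrink a model for the edge.
import tensorflow as tf

# Placeholder model; substitute the network you have actually trained.
model = tf.keras.applications.MobileNetV2(weights=None)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

# The resulting flatbuffer can be loaded by the TFLite interpreter on device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```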

Now we have spoken about all the hardware and software components required to build a successful, real-world deployable computer vision application. But there is something I mentioned at the beginning of this podcast, which I will reiterate once more. The most important thing you need to do before starting to build a computer vision application is to focus on product-market fit: first, does your use case really have a market need, and second, is your use case actually validated by the market?

The easiest way to do this is to talk to a couple of industry experts. I learned this the hard way: I spent a couple of months sitting with my teammates developing an amazing, intuitive automated dimensioning system.

It was only after I sat down and started speaking to supply chain industry experts that I understood that, in order for my application to be used in my end users’ warehouses, I needed certain certifications.

That was because the use cases I was focusing on were legal-for-trade applications. What that meant for me was that I had to rework not only the positioning of the product but also certain product features. This meant lost time and money.

So before building a computer vision application, understand the industry and, if needed, talk to industry experts.

This brings me to the end of my podcast, and hopefully, I have answered the question of what software and hardware building blocks are required to build a successful, real-world deployable computer vision application.

Well, in the future, I’ll be doing a more detailed podcast on each of these components and will go into greater depth on the characteristics you need to look at, depending on your use case, in order to build the right-fit computer vision application.

But for now, to summarize: in order to build a real-world deployable computer vision application, you need a vision sensor with good optics, edge computing hardware combined with good Edge-optimized AI/ML algorithms, and a front-end application. And don’t forget, all of these should be backed by a good, industry-validated use-case and market-fit study. Alright, if you would like to know more about computer vision and Edge AI, you can check out our website, www.visailabs.com; we have a lot of resources around this space.

So thank you for tuning in to this podcast, and I look forward to meeting you in the next episode, subscribers.

ciao!
