How to identify the feature set of Edge AI-based Hardware Product-focused PoCs?

How to build Edge AI & Computer Vision PoCs in 4 weeks with ready-to-deploy hardware and algorithms

What will you learn from this webinar?

01

Step-by-step 4-week plan to test any idea by building a fast PoC

02

What hardware, software, and solution expertise does a team need to build fast PoCs?

03

How a retail firm built fast PoCs and chose which self-billing solution to take to production (smart cart vs. automated checkout system)

04

Which Edge processors are useful for a cloud-trained, Edge-inference solution architecture (Nvidia Jetson vs. Raspberry Pi vs. Qualcomm vs. Rockchip vs. Intel Movidius)

05

How to choose the right embedded camera depending on environment?

06

How to identify the right readily deployable algorithms, such as Nvidia DeepStream, OpenVINO, the VisAI Platform, etc., to build your fast PoCs?
Webinar Transcript:

So, let’s go ahead and have a look at the agenda for today’s webinar and see what we are trying to cover here.

The first thing we will go about is to see a step-by-step plan for how we approach a proof of concept.

So, for any idea or solution that you are designing, we require a proof of concept to prove that there is technical feasibility in it, and in some cases business viability also needs to be checked. That's the reason we go for a proof of concept when we talk about a solution, and that is even more true when we are working on cutting-edge technology and the latest algorithms.

So the first thing we will see is how we approach a proof of concept. After that, we will see what hardware we choose, because when it comes to Edge AI the hardware we choose plays a very vital role, and the hardware consists of the CPU, the camera, and so on. We'll go into detail about how we choose these pieces of hardware to build our proof of concept. And of course, the most important part of the proof of concept is the algorithm; we'll also see how we can take up ready-to-use algorithms, how we can build this proof of concept, and what kind of expertise you will require to do the PoC.

So, this is, in short, the agenda for today's webinar. First, we'll introduce who we are, and we'll also tell you about some of the use cases that we have worked on.

So, we at e-con Systems have two divisions. One is the OEM camera group, where we deal with OEM camera products. We have been selling camera products that go inside other customers' products, so there are over 300 products in the market with e-con Systems camera modules in them, and we have developed over 45 camera products and development kits that we sell through our website. That is the background of e-con Systems; we have been in business for the past 18 years as a camera company. There is also a division within e-con Systems called VisAI Labs, which is focused on vision and AI. Here we work on all the AI algorithms and models, and many of these proofs of concept were developed with VisAI Labs algorithms.

So, I’ll go forward, and I will explain how we go about doing the POCs.

So, when I talk about the proof of concept, I think the best way to explain a solution is to take some examples. We have taken some case studies here which we have worked on, and I will take one case study and explain how we can do a four-week PoC with it; that is the approach we are taking today.

So, in the case studies, we have done a smart cart and an automated checkout, both of which work in a retail store; a smart surveillance solution which has been used in office spaces; and a smart digital signage which captures how people are interacting with it. Here is a short video that shows some of the proof-of-concept demos.

Okay, this video would have given you an idea of what we mean by a proof of concept.

So most of the demos or the demo videos that you saw are proofs of concept, and they were built in four weeks.

Now I’ll walk you through how we approach these four weeks.

So, in week zero, which is the first week, we focus on discussing the requirements and identifying the key requirements that need a proof of concept.

So what is the difference? We talked about requirements in general, but we have to be more specific about key requirements. Sometimes the key requirements are what kind of objects we are going to detect or what kind of environment we are going to use this solution in, right? In some cases, the key requirements are performance metrics; we may want an accuracy of more than 90 percent, without which the solution might not work. So we have to identify or pick the key requirements out of the whole list of requirements, and we'll decide what we are going to focus on in a proof of concept.

In the exploration phase, which is week zero, we also have to decide on the computing hardware.

So here we make decisions like whether the solution is going to be an Edge-based solution or a cloud-based solution. If it is an Edge-based solution, what is the need for it? What is the processing power required for the algorithm at the Edge? What is the cost we are targeting for this solution? And so on.

So we need to make a decision on what type of computing hardware we are going to choose, and also on the choice of camera. What are the lighting conditions? There are so many conditions and points to consider when we choose a camera, and Gomathi Shankar will explain this as we go forward in this webinar.

So in the first week, it’s all about understanding what requirements we are targeting

We're basically setting the goalposts and the expectations for our proof of concept, and we are also trying to pick the right hardware, both in terms of the CPU and the cameras, in the first week.

Going on to the next week, it's going to be about the PoC setup. How do we make a PoC setup? We normally pick hardware which is off the shelf.

We do not want to develop the entire hardware during the proof-of-concept stage, so we pick up off-the-shelf hardware, targeting some particular development kit along with the camera, and take it from there.

Likewise, we also have to pick up an algorithm which is already available. Of course, there are options in open source where you can pick up a model and try to use it, if you're already experienced with AI, machine learning, and deep learning. If you're experienced with that, then you can pick up something from open source, or there are third-party vendors like VisAI Labs who have these algorithms readily available, which you can pick up and use to set up your PoC.

Now, it takes at least a week to get all this up and running, so that you have a working setup: you connect all of this and make some basic application which targets the solution. The third week is all about tuning the algorithm.

Now you have connected all the pieces: you have the camera images flowing through your algorithm, inference is happening, and decisions are being made. Now it is about tuning your algorithm, and in some cases we go ahead and retrain the model to include new objects and things like that, to achieve the level of performance that we want. That happens in week three.

And the fourth week is about putting everything together and showing a demo to the customer.

Now, all this may sound very simple when split into weeks, but in practice, since most of you are developers and engineers, you know it does not go exactly to plan, right?

So sometimes what we have planned for week one may happen in a couple of days, but we may spend more time tuning the algorithm, taking a week or a week and a half.

But our experience is that in most cases we have been able to come to a proof-of-concept demo in four weeks, and this is a rough split of how you can plan if you're planning for something like this.

From week four, it is about deploying it in the customer's test zone. Normally you build a PoC because you want to deploy it somewhere and check the two things I mentioned earlier: technical feasibility and business viability, whether this model will work, right?

So, both of these have some target numbers which you can start experimenting with from week four. In the PoCs we have worked on, we have worked with some customers for a couple of weeks, and with others even a couple of months, because the experiment took longer and we needed more data. We got a lot of real-time feedback, and we have also worked on a second stage of the PoC where we go ahead and optimize even further; this is how we go from a PoC to a final solution.

One thing we must all remember when working on a proof of concept is that this is not just a demo; it is not a video that you can capture, post on your social media handle, and walk away from, because this PoC is going to be the building block that brings about your solution in the end.

So whatever we use here should be viable; we should be able to build on top of it and go towards our final solution. That should always be in our minds when we build the PoC, and that's how we have worked with our examples as well.

Now I've told you how this four-week breakdown happens, what we do in it, and what may take more or less time. Going forward, let me give you a case study as an example of how we apply these four weeks.

So this case study came to us from a customer who wanted help with a self-billing solution in a supermarket. Of course, this is a very common use case; we have heard it from a lot of different people.

The customer who came to us had two kinds of requirements. One was that they wanted to identify fruits and vegetables in a shopping cart: shoppers would put fruits and vegetables in a bag and place them in the cart, and they wanted a camera or vision system in the cart which would look at those vegetables and fruits and give the name as well as the cost, and maybe show an offer if there was one, on the cart. That was the final requirement, basically, but the proof of concept was all about identifying the fruits and vegetables.

So the second requirement they had was the same fruit and vegetable identification, in addition to grocery identification at the counter.

Now, these fruits and vegetables have different codes and different names; for example, there are two or three types of apples, all with different names.

So when the checkout person showed a bag, they wanted the vision system to spell out what type of apple it is and also give the code for it.

So this was the requirement the customer had, and they had both questions. They had the question of technical feasibility; of course, there was no requirement for 100 percent accuracy, but they preferred to have at least 80 to 90 percent accuracy with this solution.

So they wanted to know how technically feasible it is. The second thing they wanted to understand was the business viability: how will shoppers react if such a vision system is put in the shopping cart, how will the clerks at the counter react, and how will their work get easier if such a helpful vision system is there on the counter? This is the experiment they wanted to do, and we took it up as a proof of concept to show them how we could deploy it in four to five weeks on their premises.

The basic challenge we took on was that we have to identify fruits and vegetables, and we said that we would take around 20 different fruits and vegetables as the first set we'd be working on in this PoC.

This is what I mean by fixing your goalposts: we will not say that we'll do all fruits and vegetables, but we'll pick a set and build a PoC only for that. The next thing we decided was that they wanted to train a new fruit or grocery item at the store itself, which means we needed to include a training module. Of course, the training will not run on the Edge itself; it could run on a PC or a server, but it could be deployed back on the Edge device, and you should be able to infer the new fruit or grocery item.

So there were two main requirements that we took: one is the identification, and the other was training a new item.

These were the two requirements that we took, and we went forward with this proof of concept.

So, like I said, in the exploration phase the first part of fixing the requirements and fixing the goalposts was done. Here we identified a lot of challenges: there are a lot of occlusions because these fruits and vegetables are put in a bag, and the lighting conditions might differ, so those were some of the possibilities for these occlusions. And like I said, when we have apples, a lot of apple varieties look very similar, and we need to differentiate between them as well.

So in our requirement set we took at least two types of apple so that we could differentiate between those types; that was part of fixing the requirements. We also fixed two or three lighting conditions so that we could check the accuracy under those lighting conditions.

Our intention here is not to fine-tune for everything and reach our final goal for the solution in these four weeks. What we need to prove in the proof-of-concept stage is that there is scope for improvement: if we are able to demonstrate some solution and prove there is scope for improvement, then we will call this PoC a success, and then we can go ahead and develop it as a full-fledged product or solution, right?

So we identified the key requirements for the PoC, and we moved forward to make the decisions on the computing hardware.

So here, what we are going to do is run an algorithm. We already had an algorithm which we have used for similar purposes, and we found the Nvidia Jetson hardware to be a good fit because it has the necessary GPU and the horsepower to run these inference engines on the Edge. Of course, we also have a lot of cameras which we have already worked on that connect to the Nvidia Jetson platform.

So we thought that was a good starting point. For the choice of camera, we wanted a camera that works under normal lighting conditions and that has a good depth of focus, because we assumed the fruits and vegetables are not always going to be at a fixed distance from the camera; the height would differ, and having a camera with a good depth of focus will give us a very clear image of the fruits and vegetables.

Of course, we need proper color reproduction, because a lot of the identification depends on the color of these fruits and vegetables, so we needed a good-quality camera that works under normal lighting conditions with a good depth of focus.

This is what we identified in the first exploration phase. From there, we went ahead and decided on the exact hardware in week one, where we finalized the Photon carrier board: we took that off-the-shelf Nvidia dev kit along with one of e-con's cameras, the e-CAM50_CUNX, which is a 5-megapixel camera that attaches to the Nvidia dev kit. Because this is a ready-to-use development kit, it already has the camera running on it, all the software runs on the dev kit already, and you could get the camera streams to memory with a straightforward application.
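As an illustration of how simple the capture side can be on such a kit, here is a minimal sketch, assuming an OpenCV build with GStreamer support and a MIPI camera exposed through the Jetson Argus/V4L2 stack; the exact pipeline string, sensor ID, and resolution depend on the actual camera driver.

```python
# Minimal capture sketch for a Jetson dev kit camera (assumptions noted above).
import cv2

# Hypothetical pipeline: nvarguscamerasrc is the usual source for MIPI cameras
# on Jetson; sensor-id, resolution, and frame rate depend on the actual module.
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()          # frame is a BGR numpy array
    if not ok:
        break
    # ... hand the frame to the detection/classification model here ...
cap.release()
```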

The next step was to pick up an algorithm from our VisAI platform. We used the object detection algorithm accelerator from the VisAI platform and put that onto this dev kit as well.

So this is all like picking building blocks and aligning them one after the other to build our solution, and that's what we did in week one.

Week two is all about putting this into action. We bought samples of all the fruits and vegetables we targeted, and then we retrained the object detection model to include these fruits and vegetables, and we went ahead and built our application for it as well. Once the training was complete, we built a basic demo application for anybody to use, with a touch-based interface. This is a basic sample application, nothing very fancy; it's just a straightforward application which does detection and can also do training with the click of a button.

Once all this was done, we did a demonstration for our customer in week three.

So here, from week zero, when we started defining these requirements, to having something working by week four, we had built the customer's confidence on the technology part at least.

The next part was putting this in the store and checking the business feasibility, which the customer did.

So basically, the outcome for the supermarket was that they were able to see the pros and cons of this idea both at the business level and at the retraining level. Of course, when this PoC is actually put in place and being tested, it has to be treated like a PoC; it is going to have problems. It was not all perfect, it was not a 100-percent-perfect solution, but it met the requirements. We were able to get an accuracy of around 70 to 75 percent in some instances, and we were sure that with more training and more fine-tuning we would be able to reach the 85 to 90 percent the customer wanted.

So just as I said, it was able to meet our requirement to a certain extent, and there was scope for improvement, which is all we wanted from the POC.

So that's how we approached this proof of concept, how we delivered it to the customer, and how they used it.

One of the main reasons the customer came to us was that all of these pieces were already available to us in some shape. We had the dev kits, and we had the algorithms. When you approach a PoC, if you have ready-made dev kits, or if you are able to pick ready-made dev kits from the web, I think that is a very good starting point, along with some off-the-shelf algorithms, be it open source, which is freely available, or algorithms from companies like VisAI Labs, which you can pick up and try, so that you can quickly figure out whether your idea will work or not.

Now, coming to building a PoC, there are a lot of things that you will have to think about when you start a proof of concept.

When I say you can do this in four weeks, we have to understand that there is a level of expertise you will require to get this done in such a short period, and it is not only expertise; I would say it is some familiarity with these kits.

Now, you can identify off-the-shelf hardware kits; that cuts down the cost of building your own hardware, which is perfect. You can find off-the-shelf algorithms; that cuts down the cost of building your own algorithms, which is perfect too. However, when you want to put all these together, you require some kind of expertise or familiarity with these things. If you're familiar with them, then this four-week time frame works like a charm; there's no problem with it. Otherwise, there will obviously be additional time required for you to get familiar with all these tools and platforms, so that is something you need to think about when you're starting a PoC.

And for the success of any proof of concept connected to AI and the Edge, I think there are only three areas we need to focus on.

One is the camera, because that is the core of your vision system. You need to understand which camera is the best fit for the use case, because if you can get the right image to your model or your algorithm, then you amplify your chance of success and the chance of making your algorithm work incredibly well.

Whereas if you choose the wrong camera, the wrong lighting, or the wrong lens, then of course, however good your algorithm might be, it will not give you the right output or solution.

So, choosing a camera is very important, and as we go forward we'll see how you can choose your camera; Gomathi will explain more on that.

The next thing is choosing an Edge processor; again, like the camera, you need to understand the horsepower behind this Edge processor.

Has the algorithm that you are picking off the shelf been tested on this platform? Because if it has not been, then you carry the risk of trying something new on this hardware, and that is going to consume a lot of time.

So, when you pick an algorithm and an Edge processor, you have to make sure the combination has been tried before. If it has been tried, then well and good.

So, choosing the Edge processor is going to be a very critical part, and that will be covered later in the webinar as well.

And last but not least, the algorithm block. Like I said, pick an algorithm which has already been tested on these processors, and understand what has already been done with those algorithms: what is the performance output, what kind of training has it gone through, and what model or data set was used to train these algorithms?

If you can get as much information as possible, well and good; otherwise, it is better to have an algorithm vendor who is experienced with that to help you with this part.

So, these are the three main areas you need to focus on when you do a PoC. In the rest of this webinar, Gomathi and the rest of our team will focus on how to choose these, with more technical detail on these three areas. Thank you, one and all; I'll pass this on to Gomathi so that he can carry on with the next part.

Thank you, Maha.

As Maha rightly said, if you get the right image at the right time, that increases the probability of the PoC being successful.

So if you can imagine the final product and choose the right hardware components for it, the probability of making a successful PoC is very high.

Choosing the right interface for the vision system, the right image sensor, and the right image signal processor for that sensor are the key challenges, so we should understand the suitability of the sensor for a particular application.

So, let's start by looking at the sensor. Due to the influence of cell phone cameras, a high resolution might be the first thing we look at, but there are several other things to consider.

Each application has its own unique set of problems that can be addressed by a particular sensor.

Let’s assume an outdoor kiosk; the major problem with the outdoor kiosk is the lighting condition.

The lighting is going to be harsh: sunlight might wash out the complete image, and you might also get both bright light and dark regions in the same scene.

So, getting sharp images for facial recognition or similar applications is going to be really tough.
If you choose a high dynamic range sensor, that can really help in this situation: the HDR sensor can capture images with two different exposures and merge them together to get the right mix.
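To illustrate the merging idea (not the on-sensor HDR pipeline itself, which happens inside the sensor and ISP), here is a minimal software sketch using OpenCV's exposure fusion, assuming two frames of the same scene captured at different exposures; the file names are hypothetical.

```python
# Exposure-fusion sketch: a software analogue of what an HDR sensor/ISP does.
import cv2

# Hypothetical file names; in practice these would be two captures of the
# same scene taken with a short and a long exposure.
short_exp = cv2.imread("frame_short_exposure.png")
long_exp = cv2.imread("frame_long_exposure.png")

# Mertens exposure fusion blends the well-exposed regions of each frame,
# so neither the washed-out highlights nor the dark shadows dominate.
merge = cv2.createMergeMertens()
fused = merge.process([short_exp, long_exp])          # float image in [0, 1]
cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```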

So, I would recommend considering the following things before choosing the sensor.

Of course, the first thing is going to be the lighting condition: whether it is a low-light condition, a controlled room-like condition, or an outdoor lighting condition; whether the target is going to be stationary or moving; and whether you are going to capture still images or record video.

If it is video, what are the frame rate and resolution requirements, the field of view of the lens, and the focus method, whether autofocus or fixed focus? These are the things you should consider before choosing the camera module.

Also, if the use case requires a color camera, make sure to choose a camera module with an image signal processor.

The image signal processor is the one responsible for the auto functions: adapting to different lighting and getting proper color reproduction is a key thing, and without an image signal processor it is going to be really tough. So, consider these points before choosing the camera module. Another major thing is the interface for the camera; these are the four major interfaces we usually see with these devices.

Okay, so the interfaces are MIPI CSI-2, USB 3.0, GigE, and GMSL; let's understand the pros and cons of each camera interface before selecting one. MIPI CSI-2, of course, is the most reliable camera interface for Edge AI. It supports very high bandwidth, and we can get raw images from it; since it's a raw image, no processor bandwidth is used for this camera. The limitation here is the range: we cannot go beyond 50 to 100 centimeters of cable length with MIPI cameras.

With USB we can go for a longer cable, of course. It also has a bandwidth limitation, but USB has fair enough bandwidth for full HD at 60 frames per second.

So, if you are looking for a PoC with a decent camera, then USB 3.0 would probably be a better choice than the other interfaces: it has decent reliability, it supports high bandwidth, you can go for three or five meters of cable, and it can support both uncompressed (YUV) and MJPEG formats.

GigE is the next one on the list. If you are looking for a longer cable length, then GigE would be the best choice; it can go for several meters. But it cannot support higher bandwidth, and GigE cameras often provide an encoded stream, so the quality may not be good enough for the application.

GMSL is a SerDes (serializer/deserializer) method for transferring camera data over longer cable lengths.

So, these are the cameras I would recommend for the smart shopping cart, digital signage, and smart ATMs.

Like I said, for outdoor conditions I would recommend an HDR camera, and we have a full HDR camera in both MIPI and USB interfaces.

The next thing is the facial recognition application. If you are just looking for facial recognition and are not going to do spoof detection, then a 2D camera will probably be fine, but if you need to do spoof detection, like presentation attack detection, then you might need a 3D camera which will give you both a 2D image and depth information, to check whether it is actually a real person or a photo or a video.

Let's assume you want to capture images in a shopping cart; if you want to capture objects while they are in motion, you would need a global shutter camera. A rolling shutter camera will create rolling shutter artifacts that might create problems for your algorithm detecting the object, whether it is a fruit or a barcode; in both cases, if it is a moving object, a global shutter would be the better choice.

Then there is the high-resolution camera: imagine digital signage or a kiosk. If the person is standing two to three feet away from the kiosk or signage, the resolution of the face alone is going to be really low. A high-resolution camera, along with autofocus functionality, will be really helpful.

As I mentioned already, all these cameras are available in both USB and MIPI interfaces.

So here comes the major component, the processor platform for AI and ML.

As Maha said earlier, you have to choose the processor based on the horsepower required for this particular algorithm, and also, it has to fit into your budget.

So you can choose a low-cost processor with decent capability, try to run basic things on the processor, and do the rest on the cloud.

These are the most popular processor platforms for cloud-based processing. You can run some basic algorithms on these processors, but you don't really have a dedicated GPU or TPU or any AI/ML engine running on them, so you may not be able to run a full-fledged Edge AI algorithm here. You can use these platforms for grabbing the images, encoding them, and sending them across the network to the cloud; as I said, you can do the basic stuff, but the majority of the processing should be done in the cloud.
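A minimal sketch of that grab-encode-send pattern is below; the endpoint URL is hypothetical, and any HTTP or MQTT transport would do equally well.

```python
# Grab a frame, JPEG-encode it, and ship it to a cloud inference service.
import cv2
import requests

CLOUD_URL = "https://example.com/api/v1/infer"   # hypothetical endpoint

cap = cv2.VideoCapture(0)                        # USB camera on a Pi-class board
ok, frame = cap.read()
if ok:
    # JPEG keeps the payload small enough for a constrained uplink.
    ok_enc, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 85])
    if ok_enc:
        resp = requests.post(
            CLOUD_URL,
            data=jpeg.tobytes(),
            headers={"Content-Type": "image/jpeg"},
            timeout=5,
        )
        print(resp.status_code, resp.text)       # cloud returns the detections
cap.release()
```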

Also, you can add an Intel Movidius stick to these platforms, which will let you use them for Edge AI processing as well.

The first one is the Raspberry Pi; that's a quad-core processor available as both a SOM and a single-board computer, and the most popular low-cost Edge device.

Rockchip is another low-cost quad-core processor, and the NXP-based i.MX6 and i.MX8 processors are available in various core combinations, again good enough as low-end Edge processors; these are off the shelf and ready to deploy as well.

These are the most popular Edge AI devices; you have special cores here for running your algorithm on the Edge itself.

The first one is the Google Coral. It is actually based on an NXP i.MX8 processor, but it comes with a dedicated tensor co-processor (the Edge TPU) which can be used for Edge AI processing.

Google provides this SOM. This SOM is still at an early stage, and we already support MIPI cameras for it.

The next one is the Qualcomm QCS. It is different from a Snapdragon processor; it is specifically designed for Edge AI and has a dedicated AI engine along with the processor. As I said, each of these processor families has its own AI engine and also has the option to connect multiple cameras.

So if you are looking to interface multiple cameras, Qualcomm and Nvidia are probably the most preferable, because Nvidia Jetson processors can take up to six cameras streaming full HD at 30 frames per second. Qualcomm can also support multiple cameras, so if your application is going to have multiple cameras, you might want to consider a Qualcomm or Nvidia processor.

As I said, Nvidia is the next one. Nvidia has a wide range of processors, right from the Nano all the way up to the Xavier. The Nano is the most cost-effective low-end AI processor, and the AGX is a high-end Edge AI processor.

So Google calls it a TPU, Qualcomm calls it an AI engine, and Nvidia calls it a GPU, but all of them have the ability to run full-fledged AI on the Edge.

Okay, I'm handing over to Sarath, who will take you through the algorithms. Thank you.

Thank you for detailing the cameras and platforms for the solutions.

Good morning and good afternoon to everyone. Let me reiterate what Maha said: to build a solution, we need a camera, an Edge AI hardware platform, and an algorithm.

So, in this section, I will be explaining the details of algorithm development.

So, in algorithm development, there are multiple steps: data set preparation, model development, training, accuracy analysis, performance tuning, and deployment. Data set preparation is very crucial for any AI solution development, because the data set is the key to achieving better results in any AI application or solution.

So a data set needs to be collected from a particular customer, or, in the case of freely available data sets, the data needs to be cleaned up, annotated, and separated into a training set and a test set. In the case of limited data, we do something called data augmentation, where we try to multiply the given set of data by introducing different lighting effects and orientations. We also introduce some form of blurring, etc., to match real-world use cases.
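As a small illustration of that augmentation step, here is a sketch using torchvision transforms; the specific library, file names, and parameter values are just one possible choice, not the exact pipeline used in the projects described here.

```python
# Data-augmentation sketch: brightness/contrast, rotation, blur, and flips
# multiply a small data set into more varied training samples.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),   # lighting variation
    transforms.RandomRotation(degrees=25),                   # orientation variation
    transforms.GaussianBlur(kernel_size=5),                  # mild defocus/motion blur
    transforms.RandomHorizontalFlip(p=0.5),
])

original = Image.open("apple_001.jpg")                        # hypothetical sample image
variants = [augment(original) for _ in range(10)]             # 10 augmented copies
for i, img in enumerate(variants):
    img.save(f"apple_001_aug_{i}.jpg")
```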

So for any AI solution, the data set is the key, and it needs to be prepared before you do anything else in AI solution development.

The second step is model development, where we generally try to figure out if there are any standard models; if one doesn't exist, we build it from the ground up.

In the case of model development, we generally work on customizing 5 to 10 layers of a given model, and we also try to figure out a loss function, which is very critical. The model is trained, we figure out whether the model has the required amount of accuracy, and we tune the model based on the learning and the loss function.

We also employ techniques such as non-maximum suppression in the case of detection models, where we get duplicate detections; these are part of model development. Once we have done data set collection and model development, the model needs to be trained.
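For reference, here is a minimal non-maximum suppression sketch, a generic NumPy version rather than the specific implementation used in any particular framework.

```python
# Non-maximum suppression: keep the highest-scoring box and drop overlapping
# duplicates whose IoU with a kept box exceeds a threshold.
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,). Returns kept indices."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of box i with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes that overlap less than the threshold.
        order = order[1:][iou < iou_thresh]
    return keep
```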

So, for training, we employ the data set which we would have prepared in step one, and we do our training either on a multi-core system in-house, or we use cloud systems for training the model.

So we constantly monitor the training process to see if the model is learning; if it has stopped learning or reached a stage where there is no considerable learning, the training process is concluded.

Then we go to the next step, which is accuracy analysis. In accuracy analysis, we take the test set, feed it to the trained model, and examine the output from the model to check whether it has reached the set accuracy metrics.

In the case of object recognition models, we have metrics such as mAP (mean average precision), and in the case of classification models, we employ metrics such as top-1 or top-5 accuracy to see whether a model is performing to the level of accuracy required for a particular use case.
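A minimal sketch of the top-1/top-5 computation over a batch of classifier outputs is shown below, in plain NumPy; the tiny score array is made-up data purely for illustration.

```python
# Top-k accuracy: a prediction counts as correct if the true class is among
# the k highest-scoring classes for that sample.
import numpy as np

def top_k_accuracy(scores, labels, k):
    """scores: (N, num_classes) array of model outputs; labels: (N,) true class ids."""
    top_k = np.argsort(scores, axis=1)[:, -k:]            # k best classes per sample
    hits = [label in row for label, row in zip(labels, top_k)]
    return float(np.mean(hits))

# Tiny illustrative batch: 3 samples, 4 classes.
scores = np.array([[0.10, 0.70, 0.10, 0.10],
                   [0.30, 0.20, 0.40, 0.10],
                   [0.25, 0.25, 0.20, 0.30]])
labels = np.array([1, 0, 2])
print(top_k_accuracy(scores, labels, k=1))   # only the first sample is top-1 correct
print(top_k_accuracy(scores, labels, k=2))   # higher, since top-2 gives more slack
```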

Post accuracy analysis, we do something called performance tuning, because by this time you would have taken a model, trained it, and checked the accuracy. Now this has to run on an Edge AI platform, which has very limited resources compared to the cloud or to a PC with a big graphics card.

So, we take this model, and we try to deploy it for your specific hardware, which is chosen for a given use case.

When we deploy our model on an Edge processor, what generally happens is that the performance drops.

The reason is that these models are generally too heavy to run on a small device such as a Nano, TX2, or Xavier from, let's say, Nvidia.

So what we do is employ techniques such as pruning, where we figure out that, if there are, say, 50 layers, some of the layers are not contributing enough to the decision, so we try to cut down those layers and save the cycles required to do that processing.
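As one concrete, simplified example of pruning, here is a sketch using PyTorch's built-in magnitude pruning on a stand-in model; real pruning workflows usually prune, fine-tune, and re-evaluate iteratively, and the 30% amount here is an arbitrary illustration.

```python
# Magnitude-pruning sketch: zero out the 30% smallest weights in each conv layer.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(                 # stand-in for a real detection backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # prune 30% of weights
        prune.remove(module, "weight")  # make the pruning permanent (bake in zeros)

# Sparsity check: fraction of zeroed parameters after pruning.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.2%}")
```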

So one technique is pruning, and we also try to analyze precision versus accuracy. Suppose we employ floating-point 32 in our particular inference engine; we try to see whether floating-point 16 is good enough to meet the same accuracy, so that the processing can happen faster.
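A quick sketch of that precision check in PyTorch follows; the stand-in model and random input are only for illustration, and the real comparison would be run against the actual test set rather than a single tensor.

```python
# Precision-vs-accuracy sketch: run the same input through an FP32 and an FP16
# copy of the model and compare the outputs before trusting the faster variant.
import copy
import torch
import torch.nn as nn

model_fp32 = nn.Sequential(            # stand-in for a trained model
    nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.LazyLinear(10)
).eval()

x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    out_fp32 = model_fp32(x)

if torch.cuda.is_available():          # FP16 is mainly a GPU/edge-GPU optimization
    model_fp16 = copy.deepcopy(model_fp32).half().cuda()
    with torch.no_grad():
        out_fp16 = model_fp16(x.half().cuda()).float().cpu()
    print("max abs difference:", (out_fp32 - out_fp16).abs().max().item())
    # If this difference (and the accuracy on the real test set) is acceptable,
    # FP16 inference can be deployed for higher throughput.
else:
    print("No GPU available; FP16 comparison skipped in this sketch.")
```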

So we employ a few techniques such as these in our performance tuning stage. Once the performance tuning is done, we go for deployment.

This particular slide talks about the various platforms that are available to you when you do AI solution development.

The first one is the VisAI platform; this is a platform which we have developed with over three years of R&D at VisAI Labs.

The VisAI platform comes with pre-trained and customizable models and tools which one can use to create solutions for use cases such as smart checkout, smart digital signage, and smart surveillance.

The models can be ported with very minimal effort to most of the popular Edge AI platforms available in the market today.

The platform is also easily scalable and deployable to production systems.

One can easily evaluate our platform by taking the evaluation version and using it for a particular use case.

The next framework is Nvidia DeepStream. DeepStream is developed by Nvidia and comes with pre-trained models, the most popular of which is PeopleNet, used for people detection. Nvidia has also released a Transfer Learning Toolkit, using which a pre-trained model can be further trained and deployed for a particular use case.

The next framework is OpenVINO, which is developed by Intel, but it doesn't come with a transfer learning toolkit. Again, OpenVINO comes with standard models for specific use cases which one can use to develop prototypes.

And the last one on this slide is TensorFlow Lite; it is a go-to framework for Edge AI development, especially in the case of Android GPUs such as Mali, and companies like Qualcomm provide TensorFlow Lite optimization tools for their chipsets.
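For completeness, here is a minimal TensorFlow Lite inference sketch; the model file name is hypothetical, the dummy input only matches whatever shape and dtype the model reports, and hardware delegates (GPU, NNAPI, Edge TPU) would be added for acceleration.

```python
# TFLite inference sketch: load a converted .tflite model and run one frame.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detector.tflite")  # hypothetical model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's reported shape and dtype; a real frame would
# be resized and normalized to this shape before inference.
shape = input_details[0]["shape"]
dtype = input_details[0]["dtype"]
dummy_frame = np.zeros(shape, dtype=dtype)

interpreter.set_tensor(input_details[0]["index"], dummy_frame)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print("raw model output shape:", output.shape)
```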

This particular slide talks about our solution for a smart cart; a smart cart solution involves developing object detection and classification models.

The right side of the slide lists various data sets that are freely available, which one can pick up and use to train models when developing a smart cart application.

Generally, a smart cart application will involve thousands to lakhs of different objects that need to be detected and classified. In a retail space, things can vary from fruits and vegetables to goods used by retail customers.

If you try to train all of this using a single model, performance generally takes a hit, especially on Edge hardware. The way VisAI Labs has solved this problem is by using a model hierarchy.

What we have done is use a class and subclass hierarchy, where the main classifier will classify something as a fruit and hand it over to a subclass model that will actually recognize it as an apple or a banana.
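A schematic sketch of that two-stage idea follows; the classifier functions are hypothetical stand-ins, and the point is only the routing from a coarse class to a specialised sub-model.

```python
# Two-stage class/subclass routing: a coarse model picks the category,
# then a smaller specialised model picks the exact item within it.

def predict_coarse(image):
    """Stage 1 (stand-in): a real model would return 'fruit', 'vegetable', etc."""
    return "fruit"

SUBCLASS_MODELS = {
    "fruit": lambda image: "apple",        # stand-in for a fruit-only classifier
    "vegetable": lambda image: "carrot",   # stand-in for a vegetable-only classifier
}

def classify_item(image):
    category = predict_coarse(image)
    sub_model = SUBCLASS_MODELS.get(category)
    if sub_model is None:
        return category                    # no specialised model: fall back to the coarse label
    return sub_model(image)                # e.g. 'apple' or 'banana' rather than just 'fruit'

print(classify_item(object()))             # dummy "image"; prints 'apple' in this sketch
```

Keeping each sub-model small means only two lightweight models run per frame, instead of one huge classifier covering every product in the store.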

This is an innovation we made to solve the performance problem for the smart cart application.

This particular slide explains the digital signage application we developed. In the case of the digital signage application, we had to develop three different models to address face detection, age and gender prediction, and gaze estimation. The slide lists the different data sets available on the internet. A smart digital signage application also needs privacy, because it is placed in public, and customers do not want their images stored on the system for privacy reasons.

So what we have developed is called anonymous video analytics, where on the Edge we read the images from the camera, process them, and discard them. All the analytics derived from these images go to the cloud, but none of the images are stored in the cloud; this is how we ensure the privacy of customers in the smart digital signage solution.
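A minimal sketch of that pattern is below: frames stay on the device, and only derived metadata leaves it. The analyze_frame function and the cloud endpoint are hypothetical placeholders, not the actual VisAI implementation.

```python
# Anonymous video analytics sketch: process frames on the Edge, send only
# metadata (counts, demographics, gaze) to the cloud, never the images.
import json
import time
import cv2
import requests

ANALYTICS_URL = "https://example.com/api/v1/analytics"   # hypothetical endpoint

def analyze_frame(frame):
    """Stand-in for the face / age-gender / gaze models; returns metadata only."""
    return {"faces": 0, "avg_dwell_seconds": 0.0}

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    metadata = analyze_frame(frame)           # inference happens on the Edge
    metadata["timestamp"] = time.time()
    requests.post(ANALYTICS_URL, data=json.dumps(metadata),
                  headers={"Content-Type": "application/json"}, timeout=2)
    del frame                                  # frame is discarded; never stored or uploaded
cap.release()
```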

So this involved developing three different models: for face detection, age and gender prediction, and gaze estimation.

So, we had to develop all these three models, combine them and produce the end solution for a customer.

This particular slide talks about a smart surveillance solution which we developed for a customer. As can be seen, most of the data sets we could find on the internet were non-commercial, so we had to develop our own data set and follow the process of data augmentation.

So we had to collect 2000 different images, and we had to perform the data augmentation, which I described as the first step when I started explaining about algorithm development.

We had to develop two models for this particular problem: one is the bounding box model, and the other is the classification model, and we had to deploy the solution using these two models. That is the end of the algorithm development presentation from me. Thank you!

Now I hand over the presentation to Mr. Alphonse.

Good day All!

I think you all had a really interesting time understanding the four-week PoC methodology for building Edge AI and computer vision solutions.

We were predominantly focused on explaining how we have used this particular methodology to help a number of customers get their Edge AI and computer vision applications to market fast.

Some of the other use cases we built include smart digital signage, where we helped incorporate AI into the digital signage within four weeks. So what did we do?

We put together a solution with a Raspberry Pi, an Intel Movidius stick, and an e-CAM50 camera, which can identify a person's age and gender while also calculating gaze detection, dwell detection, and emotion.

How does this actually help the customer? Streaming almost-4K video from every single digital signage unit to the cloud and processing it there is not viable from a bandwidth point of view, and it also raises privacy concerns. Instead, if you can take the video, work on it at the Edge, and just send out the metadata, you not only have better economics but also safeguard the privacy of your customers.

The other is a smart surveillance solution, which is basically an anti-spoofing algorithm we built for a smart ATM.

We used our people detection service along with the Tara stereo 3D camera that Mr. Gomathi had showcased in a previous slide.

We used a hybrid Edge-cloud deployment, and we interfaced the solution with the bank.

Now, all this begs the question: what expertise do you actually require to build this end-to-end solution? And why are enterprises working with VisAI Labs and e-con Systems to incorporate Edge AI into their products? The reason is that VisAI Labs and e-con Systems together are a one-stop provider for all your Edge hardware, algorithm, and software needs.

The reason is very simple. Think about this: you, as an end client, are a project manager, and you work with one particular partner for your camera, one for your Edge processor, and one to get your algorithms going.

Now, your camera is getting the stream of images out correctly, but somehow the algorithm is not able to capture these images into its pipeline; what do you do?

You talk to your algorithm vendor, and they say, "Hey, talk to the camera people; I think it's their problem." The camera vendor says, "Hey, the images are coming out fine; talk to the processor people."

So it goes around and around. But if you have a single partner who has expertise across all three of these spaces, you have just one party who is answerable to you and will get the product into your hands.

And there are very few companies who can claim to build their own cameras while having partnerships with all the major Edge processor firms, whilst also having capability across all the major machine learning algorithms as well as their own Edge-optimized algorithms in their platform.

So, simply put, there are two major reasons why enterprises reach out to VisAI Labs. One is that they want to incorporate AI and computer vision into their existing product or solution.

That is like what we saw in digital signage. The other is to create a new product from scratch, taking advantage of the best computer vision can offer, like the smart surveillance solution.

In the end, of course, this is to explain to you, if you're thinking of building an Edge AI or computer vision solution, why enterprises are choosing VisAI Labs and e-con Systems: it is because we have expertise in both Edge and cloud.

We have the ability to provide both hardware and software, the ability to accelerate your product development using our IP, which is the VisAI platform, as well as our product development methodology, and the ability to provide a solution from product ideation to prototype manufacturing.

We help you do product ideation, build the PoC, choose the camera, choose the processor, build and optimize the algorithm, scale the product to a production-ready solution, and also do the prototype manufacturing.

And most importantly, we are not a jack of all trades in Edge AI; we have focused capability in certain industry verticals and specific use cases. For example, the major industries we work in include retail, logistics and supply chain, and industrial, while the major use cases we have worked on are people and vehicle detection and tracking, object quality detection, and object dimensioning.

So quality inspection, people detection, and face detection all come right up our alley.
