Part III: How to Report Image Recognition Mistakes and Build Trust?

In Part I, we started with the basics of image recognition accuracy. Then, in Part II, we went further: we explained how we at Eyrene calculate an accuracy level and provided you with the formula. Now it’s time for Part III, the last one in our series on image recognition accuracy. Here we’ll talk about how we demonstrate image recognition accuracy with dashboards on the Eyrene platform and what kinds of errors our clients can see there.
Image recognition accuracy dashboard in Eyrene
An image recognition accuracy dashboard is a standard dashboard in our solution, but it can be customized on request.
The dashboard consists of two diagrams, as shown in the first image below: the first captures image recognition accuracy on a daily basis, and the second breaks down mistakes by type, showing how much each type contributes to the overall percentage of mistakes. The red line on the diagram indicates the mistake level defined as acceptable in the contract.
The analysis result is presented in the image below: the average image recognition accuracy over a two-week period was 96.6%.
The process behind it is straightforward: image recognition accuracy is monitored daily, with an agreed number of images checked for errors each day.
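The daily check described above can be sketched in a few lines. This is a minimal illustration, not Eyrene's actual implementation: the field names (`facings_total`, `facings_wrong`), the sample data, and the 5% threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a daily accuracy check over a sample of images.
# Field names, numbers, and the threshold are illustrative only.

def daily_accuracy(checked_images: list[dict]) -> float:
    """Accuracy for one day: the share of checked product facings
    that were recognized correctly across the sampled images."""
    total = sum(img["facings_total"] for img in checked_images)
    mistakes = sum(img["facings_wrong"] for img in checked_images)
    return (total - mistakes) / total if total else 1.0

# The contract defines an acceptable mistake level
# (the "red line" on the dashboard); 5% here is an assumed value.
ACCEPTABLE_MISTAKE_LEVEL = 0.05

day_sample = [
    {"facings_total": 120, "facings_wrong": 3},
    {"facings_total": 80, "facings_wrong": 2},
]
accuracy = daily_accuracy(day_sample)
print(f"accuracy: {accuracy:.1%}")  # accuracy: 97.5%
print("within contract:", 1 - accuracy <= ACCEPTABLE_MISTAKE_LEVEL)  # True
```

Repeating this over each day of a reporting period and averaging the results is how a figure like the two-week 96.6% above would be obtained.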
To provide a high level of image recognition accuracy, we have quality assurance specialists on our team who know the characteristics of the customer’s products well and monitor the accuracy level daily. One of their daily tasks is selecting images for monitoring. However, some customers prefer to train in-house specialists to perform these tasks.
Over the last few years, we’ve run many large-scale projects, and we have concluded that the most favorable situation for both us and our customers is when both sides have a clear understanding of the real accuracy levels and the percentage of mistakes.
Types of image recognition mistakes we track in Eyrene
- False Negative: A product cannot be identified. In Eyrene, we mark it as FN.
- False Positive: The neural network identifies a competitor’s product as the customer’s product.
- Size: A mistake in product sizing; it is considered the most challenging type of mistake.
- Detection: The product boundaries were not properly detected.
- Panorama stitching: The images collected for creating a panorama are merged incorrectly, and as a consequence, some of the products in the image cannot be identified.
- Unknown product: The product is identified, but it is absent in the customer’s product catalog. These mistakes are registered in the database, but they don’t affect the general accuracy level.
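The taxonomy above, and the rule that "Unknown product" mistakes are logged but excluded from the accuracy figure, can be illustrated with a short aggregation sketch. The type codes (`FN`, `FP`, `SIZE`, `DETECTION`, `STITCHING`, `UNKNOWN`) and the sample log are assumptions for this example, not Eyrene's actual schema.

```python
from collections import Counter

# Mistake types that count toward the overall error rate; codes are
# illustrative labels for the categories listed above.
AFFECTS_ACCURACY = {"FN", "FP", "SIZE", "DETECTION", "STITCHING"}
# "UNKNOWN" (product absent from the customer's catalog) is registered
# in the database but excluded from the accuracy calculation.

def mistake_breakdown(mistakes: list[str]) -> dict[str, float]:
    """Share of each mistake type among accuracy-affecting mistakes,
    as shown in the second dashboard diagram."""
    counted = Counter(m for m in mistakes if m in AFFECTS_ACCURACY)
    total = sum(counted.values())
    return {t: n / total for t, n in counted.items()} if total else {}

# A hypothetical day's error log:
logged = ["FN", "FN", "FP", "SIZE", "UNKNOWN",
          "DETECTION", "FN", "STITCHING", "UNKNOWN"]
breakdown = mistake_breakdown(logged)
print({t: f"{share:.0%}" for t, share in sorted(breakdown.items())})
# FN dominates here: 3 of the 7 accuracy-affecting mistakes.
```

Summing such per-type shares over a reporting period yields the "mistakes by type" diagram described above.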
At Eyrene, we believe that everything we do with regard to image recognition accuracy in our customers’ projects builds trust, as we prove on a daily basis that the guaranteed level of image recognition accuracy is maintained over the course of the project. What’s more, every day we take steps that allow us to adjust the level of accuracy, including searching for and analyzing errors.
The dashboards available in Eyrene via the customer portal provide detailed information about every error. Therefore, customers can cross-check this information on a regular basis, if they have enough human resources for the task.
In this last post of our series on image recognition accuracy, we discussed our approach, as a vendor, to calculating and reporting image recognition accuracy. This approach is a well-structured, continuous process that leads to detailed reporting of accuracy levels and mistakes.
If you are in the retail business, we hope that this three-part series on image recognition accuracy will be helpful to you and will lead to a deep understanding of how this technology works and what results can be expected.