Quality control

Computer vision for industrial quality control 📷.

Joe LORENTZ, Dr. Cyril CECCHINEL, Dr. Gregory NAIN

Computer vision and machine learning are currently receiving much attention. In the past years, this technology has left the realm of pure research behind. Advanced machine learning models, especially deep neural networks, have been successfully applied to many practical domains, e.g., image recognition [1], self-driving cars [2] and automation [3]. Industrial quality assurance is an application field of particular interest for computer vision. The ever-increasing throughput and quality demands of modern manufacturing make it increasingly difficult to rely on the human eye for a rising number of visual quality checks.

For many applications, humans are not able to keep up with the speed of production lines, which results in either a bottleneck or reduced output quality.

Computer-vision-aided approaches are better suited for these kinds of repetitive and time-constrained tasks. Computer-aided quality checks can be split into two main categories: classical computer vision and machine learning. In this article, we introduce both variants and discuss their advantages and disadvantages. We also provide practical examples from our current projects. Lastly, we discuss a potential approach to alleviate one of the main problems of applying machine learning: the lack of suitable data.

#datascience #projects

→ Classical computer vision

The main difference between classical and machine learning based computer vision lies in how features are defined. The goal of any classifier is to detect the features which are useful to differentiate the target classes. In the case of quality assurance, this can be viewed as a list of criteria which a good piece needs to fulfil (e.g., its dimensions within tolerance) or, in contrast, criteria that classify a product as defective (e.g., a scratch).

In classical computer vision, feature engineering is performed manually, i.e., classification criteria need to be defined, and a hardware and software setup must then be designed to observe these features on the production line. For example, to check for the correct size, the piece could stop in front of a camera, and boundaries could be defined on the resulting image within which the piece must fit. The engineers would then need to make sure that the contrast between the piece boundaries and the background is strong enough to take a decision based on the pixel values.
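
To make this concrete, a minimal sketch of such a size check with OpenCV could look as follows. The tolerance values, function name and thresholding strategy are our own assumptions for illustration, not an actual implementation:

```python
import cv2

# Hypothetical tolerance window (in pixels) within which a good piece must fit.
MIN_W, MAX_W = 180, 220
MIN_H, MAX_H = 90, 110

def piece_within_tolerance(image_path: str) -> bool:
    """Decide pass/fail from the piece's bounding box in a fixed camera view."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # A strong piece/background contrast lets Otsu's method separate the two;
    # use THRESH_BINARY_INV instead if the piece is darker than the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False  # nothing visible in front of the camera
    piece = max(contours, key=cv2.contourArea)  # assume the largest blob is the piece
    _, _, w, h = cv2.boundingRect(piece)
    return MIN_W <= w <= MAX_W and MIN_H <= h <= MAX_H
```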

Classical computer vision can be a very potent solution for some use cases, especially if the number of important features is limited; after the initial feature engineering, the solution can be easily replicated on identical or similar production lines. The feature engineering stage can, however, quickly become overwhelming for advanced applications. In addition, the solutions are very sensitive to exterior influences (e.g., changing illumination) and therefore need to be carefully isolated to provide satisfactory performance.

In this context, DataThings participates in the European H2020 research project InterQ, geared towards “Zero-defect manufacturing” by leveraging the data available throughout the entire manufacturing chain to improve (and sometimes enforce) the overall quality. Within this project, we have been tasked to provide a computer vision approach to identify and measure cutting tool wear from pictures. This measurement is important for manufacturing operations since there is a direct relationship between cutting tool quality and product quality.

Currently, the analysis of these images is done manually by metrologists, who must delineate the wear area in a software tool to obtain a measurement. This approach has the disadvantage of being performed after the process: some cuts may be of poor quality that could have been avoided if the tool had been replaced earlier.

For this use case, we decided to follow a classical computer vision approach since the environmental setup is controlled. Indeed, particular attention was paid to the constraints of lighting and shadows when positioning the camera. In addition, the camera is firmly fixed, so the cutting tool to analyze is always in the center of the image. The prototype we developed integrates a contour identification algorithm based on the open-source OpenCV library.
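
As an illustration of the idea, a hedged sketch of such a contour-based wear measurement is shown below; the Otsu thresholding, the sampling scheme and the pixel-to-micron calibration factor are assumptions made for the sake of the example, not the actual prototype code:

```python
import cv2
import numpy as np

MICRONS_PER_PIXEL = 2.5  # hypothetical calibration factor for the fixed camera

def measure_wear(image_path: str, n_points: int = 5):
    """Extract the wear contour and report the wear height at several positions."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # In a controlled setup, the worn area reflects light differently from the
    # intact edge, so an intensity threshold can isolate it.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    wear = max(contours, key=cv2.contourArea)  # assume the largest blob is the wear
    x, y, w, h = cv2.boundingRect(wear)
    region = mask[y:y + h, x:x + w]
    # Measure the wear height (filled pixels per column) at evenly spaced columns.
    columns = np.linspace(0, w - 1, n_points, dtype=int)
    return [(int(c), region[:, c].sum() / 255 * MICRONS_PER_PIXEL) for c in columns]
```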


In most cases, the prototype correctly identifies the worn area and gives measurements at different points. Thus, the operators can immediately observe the evolution of the wear during the process. The evaluation of the outcomes by the metrologists themselves showed that the computer-vision-aided measurements were obtained much faster and more frequently, and were sometimes even more accurate than manual measurements. The next iteration will target a classification of the defects, to complement the detection of the area already in place. For this, a classification approach based on machine learning will be attempted.

→ Machine learning based computer vision

The defining aspect of machine learning models is automated feature engineering.

During the training stage, the model learns by extracting the most prominent features from the statistical distribution of the provided input. In the case of visual quality assurance, the input can be a single image, a collection of views from different angles or even a stream of images.

State-of-the-art deep learning models [1] usually use a multitude of convolutional layers to first encode the input pictures into feature space, followed by a single fully connected layer that provides a classification based on the extracted features.
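
A minimal sketch of this encoder-plus-classifier pattern, written in PyTorch with illustrative layer sizes (not the architecture of any specific model discussed here), could look like this:

```python
import torch
import torch.nn as nn

class QualityClassifier(nn.Module):
    """Convolutional encoder followed by a single fully connected classifier."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(  # stack of convolutional layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse feature maps to one vector
        )
        self.classifier = nn.Linear(64, num_classes)  # single fully connected layer

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.encoder(images).flatten(1)  # encode input into feature space
        return self.classifier(features)            # classify from extracted features

# Example: logits for a batch of eight 224x224 RGB images.
# logits = QualityClassifier()(torch.randn(8, 3, 224, 224))
```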

To learn the underlying feature distribution, the commonly used supervised learning approach requires a set of labeled examples, i.e., input images together with the correct class.

At DataThings, we are currently working on a project with Cebi Luxembourg S.A., a supplier for the automotive industry. Cebi already had a classical computer vision system in place to detect defective soldering points on temperature sensors.


→ Semi-supervised learning

While carefully trained supervised machine learning models provide great robustness and alleviate the need for manual feature engineering, the need for labeled datasets of high quality and quantity remains a big challenge for real-world applications. From our own experience with this project, data gathering has been by far the most time-consuming task on the way to evaluating our prototype.

With the prototype in place, gathering unlabeled data was fully automated and only limited by the speed of the production line. Labeling, however, required the input of domain experts and an individually planned collection session during which the production had to be stopped to allow us to put the collected pieces through the prototype setup.

Recently, researchers have started to investigate ways to reduce the required number of labels for machine learning. So-called semi-supervised learning approaches try to use only a limited number of labels and a much greater number of unlabeled samples during the training stage.

State-of-the-art methods [4] show impressive results, coming close to the fully supervised baseline while using only a fraction of the available labels.
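
To give an intuition of how this works, here is a minimal PyTorch sketch of the pseudo-labeling loss at the core of FixMatch [4]; the function name and the surrounding training loop are our own, and only the confidence threshold value follows the paper:

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold=0.95):
    """Loss on unlabeled images: weak and strong augmentations of the same batch."""
    with torch.no_grad():
        # Predict on the weakly augmented view to obtain pseudo-labels.
        probs = torch.softmax(model(weak_batch), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = confidence >= threshold  # keep only confident pseudo-labels
    # Train the model to reproduce those labels on the strongly augmented view.
    logits = model(strong_batch)
    loss = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (loss * mask.float()).mean()
```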

Motivated by these results we are currently investigating semi-supervised learning on our Cebi use case.

→ Conclusion

Computer vision has become an important asset in industrial quality control.
Both classical and data-driven solutions have many potential use cases. In the case of supervised learning approaches, the accessibility of labeled data remains an important point of failure.

The progress made in semi-supervised learning approaches makes them an appealing candidate to alleviate this big challenge towards the application of machine learning based solutions.

→ Acknowledgements

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 958357.

Supported by the Luxembourg National Research Fund (FNR) under grant No. 14297122.

[1] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016). DOI 10.1109/CVPR.2016.90
[2] Chen, C., Seff, A., Kornhauser, A., Xiao, J.: DeepDriving: Learning affordance for direct perception in autonomous driving. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2722–2730. IEEE Computer Society, Washington, DC, USA (2015)
[3] Li, H., Ota, K., Dong, M.: Learning IoT in edge: Deep learning for the Internet of Things with edge computing. IEEE Network 32(1), 96–101 (2018)
[4] Sohn, K., Berthelot, D., Li, C.-L., Zhang, Z., Carlini, N., Cubuk, E.D., Kurakin, A., Zhang, H., Raffel, C.: FixMatch: Simplifying semi-supervised learning with consistency and confidence. arXiv:2001.07685 (2020). http://arxiv.org/abs/2001.07685

© upklyak - fr.freepik.com / © Datathings, Cebi, CFAA


→ Contact us

If you are also interested in this type of project, do not hesitate to contact us: contact@datathings.com