At Rolling Wireless, we’re always looking for new ways to optimise our performance and productivity. One of our most exciting recent initiatives is to leverage the power of machine learning to improve decision-making and operational effectiveness.
“When Rolling Wireless was established as an independent company in late 2020, the new management team had many strategy discussions,” says Sylvain Ogier, Vice President System Engineering, who is spearheading the initiative. “One of the decisions we made was to explore how we could leverage our vast amounts of data to create additional long-term value for the company and our customers.”
Thanks to their ability to analyse vast amounts of data at high speed, machine learning models enable organisations to make a broad variety of predictions. One of the most interesting use cases for Rolling Wireless is the ability to predict latent failures in modules that have passed our already stringent QA tests.
“Our customer quality level is already ‘outstanding in any industry’, according to our latest ISO 9001:2015 audit: less than 20 DPPM (Defective Parts Per Million),” says Rafet Lakhdar, Vice President, Quality at Rolling Wireless. “But with a zero PPM goal, there is no time to rest on our laurels. Our industrial processes have reached a very high level of maturity in what we may consider traditional methods, such as dynamic burn-in to detect weak parts and technology tests (TT) to predict random failure levels and wear-out time. In order to improve even further, we must look at new technologies and methods.”
On its journey through our fully automated manufacturing line, each and every Rolling Wireless NAD (network access device) undergoes more than 500 different tests. Considering that we produce around 50,000 modules per day, or a million per month, the test results add up to a lot of data.
If a given module performs close to the tolerance limit on one or two tests, that is easy to spot, and the unit can be subjected to additional testing. The real challenge is process drift that arises from a combination of many different factors.
“With manual models, it’s impossible to analyse the production test data in sufficient detail to capture early alerts of process drift that might lead to failures; we can only react to them,” says Rafet. “One of the interesting possibilities machine learning offers is the ability to identify anomalies, or outliers: test data which are still within our limits, but that are starting to drift dangerously toward upper or lower limits.”
The goal of outlier detection is to identify modules that differ in some way from the ideal (normal) units, even though they have passed our QA tests, so that we avoid shipping modules with possible latent defects.
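To make the idea more concrete, the short sketch below shows one common way such a screen could be set up, using an Isolation Forest to flag units whose combination of test measurements is unusual even though every individual value is within its limits. It is purely illustrative: the file name, column layout and contamination rate are assumptions for the example, not a description of our actual production pipeline.

```python
# Minimal sketch of outlier screening on per-unit production test results.
# File name, column layout and contamination rate are illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# Hypothetical export: one row per module, one column per test measurement,
# covering only units that already passed their individual pass/fail limits.
results = pd.read_csv("passed_units_test_results.csv", index_col="serial_number")

# Scale each test parameter so that no single measurement dominates the model.
features = StandardScaler().fit_transform(results.values)

# Isolation Forest flags units whose combination of measurements is unusual,
# even when every individual value is still within its tolerance limits.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
labels = model.fit_predict(features)        # -1 = outlier, 1 = normal
scores = model.decision_function(features)  # lower = more anomalous

outliers = results.index[labels == -1]
print(f"{len(outliers)} of {len(results)} passed units flagged for extra screening")
```

In such a setup, the flagged serial numbers would simply become candidates for the additional testing described above, rather than being rejected outright.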
As a proof of concept, we ran a test on historical test data for a batch of modules that had an unusually high return rate. Would the model have enabled us to predict which modules had a high probability of being returned by the customer as a failure?
The answer was a resounding yes. A model trained on the data extracted from our tests, and on historical failure analysis records, correctly identified the outliers that were indicative of impending failures. Had the test results from this batch been run through the model before the modules were shipped, we could have acted preventively instead of taking only corrective actions.
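For readers curious what such a workflow can look like in practice, the sketch below shows a generic supervised variant: a classifier trained on historical test measurements labelled with field-return outcomes, then used to score a new batch before shipment. The file names, column names and threshold are hypothetical and do not describe the actual model used in the pilot.

```python
# Illustrative sketch of the proof-of-concept workflow: train a classifier on
# historical test measurements labelled with return outcomes, then score a new
# batch before shipment. All names and the threshold below are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical joined dataset: production test results per module plus a label
# derived from failure-analysis records (1 = returned by customer, 0 = not).
history = pd.read_csv("historical_tests_with_returns.csv")
X = history.drop(columns=["serial_number", "returned"])
y = history["returned"]

# Hold out part of the history to check that the model generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Score a new batch before shipment: units with a high predicted return
# probability are routed to additional testing instead of being shipped.
new_batch = pd.read_csv("new_batch_test_results.csv", index_col="serial_number")
risk = clf.predict_proba(new_batch)[:, 1]
flagged = new_batch.index[risk > 0.5]
print(f"{len(flagged)} units flagged for review before shipment")
```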
The pilot was a unique opportunity to test our hypotheses on a huge dataset that called for advanced computation. By capturing complex structures in the data, the model confirmed that machine learning can be used effectively to review and improve our business strategies.
“The list of possible use cases is long, spanning from sales forecasts to R&D project planning and classification of support tickets,” says Sylvain. “The pilot has opened up exciting prospects for solving real business problems with machine learning.”