Mark Thienpont (delaware): “Machine Learning sometimes starts on the back of a napkin”

With Azure Databricks and Azure Machine Learning, delaware offers a familiar and scalable work environment to train and deploy machine learning models. AI technology in an industrial setting has enormous potential. Mark Thienpont, Lead Expert Data Science at delaware, sees all kinds of new ways to transform business processes. “You can compare it to the way the introduction of the assembly line changed car manufacturing in 1913. Today, more than a century later, AI and Machine Learning (ML) could mean the same for almost any industry. For large businesses, as well as for SMEs.”

delaware.ai

It is not the gathering of (big) data itself that is the new gold, but rather the insights it can bring, and even more so the possibility of automatically embedding those newly found insights in intelligent processes. delaware.ai was founded over four years ago and offers consulting and services for improving production processes. “Within this group of experts, we emphasize three main themes: computer vision, natural language and data mining. That combination allows us to find the factors that determine product quality and to manage those quality levels better.”

“All data generated by machines, sensor and telemetry data, are stored in a so-called historian database”, explains Mark. “These data are mostly used only to manage processes in the very short term. They will, for example, indicate downtime during the previous day or night, but without any additional context and with algorithms being mostly non-existent. Our proposition is to offload production data, build up a history and bring together all kinds of historic business data. In most instances, this approach is instrumental in identifying the cause of an issue in the production process and, more importantly, in being able to predict with a certain level of reliability at what moment a certain issue might pop up.”

DEL20 project

When it comes to ML, delaware likes to co-experiment with clients. Some insights have come from the experimental DEL20 initiative the company entered into with twenty of its most innovative customers in Belgium. The project’s name is derived from the BEL20. “We brought these companies together in one group. They all vote on each other’s use cases, and the five most interesting ones are implemented. It’s a great way to collaborate: we become more inspired and the client can work on a use case at zero cost.”

One of the outcomes of this experiment is a project with a client active in textiles. With production sites in different locations all over the world, the company was faced with a lot of downtime. “They really wanted to tackle this issue. The idea we worked on with them is sublime in its simplicity. In a normal production process, one operator oversees approximately 10 machines. But by determining at the start of each shift which 3 machines really require focus, based on historic data, production downtime has been reduced by 30 percent. We were able to do this by gathering insights from big data, feeding them into a cloud-based machine learning tool and then re-using those data in the IT architecture.”
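As a rough illustration of the approach, here is a minimal Python sketch of how such a per-shift ranking could work, assuming a classifier has already been trained on historic per-machine downtime data; the file names and feature columns are hypothetical and not delaware’s actual implementation:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historic data: one row per machine per shift,
# with aggregated sensor features and a downtime label.
history = pd.read_parquet("machine_shift_history.parquet")
features = ["vibration_mean", "temperature_max", "runtime_hours", "days_since_maintenance"]

model = GradientBoostingClassifier()
model.fit(history[features], history["had_downtime"])

# At the start of a new shift: score every machine and flag the
# three with the highest predicted downtime risk for the operator.
current = pd.read_parquet("machine_status_current_shift.parquet")
current["downtime_risk"] = model.predict_proba(current[features])[:, 1]
focus_machines = current.nlargest(3, "downtime_risk")["machine_id"]
print(focus_machines.tolist())
```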

Azure Machine Learning

In his talk during the Tweakers Meet-up AI on 11 February 2021, Mark elaborated more on process optimization and demonstrated applications for predictive maintenance. “We showed an operational technical set-up that uses cloud computing for cheap storage with Azure IoT and Azure Data Lake. The flexible computing power for data preparation, data processing and AI/ML model training comes from Azure Databricks. We also addressed an edge computing set-up with combined Azure and on-premises resources to enable 24/7 intelligent operations.”
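As a sketch of what that set-up can look like on the Databricks side, the snippet below reads IoT telemetry landed in Azure Data Lake Storage into a Spark DataFrame and aggregates it into training features; the storage account, container names and columns are hypothetical:

```python
# Runs on an Azure Databricks cluster, where the Spark session is available as `spark`.
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 location where Azure IoT telemetry is landed.
telemetry_path = "abfss://telemetry@exampledatalake.dfs.core.windows.net/historian/2021/"

telemetry = spark.read.format("parquet").load(telemetry_path)

# Aggregate raw sensor readings per machine and per hour as training features.
features = (
    telemetry
    .groupBy("machine_id", F.window("event_time", "1 hour"))
    .agg(
        F.avg("vibration").alias("vibration_mean"),
        F.max("temperature").alias("temperature_max"),
    )
)

features.write.format("delta").mode("overwrite").save(
    "abfss://features@exampledatalake.dfs.core.windows.net/machine_hourly/"
)
```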

One of the benefits of working with Databricks is the way it facilitates distributed computing, explains Mark. “Databricks is built for parallel computation, but as a starting point you can just as well use the widely known pandas library, which is purely single-threaded. Scaling up a Databricks cluster (more parallel nodes or a different CPU or memory configuration, ed.) is very easy and quick to do, by default by the developers themselves. The cost is calculated based on the number of minutes the cluster is active, which makes it very price-competitive.”
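A short sketch of what that scaling path looks like in practice: the same aggregation written once with single-threaded pandas and once with the pandas API on Spark (available in Spark 3.2+ and on Databricks), which distributes the work across the cluster’s nodes; the file paths are placeholders:

```python
# Single-node, single-threaded pandas: fine for a first prototype.
import pandas as pd

df = pd.read_parquet("sensor_sample.parquet")
downtime_per_machine = df.groupby("machine_id")["downtime_minutes"].sum()

# The same logic with the pandas API on Spark: near-identical code,
# but the computation is spread over the cluster's worker nodes.
import pyspark.pandas as ps

sdf = ps.read_parquet("abfss://telemetry@exampledatalake.dfs.core.windows.net/full_history/")
downtime_per_machine_distributed = sdf.groupby("machine_id")["downtime_minutes"].sum()
```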

Containers and edge computing

“Injecting models into our clients’ operational systems can be done in several ways. We can deploy a Docker instance on Azure via a web service, if needed backed by a Kubernetes cluster for cases that require a more robust set-up. This web service calculates predictions based on the ML model.”
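In a minimal sketch, such a prediction web service boils down to a scoring script with the `init()` and `run()` entry points that Azure Machine Learning packages into a Docker image, which can then be deployed to Azure Container Instances or an AKS cluster; the model file name and payload fields below are hypothetical:

```python
# score.py -- entry script that Azure Machine Learning packages into a Docker image.
import json
import os

import joblib
import pandas as pd

model = None

def init():
    # Called once when the container starts: load the registered model from disk.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."), "downtime_model.pkl")
    model = joblib.load(model_path)

def run(raw_data):
    # Called for every request: parse the JSON payload and return predictions.
    records = json.loads(raw_data)["data"]
    features = pd.DataFrame(records)
    predictions = model.predict_proba(features)[:, 1]
    return json.dumps({"downtime_risk": predictions.tolist()})
```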

There is also a third option, as Mark demonstrated during the Tweakers Meet-up: an edge computing solution. “I showed an edge computing set-up with combined Azure and on-premises resources, for 24/7 intelligent operations. You can in fact plug containers into a machine that is guaranteed to be continuously accessible on the work floor.” The examples showcase the versatility of ML solutions. “Sometimes, people have an idea for a use case, which we will, so to speak, quickly doodle on the back of a napkin. That can be enough to spark an optimization project. When you talk about continuous monitoring and ML embedded in your IT processes, you are in fact talking about continuously improving operational excellence. That is exactly what we strive for by using machine learning.”

Source: https://tweakers.net/plan/3014...


Need some help identifying the best applications of Industry 4.0 technologies in your company? Get in touch with one of our experts.

Our expert

Mark Thienpont

