The Industry 4.0 revolution has transformed machines into data-generating artefacts. A wide range of devices, acquisition cards, data-gathering systems, the Computerized Numerical Control (CNC) itself, and ad-hoc applications can nowadays monitor the machine and the ongoing process, extracting hundreds of machine indicators per second. The same applies at the shop-floor level, where several machines are managed and monitored as a single entity.
This leads to a huge amount of data being generated at every moment for every plant and every machine. How this data is stored varies depending on the gathering platform and client requirements, resulting in a mix of heterogeneous data sources to manage when developing data-intensive applications.
This data is highly valuable. Beyond simple dashboards, its exploitation leads to the concept of Social Machines. This concept describes a near-future scenario where machines learn and adapt from other similar machines: they learn not only from their own behavior but also communicate and exchange data with machines operating in any geographical location around the world.
The exchanged data and computation results serve several objectives, such as:
- Condition monitoring: use the lifetime experience of other machines to assess the remaining lifetime of critical components.
- Production: suggest parameter values to increase productivity.
- Working parameters: customize the machine configuration dynamically to increase the Mean Time To Failure (MTTF) and improve machine availability.
- Smart maintenance plan: acquire new maintenance knowledge and propose a maintenance strategy based on this information.
Focusing on this last point, IDEKO, leader of the Industry 4.0 case study, will develop an advanced technical service application that improves how machine incidents are handled when they occur. Currently, when an incident occurs, the customer contacts the machine manufacturer's technical service. The manufacturer tries to identify the failure based on the following aspects:
- The customer's description of the incident
- The values of key machine indicators, obtained by:
  - Connecting remotely to the machine using TeamViewer-like tools
  - Asking the customer to read them
- Previous experience with similar failures
Once the origin is detected, if the fix cannot be applied remotely, an engineer must travel to the customer's shop floor to inspect the machine, confirm the fault, and attempt the repair. If that person's expertise is not suitable for the repair, usually because the initial failure-evaluation data was insufficient to select the right person, another person must travel. This strategy results in delays both in detecting the origin of the incident and in repairing it. These problems make this an appropriate scenario for developing high added-value applications.
IDEKO's proposed technical service application will help address these common technical service drawbacks. The app will make it possible for the technical service to have a detailed view of the state of the machine whenever it is needed. The main objectives to achieve are the following:
- Reduce diagnostics time
- Decrease response time
- Decrease travel costs
- Prevent failures of other machines
The objective is to be able to identify anomalous behavior by analyzing machine data. Once an anomaly is identified, the application should provide a clear, dashboard-like view of the machine's status at that very moment, so the origin of the anomaly can be debugged.
The above scenario depicts a decoupled-from-machine application; that is, something happens on the machine, it is identified in near real time, and then some analysis is triggered. A technical service operator can debug the related event and machine status postmortem. However, detecting anomalies in real time is very powerful: the machine operator could, for example, be notified when such an event happens. Notifying the operator in real time is tricky, as there are many different CNC control brands and the application integration mechanisms for each differ greatly.
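The near-real-time anomaly identification described above can be sketched with a simple rolling-statistics check on an indicator stream. This is an illustrative approach only; the detector class, window size, and threshold are assumptions, not the actual method used in the IDEKO application.

```python
from collections import deque
import statistics

class RollingAnomalyDetector:
    """Flag indicator samples that deviate strongly from recent history.

    Illustrative sketch: a z-score test against a rolling window of
    recent samples, not the detection method prescribed by DITAS/IDEKO.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent indicator samples
        self.threshold = threshold          # z-score cut-off

    def update(self, value):
        """Feed one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) >= 10:  # wait until we have some history
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

# Simulated spindle-load stream: a steady pattern followed by a spike.
detector = RollingAnomalyDetector(window=30, threshold=3.0)
stream = [20.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
flags = [detector.update(v) for v in stream]
```

In a deployment, each flagged sample would trigger the dashboard snapshot (or the operator notification) described above instead of just returning a boolean.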
Using the DITAS framework to develop the application gives transparency to the data sources by masking them behind VDCs. Configuring different data sources has been a continuous problem for application developers in this sector, where every application has to configure its own source, whether edge or cloud. Configuring each machine and remembering every data endpoint has long been a notoriously tough job in the industrial sector.
The abstraction the VDC gives the developer, who only switches between VDC endpoints to get data from different sources, greatly facilitates the development of the application. Finally, having the VDC centralized in the Node-RED-based CAF eases debugging if any data source problem occurs.
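The endpoint-switching abstraction can be illustrated with a minimal sketch. Every name here (the endpoint URLs, `VdcClient`, `get_indicator`) is hypothetical, not the real DITAS CAF API; the point is that the application code stays identical while only the VDC endpoint changes.

```python
# Hypothetical sketch of the VDC abstraction: the application talks to one
# uniform interface, and switching data sources means switching endpoints.
# Endpoint URLs and method names are illustrative, not the DITAS API.

VDC_ENDPOINTS = {
    "edge": "http://machine-42.local:8080/vdc",
    "cloud": "https://plant.example.com/vdc",
}

class VdcClient:
    """Uniform client: the caller never sees the underlying data source."""

    def __init__(self, endpoint):
        self.endpoint = endpoint

    def get_indicator(self, machine_id, name):
        # In a real deployment this would be an HTTP call such as
        #   GET {endpoint}/machines/{machine_id}/indicators/{name}
        # Here the response is faked so the sketch is self-contained.
        return {"endpoint": self.endpoint, "machine": machine_id,
                "indicator": name, "value": 20.3}

# Application code is unchanged whether data comes from edge or cloud.
readings = []
for source in ("edge", "cloud"):
    client = VdcClient(VDC_ENDPOINTS[source])
    readings.append(client.get_indicator("M42", "spindle_load"))
```

Because the client interface is the same for every source, the per-machine endpoint bookkeeping the text complains about collapses into a single configuration table.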