IIoT is the Industrial Internet of Things:
the IoT applied to industrial settings. At its core it is the process of extracting analog data from machines in the factory, converting it to digital form, transmitting it, storing it, summarizing/compressing/reorganizing/analyzing it and displaying it.
Critical areas of operations to be managed
The most basic information factory management looks for is production and efficiency, in terms of OEE (Overall Equipment Effectiveness). There are others. Energy is a critical area, as it is a “green” cost saving. Quality is a must: monitoring of critical-to-process and critical-to-quality parameters, and safety parameters like critical temperatures, pressures, fluid levels and the effectiveness of safety systems. Maintenance is an area where there is very little information, and most maintenance is carried out through ad hoc decisions. Monitoring part changes, preventive maintenance activities and machine health is vital.
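The standard OEE formula is Availability × Performance × Quality. A minimal sketch, with illustrative figures (the function name and sample numbers are my own, not from the text):

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    availability = run_time / planned_time                      # fraction of planned time actually running
    performance = (ideal_cycle_time * total_count) / run_time   # actual output vs ideal rate
    quality = good_count / total_count                          # fraction of output that is good
    return availability * performance * quality

# Example: 480 min planned, 420 min running, 1 min ideal cycle,
# 380 parts produced, 370 of them good.
print(round(oee(480, 420, 1.0, 380, 370), 3))  # -> 0.771
```

Each factor is a ratio between 0 and 1, so OEE directly shows how far the machine is from its theoretical best.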
Functions like work order management, which require interaction with the ERP system, should reside in the ERP, for reasons that become apparent when we get to security and system architecture. For the same reason, control of the machine operation should not be an integral part of this system but should sit on a separate VLAN, something to discuss when we talk about network segmentation.
Causes, effects, events, data compression, data storage
In general, two types of
data will be recorded: causes and effects. Production and downtime are examples of “effects”, and will usually be viewed against some other data, i.e. causes: which machine, which product, which operator, which shift, what reason for stoppage, etc. Each change of cause is recorded as an “event”. For example, the completion of a particular batch of raw material, or the change of a specific part or operator, signals that the cause has changed. Unraveling the relationship between cause and effect is the key to the IIoT system.
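One way to picture this is a log of timestamped cause-change events, to which effects (production counts, downtime) are later attributed. A hedged sketch; the record fields and sample values are illustrative, not a prescribed design:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record: each change of a "cause" (operator, product,
# part, batch...) is logged as a timestamped event. "Effects" such as
# production or downtime are later joined to whichever causes were active.
@dataclass
class Event:
    timestamp: datetime
    machine: str
    cause_type: str   # e.g. "operator_change", "product_change", "stoppage"
    value: str        # the new value of the cause, or the stoppage reason

log: list[Event] = [
    Event(datetime(2024, 1, 8, 6, 0), "press-1", "operator_change", "shift A"),
    Event(datetime(2024, 1, 8, 9, 15), "press-1", "stoppage", "die change"),
]

# Attributing an effect means finding the latest event of each cause type
# before the effect's timestamp -- e.g. "which operator owned this downtime?"
```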
Recording the data happens at a different level of granularity from its usage. Data is generally recorded on a per-second or per-millisecond basis, but when you look at it, that level of granularity is not required. So the data can be compressed to per-minute or per-hour resolution when viewing live data. Likewise, if you are looking at historical data you will tend to look at it in days, so it can be compressed into days. The data needs to be stored in a manner consistent with its expected usage.
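The kind of compression described here is a simple roll-up: fine-grained samples aggregated into coarser buckets. A minimal sketch, assuming samples arrive as (epoch-second, value) pairs and that a per-minute average is the summary we want:

```python
from collections import defaultdict

def compress_to_minutes(samples):
    """Roll per-second (epoch_seconds, value) samples up to per-minute averages."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts // 60].append(value)   # group readings by whole minute
    return {minute * 60: sum(v) / len(v) for minute, v in buckets.items()}

per_second = [(0, 10.0), (1, 12.0), (59, 14.0), (60, 20.0)]
print(compress_to_minutes(per_second))  # -> {0: 12.0, 60: 20.0}
```

The same idea extends to per-hour or per-day buckets for historical views; the divisor and the aggregate (average, sum, min/max) change, not the structure.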
The database needs to capture data in a manner that allows cause and effect to be correlated. It should also be possible to isolate and study parts of the database: specific shops, machines, products, dates, etc. The data needs to be extracted and presented quickly and efficiently. An SQL database should get the job done, although other options are available. MySQL is an open source system that enables interaction with the database, and it would be my weapon of choice.
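To make the slicing concrete, here is a hedged sketch of one possible table and query, using Python's built-in SQLite as a stand-in for MySQL. Table and column names are illustrative only:

```python
import sqlite3

# A toy "effects" table keyed by the causes we want to slice on.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE production (
        ts       TEXT,     -- timestamp of the effect
        machine  TEXT,
        product  TEXT,
        operator TEXT,
        qty      INTEGER
    )
""")
con.executemany(
    "INSERT INTO production VALUES (?, ?, ?, ?, ?)",
    [("2024-01-08 06:10", "press-1", "widget-A", "shift A", 50),
     ("2024-01-08 06:20", "press-1", "widget-A", "shift A", 55),
     ("2024-01-08 06:30", "press-2", "widget-B", "shift A", 40)],
)

# Isolate one machine and summarise output per product -- the kind of
# slicing by shop/machine/product/date described above.
for row in con.execute(
    "SELECT product, SUM(qty) FROM production WHERE machine = ? GROUP BY product",
    ("press-1",),
):
    print(row)  # -> ('widget-A', 105)
```

The same SQL runs essentially unchanged on MySQL; indexing the cause columns (machine, product, date) is what keeps extraction fast as the table grows.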
Where this data is to be stored is another decision: at the “edge” or in the cloud?
The “edge” and the “cloud”
Several computations should be done at the edge (or “on premises”) to avoid latency, particularly time-sensitive ones. The edge also matters where connectivity to the cloud is poor, as it provides temporary storage. For instance, when you need to know that some parameter has crossed a limit so that immediate corrective action can be taken, it is not optimal for the data to be sent to the cloud, processed and returned as output. So all real-time activity should be at the edge, with only the historian outside, either on a computer (which can be on premises) or in the cloud. Data compression, which I talked about earlier, can also be performed at the edge.
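The edge-side pattern can be sketched in a few lines: check the limit locally so the alarm fires with no round trip, and buffer the sample for later upload to the historian. The limit value, names and queue are illustrative assumptions:

```python
TEMP_LIMIT_C = 85.0   # hypothetical safety limit
upload_queue = []     # samples buffered locally when cloud connectivity is poor

def handle_reading(temp_c):
    """Decide the alarm at the edge; queue the sample for the historian."""
    alarm = temp_c > TEMP_LIMIT_C          # no cloud round trip in the loop
    upload_queue.append({"temp_c": temp_c, "alarm": alarm})
    return alarm

print(handle_reading(80.0))  # -> False, normal operation
print(handle_reading(90.0))  # -> True, immediate local corrective action
```

The queue drains to the cloud historian whenever connectivity is available, which is exactly the temporary-storage role described above.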
Having a computer on premises that does all the heavy lifting allows disconnection from the internet. A second computer that mirrors the historian and has internet connectivity gives the best of both worlds.