[SPONSORED] How the IIoT Can Liberate Valuable Stranded Data (by Emerson)

Michael Condon from Emerson identifies common reasons automation systems produce stranded data, and explains how an edge-to-cloud industrial internet of things (IIoT) solution can simplify access to this valuable information.

By Michael Condon, Senior Product Manager – IIoT Software at Emerson Automation Solutions

Automation systems generate vast amounts of data that can be used to improve business outcomes, but only if the data can be accessed, managed and analyzed effectively. Unfortunately, this valuable data often remains inaccessible for a range of technical and commercial reasons. Newer architectures are changing this predicament by combining flexible, capable edge computing with a cloud computing model, making it not only feasible but practical to analyze this data, gain new insights and make the results available to stakeholders. This article identifies some of the most common reasons organizations struggle with stranded data, and shows how a modern edge-to-cloud IIoT solution makes this data easy to access and use.

How existing infrastructure causes stranded data

Until recently, most manufacturing data was sourced from PLCs, HMIs, SCADA and historian systems running in the operations technology (OT) domain. These systems have been focused on providing control and visibility to maximize operational efficiency and uptime. As such, accessing and analyzing associated data beyond immediate production goals is a secondary concern. 

The OT infrastructure has been designed and scaled according to operational needs, leading to choices such as:

- selecting proprietary communication protocols that meet performance requirements, but do not support flexibility and cross-vendor interoperability;
- minimizing control and sensor data collection to maximize system reliability and simplicity;
- implementing localized on-premises architectures to minimize cybersecurity threats, along with vendor lockout schemes to protect intellectual property and promote reliable machine operation, often at the expense of connectivity.

The resulting systems perform admirably in the context of their operational goals, but they suffer from data “blind spots” and do not benefit from the analysis of all potentially accessible data.

Within the OT environment, data sources appear to be open, but in reality they are quite difficult to access for applications outside the OT environment, where the data can be more easily analyzed. In addition, many potentially valuable sources of data – such as environmental conditions, condition-monitoring information and utility consumption – are not needed for production or equipment control and are therefore not collected by the automation systems. Big data analytics capabilities continue to expand, but the constrained access to stranded data continues to limit their potential.

The full white paper can be downloaded here

Types of stranded data

Stranded data exists in many forms, originating from machines, the factory floor, and other systems that are part of the OT domain or that manage the balance of plant. This data can be as granular as a single temperature reading, or as extensive as a historical log of the number of times an operator acknowledged an alarm. Typical types of stranded data include:

Isolated – assets within a facility with no network access to any OT or IT system. This is the most straightforward case, but not necessarily the easiest to solve. Consider a standalone temperature transmitter with 4-20mA connectivity or even Modbus capability. It needs to connect with some type of edge device – PLC, edge controller, gateway or other – to make this data stream accessible. In many cases, the data is not critical to machine control, so it is not available through traditional legacy PLC/SCADA data sources. Pulling the data in through the nearest machine PLC risks voiding OEM warranties due to required changes in the programming logic.

Ignored – assets connected with OT systems and generating data, but the data is not being consumed. Many intelligent edge devices provide basic and extended data. A smart power monitor can provide basic information like volts, amps, kilowatts, kilowatt-hours and more using hardwired or industrial communication protocols. But deeper data sets, such as total harmonic distortion (THD), may not be transmitted due to a lack of application requirements, low-bandwidth communications or limited system data storage capacity. The data is there, just never accessed.

Under-sampled – assets generating data, but sampled at an insufficient rate. Even when a smart device is supplying data to supervisory systems via some type of communication bus, the sampling rate may be too low, the latency too great, or the data set so large that the results are not obtained in a usable fashion. Sometimes the data is summarized before it is published, resulting in a loss of fidelity.

Inaccessible – assets generating data (often non-process, yet still important for things like diagnostics), but in a generally inaccessible format or not available via traditional industrial systems. Some smart devices have on-board data, like error logs, which may not be communicated via standard communication protocols, but nonetheless would be very useful when analyzing events that resulted in downtime.

Non-digitalized – personnel generating data manually on paper, clipboards and whiteboards, which misses the opportunity to capture this information digitally. For many operating companies, workers complete test and inspection forms and other similar quality documents in a physical paper format, without any provisions for integrating this information with digital records. A more modern approach uses digital methods to gather this data, leading to a “paperless plant”.
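The fidelity loss described above for under-sampled data is easy to demonstrate. The sketch below (plain Python; the sensor trace and values are invented for illustration) simulates a 1 Hz pressure signal containing a brief transient spike. Once the edge device averages the readings into one-minute summaries before publishing, the spike is no longer visible downstream.

```python
# Illustrative only: fidelity loss when edge data is summarized
# before publication. The sensor values are invented.

# One hour of 1 Hz pressure readings: a steady 5.0 bar baseline...
raw = [5.0] * 3600
# ...with a 10-second transient spike to 9.0 bar starting at t = 1200 s.
for t in range(1200, 1210):
    raw[t] = 9.0

# The edge device publishes only a per-minute average.
summaries = [sum(raw[i:i + 60]) / 60 for i in range(0, 3600, 60)]

print(max(raw))                  # 9.0  -> spike visible in the raw stream
print(round(max(summaries), 2))  # 5.67 -> spike smeared into the average
```

Any supervisory system consuming only the summaries would never see the transient, which is exactly the kind of event that matters for diagnostics and root cause analysis.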

Gaining value from edge-sourced data in the cloud

Stranded data is of significant interest to organizations looking to analyze operational performance across an entire production facility or multiple facilities. These organizations need solutions that transmit stranded data from the field to the cloud for logging, visualization, processing and deeper analysis. This connectivity, especially to high-level on-site and cloud-based enterprise IT systems, allows the many types of edge data to be historized and analyzed, producing deeper and longer-term analytical results, far beyond what is typically achieved for near-term production-oriented goals.

When an end user or OEM can liberate stranded data from traditional data sources and transmit it to cloud-hosted applications and services, many opportunities open up, including:

- remote monitoring;
- predictive diagnostics and root cause analysis;
- planning across machines, plants and facilities;
- long-term data analytics;
- like-for-like asset analysis within and across multiple plants;
- fleet management;
- cross-domain data analysis and analytics (deep learning);
- insights into production bottlenecks; and
- identifying where process defects originate, even if they are not detected until later in the production process.

Creating an edge solution

The purpose of IIoT initiatives is to solve the challenges of stranded data and effectively connect edge data to the cloud, where it can be analyzed. IIoT solutions incorporate hardware technologies in the field, software running at both the edge and the cloud, and communications protocols, all effectively integrated and architected to securely and efficiently transmit data for analysis and other uses.

Edge solutions can be an integral part of automation systems or installed in parallel to monitor data not needed by the automation systems. Many users prefer the latter approach, because they can obtain the necessary data without impacting existing production systems. However, the key is that these new digital capabilities can connect with all previously identified forms of stranded data.

Edge connectivity solutions take many forms: compact or large PLCs ready to connect with industrial PCs (IPCs), “edge-enabled” edge controllers, or standalone IPCs, in each case running SCADA or edge software suites. Hardware deployed at the edge may need wired I/O and/or industrial communication protocol capabilities to interact with all sources of edge data. Once the data is obtained, it may need to be pre-processed, or at least organized by adding context. Maintaining context is particularly important in manufacturing environments where hundreds or thousands of discrete sensors monitor and drive mechanical and physical machinery actions. Modern automation software systems help preserve these relative relationships and context.

Finally, the data must be transmitted to higher-level systems using protocols like MQTT or OPC UA. Today’s OT/IT standards are developing in a way that ensures the consistency and future flexibility of data and communications. It is important for any solution to be flexible yet standards-compliant, as opposed to a custom setup that will be impractical to maintain long-term. Once an edge solution is in place and can obtain the data, the next step is to make it accessible to higher-level IT systems, with seamless communications to cloud-hosted software.
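As a concrete sketch of this publishing step, the snippet below (plain Python; the site/area/device topic hierarchy and field names are a hypothetical convention, not a standard) wraps a raw edge reading with its asset context and serializes it as JSON — the kind of topic and payload an MQTT client such as paho-mqtt would then hand to a broker. Real deployments often follow a defined namespace such as MQTT Sparkplug B or an ISA-95 asset path instead.

```python
import json
from datetime import datetime, timezone

def to_mqtt_message(site, area, device, metric, value, unit):
    """Wrap a raw edge reading with asset context for publication.

    The site/area/device/metric hierarchy here is invented for
    illustration; the important point is that context travels
    with the value rather than being lost at the edge.
    """
    topic = f"{site}/{area}/{device}/{metric}"
    payload = json.dumps({
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": {"site": site, "area": area, "device": device},
    })
    return topic, payload

# A real edge client (e.g. paho-mqtt) would then publish the result:
#   client.publish(topic, payload, qos=1)
topic, payload = to_mqtt_message("plant1", "line3", "tt-101",
                                 "temperature", 72.4, "degC")
print(topic)  # plant1/line3/tt-101/temperature
```

Because the context is embedded in both the topic and the payload, cloud-side consumers can historize and analyze the reading without needing access to the originating PLC program.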

Connecting the edge to the cloud

Hosting software in the cloud offers a range of benefits. These include reduced costs: the user pays only for what they use and avoids investing in the purchase and management of IT infrastructure. Cloud computing is often described as an “elastic computing” environment because additional computing or data resources can be added in real time as required.

The cloud also relieves the user of the burden of configuring IT hardware and software systems, along with their deployment, management, performance, security and updates. Resources can instead be focused on core business objectives.

Critically, the cloud provides the ability to process big data sets efficiently, with CPU processing power scaling to the needs of the analytics. Greater accessibility is also possible: data can be accessed from anywhere, at any time, using any device capable of hosting a web browser.

Data security can be enhanced by using separate servers for storage, with backup and disaster recovery options. Quicker development is possible, with platforms immediately operational and only an internet connection and access credentials needed.

A cloud architecture fits particularly well with the needs of organizations when implementing IIoT data projects. The cloud is the enabling infrastructure of many IIoT projects and the combination of these two technologies enables innovative interactions among humans, objects and machines, giving birth to new business models based on intelligent products and services.

Conclusion

Stranded data is an all-too-common reality at manufacturing sites and production facilities everywhere. It is the unfortunate result of legacy technologies incapable of handling the data, and traditional design philosophies focusing on basic functionality at the expense of data connectivity. Only recently has the importance and value of big data analytics become mainstream, so end users are working to build this capability into new systems and add it to existing operations. 

Edge-to-cloud data connectivity delivers value in many forms of visualization, logging, processing and deeper analysis. Any IIoT solution for bridging data between OT and IT relies on digital capabilities that can interface with traditional automation elements like PLCs or can connect directly to the data sources in parallel to any existing systems. These edge resources must be able to pre-process the data to a degree and add context, and then transmit it up to cloud systems for further analysis.
