Peaxy

Manuel Terranova

MANUEL TERRANOVA, President and CEO of Peaxy, says:

Current talk would lead one to believe that industrial giants have cracked the code on predictive maintenance, thanks to the petabytes of sensor-generated data coming off machines today and landing in the hands of engineering teams; GE, for one, attributed $800 million in incremental revenue to new predictive maintenance capabilities in 2013. The reality is that only the surface has been scratched, as current IT infrastructures make it impossible to achieve zero-outage ambitions or to sustain breakthrough “smart” technology innovation.

Yes, sensor data is vital to delivering the true promise of the Industrial Internet of Things, and the ability to harness and analyze these data sets in real time will no doubt yield notable improvements. However, to reach the next level of predictive maintenance breakthroughs and “smart” products, engineering teams must be able to aggregate and compare telemetry data with the original geometry drawings and test simulations. Yet, keeping in mind that industrial machinery may remain in the field for 30 or more years, these files may be decades old and dark.

Consider, for example, a large passenger aircraft. The original design may have been created in the 1980s, while the data from various simulations and the original test bench would have been generated over the course of multiple decades. This would pose no problem for engineers if the following were true: 1) files were aggregated over the years in a consistent and logical manner throughout the organization; and 2) data management practices over the decades kept the locations of those files stable.

As anyone in data management knows, neither of these two conditions holds true for any large organization. The fact is, IT has been under a mandate to optimize storage architecture, and this directive has inadvertently put it at odds with engineering. While the standard “tech refresh cycle” that occurs every few years in most organizations does result in more efficient use of storage hardware, it also means that files are moved multiple times over the course of their long lifespan and are lost to the very engineers who need them to do their jobs. Companies must rely on “tribal knowledge” to keep track of valuable datasets. As those intimately familiar with the datasets leave, that knowledge is lost, and the files end up scattered across disparate platforms and buried under constantly shifting pathnames. Teams spend anywhere from hours to weeks tracking them down.

As the role of the CIO continues to shift toward a more strategic position, there is an opportunity to add incredible value by rethinking the standard approach to data architecture, in particular by prioritizing access to the massive, mission-critical files that are scattered about the enterprise today. The first and most fundamental step is to create an abstraction layer that separates the data sets from the underlying physical hardware so that the pathnames to those files can be preserved indefinitely. Essentially, this erases the negative effects of the tech refresh by making the physical location of the files irrelevant to the end user. Engineers access the data through the same pathname, regardless of where the data may have been moved.
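As a rough illustration, the sketch below (in Python, with hypothetical class, method, and path names throughout, not Peaxy's actual implementation) shows the core idea of such an abstraction layer: engineers always open files through a stable logical pathname, while a mapping maintained by the storage layer resolves that name to whichever physical location currently holds the bytes, and a tech refresh only updates the mapping.

```python
class VirtualNamespace:
    """Hypothetical abstraction layer: stable logical pathnames mapped to
    physical storage locations that may change with every tech refresh."""

    def __init__(self):
        # logical pathname -> current physical location
        self._table = {}

    def register(self, logical_path: str, physical_path: str) -> None:
        """Publish a dataset under the logical pathname engineers will keep using."""
        self._table[logical_path] = physical_path

    def migrate(self, logical_path: str, new_physical_path: str) -> None:
        """Called by the storage layer during a tech refresh.
        The logical pathname never changes; only the mapping does."""
        self._table[logical_path] = new_physical_path

    def open(self, logical_path: str, mode: str = "rb"):
        """Engineers open files by logical pathname, oblivious to hardware moves."""
        return open(self._table[logical_path], mode)


# Usage sketch: a decades-old geometry file keeps its pathname across refreshes.
ns = VirtualNamespace()
ns.register("/aircraft/a320/geometry/wing_v1.igs", "/mnt/nas_1987/wing_v1.igs")
ns.migrate("/aircraft/a320/geometry/wing_v1.igs", "/mnt/object_store_2015/ab12f3")
# with ns.open("/aircraft/a320/geometry/wing_v1.igs") as f:
#     data = f.read()
```

The point of the sketch is that the migrate step is invisible to the engineer: the logical pathname embedded in scripts, documentation, and tribal memory never goes stale, no matter how many times the underlying hardware is replaced.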

Meanwhile, leveraging modern distributed and clustered architectures to take advantage of the high speeds of networks, processors and semiconductor memory will eliminate bottlenecks in aggregating and accessing files, and enable companies to manage the volume, variety and velocity of data generated in their environments.
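To make that concrete, here is a minimal sketch, again in Python and with hypothetical shard paths and helper names, of fanning reads out across several storage nodes in parallel so that aggregating a large dataset is not throttled by any single node or link.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical shards of one large telemetry set, spread across cluster nodes.
SHARD_LOCATIONS = [
    "/cluster/node01/telemetry/engine_42.part0",
    "/cluster/node02/telemetry/engine_42.part1",
    "/cluster/node03/telemetry/engine_42.part2",
]

def read_shard(path: str) -> bytes:
    """Read one shard from the node that holds it."""
    with open(path, "rb") as f:
        return f.read()

def aggregate_shards(paths) -> bytes:
    """Issue the reads concurrently, then stitch the shards back together in order."""
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return b"".join(pool.map(read_shard, paths))

# data = aggregate_shards(SHARD_LOCATIONS)  # runnable once the shard paths exist
```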

Geometry, simulation and telemetry data must be thought of as the “crown-jewel” data sets for the Industrial Internet. Despite their age, the first two are living, breathing mission-critical files that are essential to maximizing the value of telemetry data and producing these innovations. As it turns out, any predicted increase in incremental revenue is only a drop in the bucket compared to the potential total “smart technology” gold mine made available by an infrastructure that respects the long lifespan of these data sets.