Failure to maintain accurate supply chain data can spell disaster in the event of an emergency, or even contribute towards one, as Anurag Dixit explains.
Organisations are growing in size and reach, and the nature, scale and range of their operations are changing with newer technologies and processes. In the face of increasing competition and demand, businesses have ramped up output by expanding their operational, production and distribution facilities and channels, especially in highly asset-intensive industries such as oil and gas, utilities, and discrete and process manufacturing. However, there is often no consistency in standards, processes and strategies across the resulting cross-functional divisions and locations of an enterprise.
IT, now a major stakeholder in the way operations are run, was intended to bind processes together; instead, many organisations have become more fragmented in terms of their resources, assets and information. This has significantly complicated effective management of the entire enterprise, opening the floodgates for a variety of problems.
Recent accidents around the world have highlighted the hazards and risks inherent in many industrial operations. Ever-larger facilities, bigger workforces, more powerful and faster machinery, and shorter production lifecycles inevitably create more opportunities for errors and failures.
Organisations can face huge unwanted costs in the form of losses, penalties and compensation in the event of an industrial failure or accident. One of the key reasons for such failures is the breakdown of crucial industrial infrastructure, arising from ineffective management of the material base and inefficiencies in the maintenance, repair and operations (MRO) supply chain. Global organisations frequently have materials and other key assets spread across several localised inventories, and their management has shifted from cumbersome manual approaches to automated IT systems over the years.
While this transformation has delivered some benefits, companies have often failed to drive them all the way through: the sheer number and disparity of the IT systems and applications in place have created material, supplier and other key asset master data that is inaccurate and inconsistent. Data is, after all, what drives processes, and this scenario has led to ineffective plant and infrastructure management and a continuing failure to plug the leaks through effective monitoring and detection of potential hazards.
Take the case of a large electrical utility as an example. Electrical failures can lead to huge costs in the form of damaged equipment, injuries, fatalities, hazardous situations, lost production and downtime, and one of the most common causes of such outages is the breakdown or failure of key electrical equipment.
Consider the following scenario: a circuit breaker fails at a substation, causing a large power outage, accidents and hazardous situations at many locations. The maintenance team works to replace the damaged equipment and get the system up and running as quickly as possible, but with no proper visibility into their inventory, they struggle to find materials that already exist in it. With time running out, they buy new materials and, in the absence of good supplier master data, end up purchasing less-than-ideal replacement parts off-contract at a premium. Worse, if the material data is inaccurate, incompatible equipment may be purchased, leaving ample scope for a similar failure in the future and further reducing the ability to manage known equipment defects.
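To make the lookup failure concrete, here is a minimal sketch in Python of how the same part can hide behind two differently written descriptions. All record IDs, descriptions and the abbreviation dictionary are hypothetical, and the token-normalisation scheme is deliberately simplified; real cleansing tools apply far richer rules and classification taxonomies, but the principle is the same.

```python
import re

# Hypothetical material master records for the same 15 kV vacuum circuit
# breaker, as entered by two different plant systems (illustrative data only).
inventory = [
    {"id": "MAT-0117", "desc": "CIRCUIT BREAKER 15KV 1200A VACUUM"},
    {"id": "MAT-5242", "desc": "BRKR, CIRC, VAC, 15 KV, 1200 AMP"},
]

# Abbreviation dictionary a cleansing team might maintain (assumed entries).
SYNONYMS = {"BRKR": "BREAKER", "CIRC": "CIRCUIT", "VAC": "VACUUM", "AMP": "A"}

def fingerprint(desc):
    """Reduce a free-text description to a canonical token set:
    uppercase, split letters from digits, expand known abbreviations."""
    tokens = re.findall(r"[A-Z]+|\d+", desc.upper())
    return frozenset(SYNONYMS.get(t, t) for t in tokens)

query = "CIRCUIT BREAKER 15 KV 1200 A VACUUM"

# An exact-match search, as a crew under time pressure might run one: nothing.
print([r["id"] for r in inventory if r["desc"] == query])  # []

# A normalised search matches both records describing the same part.
wanted = fingerprint(query)
print([r["id"] for r in inventory if fingerprint(r["desc"]) == wanted])
# ['MAT-0117', 'MAT-5242']
```

The design point is simply to agree on a canonical form for descriptions and then search and compare in that form, rather than relying on free text entered under pressure.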
Inaccurate master data prevents plant maintenance teams from identifying hotspots and heading off breakdowns, because failures and the extent of potential damage cannot be forecast beforehand. Poor data management also hinders an organisation's ability to withstand an incident, minimise the losses and resume operations. More and more companies are therefore looking to establish master data management (MDM) programmes in order to manage their assets through informed decision-making based on reliable data, and to ensure optimal industrial safety across the enterprise. Master data management is a comprehensive strategy for determining and building a single, accurate and authoritative source of a company's information assets, and for delivering it on demand as a service to every location and cross-functional division of the company.
An effective master data management initiative comprises two key parts: historical data cleansing and ongoing data maintenance (ODM). Historical data cleansing involves classifying and enriching, for business value, the existing legacy data across all of a business's systems, applications, organisational units and plants. It delivers enterprise-wide visibility of the material, supplier and other key asset bases, leading to efficient asset management and supply-base rationalisation. Ongoing data maintenance keeps data quality high day to day, creating a framework for the creation, use, access and maintenance of data across the organisation and so enhancing operational efficiency.
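As a rough illustration of the ongoing-maintenance half, the sketch below shows a duplicate-prevention gate applied at the point of record creation. The class, the sorted-token key and the sample records are assumptions made for illustration; production MDM platforms apply attribute-level rules and standard taxonomies such as UNSPSC rather than a simple text key.

```python
# A sketch of an ongoing-data-maintenance gate: new material records are
# checked against the cleansed master before they are created, so the
# historical cleansing effort does not erode as new records flow in.
class MaterialMaster:
    def __init__(self):
        self._golden = {}  # canonical description key -> existing material id

    @staticmethod
    def _key(desc):
        # Canonicalise a description: uppercase, drop commas, sort the
        # tokens so that word order no longer matters.
        return " ".join(sorted(desc.upper().replace(",", " ").split()))

    def add(self, material_id, desc):
        """Create a record only if no equivalent one exists; otherwise
        return the id of the existing (golden) record instead."""
        key = self._key(desc)
        if key in self._golden:
            return self._golden[key]     # duplicate caught at entry
        self._golden[key] = material_id  # genuinely new material
        return material_id

master = MaterialMaster()
print(master.add("MAT-100", "GASKET, FLANGE, 150LB, 4IN"))  # MAT-100
print(master.add("MAT-205", "FLANGE GASKET 4IN 150LB"))     # MAT-100: duplicate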
A master data management campaign provides accurate visibility into the infrastructure and operational set-up of the company, enabling it to identify key problem areas and maintenance timelines and to prevent heavy losses by acting on information beforehand. Ongoing data maintenance ensures that the associated key data is created and maintained in real time to avoid any disruptions.
In addition to supporting high standards of industrial safety, master data management delivers significant business-value benefits: enhanced enterprise visibility and more effective risk mitigation, inventory optimisation and better materials handling, streamlined operations, increased productivity and profitability, and greater process compliance.
Anurag Dixit is VP of Global Marketing at Zynapse.