Dr. Theo Vassilakis, Founder and CEO, Metanautix
Dr. Pravin Fulay, Associate Vice President, TechMahindra
Globalization has been a boon to the semiconductor industry. With fabless business models on the rise and outsourced manufacturing dominated by low-cost countries, the industry is undergoing critical changes. Integrating information about the physical supply chain with information about the data supply chain will be a key differentiator, and global visibility across the supply chain is one of the key challenges that determines the success of product manufacturing. By performing rapid supply chain analytics across data silos, companies can collaborate effectively, work seamlessly, innovate better, reduce costs, and detect and respond to product issues quickly.
Some of the key questions which need to be answered are:
- Is the organization experiencing cost reduction through value engineering?
- Can predictive modeling forecast the success of new product lines?
- How can dead or obsolete stock be identified and managed through product-aging strategies?
- What is the best strategy for managing returns and does it make the best economic sense to recycle or refurbish defective products?
Need for Rapid Supply Chain Analytics
Supply chain analytics help organizations identify trends, even when large data volumes are involved. The most obvious areas where analytics will help are sourcing, inventory management, manufacturing, quality, sales and logistics.
Analytics can leverage information from enterprise applications, data warehouses and the web, as well as incorporate information from external sources to locate data patterns and organize by function such as customer, procurement, finance, planning and quality.
With the changing manufacturing landscape and increased outsourcing, supply chain operations are a fertile ground for innovation and competitive advantage for semiconductor companies. Some of the key industry trends driving the need for rapid supply chain analytics are:
- Distributed operations and globalization: Company value chains are spread across the globe with key operations dispersed from Silicon Valley to Taiwan to India and East Asia. This diversification and fragmentation has resulted in more complex processes, systems and organizational structures.
- Price sensitive consumer market dominance: Consumer market segments now account for almost 70 percent of semiconductor output, impacting every company along the high-tech value chain.
- Evolving customer expectations: Ever-evolving customer expectations, driven by emerging economies, have a direct impact on the customer mix and have created diverse product requirements. Just collecting this feedback from sources such as social media can be tricky.
- IoT and collaborative innovation: All semiconductor chipmakers are “critically dependent” on the industry’s ecosystem to create value in their offerings. Sensors and data enabled devices are driving both hardware and software integration efforts.
- Time to Market: An extension of innovation is product development and launching products on time is equally critical. Rapid product commoditization enabled by transition to digital technology and use of modular designs has resulted in shrinking product life cycles — from 24 months to as low as nine months. Opportunity costs have skyrocketed.
As a result, companies need to be able to access, combine and analyze data of different shapes, structures and formats, drawn from various data silos across the supply chain. For example, third parties may provide outsourced R&D data that needs to be combined with data from other third parties involved in the same new product research initiative. Internally, the same part or product may be named differently across divisions. Such product genealogy information can be especially critical when a company is trying to respond to a product failure in the field, such as a cell phone catching fire.
For analysts and engineers to rapidly perform supply chain analytics on all data, companies need to:
1. Bridge data silos
2. Simplify and accelerate analysis
3. Increase transparency
Bridge Data Silos across the Supply Chain
Data is generated across the supply chain in different structures, shapes, sizes and formats. A strategy of bridging data silos provides superior access to data for engineers and analysts involved in the supply chain. Alternative approaches that require moving data and fitting it into one centralized system are cumbersome and may take months or years before data is available for analysis. Such approaches also work against the reason data silos exist in the first place: they were designed for specific purposes and groups.
Data assets across the supply chain are diverse and may take different forms:
- Shape: flat data vs. nested and repeated fields.
- Structure: records vs. multi-structured and unstructured documents and files, machine logs, images, etc.
- Size: small (few gigabytes, single server) to very large (trillions of records, petabytes, thousands of servers).
- Storage: data warehouse, RDBMS, Hadoop, NoSQL, the web, and other sources across and outside the organization.
- Location: on-premises, on-cloud, or hybrid.
Bridging data silos involves leveraging technology to automatically understand the format and source of data and letting analysts treat any type of data as a table. For example, analysts expect to easily access and interactively combine a CSV file with a machine log, a database table and a huge Apache Hadoop™ HDFS file without elaborate extract, transform, load (ETL) processing.
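As a minimal sketch of the "any data as a table" idea, the snippet below exposes a CSV feed and a database table through one SQL interface using Python's standard library. The table names, part numbers and quantities are invented for illustration; a real deployment would span far larger and more varied sources.

```python
import csv
import io
import sqlite3

# A CSV shipment feed from a logistics partner (normally a file on disk);
# all names and values here are made up for the example.
shipments_csv = io.StringIO(
    "part_id,qty_shipped\nP-100,500\nP-200,250\n"
)

conn = sqlite3.connect(":memory:")

# An existing database table with part master data.
conn.execute("CREATE TABLE parts (part_id TEXT, description TEXT)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("P-100", "voltage regulator"), ("P-200", "RF amplifier")])

# Load the CSV rows into a table so SQL can treat the feed like any other table.
conn.execute("CREATE TABLE shipments (part_id TEXT, qty_shipped INTEGER)")
reader = csv.DictReader(shipments_csv)
conn.executemany("INSERT INTO shipments VALUES (?, ?)",
                 [(r["part_id"], int(r["qty_shipped"])) for r in reader])

# One query now spans both sources, with no separate ETL pipeline.
rows = conn.execute(
    "SELECT p.description, s.qty_shipped "
    "FROM parts p JOIN shipments s ON p.part_id = s.part_id "
    "ORDER BY p.part_id"
).fetchall()
print(rows)  # [('voltage regulator', 500), ('RF amplifier', 250)]
```

The point is not the specific engine: once each source is addressable as a table, the join above reads the same regardless of where the data originally lived.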
Simplify and Accelerate Analysis
Performing rapid supply chain analytics requires a simplified approach that empowers the analyst to get the analysis done. However, today’s analytics involve data pipelines that are slow, opaque and cumbersome, with many manual steps. The proliferation of narrowly focused technologies has fragmented skills and created a requirement for specialized resources. To rapidly analyze data from all sources, analysts need to be able to ask questions in an intuitive manner and use a single, unified interface to go end to end from raw data to insights. Using a high-level language like SQL lets analysts express the analysis in the natural way they think about it, but SQL has so far been tied to individual RDBMS implementations.
Data pipelines can involve negotiating with multiple groups to get access to the data. The process of getting raw data and cleaning it, mapping with other data assets, and then doing analytics can take days to weeks to months. Many times it involves manual steps, such as requesting another team to get a CSV file with the data. To develop the agility demanded of today’s semiconductor supply chains, companies need to shorten this process by automating access and mapping of data.
A key concept from databases that can be applied more broadly in supply chain analytics is the notion of logical views. This is the idea that an analyst can quickly build new logical tables out of existing physical tables without the cost of creating new physical tables and managing lifecycle and access. Instead, logical views are just names that can be used in queries but are translated on-the-fly in every access. These views can be used to perform quick renaming of fields, lightweight data cleaning and simple aggregations. This enables analysts to match data quickly without waiting on a heavy warehousing process that could slow down an already stalled pipeline even further. Views help decouple various perspectives on data and provide a highly productive form of glue, while still being able to use the security mechanisms of the underlying store. Generalized to operate cross-site, these can be powerful integration and collaboration mechanisms.
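The renaming, cleaning and aggregation roles of a logical view can be sketched in a few lines of standard SQL (shown here via Python's sqlite3). The division table, column names and quantities are hypothetical; the view itself is just a name translated on the fly at query time, with no new physical table created.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A division's inventory table with its own naming conventions and some
# messy part numbers (invented data for illustration).
conn.execute("CREATE TABLE div_a_inventory (PartNo TEXT, Qty INTEGER)")
conn.executemany("INSERT INTO div_a_inventory VALUES (?, ?)",
                 [("p-100 ", 40), ("P-100", 60), ("P-200", 25)])

# The logical view renames fields, does lightweight cleaning (trim and
# uppercase the part number) and a simple aggregation -- all computed on
# the fly at every access, never materialized.
conn.execute("""
    CREATE VIEW inventory AS
    SELECT UPPER(TRIM(PartNo)) AS part_id, SUM(Qty) AS total_qty
    FROM div_a_inventory
    GROUP BY UPPER(TRIM(PartNo))
""")

# Analysts query the view like any other table.
totals = conn.execute(
    "SELECT part_id, total_qty FROM inventory ORDER BY part_id"
).fetchall()
print(totals)  # [('P-100', 100), ('P-200', 25)]
```

Because the view is only a named query, another division can layer its own view over the same physical table without coordination, which is what makes views a productive form of glue.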
Increase Transparency

Greater transparency into the data and the analytics being performed can save costs and increase agility. Using a high-level language such as SQL to describe the analysis makes it easy for other stakeholders in the supply chain to understand what is being done. For example, a tape-out incident can easily cost tens of thousands of dollars. While the dollar cost is not trivial, the time lost while the process is stalled matters more: it can easily take a couple of weeks to circle back with the engineer to identify and resolve the issue. If the analysis is expressed at a high level in SQL, any individual involved in the supply chain can understand what is being done, which helps accelerate resolution of issues and avoid long delays.
Not only does data travel across the supply chain; code also gets transferred and modified along the chain. A more transparent way of expressing code can further increase agility. For example, automatic test equipment (ATE) is used by semiconductor manufacturers as well as their end customers. Testing is code intensive and may require sending the validation code to the customer to run on the ATE. Using a high-level language like SQL makes it easy for engineers to understand the validation logic and resolve issues quickly.
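To make the point concrete, here is a hypothetical pass/fail criterion expressed as a single SQL predicate rather than procedural test code (run via Python's sqlite3). The device IDs, the VDD readings and the 1.14 V to 1.26 V spec limits are all invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical ATE measurement log: one supply-voltage reading per device.
conn.execute("CREATE TABLE ate_readings (device_id TEXT, vdd REAL)")
conn.executemany("INSERT INTO ate_readings VALUES (?, ?)",
                 [("D1", 1.19), ("D2", 1.32), ("D3", 1.21)])

# The validation logic is one readable predicate: VDD must fall within
# the assumed spec window of 1.14 V to 1.26 V. Any engineer along the
# chain can read and audit this criterion directly.
failures = conn.execute(
    "SELECT device_id FROM ate_readings "
    "WHERE vdd NOT BETWEEN 1.14 AND 1.26 ORDER BY device_id"
).fetchall()
print(failures)  # [('D2',)]
```

When the criterion travels as a declarative query rather than opaque test scripts, the customer running it on their own ATE data can see exactly what is being checked.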
Staying Ahead of the Competition with Rapid Business Improvements
When analysts and engineers across the supply chain have superior access to data and are able to ask questions of any data in an easy, simple and fast manner, it results in agile supply chains that are continuously improved. A bottom-up movement results when more and more users find it easy to perform analysis on their own and data across the organization gets mapped and shared. Data pipelines are transparent, connected and fast, resulting in a data-driven supply chain.