It all starts with memory and storage, as two trends in information technology change the nature of the enterprise data center: one is the Internet of Things (IoT), the other the emergence of the cloud. Data generated by IoT devices is forcing data centers to scale rapidly and evolve while they continue to serve their online customers. After all, downtime is not an option in a 24/7 always-on world.

The explosion of IoT-connected devices and digital services is generating massive amounts of new data. To make this data useful, it must be stored and analyzed quickly, creating challenges for service providers and system builders who must balance cost, power and performance trade-offs when they design memory and storage solutions.

With so much data streaming to the cloud, breaking the bottleneck in the data center means improving the performance of the data link between servers and large storage farms. In one scenario, data center architects build virtualization and distributed computing systems from commodity components running Linux or another open-source Unix variant to deliver enterprise-class stability and performance.

Another transformation is occurring in the semiconductor devices that make up the server and storage farm hardware, driven by the need for bandwidth and intelligence to move data in and out of storage. The first bottleneck was the hard drive: millions of rotating disk drives in large storage farms hold the vast majority of all data collected.

Storage farms contain thousands of hard drives, each storing bits in concentric tracks on a magnetic disk surface. A mechanical actuator positions a read/write head over a track on the rotating disk to read and write data in individual sectors within that track. Access times to reach a given sector are on the order of milliseconds.
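Why millisecond access times? A quick back-of-envelope calculation makes it concrete. The figures below (7,200 RPM spindle speed, an 8.5 ms average seek) are illustrative assumptions, not measurements from any particular drive:

```python
# Rough average access time for a hypothetical 7,200 RPM hard drive.
# Seek time and RPM are illustrative assumptions, not measured values.
RPM = 7200
avg_seek_ms = 8.5                            # typical published average seek time
avg_rotational_ms = 0.5 * (60_000 / RPM)     # half a revolution, in milliseconds
avg_access_ms = avg_seek_ms + avg_rotational_ms

print(f"avg rotational latency: {avg_rotational_ms:.2f} ms")
print(f"avg access time:        {avg_access_ms:.2f} ms")
```

Even before any data is transferred, the mechanics alone cost roughly 12 ms per random access, which is why rotating media cannot keep pace with solid state storage.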

To accelerate this access, data centers first installed large arrays of high-capacity solid state drives (SSDs), as the ever-decreasing cost of Flash memory made them cost effective. The emergence of Flash enabled software architects to follow a tiered approach, where Flash-based non-volatile dual in-line memory modules (NVDIMMs) on the memory bus served as a cache hiding the latency of mechanical storage devices. By moving frequently accessed data out of rotating storage and into semiconductor Flash memory chips, data can be accessed in microseconds, reducing transaction times and improving the user experience.
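The tiered approach can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: a small, fast cache (standing in for Flash/NVDIMM) sits in front of a large, slow backing store (standing in for rotating disks), and frequently accessed data is promoted into the fast tier:

```python
from collections import OrderedDict

# Minimal sketch of tiered storage: a small LRU cache (the "Flash tier")
# in front of a slow backing store (the "rotating tier"). All names and
# sizes are illustrative.
class TieredStore:
    def __init__(self, backing: dict, cache_size: int = 2):
        self.backing = backing          # rotating storage: large, slow
        self.cache = OrderedDict()      # Flash cache: small, fast, LRU order
        self.cache_size = cache_size

    def read(self, key):
        if key in self.cache:           # cache hit: microsecond-class access
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backing[key]       # cache miss: millisecond-class access
        self.cache[key] = value         # promote the data into the fast tier
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict least recently used item
        return value

store = TieredStore({"a": 1, "b": 2, "c": 3})
store.read("a"); store.read("b"); store.read("a")
print(list(store.cache))   # "a" stays warm because it is accessed repeatedly
```

Real caching tiers add write-back policies, persistence and crash safety, but the core idea is the same: repeated reads are served at memory speed rather than disk speed.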

The move to all-Flash storage ran into the limitations of the data interface between the drive and its controller electronics. SSDs originally employed the same interfaces used for rotating storage devices. Since hard drives have built-in mechanical limitations, the bandwidth expectations for these interfaces were not as high as they needed to be for solid state devices that can move large amounts of data much faster. This was addressed with the advent of the non-volatile memory express (NVMe) standard, which changed the data interface to PCI Express. New data centers use PCI Express-based NAND Flash storage as well as NVDIMM-based cache to accelerate data center analysis and storage.
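The bandwidth gap between the legacy interface and PCI Express is easy to quantify. The sketch below uses published line rates (SATA III at 6 Gb/s with 8b/10b encoding; roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane) and assumes a typical four-lane NVMe SSD; the overhead figures are approximations:

```python
# Rough interface-bandwidth comparison behind the SATA-to-NVMe transition.
# Line rates come from the published specs; overhead factors are approximate.
sata3_gbps = 6.0                       # SATA III line rate, gigabits per second
sata3_mb_s = sata3_gbps * 1000 / 10    # 8b/10b encoding -> ~600 MB/s usable
pcie3_lane_mb_s = 985                  # PCIe 3.0: ~985 MB/s usable per lane
nvme_x4_mb_s = 4 * pcie3_lane_mb_s     # a typical NVMe SSD uses four lanes

print(f"SATA III:       ~{sata3_mb_s:.0f} MB/s")
print(f"NVMe, PCIe3 x4: ~{nvme_x4_mb_s} MB/s "
      f"({nvme_x4_mb_s / sata3_mb_s:.1f}x the SATA ceiling)")
```

A single four-lane NVMe drive thus has more than six times the interface bandwidth of a SATA drive, before even counting NVMe's deeper command queues and lower protocol overhead.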

Clearly, a breakthrough in memory process technology is long overdue. Data center managers yearn for innovations that address power, latency and scalability, and a number of new technologies are waiting in the wings to take the place of Flash storage. Aside from cost per gigabyte, these emerging technologies need to address power dissipation, endurance, data retention and speed of access to be considered viable replacements for Flash. It is also possible that some of them will fill a niche role in specific use-case scenarios. Emerging memory technologies include Phase Change Memory (PCM), Magnetoresistive RAM (MRAM) and Resistive RAM (ReRAM, also known as the memristor). The 3D XPoint storage device from Intel and Micron is one such device.

Some new NVM components have the potential to revolutionize any device, application or service that benefits from fast access to large data sets, especially data centers. Data stored in these NVM devices can be accessed orders of magnitude faster, and with greater endurance, than with current NAND Flash technologies.

Data center engineers who previously used application-specific standard products (ASSPs) are beginning to use programmable devices as SSD controllers. Field Programmable Gate Arrays (FPGAs) have an added advantage: they can be highly optimized to accelerate each data center's specific software application for a competitive and differentiated advantage.

Semiconductor startups are making important strategic contributions to IoT and cloud ecosystem suppliers as well by providing intellectual property (IP) and fully validated platforms for IoT, cloud and data center networking. Semiconductor IP vendors are providing PCI Express and NVMe IP that enables new semiconductor devices to increase the bandwidth of the communications links transporting information into and out of the data center. For data centers doing e-commerce, Internet searches, fraud detection and other data-mining operations, this performance boost has a huge, positive impact on their operation.


Traditional interfaces developed for hard disk drives limit the performance of solid state drives. NVM Express, architected for SSDs, connects storage closer to the processor to increase input/output operations per second and reduce latency.

Source: NVM Express Consortium

For data center engineers who want to build their own FPGA solution, companies such as Mobiveil offer a complete and fully validated platform, including a printed circuit board with an FPGA. The board contains standard hardware and software elements that give data center engineers an accelerated path to product development. With the platform, engineers only need to load the custom program they created for their application into the FPGA and plug the board into their server. As data center architecture continues to evolve, engineers can update their proprietary algorithms by removing the old code from the FPGA and replacing it with the new.

Programmable chips in the flow of data between server and data farm have the potential to meet some of the more rigorous requirements and solve some of the more vexing challenges. Other solutions may follow as memory storage evolves to support the data center’s growing demand for rapid and flexible response.

In a 24/7 always-on world, downtime is not an option. As the data center continues to serve its online customers, semiconductor IP companies already supply viable and production-proven solutions to accelerate migration to the next generation of memory and storage technologies.