As artificial intelligence (AI) permeates more industries by the day, demand is growing fast for AI applications that respond in real time. Whether in autonomous driving, industrial automation, facial recognition for user authentication, or customer service, quick response is crucial to user experience and business value. But traditional CPUs and GPUs, which were not designed with AI in mind, fall short on flexibility, cost, performance, and power. The problem intensifies with the onset of 5G, which, while bringing higher network capacity and lower network latency, puts greater pressure on a processor’s computing throughput and latency. This scenario demands a new generation of AI-centric processors that deliver high performance while keeping power consumption and latency low.
Fremont, CA-based DinoPlusAI is taking AI hardware to a new level, helping modern businesses develop cutting-edge AI applications that respond in real time, with consistent response times in any environment. While other AI solution vendors focus on power and performance, DinoPlusAI goes a step further to tackle latency (response-time) demands as well. “With our breakthrough combination of AI processors and software, we are setting new standards in chip performance, programming flexibility, and device agility,” says Jay Hu, CEO of DinoPlusAI. “Driven by a unique architecture that brings FPGA and ASIC together, we are emerging as the driving force behind edge cloud computing.” DinoPlusAI takes a hybrid approach, marrying the flexibility and easy configurability of an FPGA with the cost and performance benefits of an ASIC. Servers and systems built around DinoPlusAI’s chipset can support AI workloads that are highly sensitive to latency and demand better performance and power efficiency.
As AI models mature and applications grow more complex, DinoPlusAI offers clients the flexibility to leverage FPGA to improve their time to market and then migrate swiftly to ASIC with minimal friction. Clients can thus meet their needs from both compute and memory perspectives while keeping costs in check.
With DinoPlusAI, clients are spared from packing AI processing capability into their edge devices. The company’s scalable chipsets allow users to increase AI performance and power efficiency through a centralized control mechanism without redoing their programming or configuration from scratch. DinoPlusAI has built large on-chip memory (128 MB in the ASIC) and supports compression to extend the effective memory to 300-400 MB; the solution is designed to support external memory as well. “We have built our algorithm to compress data as it is stored, so clients can achieve up to 6X bandwidth and cost benefits,” says Hu. “Our chip fares far better than chips from other vendors in terms of computing-resource utilization.”
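As a back-of-the-envelope illustration of that arithmetic (a minimal sketch built on the figures above; the compression ratios and the 100 GB/s link are illustrative assumptions, not details of DinoPlusAI’s actual algorithm), compressing data as it is stored multiplies both the effective capacity and the effective bandwidth of a memory by the achieved compression ratio:

```python
# Illustrative arithmetic only -- not DinoPlusAI's actual compression scheme.
# A transparent compress-on-store layer multiplies both the effective
# capacity and the effective bandwidth of a memory by the compression ratio.

ON_CHIP_MB = 128  # on-chip memory in the ASIC, per the article

def effective_capacity_mb(ratio: float) -> float:
    """Effective capacity when data is compressed as it is stored."""
    return ON_CHIP_MB * ratio

def effective_bandwidth_gbps(raw_gbps: float, ratio: float) -> float:
    """Each transferred byte carries `ratio` bytes of uncompressed data."""
    return raw_gbps * ratio

# Ratios of roughly 2.4-3X reproduce the article's 300-400 MB effective
# memory; a 6X ratio on highly compressible data matches the quoted 6X
# bandwidth benefit.
for ratio in (2.4, 3.0, 6.0):
    print(f"{ratio:.1f}X -> {effective_capacity_mb(ratio):.0f} MB effective, "
          f"{effective_bandwidth_gbps(100.0, ratio):.0f} GB/s on a 100 GB/s link")
```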
The chip’s pseudo-VLIW controller and programmable, scalable microcode support offloading the compiler’s workload through parallelism.
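To make the VLIW idea concrete, here is a generic sketch of the technique (not DinoPlusAI’s actual controller or microcode format, which the company has not detailed here): the compiler packs operations it has verified to be independent into one wide instruction word, so every slot in a bundle can issue in the same cycle:

```python
# Generic VLIW-style sketch -- the scheduling decisions live in the compiler,
# not the hardware. Register names and operations are hypothetical.

regs = {"r0": 0, "r1": 2, "r2": 3, "r3": 4, "r4": 0, "r5": 0}

def execute(bundle):
    """Issue every slot of one wide instruction word 'in the same cycle'."""
    before = dict(regs)  # all slots read pre-bundle state: no intra-bundle deps
    for dest, op, a, b in bundle:
        regs[dest] = op(before[a], before[b])

program = [
    # bundle 0: a multiply and an add, proven independent by the compiler,
    # issue together in one cycle
    [("r4", lambda x, y: x * y, "r1", "r2"),
     ("r5", lambda x, y: x + y, "r2", "r3")],
    # bundle 1: combine the results once both are available
    [("r0", lambda x, y: x + y, "r4", "r5")],
]

for bundle in program:
    execute(bundle)
print(regs["r0"])  # (2*3) + (3+4) = 13
```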
With such capabilities, DinoPlusAI has built a happy clientele that is expanding rapidly. For instance, a client in the education domain has built an NLP-based application on the chipset for its learning and training needs. For a different client, the company’s chipset is boosting the performance of a cloud-based speech recognition solution by 13X while keeping deployment costs minimal; the chipset lets the client run 4,000 programs simultaneously. DinoPlusAI has also created success stories in industrial automation, where it has helped a company maximize production output at the lowest latency.
DinoPlusAI has achieved this feat thanks to a talented team with vast experience in semiconductors, hardware, and programming. The company’s management team, which includes entrepreneurs with engineering backgrounds, remains focused on introducing scalable platforms to address growing market demands.