Custom AI Architectures Challenge CPUs and GPUs
MOUNTAIN VIEW, Calif.–(BUSINESS WIRE)–#AI–Deep-learning accelerator (DLA) chips, also known as artificial intelligence (AI) processors, continue to proliferate to meet rising demand. Adoption of deep-learning applications in data centers and automotive markets has been substantial, but the past year has seen more robust growth in edge devices and embedded (IoT) systems. With new entrants and products emerging at a rapid pace, the challenge is to separate the leaders from the laggards in a chip market that has now topped $4 billion. A new report from The Linley Group, “A Guide to Processors for Deep Learning,” provides clear guidance on this dynamic market with analysis of deep-learning accelerators for artificial intelligence, neural networks, and vision processing for inference and training.
AI acceleration is quickly spreading from the cloud to the edge, appearing in a wide range of deep-learning applications, particularly client devices such as high-end smartphones, voice assistants, smart doorbells, and surveillance cameras. Adding AI engines to these products significantly increases their capabilities, bolsters privacy, and adds value by differentiating them. As the technology matures and demand increases, it will eventually find its way into lower-cost products. Over the past year, edge devices have emerged as the highest-volume application for AI-enhanced processors.
“Many new companies are starting to address these applications. Most use innovative architectures to improve performance and power efficiency, presenting viable alternatives to traditional CPUs and GPUs for AI,” said Linley Gwennap, principal analyst with The Linley Group. “Because no single processor is suited to all applications, some vendors are developing diverse sets of products to capture a greater share of the market. We’ve analyzed these various architectures and products to determine which will win over time.”
The comprehensive report covers more than 40 vendors of AI chips. It provides detailed technical coverage of announced deep-learning accelerator chips from AMD, Cerebras, Graphcore, Groq, Gyrfalcon, Horizon Robotics, Intel (including former Altera, Habana, Mobileye, Movidius, and Nervana technologies), Mythic, Nvidia (including Tegra and Tesla products), Wave Computing, and Xilinx. Other chapters cover Google’s TPU family of ASICs and Tesla’s autonomous-driving ASIC. The report also includes shorter profiles of numerous other AI-chip vendors, including large companies such as Marvell and Toshiba; startups such as BrainChip, Hailo, and Syntiant; and cloud-service providers such as Alibaba and Amazon.
The report includes head-to-head technical comparisons in each product category, as well as extensive technical and market overviews to help those coming up to speed on this complex technology. Those seeking a quantitative look at the market for deep-learning accelerators will find market size and forecasts in three market segments: data center, automotive, and edge.
“A Guide to Processors for Deep Learning” is available now directly from The Linley Group. For further details, including pricing, visit the website at https://www.linleygroup.com/dla.
About The Linley Group
The Linley Group is the industry’s leading source for independent technology analysis of semiconductors for a broad range of applications, including AI, networking, communications, data center, and embedded. The company provides strategic consulting services, in-depth analytical reports, and conferences focused on advanced technologies for chip and system design. The Linley Group is the publisher of the noted Microprocessor Report, a weekly publication. For insights on recent industry news, subscribe to the company’s free email newsletter: Linley Newsletter.