Qualcomm and Nvidia, two of the biggest players in the artificial intelligence (AI) chip industry, have been sparring over the top spot in AI chip efficiency tests. While Nvidia dominates the market for chips used to train AI models on vast amounts of data, these tests focused on data center inference chips, which handle tasks like generating text responses and analyzing images.
In a new set of test data published by MLCommons, an engineering consortium that maintains testing benchmarks for the AI chip industry, Qualcomm’s Cloud AI 100 chip beat Nvidia’s flagship H100 chip at classifying images and object detection. The Cloud AI 100 hit 197.6 server queries per watt for image classification and 3.2 queries per watt for object detection, while Nvidia’s H100 scored 108.4 queries per watt for image classification and 2.4 queries per watt for object detection.
Meanwhile, Neuchips, a Taiwanese startup founded by chip academic Youn-Long Lin, came out on top for image classification, scoring 227 queries per watt. In natural language processing, which is the AI technology most commonly used in chatbots, Nvidia took the top spot with 10.8 samples per watt, followed by Neuchips at 8.9 samples per watt and Qualcomm at 7.5 samples per watt.
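Taken together, the MLCommons figures above can be tabulated and compared directly. The short sketch below (an illustrative summary, not part of the official benchmark tooling; the dictionary structure and helper function are my own) collects the reported queries-per-watt numbers and finds the leader in each category:

```python
# Power-efficiency figures as reported in the MLCommons results cited above.
# Units: server queries (or samples) processed per watt; higher is better.
results = {
    "image_classification": {
        "Neuchips": 227.0,
        "Qualcomm Cloud AI 100": 197.6,
        "Nvidia H100": 108.4,
    },
    "object_detection": {
        "Qualcomm Cloud AI 100": 3.2,
        "Nvidia H100": 2.4,
    },
    "natural_language_processing": {
        "Nvidia H100": 10.8,
        "Neuchips": 8.9,
        "Qualcomm Cloud AI 100": 7.5,
    },
}

def leader(task: str) -> tuple[str, float]:
    """Return the (chip, score) pair with the best queries-per-watt for a task."""
    return max(results[task].items(), key=lambda kv: kv[1])

for task in results:
    chip, score = leader(task)
    print(f"{task}: {chip} leads at {score} queries/watt")

# Qualcomm's image-classification edge over the H100, expressed as a ratio:
ratio = (results["image_classification"]["Qualcomm Cloud AI 100"]
         / results["image_classification"]["Nvidia H100"])
print(f"Qualcomm vs H100 (image classification): {ratio:.2f}x")
```

Running it confirms the article's framing: Qualcomm beats Nvidia on efficiency in two of the three categories, while Neuchips tops image classification outright and Nvidia keeps the lead in natural language processing.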
Analysts believe that the market for data center inference chips will grow quickly as more businesses put AI technologies into their products, but concerns remain over the cost of electricity. To address this, Qualcomm has used its expertise in designing chips for battery-powered devices to create chips that consume less power, such as the Cloud AI 100.
While Qualcomm beat Nvidia in two of the three power-efficiency measures, Nvidia still led in absolute performance and in power efficiency for natural language processing. As the AI industry continues to evolve, it remains to be seen which company will win this ongoing battle for AI chip supremacy.