Nvidia’s Groq bet shows that the economics of AI chip-building are still unsettled


Nvidia built its AI empire on GPUs. But its $20 billion bet on Groq suggests the company isn't convinced GPUs alone will dominate what may be AI's most important phase yet: running models at scale, known as inference.

The battle over AI inference is, at bottom, a battle over economics. Once a model is trained, every useful thing it does happens during inference: answering a query, generating code, recommending a product, summarizing a document, powering a chatbot, analyzing an image. Inference is the moment AI turns from a sunk cost into a revenue-generating service, with all the accompanying pressure to cut costs, shrink latency (how long you wait for an answer), and improve efficiency.

That pressure is exactly why inference has become the industry’s next battleground for potential profits—and why Nvidia, in a deal announced just before the Christmas holiday, licensed technology from Groq, a startup building chips designed specifically for fast, low-latency AI inference, and hired most of its team, including founder and CEO Jonathan Ross.

Inference is AI’s ‘industrial revolution’

Nvidia CEO Jensen Huang has been explicit about the challenge of inference, even as he maintains that Nvidia is "excellent at every phase of AI."