Groq founder Jonathan Ross explains why their architecture is better for inference than GPUs.
— Oguz O. | Capitalist (@thexcapitalist) December 25, 2025
Inference doesn't require as much compute power as training. What matters in inference is speed, cost, and energy consumption. $NVDA was going to lose market share as AI workloads… pic.twitter.com/TdE5AMrVEu