LAS VEGAS – Advanced Micro Devices Inc (AMD) showcased its next-generation MI500 series AI accelerators at CES 2026 on Tuesday, with CEO Lisa Su previewing chips designed to deliver superior performance in training and inference for large language models.
The announcement, during Su’s keynote, highlighted the MI500’s architectural leaps over the MI300 series, promising up to 3x gains in certain workloads. OpenAI President Greg Brockman joined virtually to underscore the importance of AMD’s advancements for scaling AI systems.
AMD also launched the MI440X enterprise AI chip, targeting data center deployments with enhanced efficiency and memory bandwidth. The announcements position AMD to challenge Nvidia’s dominance in the $100 billion AI accelerator market.
MI500 Series Preview
Su detailed the MI500 accelerators, built on a new CDNA 4 architecture with advanced chiplet design and HBM4 memory.
Key advancements:
- Up to 288 compute units for massive parallelism.
- 3x performance in FP8 training versus MI300X.
- Enhanced Infinity Fabric for multi-GPU scaling.
- Power efficiency improvements supporting 1,000W+ configurations.
Su demonstrated benchmarks showing the MI500 outperforming rival accelerators in LLM fine-tuning and inference.
Availability is slated for late 2026, with early access for hyperscalers.
MI440X Enterprise Launch
AMD introduced the MI440X as an immediate enterprise option, bridging the gap between the MI300X and the forthcoming MI500.
Features:
- CDNA 3+ architecture with 192GB HBM3E.
- 1.5x inference speed over MI300X.
- Optimized for RAG and agentic AI workloads.
Shipments begin Q2 2026, targeting cloud providers and enterprises.
OpenAI Endorsement
OpenAI President Greg Brockman appeared remotely, praising AMD’s role in diversifying AI infrastructure.
Brockman noted OpenAI’s use of AMD GPUs in clusters, stating the MI500’s capabilities are “critical for cost-effective scaling of frontier models.”
The endorsement highlights AMD’s growing traction with major AI developers.
Company Response
Su said: “MI500 represents a generational leap, enabling the next wave of AI breakthroughs.”
AMD emphasized its open software ecosystem, contrasting it with Nvidia’s proprietary CUDA platform.
Broader Context
AMD’s push comes as AI training costs soar and hyperscalers seek alternatives to Nvidia, which holds roughly 90% of the market. The MI300 series generated $5 billion in 2025 sales, per company estimates. CES 2026 attendance exceeds 150,000, with AI hardware taking center stage.
Challenges
- MI500 faces delays if CDNA 4 yields lag.
- Competition from Nvidia Blackwell and Intel Gaudi 3 intensifies.
- Supply chain constraints for HBM4 could limit ramp-up.
Outlook
- MI440X deployments accelerate enterprise AI in 2026.
- MI500 positions AMD for $10 billion+ AI revenue by 2027.
- OpenAI partnership may expand cluster deals.
- AMD’s CES momentum signals stronger challenge to Nvidia in the AI accelerator race.
Conclusion
AMD’s MI500 and MI440X launches at CES 2026 mark a pivotal step in diversifying AI hardware, backed by OpenAI’s validation. As demand for efficient, scalable accelerators grows, AMD is poised to capture significant share in the multi-trillion-dollar AI economy.