Rahul Somvanshi
OpenAI is tackling $5 billion in compute costs with a custom AI chip, joining forces with Broadcom and TSMC to target production by 2026. Will this reshape the AI chip market?
Photo Source- ishmael daro (CC BY 2.0)
Diversifying its chip supply, OpenAI will continue using Nvidia hardware while integrating AMD chips, aiming to secure its spot in a rapidly evolving AI chip industry projected to reach $383.7 billion by 2032.
Photo Source- Diego3336 (CC BY 2.0)
Broadcom’s involvement goes beyond production, supplying critical components that move data across thousands of processors, essential for OpenAI’s AI systems.
Photo Source- FMT
Led by engineers with experience from Google’s Tensor Processing Unit project, a 20-member team at OpenAI is now working on its own chip design.
Partnering with TSMC for manufacturing and setting a target for 2026, OpenAI is taking careful steps toward in-house chip production but might face timeline shifts.
Nvidia still commands 80% of the market, yet OpenAI's adoption of AMD's MI300X chips through Microsoft Azure introduces a new contender in the AI chip arena.
This strategic move aligns OpenAI with tech giants like Amazon, Meta, and Microsoft, each developing custom chips to control costs and meet rising AI demands.
Photo Source- Jernej Furman (CC BY 2.0)
With projected revenue of $3.7 billion but losses exceeding $5 billion, OpenAI's expanding chip strategy could mark a critical shift toward reducing expenses.
Will OpenAI’s pursuit of in-house chip design provide the resilience it needs to support ChatGPT and other AI products under mounting computational demands?