AMD has partnered with electronics manufacturing services provider Celestica to launch Helios, an open-standards artificial intelligence infrastructure platform designed to provide enterprises with customisable alternatives to proprietary AI hardware ecosystems.
The collaboration, announced on 16 March 2026, positions AMD’s Instinct accelerators and EPYC processors within Celestica’s infrastructure designs, targeting organisations seeking flexibility in AI deployment without vendor lock-in. The platform aims to address growing enterprise demand for modular, standards-based AI compute infrastructure.
Helios represents AMD’s strategic push to expand its footprint in data centre AI workloads, a market where Nvidia currently commands an estimated 80-95% share of AI accelerator sales. By partnering with Celestica—a contract manufacturer with established relationships across telecommunications, cloud, and enterprise sectors—AMD gains access to customised deployment channels that bypass traditional server OEM constraints.
The platform emphasises open standards including the Ultra Accelerator Link interconnect and support for industry-standard software frameworks. This approach contrasts with Nvidia’s tightly integrated CUDA ecosystem, which has historically created switching costs that discourage customers from migrating to alternative hardware.
For enterprises, the business case centres on procurement flexibility and potential cost optimisation. Organisations can theoretically source components from multiple suppliers whilst maintaining interoperability, reducing dependency on single-vendor roadmaps. However, the practical advantage depends heavily on software maturity—AMD’s ROCm software stack still lags CUDA in library coverage and developer mindshare, particularly for specialised AI workloads.
Celestica benefits by positioning itself as a neutral infrastructure provider in the AI supply chain, potentially capturing margin from customisation services that traditional server manufacturers cannot easily provide. The company’s manufacturing capabilities span thermal management, power delivery, and system integration—critical competencies as AI accelerators push beyond 1,000-watt power envelopes.
The timing aligns with broader industry momentum towards disaggregated infrastructure. Hyperscalers including Microsoft and Meta have increasingly designed custom AI servers, whilst the Ultra Ethernet Consortium—backed by AMD, Intel, and others—seeks to establish an open, Ethernet-based alternative to Nvidia's InfiniBand networking for AI clusters, complementing the Ultra Accelerator Link's challenge to the proprietary NVLink scale-up interconnect.
Market implications extend beyond hardware sales. If open standards gain traction, software vendors may invest more heavily in hardware-agnostic AI frameworks, potentially eroding Nvidia’s moat. Conversely, if CUDA’s advantages prove insurmountable for cutting-edge models, Helios may find its market limited to inference workloads and cost-sensitive deployments rather than frontier AI training.
The platform’s commercial availability timeline remains unspecified in AMD’s announcement, as do pricing structures and initial customer commitments. These details will prove critical—previous open-standard initiatives in AI infrastructure have struggled to translate architectural advantages into market share without clear total-cost-of-ownership benefits.
Enterprise IT leaders should monitor several developments: third-party benchmarks comparing Helios configurations against Nvidia DGX systems for representative workloads, software ecosystem maturity particularly around large language model training frameworks, and whether major cloud providers adopt the platform for customer-facing services.
The partnership represents a tangible challenge to Nvidia’s infrastructure dominance, leveraging manufacturing scale and standards-based architecture to offer procurement alternatives. Whether technical openness translates to commercial adoption will depend on software parity and demonstrable economic advantages for production AI workloads.