NVIDIA has secured Marvell Technology as the first external semiconductor company to integrate its proprietary NVLink interconnect technology, marking a strategic expansion of its AI infrastructure ecosystem beyond internally developed components.
The partnership, announced this week, enables Marvell to design custom silicon incorporating NVLink (NVIDIA’s high-bandwidth chip-to-chip communication protocol), allowing cloud providers and enterprise customers to build AI systems that combine NVIDIA GPUs with Marvell’s networking and data infrastructure processors. The collaboration marks the first time NVIDIA has licensed NVLink to an external chip designer since the technology was first announced in 2014.
NVLink provides substantially higher bandwidth than traditional PCIe connections, delivering up to 900 gigabytes per second of bidirectional throughput in its latest iteration, versus roughly 128 gigabytes per second of bidirectional throughput for a PCIe 5.0 x16 link. This performance advantage has been critical to NVIDIA’s dominance in AI training workloads, where moving data between processors often creates bottlenecks that limit overall system performance.
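To put those figures in perspective, consider a back-of-the-envelope calculation in Python. The bandwidth numbers below are illustrative round figures (the 900 GB/s quoted above for NVLink, and roughly 128 GB/s bidirectional for a PCIe 5.0 x16 link), and the 140 GB payload is a hypothetical stand-in for the fp16 weights of a 70-billion-parameter model; none of these parameters come from the companies’ announcement.

```python
# Back-of-the-envelope comparison of interconnect transfer times.
# Illustrative round figures only: 900 GB/s matches the NVLink number
# quoted above; ~128 GB/s approximates a bidirectional PCIe 5.0 x16 link.

NVLINK_GBPS = 900.0   # bidirectional, latest NVLink iteration
PCIE5_GBPS = 128.0    # bidirectional, PCIe 5.0 x16 (approximate)

# Hypothetical payload: fp16 weights of a 70B-parameter model.
params = 70e9
bytes_per_param = 2                           # fp16
payload_gb = params * bytes_per_param / 1e9   # 140 GB

for name, gbps in [("NVLink", NVLINK_GBPS), ("PCIe 5.0 x16", PCIE5_GBPS)]:
    print(f"{name:>12}: {payload_gb / gbps:.2f} s to move {payload_gb:.0f} GB")
#       NVLink: 0.16 s to move 140 GB
# PCIe 5.0 x16: 1.09 s to move 140 GB
```

A roughly sevenfold gap of this kind is why tightly coupled training jobs, which exchange data between processors at every step, saturate PCIe long before they saturate NVLink.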
According to NVIDIA’s announcement, Marvell will incorporate NVLink into its data processing units (DPUs) and custom silicon offerings, targeting hyperscale cloud operators seeking to optimise AI infrastructure costs whilst maintaining performance. The integration allows system designers to offload networking, storage, and security functions from host processors to specialised Marvell silicon without sacrificing the low-latency communication required for distributed AI training.
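For a sense of the NVLink tooling that already exists on the NVIDIA side, the sketch below uses the NVML Python bindings (`pynvml`, distributed as the `nvidia-ml-py` package) to report which NVLink links are active on each GPU. This illustrates current NVLink introspection only, not the Marvell integration, whose interfaces have not been disclosed; running it requires an NVIDIA driver and NVLink-capable hardware.

```python
# List each GPU's active NVLink links via NVML (pip install nvidia-ml-py).
# On GPUs without NVLink, the per-link query raises an NVMLError.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        active = []
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
            except pynvml.NVMLError:
                break  # link index unsupported on this device
            if state == pynvml.NVML_FEATURE_ENABLED:
                active.append(link)
        print(f"GPU {i} ({name}): active NVLink links: {active or 'none'}")
finally:
    pynvml.nvmlShutdown()
```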
The business implications extend beyond the immediate technical collaboration. NVIDIA gains a strategic ally in Marvell’s existing customer relationships with major cloud providers, whilst reducing pressure to develop every component of AI systems internally. For Marvell, NVLink access provides differentiation in the competitive custom silicon market, where rivals including Broadcom and Intel compete for hyperscaler design wins.
Cloud infrastructure operators stand to benefit most directly. Microsoft, Amazon Web Services, and Google Cloud have invested billions in custom AI accelerators to reduce dependence on NVIDIA’s premium-priced GPUs. NVLink-enabled Marvell chips could allow these providers to build hybrid architectures that preserve compatibility with NVIDIA’s CUDA software ecosystem whilst incorporating cost-optimised components for specific workloads.
The partnership also signals NVIDIA’s response to emerging competitive threats. As AI workloads diversify beyond pure training towards inference and edge deployment, the company faces pressure from specialised chip designers offering lower-cost alternatives. By opening NVLink to partners, NVIDIA positions its interconnect as an industry standard rather than a proprietary advantage—a strategy reminiscent of ARM’s licensing model in mobile processors.
Industry analysts note the timing coincides with growing enterprise demand for AI infrastructure flexibility. Whilst NVIDIA’s integrated GPU solutions dominate current deployments, customers increasingly seek modular architectures that avoid vendor lock-in. Marvell’s participation in the NVLink ecosystem provides an alternative path that maintains NVIDIA compatibility whilst enabling customisation.
The financial stakes are substantial. The AI accelerator market reached approximately $45 billion in 2024, with NVIDIA capturing an estimated 80-90% share. Even modest erosion of that dominance represents billions in potential revenue for competitors. Marvell’s market capitalisation has grown 40% over the past year, driven partly by custom silicon design wins at major cloud providers.
Technical implementation details remain limited. Neither company has disclosed licensing terms, royalty structures, or specific product timelines. NVIDIA’s announcement indicated that the first Marvell products incorporating NVLink would target 2025 availability, aligning with next-generation AI infrastructure deployments at hyperscale operators.
The partnership’s success will depend on execution across multiple dimensions: Marvell’s ability to deliver competitive silicon designs, cloud providers’ willingness to adopt hybrid architectures, and software developers’ capacity to optimise applications for heterogeneous systems. Early deployments will likely focus on inference workloads and data preprocessing tasks where specialised processors offer clear cost advantages over general-purpose GPUs.
Market observers will watch whether additional semiconductor manufacturers follow Marvell’s lead in adopting NVLink, potentially establishing it as a de facto standard for AI system interconnects. NVIDIA’s willingness to expand licensing could indicate either confidence in its ecosystem’s stickiness or recognition that proprietary control carries strategic risks as the AI infrastructure market matures.