Google Cloud commits billions to Mira Murati’s Thinking Machines Lab


Google Cloud has secured a multi-billion-dollar compute infrastructure partnership with Thinking Machines Lab, the AI startup founded by former OpenAI chief technology officer Mira Murati, according to sources familiar with the agreement, as reported by TechCrunch AI.

The deal, which represents one of Google Cloud’s largest AI infrastructure commitments to date, will provide Thinking Machines Lab with access to clusters of Nvidia’s latest GB300 chips alongside Google’s proprietary tensor processing units. The partnership deepens an existing relationship between the companies that began when Murati launched her venture in late 2024.

Thinking Machines Lab, which has maintained a deliberately low public profile since its founding, is developing what sources describe as next-generation reasoning systems intended to address current limitations in large language model capabilities. The startup has reportedly raised significant venture funding, though exact figures remain undisclosed.

The timing of this expanded partnership coincides with intensifying competition amongst cloud providers to secure relationships with well-funded AI laboratories. Microsoft’s exclusive arrangement with OpenAI and Amazon Web Services’ investments in Anthropic have established a pattern of hyperscale cloud providers tying themselves to frontier model developers through infrastructure agreements.

For Google Cloud, the deal represents a strategic opportunity to demonstrate its credentials in supporting cutting-edge AI research whilst generating substantial revenue from compute services. The division has historically trailed Amazon Web Services and Microsoft Azure in cloud market share, but has positioned itself as the platform of choice for machine learning workloads through its tensor processing units and AI-optimised networking infrastructure.

The inclusion of Nvidia’s GB300 chips—part of the company’s Blackwell architecture generation—signals the computational intensity of Thinking Machines Lab’s research agenda. These accelerators, which began shipping to select customers in early 2026, offer substantially improved performance for training large-scale models compared to previous generations.

Industry analysts note that multi-billion-dollar compute commitments of this scale typically span multiple years and include provisions for reserved capacity, priority access to new hardware, and technical support arrangements. Such deals provide cloud providers with predictable revenue whilst giving AI laboratories the infrastructure certainty required for long-term research programmes.

The partnership also reflects Murati’s continued influence in the AI sector following her departure from OpenAI in September 2024. Her technical leadership during the development of GPT-4 and DALL-E established her reputation amongst investors and potential partners, facilitating both fundraising and infrastructure negotiations for her new venture.

For competitors, the announcement underscores the challenge of securing relationships with elite AI talent and well-capitalised startups. The limited number of teams capable of training frontier models means that each major partnership narrows the field of potential customers for alternative cloud providers.

The deal’s structure may also influence how other AI laboratories approach infrastructure planning. Rather than building private data centres—an approach pursued by some well-funded competitors—Thinking Machines Lab’s commitment to cloud infrastructure suggests confidence in the performance and economics of rented compute at scale.

Market observers will be watching whether Thinking Machines Lab’s technical approach justifies the substantial infrastructure investment, and whether the partnership yields research breakthroughs that validate Google Cloud’s positioning in the AI infrastructure market. The startup’s progress will likely become more visible as it moves from research to deployment phases in the coming quarters.