Palantir and NVIDIA Launch Sovereign AI OS With 8 Blackwell Ultra GPUs — “Nations Can Turn Data Into Intelligence”

Sunita Somvanshi

Diagram of the NVIDIA Blackwell Ultra GPU chip showing its internal architecture and key compute components.
AI Infrastructure · March 12, 2026  |  Palantir + NVIDIA
Sovereign AI OS · Reference Architecture

Your Data. Your Models.
Your Datacenter.

Palantir Technologies and NVIDIA have published a Sovereign AI OS Reference Architecture: a production-ready blueprint that takes organisations from GPU hardware procurement all the way to running live AI applications, while giving them total control over their data, AI models, and applications. The architecture, called AIOS-RA, runs Palantir's full software suite on NVIDIA Blackwell Ultra infrastructure across on-premise, edge, and sovereign cloud deployments.


What Is Inside AIOS-RA?

The stack comprises five layers, each with a distinct role and provider.

Layer 1 · NVIDIA Blackwell Ultra Hardware (NVIDIA)
The physical foundation. Each node runs eight NVIDIA Blackwell Ultra GPUs connected over Spectrum-X Ethernet networking for AI training and inference workloads.
Layer 2 · NVIDIA Software Stack (NVIDIA)
A software acceleration layer comprising CUDA-X libraries, the Nemotron open models, Magnum IO for high-throughput data movement, and the broader NVIDIA AI Enterprise suite.
Layer 3 · Palantir Compute Infrastructure (Palantir)
A hardened Kubernetes substrate that runs Foundry services, including Catalog, Build, and Multipass. This orchestration layer keeps Palantir's data-platform workloads running reliably on the hardware beneath it.
Layer 4 · Palantir AIP Platform (Palantir)
The AI application layer. According to Palantir's official AIP documentation, AIP (Artificial Intelligence Platform) enables the development of production-ready AI-powered workflows, agents, and LLM-powered functions built on top of the Ontology. AIP Hub is part of the qualified software suite.
Layer 5 · Unified Management: Rubix & Apollo (Palantir)
Together, Rubix and Apollo form the architecture's unified management plane. Rubix provides zero-trust Kubernetes management, while Apollo, Palantir's autonomous deployment and lifecycle-management system, keeps software consistent across geographically distributed and edge environments.

Four Types of Customers AIOS-RA Targets

The architecture is described as particularly critical for organisations fitting one or more of these profiles.

Existing GPU Infrastructure: Organisations that already own GPU hardware and can build on those investments; Palantir's Chief Architect specifically cited "building on many customers' existing investments."
Latency-Sensitive Workflows: Use cases where sending data to a remote cloud and waiting for a response is not viable; the on-premise and edge deployment options address this directly.
Data Sovereignty Requirements: Governments and regulated enterprises that require total control over their data, AI models, and applications within defined deployment environments.
High Geographic Distribution: Operations spread across regions or at the edge; the Rubix and Apollo management plane keeps distributed deployments consistent.

What the Customer Controls

The official announcement confirms that under this architecture, enterprises retain total control over their data, AI models, and applications.

Enterprise / Nation: Total Sovereign Control

Your Data (stays within your deployment): The architecture is designed for customer-controlled on-premise, edge, and sovereign cloud deployments; enterprises retain total control over their data.
Your Models (GPU-accelerated open-source AI): The stack is purpose-built to leverage GPU-accelerated open-source AI models and data-acceleration libraries, including the Nemotron open models in the NVIDIA software layer.
Your Apps (AIP + Foundry applications): Palantir's complete software suite, including AIP, Foundry, Apollo, Rubix, and AIP Hub, is tested and qualified to run on AIOS-RA.
Your Hardware (NVIDIA Blackwell Ultra): Built on NVIDIA Blackwell Ultra systems with eight GPUs per node and Spectrum-X Ethernet networking for AI training and inference.
Your Location (on-premise, edge, or sovereign cloud): Akshay Krishnaswamy's official statement confirms that "on-premise, edge, and sovereign cloud deployments" are all supported.
Your Updates (Apollo autonomous deployment): Apollo, Palantir's autonomous deployment and lifecycle-management system, manages software updates across distributed and edge environments.

In Their Own Words

Directly from the official press release.

“From our first deployment with the United States government and in every deployment since, our software has had to meet the moment in the most complex and sensitive environments where customers must maintain control. Together with NVIDIA — and building on many customers’ existing investments — we are proud to deliver a fully integrated AI operating system that is optimized for NVIDIA accelerated compute infrastructure and enables customers to realize the promise of on-premise, edge, and sovereign cloud deployments.”
Akshay Krishnaswamy  ·  Chief Architect, Palantir
“AI is redefining the infrastructure stack — demanding, latency-sensitive and data-sovereign environments require a full-stack architecture — built from silicon to systems to software. By combining Palantir’s sovereign AI OS reference architecture with NVIDIA AI infrastructure, industries and nations can turn data into intelligence with speed, efficiency, and trust.”
Justin Boitano  ·  VP Enterprise AI Platforms, NVIDIA
