Physical AI Is Here — And the Race to Own It Just Got Real

Something significant is happening in AI right now — and it’s not happening on a screen. Physical AI, the category of artificial intelligence that doesn’t just process data but actually moves, perceives, and acts in the real world, has crossed a threshold that most people haven’t noticed yet. What I find most telling isn’t any single product launch. It’s the fact that platform companies, semiconductor firms, manufacturing giants, and entire governments are all pivoting toward the same thing at the same time. When that kind of convergence happens, it usually means a technology has stopped being experimental and started becoming foundational.

What Physical AI Actually Means — And Why the Definition Matters

Physical AI refers to AI systems that interact with the physical world — robots, autonomous vehicles, adaptive industrial machines. These systems don’t just generate text or analyze images. They perceive their environment, make real-time decisions, and take action. Think of the difference between a calculator and a surgeon. Both process information. Only one reaches into the world and changes it.

Nvidia CEO Jensen Huang described it as “the ChatGPT moment for robotics” at CES in January. That framing is deliberate and worth unpacking. The ChatGPT analogy isn’t about hype cycles. It signals that a technology previously confined to research labs and controlled environments is now crossing into mainstream commercial deployment — the same crossing that happened with large language models in late 2022.

Why the West Is Building the Platform Layer, Not the Robot

The Western approach to physical AI is revealing. The companies investing most aggressively aren’t robotics companies in the traditional sense — they’re infrastructure companies. They want to own the platform that robotics runs on, not the hardware itself.

Nvidia has released new Cosmos and GR00T open models specifically designed for robot learning and physical reasoning. Alongside those, its Blackwell-powered Jetson T4000 module delivers four times greater energy efficiency for robotics computing — a critical factor when you’re deploying thousands of machines on a factory floor. Arm has carved out an entirely new Physical AI business unit focused on chip design for robotics and intelligent vehicles.

Google’s move is perhaps the most strategically telling. It brought its robotics software unit, Intrinsic, fully in-house — out of Alphabet’s experimental “Other Bets” division and into Google’s core operations. The internal analogy being discussed is Android. Android didn’t win the smartphone era by making the best phone. It won by becoming the layer that everything else ran on. That’s precisely what Google is attempting here: a vertically integrated physical AI stack combining DeepMind’s models, Intrinsic’s deployment software, and Google Cloud infrastructure.

China’s Approach Is Different — and Harder to Ignore

China’s physical AI story has a different texture entirely. At this year’s Spring Festival Gala — watched by hundreds of millions of viewers — humanoid robots from multiple Chinese startups performed kung fu routines, aerial flips, and choreographed dances live on national television. A year prior, similar robots were stumbling in demo videos. The speed of that progression is genuinely striking.

China accounted for over 80% of global humanoid robot installations in 2025 and more than half of the world’s industrial robots. But what makes this more than a market share story is the structural depth behind it. China controls roughly 70% of the global lidar sensor market and leads in harmonic reducer production — the precision gears that give robots their fluid movement. Hardware costs have been driven down at a pace that Western manufacturers are struggling to match.

The implication is that the West may be building the intelligence layer while China controls the physical components that make the machines work. That’s not a comfortable asymmetry for anyone thinking about long-term supply chain resilience.

The Enterprise Adoption Curve Is Already Bending

A Deloitte survey of more than 3,200 global business leaders found that 58% are already using physical AI in some capacity. That number rises to 80% when you include those with firm plans to deploy within the next two years. The question in most boardrooms has shifted from “should we adopt physical AI?” to “how fast should we move, and on whose platform should we build?”

Boston Dynamics’ Atlas humanoid robot is now operating autonomously inside Hyundai’s manufacturing facility in Georgia — not in a pilot program, but in live production. Siemens and Nvidia have announced plans for what they’re calling an Industrial AI Operating System, with ambitions to build the world’s first fully AI-driven adaptive manufacturing site. These aren’t announcements about the future. They’re descriptions of things that are already being built.

Physical AI at a Glance — Key Facts and Figures

| Metric / Development | Detail |
| --- | --- |
| Global humanoid robot installations (2025) | China accounted for over 80% |
| Enterprise adoption (Deloitte, 3,200+ leaders) | 58% already using; 80% plan to deploy within 2 years |
| China’s lidar market share | ~70% of global supply |
| Nvidia Jetson T4000 efficiency gain | 4x greater energy efficiency vs. previous gen |
| Google Intrinsic | Moved from Alphabet “Other Bets” into Google core operations |
| Siemens + Nvidia partnership | Building world’s first AI-driven adaptive manufacturing site |
| Boston Dynamics Atlas | Operating autonomously in Hyundai’s Georgia facility |

The Real Tension Nobody Is Talking About Enough

The framing of physical AI as a “race” between East and West is tempting but incomplete. The more interesting tension is between platform control and hardware control. If Google, Nvidia, and Arm own the software and silicon stack, but Chinese manufacturers control the sensors, actuators, and precision components that make robots physically functional, then neither side controls a complete stack on its own.

This is structurally similar to what happened in solar energy — where Western companies led on software and system design, but Chinese manufacturers captured the supply chain for panels and components. The lesson from solar is that hardware cost curves matter enormously, and they tend to favor whoever invests earliest in manufacturing scale.

What the Next 12–24 Months Actually Signal

Over the next two years, I expect the physical AI landscape to clarify around a few key fault lines. Platform consolidation will accelerate — manufacturers will not want to support five different robotics operating systems, and the Android-style winner-takes-most dynamic will begin to assert itself. Whoever locks in enterprise customers with developer tools, simulation environments, and cloud integration first will be very difficult to displace.

On the hardware side, expect Western governments to take a more active role in securing supply chains for critical robotics components — similar to what we’ve seen with semiconductor policy. The lidar and harmonic reducer dependencies are already being flagged in policy circles.

Perhaps most importantly, the definition of “AI company” is about to broaden significantly. In 12 months, some of the most important players in enterprise AI will be companies that make things move — not just companies that make things think.

If you’ve been watching AI primarily through the lens of language models and chatbots, now is the time to expand that frame. Physical AI is where the next decade of enterprise value is being built — and the foundations are being poured right now. I’d encourage you to explore our deeper coverage of agentic AI and enterprise automation to see how these threads connect into a single, much larger story.
