Advancing Robots and Autonomous Systems with Physical AI Open Models and Frameworks

Open source is fast becoming a cornerstone of innovation in robotics and autonomous systems. NVIDIA is advancing this collaborative development by opening access to key models, frameworks, and simulation tools, paving the way for more capable and safer autonomous systems.

Recently at CES, NVIDIA unveiled a suite of new open physical AI models and frameworks. These advancements aim to expedite the development of humanoids, autonomous vehicles, and various physical AI embodiments. The tools encompass the entire robotics development cycle—from high-fidelity world simulations to cloud-native orchestration and edge deployment—offering developers a comprehensive toolkit to build autonomous systems capable of reasoning, learning, and acting in the real world.

OpenUSD plays a crucial role by standardizing how 3D data is managed across these physical AI tools. This framework allows developers to create accurate digital twins that can be effortlessly transitioned from simulation to deployment. The NVIDIA Omniverse libraries, built upon OpenUSD, serve as the foundation for high-fidelity simulations that support this entire ecosystem.
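
To give a feel for how OpenUSD describes a scene, here is a minimal sketch using the open source pxr Python bindings to author a stage with a robot transform and a placeholder mesh. The prim paths and file name are arbitrary examples for illustration, not part of NVIDIA's tooling.

```python
# Minimal OpenUSD authoring sketch using the open source pxr bindings.
# Prim paths and the output file name are illustrative placeholders.
from pxr import Gf, Usd, UsdGeom

# Create a new USD stage that could back a simple digital-twin scene.
stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)
UsdGeom.SetStageMetersPerUnit(stage, 1.0)

# A transform prim for a robot, with a cube standing in for its geometry.
robot = UsdGeom.Xform.Define(stage, "/World/Robot")
body = UsdGeom.Cube.Define(stage, "/World/Robot/Body")
body.GetSizeAttr().Set(0.5)

# Position the robot in the scene; simulators and renderers read these same
# attributes, which is what makes the sim-to-deployment handoff portable.
robot.AddTranslateOp().Set(Gf.Vec3d(2.0, 0.0, 0.0))

stage.GetRootLayer().Save()
```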

From Labs to the Show Floor

During CES 2026, developers took the NVIDIA physical AI stack beyond the laboratory and onto the show floor, showcasing machines ranging from heavy equipment and factory assistants to social robots.

This stack leverages NVIDIA Cosmos world models and innovative tools like the new Isaac Lab-Arena for policy evaluation. It features the NVIDIA Alpamayo open portfolio of AI models, simulation frameworks, and datasets tailored for autonomous vehicles, along with the NVIDIA OSMO framework for orchestrating training across diverse computing environments.

Caterpillar’s Cat AI Assistant, powered by NVIDIA Nemotron models and operating on the NVIDIA Jetson Thor edge AI module, introduces natural language interaction directly within heavy vehicle cabs. Operators can ask “Hey Cat” questions, receive step-by-step guidance, and adjust safety settings through voice commands.
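
The wake-word-to-action flow described above can be pictured as a small loop running on the edge device. The sketch below is purely illustrative: the transcribe, ask_assistant, and settings-handling helpers are hypothetical stubs, not Caterpillar or NVIDIA APIs, and the assistant's actual architecture is not public.

```python
# Illustrative sketch of a wake-word -> speech-to-text -> LLM -> action loop.
# All helpers here are hypothetical stubs; they are NOT Caterpillar or NVIDIA APIs.
from dataclasses import dataclass, field

WAKE_WORD = "hey cat"

@dataclass
class AssistantReply:
    text: str                                   # spoken/displayed guidance for the operator
    setting_changes: dict = field(default_factory=dict)  # e.g. {"backup_alarm_volume": "high"}

def transcribe(audio_chunk: bytes) -> str:
    """Stub: a real system would run on-device speech recognition here."""
    return "hey cat, how do I calibrate the bucket position?"

def ask_assistant(utterance: str) -> AssistantReply:
    """Stub: a real system would query an on-device language model here."""
    return AssistantReply(
        text="Open the implement menu, select 'Calibrate bucket', and follow the prompts."
    )

def handle_audio(audio_chunk: bytes) -> None:
    utterance = transcribe(audio_chunk)
    if not utterance.lower().startswith(WAKE_WORD):
        return  # ignore speech that does not address the assistant
    reply = ask_assistant(utterance)
    print("Assistant:", reply.text)
    for setting, value in reply.setting_changes.items():
        print(f"Applying setting {setting} -> {value}")

handle_audio(b"\x00\x01")  # placeholder audio bytes
```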

Caterpillar is also utilizing Omniverse libraries to construct digital twins of factories and job sites. This enables simulations of layouts, traffic patterns, and multi-machine workflows, facilitating safer and more efficient AI-driven operations by incorporating insights back into equipment and fleet management.

LEM Surgical presented its Dynamis Robotic Surgical System, which has received FDA clearance for spinal procedure use. This advanced system relies on NVIDIA Jetson AGX Thor for computation, NVIDIA Holoscan for real-time sensor processing, and NVIDIA Isaac for Healthcare to train its autonomous arms. LEM Surgical also employs NVIDIA Cosmos Transfer for generating synthetic training data and utilizes the Isaac Sim framework for digital twin simulations. Designed as a dual-arm humanoid surgical robot, the Dynamis system enhances precision in spinal surgeries while alleviating the physical burden on surgeons.
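
To give a flavor of the real-time sensor processing layer mentioned here, below is a minimal NVIDIA Holoscan application skeleton following the SDK's standard replayer-to-visualizer pattern. The data directory and operator parameters are placeholders, and LEM Surgical's actual pipeline is of course far more involved.

```python
# Minimal Holoscan application sketch: replay a recorded video stream and
# visualize it. Paths and parameters are placeholders, not LEM Surgical's pipeline.
from holoscan.core import Application
from holoscan.operators import HolovizOp, VideoStreamReplayerOp

class SensorPipelineApp(Application):
    def compose(self):
        # Source operator: replays pre-recorded frames (stands in for a live sensor feed).
        source = VideoStreamReplayerOp(
            self,
            name="replayer",
            directory="/tmp/sample_data",   # placeholder path
            basename="surgical_video",      # placeholder recording name
            frame_rate=30,
            repeat=True,
        )
        # Sink operator: renders the frames with Holoviz.
        visualizer = HolovizOp(self, name="holoviz")
        # Connect the replayer output to the visualizer's receivers port.
        self.add_flow(source, visualizer, {("output", "receivers")})

if __name__ == "__main__":
    SensorPipelineApp().run()
```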

NEURA Robotics is pioneering cognitive robotics using an integrated NVIDIA stack. They utilize Isaac Sim and Isaac Lab for training their 4NE1 humanoid and MiPA service robots through OpenUSD-based digital twins before deployment in home and workplace settings. NEURA Robotics has also post-trained the Isaac GR00T foundation model using NVIDIA Isaac GR00T-Mimic.

Moreover, NEURA Robotics is partnering with SAP and NVIDIA to connect SAP’s Joule agents with their robots. They are using the Mega NVIDIA Omniverse Blueprint to refine robot behaviors in intricate, realistic operational scenarios before integrating those agents and behaviors into their Neuraverse ecosystem, as well as real-world fleets.

AgiBot uses NVIDIA Cosmos Predict 2 for world modeling within its Genie Envisioner (GE-Sim) platform, generating action-conditioned videos that remain visually and physically consistent. Combined with Isaac Sim and Isaac Lab, this data helps ensure that policies developed in Genie Envisioner transfer effectively to Genie2 humanoids and compact Jetson Thor-powered tabletop robots.
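
Conceptually, an action-conditioned world model rolls video frames forward from a history of observations and a candidate action sequence, and the resulting rollouts can be scored or reused as training data. The sketch below shows only that loop; the WorldModel class and its methods are hypothetical placeholders, not the Cosmos Predict 2 or Genie Envisioner APIs.

```python
# Schematic of an action-conditioned world-model rollout. The WorldModel class
# below is a hypothetical placeholder, not the Cosmos Predict 2 / GE-Sim API.
import numpy as np

class WorldModel:
    """Stand-in world model: predicts the next video frame from past frames and an action."""

    def predict_next_frame(self, frames: np.ndarray, action: np.ndarray) -> np.ndarray:
        # A real model would run a learned video-prediction network here.
        # This stub just nudges the last frame so the loop is runnable.
        return np.clip(frames[-1] + 0.01 * action.mean(), 0.0, 1.0)

def rollout(model: WorldModel, history: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Autoregressively generate frames conditioned on a sequence of actions."""
    frames = list(history)
    for action in actions:
        frames.append(model.predict_next_frame(np.stack(frames[-4:]), action))
    return np.stack(frames[len(history):])

# Example: 4 frames of 64x64 RGB history and 8 candidate arm actions (7-DoF).
history = np.random.rand(4, 64, 64, 3)
actions = np.random.uniform(-1.0, 1.0, size=(8, 7))
predicted = rollout(WorldModel(), history, actions)
print(predicted.shape)  # (8, 64, 64, 3)
```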

Intbot leverages the NVIDIA Cosmos Reason 2 model to grant its social robots an enhanced understanding of their environment. This model enables them to read simple social cues and safety contexts beyond basic scripted tasks. Through its Cosmos Cookbook recipe, Intbot illustrates how reasoning vision-language models can improve natural human-robot interactions.
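
As a rough sketch of how a reasoning vision-language model slots into a robot's perception loop, the example below sends a camera frame and a safety-oriented prompt to a model served behind an OpenAI-compatible endpoint (for instance via vLLM). The endpoint URL and model name are assumptions for illustration; consult the Cosmos Cookbook for the supported serving paths.

```python
# Query a reasoning vision-language model about a camera frame through an
# OpenAI-compatible endpoint. The URL and model name below are assumptions;
# see the Cosmos Cookbook for the officially supported serving setup.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # assumed local server

with open("living_room.jpg", "rb") as f:  # placeholder camera frame
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="nvidia/cosmos-reason",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                {"type": "text",
                 "text": "A person is standing close to the robot's planned path. "
                         "Is it safe to proceed, and how should the robot adjust?"},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```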

How Robotics Developers Are Using New Toolkits and Frameworks

NVIDIA has recently introduced Agile, an Isaac Lab-based engine for humanoid loco-manipulation. Agile provides a complete sim-to-real workflow for training robust reinforcement learning policies for platforms such as the Unitree G1 and LimX Dynamics TRON.

Robotics developers can use Agile's built-in task configurations, Markov decision process (MDP) definitions, and training utilities to optimize their policies, then validate them in Isaac Lab before efficiently transferring locomotion and other learned behaviors to real robots.
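
To make the MDP-style task configuration concrete, here is a simplified sketch of how a loco-manipulation task could be parameterized. The dataclasses and field names are hypothetical stand-ins, not Agile's or Isaac Lab's actual configuration classes.

```python
# Simplified, hypothetical sketch of a loco-manipulation task expressed as an
# MDP configuration. These dataclasses are illustrative stand-ins, not Agile's
# or Isaac Lab's actual configuration classes.
from dataclasses import dataclass, field

@dataclass
class RewardTerm:
    name: str
    weight: float  # positive terms encourage behavior, negative terms penalize it

@dataclass
class LocoManipulationTaskCfg:
    robot: str = "unitree_g1"          # target platform (example value)
    episode_length_s: float = 20.0     # episode horizon in simulated seconds
    control_frequency_hz: float = 50.0
    observations: tuple = ("base_lin_vel", "joint_pos", "joint_vel", "object_pose")
    actions: str = "joint_position_targets"
    rewards: list = field(default_factory=lambda: [
        RewardTerm("track_base_velocity", 1.0),
        RewardTerm("reach_object", 0.5),
        RewardTerm("joint_torque_penalty", -0.01),
        RewardTerm("fall_penalty", -5.0),
    ])
    terminations: tuple = ("base_below_height", "time_out")

cfg = LocoManipulationTaskCfg()
print(cfg.robot, [r.name for r in cfg.rewards])
```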

Hugging Face and NVIDIA are bringing their robotics communities together by integrating NVIDIA Isaac GR00T N models and simulation frameworks into the LeRobot ecosystem. Developers can access Isaac GR00T N1.6 models and Isaac Lab-Arena directly within LeRobot, streamlining policy training and evaluation.
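
As a minimal illustration of the Hugging Face side of this workflow, the snippet below pulls a GR00T checkpoint from the Hub with huggingface_hub before handing it to a policy runner. The repository ID is an assumed example, and loading and running the policy itself goes through LeRobot's or Isaac's own tooling.

```python
# Download a policy checkpoint from the Hugging Face Hub before evaluation.
# The repository ID below is an assumed example, not a confirmed model name;
# loading and running the policy is handled by LeRobot / Isaac tooling.
from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(
    repo_id="nvidia/GR00T-N1.5-3B",   # assumed example repo ID
    allow_patterns=["*.json", "*.safetensors"],
)
print("Checkpoint downloaded to:", checkpoint_dir)
```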

Additionally, Hugging Face’s open-source Reachy 2 humanoid is now fully compatible with NVIDIA Jetson Thor, enabling direct on-robot deployment of advanced vision-language-action (VLA) models to boost real-world performance.

ROBOTIS, a prominent developer of smart servos and educational robotic kits, has established an open-source sim-to-real pipeline using NVIDIA Isaac technologies. This pipeline begins with high-fidelity data generation in Isaac Sim, scales training sets using GR00T-Mimic for augmentation, and then fine-tunes a VLA-based Isaac GR00T N model for direct hardware deployment—accelerating the shift from simulation to robust, real-world applications.
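
The three stages of such a pipeline (collect in simulation, augment, fine-tune) can be summarized as below. The stage functions are hypothetical placeholders that stand in for the Isaac Sim, GR00T-Mimic, and GR00T N fine-tuning steps rather than real APIs.

```python
# High-level outline of a sim-to-real pipeline: collect demos in simulation,
# augment them, then fine-tune a VLA policy. Each stage function is a
# hypothetical placeholder for the corresponding Isaac Sim / GR00T tooling.
from pathlib import Path

def collect_demonstrations(scene: str, num_episodes: int, out_dir: Path) -> Path:
    """Stage 1 (placeholder): record teleoperated or scripted demos in simulation."""
    out_dir.mkdir(parents=True, exist_ok=True)
    print(f"[collect] {num_episodes} episodes in scene '{scene}' -> {out_dir}")
    return out_dir

def augment_demonstrations(demo_dir: Path, multiplier: int, out_dir: Path) -> Path:
    """Stage 2 (placeholder): expand the dataset with trajectory augmentation."""
    out_dir.mkdir(parents=True, exist_ok=True)
    print(f"[augment] expanding {demo_dir} by {multiplier}x -> {out_dir}")
    return out_dir

def finetune_policy(dataset_dir: Path, base_checkpoint: str, out_dir: Path) -> Path:
    """Stage 3 (placeholder): fine-tune a vision-language-action model on the data."""
    out_dir.mkdir(parents=True, exist_ok=True)
    print(f"[finetune] {base_checkpoint} on {dataset_dir} -> {out_dir}")
    return out_dir

if __name__ == "__main__":
    demos = collect_demonstrations("tabletop_pick_place", 100, Path("data/raw"))
    dataset = augment_demonstrations(demos, 10, Path("data/augmented"))
    finetune_policy(dataset, "groot-n-base", Path("checkpoints/finetuned"))
```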

Get Plugged In

Discover more about OpenUSD and robotics development by checking out these resources:

  • Read this technical blog to learn about developing versatile humanoid capabilities using NVIDIA Isaac and GR00T N1.6.
  • Read this technical blog for guidance on evaluating generalist robot policies in simulation with NVIDIA Isaac Lab-Arena.
  • Learn about post-training Isaac GR00T through this two-part video tutorial.
  • Watch NVIDIA founder and CEO Jensen Huang’s special CES presentation.
  • Enhance your skills in robotics development with the self-paced robotics learning path.
  • Participate in the Cosmos Cookoff, a hands-on challenge where developers utilize Cosmos Reason to elevate their robotics, autonomous systems, and vision AI workflows.
