XR And AI Dominate The Future Of Graphics At Siggraph 2025

Meta’s Boba and Tiramisu XR headsets (image credit: Meta)
Siggraph 2025 was back in Vancouver, British Columbia, this year, with celebrations for the 30th anniversary of Toy Story and 20 years of real-time rendering in video games. While Siggraph takes time to appreciate the past, it is even more about looking to the future, and for many years it has served as a reliable indicator of where the graphics industry is headed. Siggraph is very much a research-oriented conference where some of the brightest minds in the world share their latest work on a multitude of graphics-related topics. For example, Adobe Research shared more than 25 published papers at Siggraph 2025, with most of them touching on AI in one way or another.
Nvidia’s New GPUs and Physical AI Models
Nvidia had a three-day program at Siggraph 2025, with dozens of presentations and whole days dedicated to rendering and the OpenUSD framework. While OpenUSD has been a key topic for Nvidia for multiple years, this year had a stronger focus on new GPUs and how they deliver new compute and AI capabilities for professional users. Specifically, Nvidia announced new RTX PRO servers with Blackwell GPUs. The RTX PRO 6000 GPUs announced earlier this year at GTC 2025 were aimed primarily at workstation applications; that expanded at Cadence Live, with the Millennium M2000 offering both HGX B200 and RTX PRO 6000 GPUs. Compared with the previous-generation L40S GPU (based on the Ada Lovelace architecture), the new servers are claimed to deliver improvements ranging from 4x real-time rendering frame rates to 6x LLM inference throughput. Nvidia says that partners including Cisco, Dell Technologies, HPE, Lenovo and Supermicro will offer these systems.
In addition to the new RTX PRO 6000 servers, Nvidia also announced two workstation GPUs to round out the rest of the RTX PRO Blackwell series. The RTX PRO 4000 Blackwell SFF edition is a two-slot card with 24GB of VRAM and 770 AI TOPS that requires only 70 watts of power. That should mean these GPUs don’t need external power connectors, since the PCIe slot alone provides 75 watts. The RTX PRO 4000 is $1,500, while the RTX PRO 2000 is a 545-TOPS GPU with 16GB of VRAM and the same 70-watt TDP for $700. These are the Blackwell replacements for the Ada-generation RTX 4000 and RTX 2000 GPUs.
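To make the power math concrete, here is a minimal sketch of the reasoning that a sub-75-watt board can run off slot power alone; the wattages come from the figures above, and the 75-watt figure is the standard power budget of a PCIe x16 slot.

```python
# Quick check of the power-budget reasoning above: a PCIe x16 slot can supply
# up to 75 W on its own, so a card whose board power stays at or below that
# figure needs no auxiliary power connector.
PCIE_SLOT_POWER_W = 75  # slot power budget per the PCIe spec

cards = {
    "RTX PRO 4000 Blackwell SFF": 70,  # watts, per the figures above
    "RTX PRO 2000 Blackwell": 70,
}

for name, tdp in cards.items():
    verdict = "needs auxiliary power" if tdp > PCIE_SLOT_POWER_W else "slot power is enough"
    print(f"{name}: {tdp} W TDP -> {verdict}")
```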
Nvidia also announced new Omniverse libraries and Cosmos Physical AI models to accelerate the training and deployment of robots with more physically accurate modeling. Nvidia’s Isaac Sim combines the new Omniverse NuRec libraries with Gaussian splats to quickly generate 3-D simulations that mimic the real world. Nvidia also worked with Google DeepMind to create USD integrations for MuJoCo’s file formats so that those scenes can be brought into Omniverse seamlessly. The new Cosmos models continue the world-generation theme. Cosmos Transfer1 enables the creation of photoreal, controllable synthetic data, drawing from multiple video sources to create a synthetic 3-D environment for training. Meanwhile, Cosmos Predict2 is an image-to-future-world-state model designed to predict movement and actions for simulations. Nvidia also announced Cosmos Reason 7B, its state-of-the-art reasoning vision-language model for a multitude of on-device AI applications. These applications include physical AI data curation and annotation, robot reasoning and video analytics for AI agents.
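For readers unfamiliar with what OpenUSD authoring actually looks like, below is a minimal sketch using the open-source pxr Python bindings (the usd-core package). It is generic USD rather than Nvidia’s NuRec, Isaac Sim or MuJoCo integration APIs; the file path and prim names are hypothetical, and the point is simply to show the kind of layered, portable scene description those integrations exchange.

```python
# A minimal OpenUSD authoring sketch using the open-source pxr Python API.
# This is plain USD, not Nvidia- or DeepMind-specific tooling.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("robot_cell.usda")

# A transformable root prim for the simulated work cell.
world = UsdGeom.Xform.Define(stage, "/World")

# Simple placeholder geometry standing in for captured or simulated content.
part = UsdGeom.Cube.Define(stage, "/World/Part")
part.AddTranslateOp().Set((0.0, 0.0, 0.5))

target = UsdGeom.Sphere.Define(stage, "/World/SensorTarget")
target.GetRadiusAttr().Set(0.1)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```

The appeal for simulation pipelines is that the same .usda layer can be referenced, composed and overridden by other tools without any single application owning the whole scene.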
Meta’s New Prototype Headsets
Meta always has something interesting to show off from its Reality Labs research division at Siggraph. This year was no different, with the Tiramisu and Boba 3 headsets showing the direction Meta’s research is headed. Tiramisu’s goal is to create hyper-realistic VR with a resolution more than triple that of the Quest 3 and 14 times the brightness.
Meta’s Tiramisu VR headset (image credit: Meta)
While this headset is far from commercially ready, it does show what’s technically possible and where VR could go in the future in terms of image quality and brightness. Meta demonstrated the headset using Unreal Engine 5 with help from Nvidia’s DLSS 3, which offset some of the compute cost of rendering at such a high resolution. One thing to note is that Meta says this iteration of the headset has a very limited 33 x 33-degree field of view, which is considerably narrower than most headsets today.
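A quick way to see why a narrow field of view helps Tiramisu chase hyper-realistic image quality is the pixels-per-degree arithmetic sketched below; the panel width used here is a hypothetical placeholder rather than Meta’s published spec.

```python
# Back-of-the-envelope angular resolution: pixels per degree (PPD) is simply
# horizontal pixels divided by horizontal field of view. The 2000-pixel panel
# width is a placeholder; the point is that the same pixel count spread over a
# narrow 33-degree FoV yields roughly 3x the angular resolution of a wide-FoV
# headset.
def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    return h_pixels / h_fov_deg

wide = pixels_per_degree(h_pixels=2000, h_fov_deg=100)   # ~20 PPD, wide-FoV headset
narrow = pixels_per_degree(h_pixels=2000, h_fov_deg=33)  # ~61 PPD, Tiramisu-like FoV
print(f"wide FoV: {wide:.0f} PPD, narrow FoV: {narrow:.0f} PPD ({narrow / wide:.1f}x)")
```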
The Boba 1, Boba 2 and Boba 3 headsets (image credit: Meta)
Boba 3, on the other hand, takes field of view to the opposite extreme, offering a 200-degree FoV. This is considerably wider than the roughly 110-degree FoV that most consumer headsets offer. The 200-degree diagonal FoV combines a 180-degree horizontal FoV with a 120-degree vertical FoV, which Meta compares against the Quest 3’s 110 and 96 degrees, respectively. Meta claims that Boba 3’s FoV covers roughly 90% of the human visual field, although much of that coverage isn’t at full resolution. There is also a VR version of the Boba 3 prototype, which weighs considerably less (660g) than the standard Boba 3 (840g) and even the Quest 3 (698g). Both Boba 3 headsets and Tiramisu were demoed at Meta’s booth during Siggraph 2025.
Arm Leans Into Neural Rendering For 2026
Arm has been aggressively leveling up the graphics capabilities of its GPUs this year. The company announced Arm ASR earlier this year to improve image quality in games and other graphics workloads, and it is now adding neural technology to the portfolio in the form of its Neural Super Sampling feature. This kind of feature has been common in the desktop space for quite a while, as Nvidia and AMD have been through multiple generations of their own neural supersamplers, but it is relatively new to mobile, where it enables much lower GPU workloads. Arm is claiming as much as a 50% reduction in GPU workload by rendering at a lower resolution and then supersampling up to the native resolution. This is in line with competitors’ claims for their implementations of AI-accelerated supersampling.
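The 50% figure is easy to sanity-check with the pixel-count arithmetic below. This is a rough sketch of where the savings come from, not Arm’s measurement methodology, and it ignores geometry work and the cost of the upscaler itself.

```python
# Pixel-shading work scales with the number of pixels shaded, so rendering at
# a lower internal resolution and upscaling to native cuts that work roughly
# by the square of the linear render scale. Treat this as an upper bound on
# the real-world saving.
def pixel_work_fraction(render_scale: float) -> float:
    """Fraction of native-resolution pixel-shading work at a given linear scale."""
    return render_scale ** 2

for scale in (1.0, 0.77, 0.707, 0.5):
    frac = pixel_work_fraction(scale)
    print(f"render scale {scale:.3f} -> {frac * 100:.0f}% of native pixel work "
          f"({(1 - frac) * 100:.0f}% saved)")
```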
Arm also announced its own frame generation technology, called Neural Frame Rate Upscaling, a newer capability for the industry that offers the chance to save power while maintaining high frame rates. These neural capabilities are slated to be built in at the hardware level in Arm’s next-generation GPUs for 2026 and are set to come with ML extensions for the Vulkan graphics API as well. Arm will also offer a Neural Graphics Development Kit for those not using Vulkan.
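The appeal of frame generation is similar arithmetic on the time axis. The sketch below shows the idealized case where one generated frame is inserted between each pair of rendered frames; the function and numbers are illustrative, not a description of Arm’s pipeline, and they ignore the cost and latency of the generation pass.

```python
# Idealized frame-generation math: if the GPU renders at `rendered_fps` and
# one generated frame is inserted per rendered frame, the presented frame
# rate roughly doubles while rendering cost stays flat (generation overhead
# is ignored here).
def presented_fps(rendered_fps: float, generated_per_rendered: int = 1) -> float:
    return rendered_fps * (1 + generated_per_rendered)

for fps in (30, 45, 60):
    print(f"{fps} rendered fps -> ~{presented_fps(fps):.0f} presented fps")
```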
Khronos Expands glTF Into Geospatial Gaussian Splats
The Khronos Group is one of the most important standards bodies in the world of graphics. Much like many other standards bodies, it is completely invisible to most people, yet its work is absolutely critical for the future growth of the industry. The Khronos consortium has developed a portable, efficient 3-D format called glTF, short for Graphics Library Transmission Format, that enables cross-platform 3-D experiences and assets with minimal overhead. The Khronos Group announced at Siggraph 2025 that it would be partnering with the Open Geospatial Consortium, Niantic Spatial, Cesium and Esri to integrate geospatial Gaussian splats into the glTF 3-D asset format standard. Gaussian splats have become a popular AI-accelerated technique for generating 3-D models of people, places and objects at minimal cost.
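To show why glTF is an attractive container for this kind of data, here is a minimal glTF 2.0 asset assembled as plain JSON in Python: a compact index of scenes, nodes, meshes and accessors pointing into binary buffers. The extension name and file names are hypothetical placeholders; the actual Gaussian-splat extension that Khronos and its partners define may look quite different.

```python
# A minimal glTF 2.0 asset skeleton: JSON metadata that indexes into binary
# buffers. A point-cloud-style mesh (mode 0 = POINTS) stands in for splat data.
import json

gltf = {
    "asset": {"version": "2.0", "generator": "example-sketch"},
    "scene": 0,
    "scenes": [{"nodes": [0]}],
    "nodes": [{"name": "splat_cloud_placeholder", "mesh": 0}],
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}, "mode": 0}]}],
    "accessors": [{"bufferView": 0, "componentType": 5126,  # 5126 = FLOAT
                   "count": 3, "type": "VEC3"}],
    "bufferViews": [{"buffer": 0, "byteOffset": 0, "byteLength": 36}],
    "buffers": [{"uri": "points.bin", "byteLength": 36}],  # 3 points x 3 floats
    "extensionsUsed": ["EXT_gaussian_splatting_placeholder"],  # hypothetical name
}

with open("splats.gltf", "w") as f:
    json.dump(gltf, f, indent=2)
```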
This collaboration should broaden the application of glTF and enable even more industries to leverage the already popular format while also enabling faster and easier ways to create 3-D assets thanks to Gaussian splats. I believe that the 3-D geospatial industry will benefit greatly from the deep glTF infrastructure already built by its members for web and mobile, and even more so with further applications of the format.
XR And AI, Together Forever
Siggraph 2025 once again affirmed that there are countless intersections between XR and AI, whether we’re talking about geospatial Gaussian splats being integrated into glTF or new XR headsets that heavily depend on AI to make high-resolution rendering possible. Nvidia has clearly shown that it combines 3-D rendering with AI as a fundamental building block of its business, and that its GPUs are at the core of that vision. Even Arm has shown that its next generation of GPUs will be focused on neural techniques, confirming that we’re firmly in the era of neural graphics, whether in the cloud, on the PC or on mobile. Based on the research and new developments shown at Siggraph 2025, I expect to see even more novel combinations of XR and AI in the months and years to come.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has (or has had) a paid business relationship with Adobe, Arm, Cadence, Cisco, Dell Technologies, Google, HPE, Lenovo, Meta and Nvidia.