Vulkan Schnee 0.0.1
High-performance rendering engine
VR Renderer

Overview

The VR renderer is a high-performance, GPU-driven rendering system designed specifically for virtual reality applications. It leverages modern graphics APIs (Vulkan) and VR runtime integration (OpenXR) to deliver efficient rendering for both eyes simultaneously with minimal CPU overhead.

The renderer uses a unified buffer architecture where all scene geometry is pre-loaded into large, shared GPU buffers, eliminating the need for frequent CPU-GPU synchronization and descriptor set updates. This approach enables true GPU-driven rendering where visibility determination and draw command generation happen entirely on the GPU.

OpenXR Integration

The VR renderer integrates with OpenXR, the industry-standard API for VR/AR development. OpenXR provides:

  • Runtime Abstraction: Seamless compatibility with multiple VR platforms (SteamVR, Oculus, Windows Mixed Reality)
  • Input Management: Standardized handling of VR controllers, hand tracking, and spatial anchors
  • Compositor Integration: Submits rendered eye images to the VR compositor, which applies lens distortion correction before display
  • Session Management: Lifecycle management of VR sessions including initialization, frame timing, and cleanup

The OpenXR integration handles:

  • View Configuration: Manages stereo rendering for left/right eyes with proper field-of-view and projection matrices
  • Frame Synchronization: Coordinates with VR runtime for frame pacing and presentation timing
  • Space Management: Handles reference spaces (local, stage, view) for proper object positioning
  • Action System: Provides input abstraction for controller interactions
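OpenXR reports each eye's field of view as four asymmetric half-angles (XrFovf-style: angleLeft/angleRight/angleUp/angleDown, in radians, with left and down negative). A minimal sketch of turning those angles into a Vulkan-style projection matrix (column-major, depth mapped to [0, 1]); the function name is illustrative, not engine API:

```cpp
#include <array>
#include <cmath>

// Illustrative: per-eye projection from OpenXR-style asymmetric FOV angles.
struct Fov { float angleLeft, angleRight, angleUp, angleDown; };

std::array<float, 16> projectionFromFov(const Fov& fov, float nearZ, float farZ) {
    const float tanL = std::tan(fov.angleLeft);
    const float tanR = std::tan(fov.angleRight);
    const float tanU = std::tan(fov.angleUp);
    const float tanD = std::tan(fov.angleDown);
    const float width  = tanR - tanL;   // horizontal extent per unit depth
    const float height = tanU - tanD;   // vertical extent per unit depth

    std::array<float, 16> m{};          // zero-initialized, column-major
    m[0]  = 2.0f / width;               // x scale
    m[5]  = 2.0f / height;              // y scale
    m[8]  = (tanR + tanL) / width;      // x off-center shift (asymmetric frustum)
    m[9]  = (tanU + tanD) / height;     // y off-center shift
    m[10] = farZ / (nearZ - farZ);      // maps eye-space depth to [0, 1]
    m[11] = -1.0f;                      // perspective divide by -z
    m[14] = (farZ * nearZ) / (nearZ - farZ);
    return m;
}
```

For a symmetric 90° FOV the off-center terms vanish and the x/y scales become 1, which is a quick sanity check; VR eyes are almost always asymmetric, which is why the shift terms m[8]/m[9] matter.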

Rendering Pipeline Architecture

The VR renderer implements a multi-stage GPU pipeline that processes scene data through several distinct phases:

Pipeline Stages

Shader Stage Details

1. Object Culling (Compute Shader)

Purpose: Eliminates objects completely outside both VR eye frustums

Inputs:

  • Per-object transformation matrices and bounding spheres
  • Dual-eye frustum planes (left/right eye view frustums)

Outputs:

  • List of potentially visible objects per graphics pipeline
  • Atomic counters tracking visible object counts

Algorithm:

  • Tests each object's bounding sphere against 12 frustum planes (6 planes × 2 eyes)
  • Objects passing all frustum tests are binned by graphics pipeline for further processing
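The test above can be sketched on the CPU (the real implementation is a compute shader; this C++ analogue only illustrates the math). A plane is stored as (nx, ny, nz, d) with the normal pointing into the frustum, so a sphere is fully behind a plane when its signed center distance is less than -radius; an object survives if it is inside at least one eye's frustum:

```cpp
#include <array>

// Illustrative sphere-vs-dual-frustum test: 6 planes per eye, 12 total.
struct Plane  { float nx, ny, nz, d; };
struct Sphere { float x, y, z, radius; };

bool sphereVisible(const Sphere& s, const std::array<Plane, 12>& planes) {
    for (int eye = 0; eye < 2; ++eye) {
        bool inside = true;
        for (int i = 0; i < 6; ++i) {
            const Plane& p = planes[eye * 6 + i];
            float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
            if (dist < -s.radius) { inside = false; break; }  // fully behind this plane
        }
        if (inside) return true;  // visible to at least one eye
    }
    return false;  // outside both eye frustums: cull
}
```

Rejecting an object only when it fails in both eyes is what makes the culling conservative: geometry visible to either eye is kept.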

2. Meshlet Unpacking (Compute Shader)

Purpose: Expands object visibility into individual meshlet visibility

Inputs:

  • Visible objects from culling stage
  • Meshlet metadata (vertex/triangle counts, offsets)

Outputs:

  • Per-meshlet visibility list for each object
  • Updated atomic counters per pipeline

Algorithm:

  • For each visible object, iterates through its meshlets
  • Maintains meshlet-to-object mapping for efficient processing
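A CPU analogue of the unpacking pass (struct and field names are assumptions for illustration, not the engine's actual types): each visible object expands into one entry per meshlet, recording the meshlet-to-object mapping so later stages can fetch the right transform, with the per-pipeline counter standing in for the shader's atomic add:

```cpp
#include <cstdint>
#include <vector>

// Illustrative types; names are assumptions, not engine API.
struct VisibleObject {
    uint32_t objectIndex;
    uint32_t firstMeshlet;   // offset into the unified meshlet buffer
    uint32_t meshletCount;
    uint32_t pipelineIndex;  // which graphics pipeline draws this object
};
struct MeshletRef {
    uint32_t meshletIndex;
    uint32_t objectIndex;    // mapping back to the owning object
};

std::vector<MeshletRef> unpackMeshlets(const std::vector<VisibleObject>& objects,
                                       std::vector<uint32_t>& perPipelineCounts) {
    std::vector<MeshletRef> out;
    for (const VisibleObject& obj : objects) {
        for (uint32_t m = 0; m < obj.meshletCount; ++m) {
            out.push_back({obj.firstMeshlet + m, obj.objectIndex});
            ++perPipelineCounts[obj.pipelineIndex];  // atomicAdd in the shader
        }
    }
    return out;
}
```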

3. Meshlet Culling (Compute Shader)

Purpose: Fine-grained culling at the meshlet level for maximum efficiency

Inputs:

  • Unpacked meshlet visibility data
  • Meshlet bounding data and frustum planes

Outputs:

  • Final list of visible meshlets per pipeline
  • Per-pipeline meshlet counters for draw dispatch

Algorithm:

  • Tests each meshlet against eye frustums
  • Builds final visibility lists organized by graphics pipeline
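The "organized by graphics pipeline" part can be sketched as follows (a CPU illustration under assumed names): an exclusive prefix sum over the per-pipeline counts gives each pipeline's base offset in a single shared buffer, and a running per-pipeline cursor (an atomic counter on the GPU) assigns each surviving meshlet a slot within its bin:

```cpp
#include <cstdint>
#include <vector>

// Illustrative binning: meshletPipeline[m] is the pipeline index of visible
// meshlet m; the result is all meshlet indices, partitioned by pipeline.
std::vector<uint32_t> binByPipeline(const std::vector<uint32_t>& meshletPipeline,
                                    uint32_t pipelineCount) {
    // exclusive prefix sum of counts -> base offset per pipeline partition
    std::vector<uint32_t> counts(pipelineCount, 0);
    for (uint32_t p : meshletPipeline) ++counts[p];
    std::vector<uint32_t> base(pipelineCount, 0);
    for (uint32_t i = 1; i < pipelineCount; ++i) base[i] = base[i - 1] + counts[i - 1];

    // scatter meshlet indices into their pipeline's partition
    const uint32_t n = static_cast<uint32_t>(meshletPipeline.size());
    std::vector<uint32_t> binned(n);
    std::vector<uint32_t> cursor = base;
    for (uint32_t m = 0; m < n; ++m)
        binned[cursor[meshletPipeline[m]]++] = m;  // atomicAdd picks the slot on the GPU
    return binned;
}
```

Keeping every pipeline's meshlets contiguous is what lets the next stage issue exactly one indirect draw per pipeline.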

4. Draw Command Preparation (Compute Shader)

Purpose: Generates GPU-executable draw commands from visibility data

Inputs:

  • Final meshlet counts per pipeline
  • Pipeline configuration data

Outputs:

  • Indirect draw command buffer for mesh shader dispatch
  • Properly sized workgroups for each pipeline

Algorithm:

  • Converts meshlet counts into vkCmdDrawMeshTasksIndirectEXT commands
  • Ensures optimal workgroup distribution across GPU cores
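A sketch of the conversion (the struct mirrors Vulkan's real VkDrawMeshTasksIndirectCommandEXT layout of three workgroup counts; the one-meshlet-per-task-workgroup mapping and the 2D split are illustrative assumptions, and a shader consuming the split must bounds-check against the real count):

```cpp
#include <cstdint>
#include <vector>

// Same memory layout as VkDrawMeshTasksIndirectCommandEXT.
struct DrawMeshTasksCommand { uint32_t groupCountX, groupCountY, groupCountZ; };

std::vector<DrawMeshTasksCommand> prepareDrawCommands(
        const std::vector<uint32_t>& meshletCountPerPipeline,
        uint32_t maxGroupCountX) {
    std::vector<DrawMeshTasksCommand> cmds;
    cmds.reserve(meshletCountPerPipeline.size());
    for (uint32_t count : meshletCountPerPipeline) {
        // split counts exceeding the device's max X dimension into a 2D grid
        uint32_t x = count < maxGroupCountX ? count : maxGroupCountX;
        uint32_t y = x ? (count + x - 1) / x : 0;  // ceil-divide; zero stays a no-op draw
        cmds.push_back({x, y, 1u});
    }
    return cmds;
}
```

Writing a zero-sized command for empty pipelines keeps the indirect buffer layout fixed, so the CPU can record one vkCmdDrawMeshTasksIndirectEXT per pipeline without knowing the visibility results.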

5. Mesh Shader Assembly (Graphics Shader)

Purpose: Transforms meshlets into rasterizable geometry

Inputs:

  • Unified vertex buffer (all scene vertices)
  • Unified meshlet buffer (vertex/triangle metadata)
  • Unified index buffer (triangle connectivity)
  • Per-object transformation matrices
  • Material and texture binding data

Outputs:

  • Transformed vertices in clip space
  • Texture coordinates and normals
  • Primitive assembly for rasterization

Algorithm:

  • Each mesh shader workgroup processes one meshlet
  • Fetches vertex data from unified buffers using meshlet offsets
  • Applies object transformations and generates triangle primitives
  • Outputs vertex attributes for fragment shading
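The index-fetch step can be illustrated on the CPU (the real code is a mesh shader; field names are assumptions, and transforms/attribute outputs are omitted for brevity). Meshlet-local indices are small enough to store as bytes, and adding the meshlet's vertex offset rebases them into the unified vertex buffer:

```cpp
#include <cstdint>
#include <vector>

// Illustrative meshlet metadata, as one record of the unified meshlet buffer.
struct Meshlet {
    uint32_t vertexOffset;    // first vertex in the unified vertex buffer
    uint32_t vertexCount;
    uint32_t triangleOffset;  // first index triple in the unified triangle buffer
    uint32_t triangleCount;
};

// Returns the meshlet's triangles as global vertex indices, three per triangle.
std::vector<uint32_t> assembleMeshlet(const Meshlet& m,
                                      const std::vector<uint8_t>& meshletIndices) {
    std::vector<uint32_t> triangles;
    triangles.reserve(m.triangleCount * 3);
    for (uint32_t t = 0; t < m.triangleCount * 3; ++t) {
        // local index (< vertexCount, fits in 8 bits) + offset = global index
        triangles.push_back(m.vertexOffset + meshletIndices[m.triangleOffset * 3 + t]);
    }
    return triangles;
}
```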

6. Fragment Shading (Graphics Shader)

Purpose: Computes final pixel colors and applies materials

Inputs:

  • Interpolated vertex data from mesh shader
  • Material properties and texture samplers
  • Lighting information (when applicable)

Outputs:

  • Final color values for each pixel
  • Depth values for depth testing

Variants:

  • Flat Color: Simple uniform color application
  • Textured: Samples from texture arrays using UV coordinates
  • Lightmapped: Combines albedo textures with precomputed lighting (WIP)

Buffer Architecture

Static Geometry Buffers

Created once at scene load; never modified during rendering:

  • VertexBuffer: All vertex positions, normals, UVs in a single buffer
  • MeshletBuffer: Per-meshlet metadata (counts, offsets, material indices)
  • MeshletTriangleBuffer: Triangle index data for all meshlets
  • MeshBuffer: Mesh-level organization (which meshlets belong to which mesh)
  • MeshPrimitiveBuffer: Material and texture assignments per primitive
  • TextureArray: Bindless texture array for all scene textures

Dynamic Per-Frame Buffers

Updated each frame by CPU:

  • PerObjectSSBO: Object world matrices, color multipliers, visibility flags
  • FrustumBuffer: Current frustum planes for both VR eyes
  • ViewProjectionBuffer: Combined view-projection matrices per eye
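A hedged sketch of what one PerObjectSSBO record might look like on the C++ side (field names and the exact layout are assumptions). The practical point is that GLSL std430 rules give mat4 and vec4 members 16-byte alignment, so the trailing flags word must be padded or the CPU and shader views of the array stride will disagree:

```cpp
#include <cstdint>

// Illustrative per-object record matching an assumed std430 shader block.
struct alignas(16) PerObjectData {
    float worldMatrix[16];    // column-major mat4, 64 bytes
    float colorMultiplier[4]; // vec4, 16 bytes
    uint32_t visibilityFlags;
    uint32_t _pad[3];         // pad to a 16-byte multiple so array strides match
};
static_assert(sizeof(PerObjectData) == 96, "layout must match the shader side");
```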

GPU-Generated Intermediate Buffers

Written by compute shaders during pipeline execution:

  • BinnedVisibleMeshletIndexBuffer: Visible meshlets organized by pipeline
  • MeshletCounterBuffer: Atomic counters tracking visible meshlets per pipeline
  • IndirectDrawBuffer: Final draw commands for mesh shader dispatch

Data Flow Summary

  1. CPU Preparation: Updates per-frame data (matrices, frustums)
  2. GPU Culling: Object → Meshlet → Final visibility determination
  3. Command Generation: Convert visibility counts to draw commands
  4. Geometry Assembly: Mesh shaders transform meshlets to triangles
  5. Pixel Shading: Fragment shaders compute final colors
  6. Compositing: OpenXR handles stereo composition and distortion correction

This architecture enables scalable rendering that can handle complex scenes while maintaining VR frame rates, with the GPU doing the heavy lifting of visibility determination and command generation.