Assembly Line Y
An OpenXR VR training simulator for Tesla Model Y interior assembly — featuring an 8-step enforced workflow, real-time AI coaching via Claude API, voice Q&A, and a performance rating system, built in Unity 6 for Meta Quest.
Structure
Two scenes, one complete workflow
Scene 01 — Lobby
Skill onboarding
Four progressive zones introduce the core interaction types — grab, socket, and scanner — before trainees reach the portal into the simulation. No guessing at controls mid-assembly.
Scene 02 — Simulation
Full assembly run
Trainees complete all 8 interior installation steps on a Tesla Model Y. Performance is tracked throughout and ARIA delivers a final rating — Rookie, Apprentice, Expert, or Master.
Assembly Sequence
8 steps, enforced by interaction logic
Steps with dependencies are strictly sequential. Steps where real-world order doesn't matter — like installing multiple seats or doors — allow any order within the group.
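The grouping logic above can be sketched in plain C#. This is an illustrative model, not the project's actual code: groups are strictly ordered, while steps inside a group may complete in any order.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the step sequencer: each inner list is a group.
// Groups are sequential; members within a group are unordered.
public class StepSequencer
{
    private readonly List<List<string>> groups;
    private readonly HashSet<string> completed = new HashSet<string>();
    private int groupIndex = 0;

    public StepSequencer(List<List<string>> groups) { this.groups = groups; }

    // A step is active if it belongs to the current group and isn't done yet.
    public bool IsActive(string step) =>
        groupIndex < groups.Count &&
        groups[groupIndex].Contains(step) &&
        !completed.Contains(step);

    public bool Complete(string step)
    {
        if (!IsActive(step)) return false;           // reject out-of-order steps
        completed.Add(step);
        if (groups[groupIndex].All(completed.Contains))
            groupIndex++;                            // advance when the group finishes
        return true;
    }
}
```

With this shape, "install four doors in any order" is one group of four steps, while "dashboard before center console" is two single-step groups.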
Why it matters
Spatial memory transfers where video doesn't
Factory assembly training has three real costs: production loss from errors on a live line, low knowledge retention from traditional video training, and experienced workers tied up in on-floor apprenticeship. VR moves errors into a consequence-free environment while preserving spatial memory — the procedural knowledge that actually transfers to the job. PwC research shows VR-trained employees retain skills at roughly 40% higher rates than video-based alternatives.
Design Decisions
Why the system is built the way it is
Decision 01
Sequential enforcement — lock the socket, not just the UI
Parts can only snap into the active step's socket. All other sockets stay disabled at the XRI layer, so skipping steps is physically impossible — not just discouraged by a warning message. Trainees internalize procedural dependencies through action, not instruction.
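A minimal sketch of that gating, assuming sockets are held in step order (component and field names are illustrative, not the project's actual code):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Only the active step's socket accepts parts; all others are switched off
// at the interactor level via XRSocketInteractor.socketActive, so a skipped
// step cannot physically snap.
public class SocketGate : MonoBehaviour
{
    [SerializeField] private XRSocketInteractor[] stepSockets; // ordered by step

    public void SetActiveStep(int stepIndex)
    {
        for (int i = 0; i < stepSockets.Length; i++)
            stepSockets[i].socketActive = (i == stepIndex); // XRI-level gating
    }
}
```

Disabling `socketActive` (rather than the GameObject) keeps the socket visuals and colliders available for attach previews while refusing selection.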
Decision 02
Orientation validation — reject wrong angles with haptics
Parts placed at more than 45° off-axis are refused by the OrientationSocket, which extends XRSocketInteractor with a Vector3.Angle() check. A haptic impulse fires instantly on rejection. Wrong placement is felt, not read — matching how real assembly errors surface.
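A sketch of how such a subclass might look. The 45° threshold and the axis compared are taken from the description above; the haptic wiring and member names are assumptions:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustrative OrientationSocket: extends XRSocketInteractor and refuses
// selection when the part is more than maxAngle off the socket's axis.
public class OrientationSocket : XRSocketInteractor
{
    [SerializeField] private float maxAngle = 45f;

    public override bool CanSelect(IXRSelectInteractable interactable)
    {
        if (!base.CanSelect(interactable)) return false;

        // Compare the part's up axis against the socket attach point's up axis.
        float angle = Vector3.Angle(attachTransform.up, interactable.transform.up);

        if (angle > maxAngle)
        {
            RejectWithHaptics(interactable);   // felt, not read
            return false;
        }
        return true;
    }

    private void RejectWithHaptics(IXRSelectInteractable interactable)
    {
        // Hypothetical hook: if a controller-based interactor is holding the
        // part, fire a short haptic impulse on its controller.
        foreach (var interactor in interactable.interactorsSelecting)
            if (interactor is XRBaseControllerInteractor c)
                c.xrController.SendHapticImpulse(0.7f, 0.15f);
    }
}
```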
Decision 03
Lock-in-place — installed parts become permanent
Once a part snaps correctly, its Rigidbody becomes kinematic and the grab component is disabled. This simulates the irreversibility of physical fastening and prevents trainees from pulling parts out to game the accuracy score.
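The lock-in-place step can be sketched as a listener on the socket's select event (names are assumptions, not the project's actual code):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Wire OnInstalled to the socket's selectEntered event in the Inspector.
// Once a part seats correctly, it becomes kinematic and ungrabbable.
public class LockOnInstall : MonoBehaviour
{
    public void OnInstalled(SelectEnterEventArgs args)
    {
        var part = args.interactableObject.transform.gameObject;

        if (part.TryGetComponent<Rigidbody>(out var rb))
            rb.isKinematic = true;                 // simulate permanent fastening

        if (part.TryGetComponent<XRGrabInteractable>(out var grab))
            grab.enabled = false;                  // part can no longer be pulled out
    }
}
```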
Decision 04
Sparse ARIA triggers — AI speaks only at meaningful moments
ARIA does not narrate continuously. It responds to four events: correct placement, incorrect placement, player-initiated voice query (Right A button), and game end. Event-driven scarcity keeps guidance signal high and immersion intact.
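The four-trigger surface described above could be modeled as a small event hub. This is a sketch; the enum values mirror the list above, but the API names are illustrative:

```csharp
using System;

// ARIA's sparse trigger surface: four events, no continuous narration.
public enum AriaTrigger { CorrectPlacement, IncorrectPlacement, VoiceQuery, GameEnd }

public static class AriaEvents
{
    public static event Action<AriaTrigger, string> OnTrigger;

    // Gameplay code raises this only at the four meaningful moments;
    // the AI coaching layer subscribes and decides what to say.
    public static void Raise(AriaTrigger trigger, string context = "") =>
        OnTrigger?.Invoke(trigger, context);
}
```

Keeping the trigger set closed is the design point: any new chatter has to justify a new enum value.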
Tech Stack
What it's built with
Reflection
What this project demonstrates
Assembly Line Y shows how I think about XR products that span interaction design, systems architecture, and AI integration. The core challenge wasn't building a VR experience — it was encoding real procedural knowledge into interaction logic so the simulation teaches correct habits, not just correct clicks. Writing a Claude API HTTP client from scratch for Android, designing ARIA as a sparse-trigger event system rather than a chatbot, and building orientation-aware sockets that fail gracefully are each design decisions as much as technical ones.
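For context, a Unity-native Claude call might look like the coroutine below. The endpoint and headers follow Anthropic's public Messages API; the model name, payload shape, and error handling are illustrative assumptions, not the project's actual client:

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of a Claude Messages API call via UnityWebRequest,
// which works on Android/Quest where some .NET HTTP stacks do not.
public class ClaudeClient : MonoBehaviour
{
    [SerializeField] private string apiKey;

    public IEnumerator Ask(string prompt, System.Action<string> onDone)
    {
        string body = JsonUtility.ToJson(new Request {
            model = "claude-sonnet-4-5",           // placeholder model name
            max_tokens = 256,
            messages = new[] { new Message { role = "user", content = prompt } }
        });

        using var req = new UnityWebRequest("https://api.anthropic.com/v1/messages", "POST");
        req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
        req.downloadHandler = new DownloadHandlerBuffer();
        req.SetRequestHeader("x-api-key", apiKey);
        req.SetRequestHeader("anthropic-version", "2023-06-01");
        req.SetRequestHeader("content-type", "application/json");

        yield return req.SendWebRequest();
        onDone(req.result == UnityWebRequest.Result.Success
            ? req.downloadHandler.text
            : null);                               // caller handles failure
    }

    [System.Serializable] class Request { public string model; public int max_tokens; public Message[] messages; }
    [System.Serializable] class Message { public string role; public string content; }
}
```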