The research goal is to show location-appropriate views of a virtual human to multiple users in a mixed reality setting. In the existing mixed reality stage, a virtual human is projected onto a screen by a free-standing, stationary projector. If one user is interacting with the virtual human, the projected view of the virtual human is appropriate to that user’s viewing position — but inappropriate to any other user also engaged in the simulation. This breaks immersion for the other users.
The current research trajectory is to mount a small digital light projector onto a helmet. This helmet is then tracked on the IRStage using the PhaseSpace motion capture system. The view of the virtual human appropriate for the user wearing the helmet is then projected onto a retroreflective screen. Taking advantage of the material property of the screen — it strongly reflects incident light back along nearly the same vector of approach, but only very weakly along any other vector — multiple images from multiple users’ projectors may be projected onto the same screen, yet each user sees only the image projected from his or her own helmet. This presents certain advantages over existing multi-user frameworks, not least of which are minimal hardware requirements and the removal of obstructive mirrors and lenses from the user experience.
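Rendering a location-appropriate view from a tracked helmet is, at its core, an off-axis (asymmetric-frustum) projection problem: the tracked head position and the fixed screen rectangle together determine the viewing frustum for that user's projector image. The sketch below illustrates one standard formulation of this computation, assuming position data (in screen-space metres) is available from the motion capture system; the function name and screen dimensions are illustrative, not taken from the actual system.

```python
import math

# Small vector helpers for 3-tuples.
def _sub(a, b):  return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b):  return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def _unit(a):
    n = math.sqrt(_dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near=0.1):
    """Return glFrustum-style (left, right, bottom, top) extents at the
    near plane for a viewer at `eye` facing a physical screen rectangle
    given by three of its corners (all in world coordinates)."""
    # Orthonormal screen basis: right, up, and the screen normal.
    vr = _unit(_sub(lower_right, lower_left))
    vu = _unit(_sub(upper_left, lower_left))
    vn = _unit(_cross(vr, vu))

    # Vectors from the eye to the screen corners.
    va = _sub(lower_left, eye)
    vb = _sub(lower_right, eye)
    vc = _sub(upper_left, eye)

    d = -_dot(va, vn)   # perpendicular distance from eye to screen plane
    s = near / d        # scale screen extents back to the near plane
    left   = _dot(vr, va) * s
    right  = _dot(vr, vb) * s
    bottom = _dot(vu, va) * s
    top    = _dot(vu, vc) * s
    return left, right, bottom, top

# Hypothetical example: a 2 m x 1.5 m screen centred at the origin in the
# XY plane, with the tracked helmet 2 m in front of it and 0.5 m to the right.
l, r, b, t = off_axis_frustum(eye=(0.5, 0.0, 2.0),
                              lower_left=(-1.0, -0.75, 0.0),
                              lower_right=(1.0, -0.75, 0.0),
                              upper_left=(-1.0, 0.75, 0.0))
```

Re-evaluating this frustum every frame from the latest tracked pose is what keeps the projected virtual human perspective-correct for each individual user.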
Joshua Newth holds Mechanical Engineering degrees from Stanford University, where, alongside his mechanical design coursework, he studied software, analog and digital circuit design, and mechatronics. He is currently a prospective student in the Computer Science Master’s program at USC.