Dual view tandem simulator prototype

ABSTRACT
Many real-world vehicles and situations require simultaneous interaction from two people, such as a pilot and navigator, or a driver and gunner working in collaboration. Sometimes the pair sit physically side by side, as in many commercial aircraft, and sometimes fore and aft in a tandem configuration, as in a T-50 jet or Apache helicopter. For side-by-side operators it is common practice in simulation to locate the computed design eye point midway between the operators, so that neither operator sees precisely the correct viewpoint (Quinn, Ed., IMAGE 2012). On direct-view displays this can lead to heading errors, so collimated cross-cockpit displays are commonly used to reduce them. For tandem configurations it is challenging to provide accurate visual data to each operator without large geometric distortions; sometimes completely separate displays are necessary.

It is advantageous for crews to train and communicate together in the same fashion as they will in the actual vehicle. Sometimes they may be focusing on entirely separate tasks and even facing different directions, such as a jeep driver navigating difficult terrain while a rear gunner fires at attackers. Each must focus primarily on their own task and controls, but they must remain within close speaking proximity to operate effectively as a team. Our goal was to provide an effective solution to such problems, at lower cost and with less distortion than collimated cross-cockpit displays, using a technique that would also work for tandem seating arrangements.

Our approach to the problem was to compute and display completely separate images for each operator, each perfectly corrected for that operator's unique Design Eye Point (DEP) rather than a compromise average. This means each view must simultaneously be warped and blended differently for each eyepoint, and hence requires doubling IG resources. We present the two views interleaved in time at 120 Hz, so that pilot and copilot each see only their own view at a 60 Hz frame rate. Each operator wears specialized active shutter glasses that synchronize with the display, passing only the view correct for their DEP and blocking the other, incorrect view. We developed and debugged such a system using 120 Hz-capable LED DLP projectors and nine (9) PCs with gen-locked video cards.
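The time-interleaving scheme can be sketched as follows. This is a minimal illustration only: the even/odd frame assignment and the function names are our own conventions, not details of the fielded system.

```python
# Sketch of the 120 Hz time-interleaved dual-view scheme: even display
# frames carry the pilot's view, odd frames the copilot's, so each
# operator's synchronized shutter glasses pass a 60 Hz stream.
# (Illustrative only: the even/odd assignment is an assumption.)

DISPLAY_HZ = 120                     # projector refresh rate
OPERATORS = ("pilot", "copilot")     # one corrected view per operator

def operator_for_frame(frame_index: int) -> str:
    """Which operator's view is projected on a given display frame."""
    return OPERATORS[frame_index % len(OPERATORS)]

def per_operator_rate() -> float:
    """Each operator sees every second frame of the 120 Hz stream."""
    return DISPLAY_HZ / len(OPERATORS)

# Frames alternate pilot, copilot, pilot, ... and each operator sees 60 Hz:
sequence = [operator_for_frame(i) for i in range(4)]
# sequence == ["pilot", "copilot", "pilot", "copilot"]
# per_operator_rate() == 60.0
```

The shutter glasses implement the complementary half of this schedule: each pair opens only on its wearer's frames, which is why doubling the display rate preserves a full 60 Hz update for both operators.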

Several different electronic approaches were attempted to achieve this result. We used a four-projector system on an ellipsoidal screen to produce a wide field of view with good peripheral vision for the forward-sitting pilot, and a smaller FOV for a mission specialist/copilot, in an underwater submersible scenario. Ellipsoidal screens have several advantages over spherical dome screens in contrast and immersiveness (Harris, G., IMAGE 2012). The geometry was analysed and optimized to minimize channel errors and maximize visual performance using an autocalibration warp/blend system based on machine vision algorithms. Separate control systems for the submersible pilot and the robot-arm-operating copilot were devised and tested.
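Between adjacent projector channels, a warp/blend system must apply smooth intensity ramps across the overlap regions so the seams are invisible. A minimal sketch of one common ramp shape follows; the raised-cosine profile is a standard choice for illustration, not necessarily the profile the autocalibration system computes.

```python
import math

def blend_weight(t: float) -> float:
    """Raised-cosine intensity ramp across a channel overlap.
    t = 0 at the channel's own side of the overlap (full intensity),
    t = 1 at the far side (fully faded out)."""
    return 0.5 * (1.0 + math.cos(math.pi * t))

# The neighbouring channel applies the mirror-image ramp, so the two
# contributions sum to unity at every point in the overlap region:
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(blend_weight(t) + blend_weight(1.0 - t) - 1.0) < 1e-12
```

The unity-sum property is what keeps perceived brightness constant across the blend zone; in practice the ramp is further shaped by the projector's gamma response, which the machine-vision calibration measures per channel.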

To properly test our DualView concept we wanted low-cost but high-fidelity imagery developed on a short time frame. Serious-game software using CryENGINE3™ technology and Agile software development was used by partner RealTime Immersive Inc. to develop and refine an undersea environment scenario in which a high-speed submersible crew finds and seals an oil leak. Displaying the dual views simultaneously allowed the displays to be not only geometrically corrected for each operator, but also to show additional HUD data pertinent to each operator's task.

The video game industry is outpacing the simulation industry with regard to high-fidelity, cost-effective, and rapid development tools. Higher-fidelity visuals are critical for simulations such as medical/surgery, IED detection, and environmental/geo-specific location recognition. Studies have shown that engaging environments provide better long-term knowledge retention than traditional training methods. CryENGINE3™'s rapid-development game technology allowed us to continually tweak the demo based on feedback and immediately test the changes. Leveraging game technology was a key factor in developing the prototype on a short schedule.

The opto-mechanical design also had its challenges. Having two users means two completely different sets of IG channel extents must be produced from a single projector layout, so it is crucial to design the system efficiently to make the best use of available pixels. We decided that the pilot should have the largest HFOV, near 140°, for the situational awareness needed to steer the submersible and avoid obstacles. The copilot needed an HFOV of around 63°, just large enough to operate the sub's robotic arm efficiently.
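A back-of-envelope comparison of the angular resolution each operator receives helps frame the pixel budget. Note that everything here except the two HFOVs is an illustrative assumption: the 1920-pixel channel width, the three/one channel split between pilot and copilot, and the ~10 % loss to blend overlap are not figures from the actual system.

```python
# Back-of-envelope angular resolution per operator. All numbers except
# the HFOVs (140 deg pilot, 63 deg copilot) are assumptions: 1920 px
# channels, a 3/1 channel split, ~10% of pixels lost to blend overlap.

PX_PER_CHANNEL = 1920   # assumed horizontal resolution per channel
OVERLAP_LOSS = 0.10     # assumed fraction consumed by edge blending

def pixels_per_degree(hfov_deg: float, channels: int) -> float:
    """Approximate horizontal pixels per degree after blend losses."""
    usable_px = channels * PX_PER_CHANNEL * (1.0 - OVERLAP_LOSS)
    return usable_px / hfov_deg

pilot_ppd = pixels_per_degree(140.0, channels=3)    # ~37 px/degree
copilot_ppd = pixels_per_degree(63.0, channels=1)   # ~27 px/degree
```

Under these assumptions the two views land at broadly similar angular resolutions, which is the kind of balance the channel-extent optimization aims for.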

The two main constraints on the projectors were overall image size and shadowing. Shadowing is more of a concern with multiple users than in single-user setups. Depth of field proved to be a problem with our original three-channel concept, so we moved to four channels to increase brightness, resolution, and focus range. To achieve adequate FOV we needed a moving seat to slide the pilot under the screen. The pilot seat also incorporated built-in controllers and binaural audio speakers for immersion. We placed the manipulator arm control on the seat back for copilot operation.

VITA
Mr. Gord Harris is R&D Program Manager for simulation and visualization displays at Christie. Since 2004 he has developed stereoscopic, blending, mirror, and screen technologies for visualization, NVG/IR, motion-base, and multi-channel simulators. Previous work included freelance R&D consulting and 25 years at IMAX in large-format film and digital imaging. Recent work includes the concept and execution of the EGG and Collimated 3D displays at I/ITSEC 2011 with the Christie Visual Environments team.