The Effect of Different Viewing Devices on the Sense of Presence and Immersion in Virtual Environments: A Comparison of Stereoprojections Based on Monitors, HMDs, and Screens

 

Joachim Deisinger (a), Carolina Cruz-Neira (b), Oliver Riedel (a), Jürgen Symanzik (c)

 

(a) Department for Virtual Reality, Fraunhofer-Institute for Industrial Engineering, Nobelstrasse 12, D-70569 Stuttgart, Germany, email: joachim.deisinger@iao.fhg.de

(b) Iowa Center for Emerging Manufacturing Technologies, Iowa State University, Black Engineering Building, Ames, Iowa 50011, USA, email: cruz@iastate.edu

(c) Department of Statistics, Iowa State University, Snedecor Hall, Ames, Iowa 50011, USA, email: symanzik@iastate.edu

 

Abstract

Displays for virtual environments (VEs) are the focus of many discussions in ergonomics. Some tests [1] have been performed on the ergonomics of head-mounted displays (HMDs), but there is no comparison of different immersive viewing devices for VEs. This paper gives an overview of two experiments we conducted to compare different viewing devices for VEs, such as HMDs, monitors, and screen-based projections (SBPs), in combination with shutter techniques for time-multiplexed images. Since we included a stereoprojection with large screens, and since such devices are not part of a typical work environment, we performed the experiments under laboratory conditions. We supplemented the experiments with query techniques in the form of questionnaires. We paid attention to the "socially desirable" connotations of the wording in the questionnaires to avoid bias. According to the literature, most usability tests or evaluations of user interfaces use inexperienced participants who perform predefined tasks and fill out questionnaires [2].

In this study, immersive projection technologies (IPTs) are compared in an interactive environment. In such VEs, there are different view projection paradigms: either a fixed eye-point model (camera view) or a moving eye-point model (off-axis projection). Normally, HMDs use the camera view. Monitors and SBPs [3], for example, use single or connected off-axis projections. Both techniques require a device that tracks head position and orientation as precisely as possible. In our setup, an electromagnetic tracker is used, with a filter applied to the sampled data.
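To make the off-axis model concrete, the following minimal Python sketch computes the asymmetric frustum extents at the near plane from a tracked eye position and a rectangular screen; the function name and conventions are our own illustration, not the software used in the experiments.

    import numpy as np

    def off_axis_frustum(eye, ll, lr, ul, near):
        # Frustum extents (left, right, bottom, top) at the near plane
        # for a tracked eye and a screen given by its lower-left (ll),
        # lower-right (lr), and upper-left (ul) corners in world space.
        vr = lr - ll; vr = vr / np.linalg.norm(vr)   # screen right axis
        vu = ul - ll; vu = vu / np.linalg.norm(vu)   # screen up axis
        vn = np.cross(vr, vu)                        # normal, toward the viewer
        va, vb, vc = ll - eye, lr - eye, ul - eye    # eye-to-corner vectors
        d = -np.dot(vn, va)                          # eye-to-screen distance
        s = near / d                                 # project onto near plane
        return (np.dot(vr, va) * s, np.dot(vr, vb) * s,
                np.dot(vu, va) * s, np.dot(vu, vc) * s)

For stereo, one such frustum is computed per eye by offsetting the tracked head position by half the interocular distance; in the fixed eye-point model of an HMD, the eye position relative to the display never changes, so the frustum remains fixed.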

Since there was still much room for improvement, these experiments should be understood as a preliminary investigation only.

 

1. Description of the Experiments

The chosen application for the experiments was a VE in which the user had to find and select a fixed number of boxes in a given order. This task comprises locating the objects (global navigation) and picking them (precise local selection). The time for selecting all boxes was recorded for statistical analysis, as sketched below.
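The following minimal sketch shows how such a response variable can be recorded; select_next_box stands in for the actual VE interaction loop and is a hypothetical callback, not the software used here.

    import time

    def run_trial(boxes, select_next_box):
        # Record the time needed to find and pick each box, in order.
        times = []
        for box in boxes:
            start = time.perf_counter()
            select_next_box(box)   # blocks until the user selects this box
            times.append(time.perf_counter() - start)
        return times               # one observation per box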

For all tests, the same navigation tool, the same tracking device, and the same image generator were used. However, the IPTs themselves are very different, and it is difficult to obtain identical conditions in each setup. The comparison covers the following three devices:

 

A four-wall cubic SBP with an edge size of 144 inches and a per-wall resolution of 1024x768,

A 21" workstation monitor with shutter glasses driven at 1280x1024,

A stereoscopic HMD with LCDs with a per-eye resolution of 240x120.

 

Besides resolution, the displays differ in brightness, contrast, flicker, etc. Due to the different display sizes, the displayed objects had to be rendered at different scales. For the monitor, e.g., the objects were scaled down to 1/7 of their original size; otherwise, they would have covered the whole screen.

Each subject was asked to fill in a pre- and a post-questionnaire. Besides establishing the background of the user, the questionnaires were helpful for obtaining a subject's judgment of the viewing devices and a self-assessment of fatigue due to the use of the IPTs. The investigation was split into two independent tests.

For our first experiment, we assumed that each subject has a particular ability to perform the task. Therefore, each subject had to operate all three devices. On the other hand, it was known from similar experiments [1] that a person might improve when doing the same task again, but eventually might get bored towards the end of the test series. There might thus be an effect of Phases 1, 2, and 3, i.e., of whether the subject operates his or her first, second, or third VR device. Resources permitted only six test series. Thus, each of the six possible permutations in the ordering of Monitor, SBP, and HMD was randomly assigned to one of the six subjects. For example, Participant 1 used the Monitor in Phase 1, the SBP in Phase 2, and the HMD in Phase 3, while Participant 2 used the Monitor in Phase 1, the HMD in Phase 2, and the SBP in Phase 3.
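The following Python sketch reproduces this randomization scheme; it illustrates the design and is not the original assignment procedure.

    import itertools
    import random

    DEVICES = ("Monitor", "SBP", "HMD")

    # All 3! = 6 device orderings; each ordering goes to exactly one subject.
    orderings = list(itertools.permutations(DEVICES))
    random.shuffle(orderings)

    for subject, order in enumerate(orderings, start=1):
        phases = ", ".join(f"Phase {i}: {dev}" for i, dev in enumerate(order, 1))
        print(f"Participant {subject} -> {phases}")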

Each subject had to perform exactly the same task for each device in each phase: to identify 15 boxes in their correct order. Because of this identical setup, the boxes were treated as a blocking factor and not merely as simple replications within this experiment. The response variable measured was the time needed to identify each of the boxes. There should have been 6 participants * 3 devices (3 phases) * 15 boxes = 270 observations.
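The original analysis was performed with S/S-Plus [5], [6], [7]; purely as a present-day illustration, a blocked design of this kind could be fitted in Python with statsmodels (the file and column names below are hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per (subject, device/phase, box),
    # with the selection time in seconds.
    df = pd.read_csv("experiment1.csv")  # columns: subject, device, phase, box, time

    # Additive model: device and phase are the effects of interest;
    # subject and box enter as blocking factors.
    fit = smf.ols("time ~ C(device) + C(phase) + C(subject) + C(box)", data=df).fit()
    print(fit.summary())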

Testing took from 3 up to 20 minutes, depending on the device. To make a meaningful comparison, a number of factors had to be controlled [4]. Individual differences were controlled by using the same subject on all devices. Learning was controlled by having each subject practice to "asymptote" before the data for comparison were collected. Differences due to the movement direction were equalized by randomizing the target direction of the next object to select. The general motivation of the individuals was not a problem because VEs are still very attractive.

 

Experiment 2 was similar to Experiment 1; however, no phase effect had to be considered. There were 12 participants this time, four of them randomly assigned to each of the three VR devices Monitor, SBP, and HMD. In this experiment, the task was to identify 25 boxes in their correct order. Again, the response variable measured was the time needed to identify each of the boxes; there are 12 participants (assigned to 3 devices) * 25 boxes = 300 observations.
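A sketch of the corresponding between-subjects randomization (again illustrative only):

    import random

    DEVICES = ("Monitor", "SBP", "HMD")

    participants = list(range(1, 13))   # 12 participants
    random.shuffle(participants)

    # Four participants per device: a balanced between-subjects design.
    assignment = {dev: sorted(participants[4 * i : 4 * (i + 1)])
                  for i, dev in enumerate(DEVICES)}
    print(assignment)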

Each of the groups tested just one IPT, and the participants did not practice to "asymptote" before the data were collected. Individual differences were controlled by randomly assigning people to groups. Both experiments together give an opportunity to observe people who have never experienced VEs interacting with different viewing devices. In addition to the judgments of the different viewing devices, the tests offer a good opportunity to record a learning curve for different IPTs.

The total subject population was 18 individuals (14 male, 4 female, with an average age of 29 years). Half of the subjects wore visual aids.

 

2. Qualitative and Quantitative Results

A statistical analysis was conducted and statistical graphics were produced, based on methods discussed in [5], [6], [7]. The exact measurements from the two experiments are included in a technical report that can be downloaded from:

http://vr.iao.fhg.de/ccvr/publications/publications/Hci/hcifull.htm

 

3. Discussion

3.1 Questionnaires

We recognized that some of the people who obviously had problems navigating did not mention it in the questionnaires. This is probably due to the fact that it is not "socially desirable" to mention that one had problems or was frustrated when doing the test.

A general opinion of the participants was that the SBP gave them the best feeling of immersion of all three displays. The monitor was the least immersive for all participants. The HMD was criticized for its low resolution, the blurred image (caused by the lenses), and the cables.

 

3.2 Advice for Future Experiments

Based on the two experiments conducted, the following conclusions can be drawn and the following advice for the design of future experiments can be given: First, it is important to permanently monitor the recording of the data. If some data that is critical for a particular statistical design of experiment is missing for one individual, one should redo the entire test series with another individual, replacing all observations of the first individual (if possible at all).

One should try to design experiments where subjects are unlikely to produce outlying values such as those in the given experiments. If one individual seems to produce many unusual values, one should redo the entire test series with another individual, replacing all observations of the first individual. Of course, this requires monitoring of the experiment. In particular, there should be some prior knowledge about the expected outcome of the experiment. Finally, one should try to find reasons why this individual produced that many unusual values. These reasons might result in additional explanatory variables for future experiments. If there is a large number of unusual values, one might even want to consider the data as the outcome of a binary experiment (with possible values "not unusual" and "unusual") and formally establish whether one device produces more "unusual" values than the other devices.
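Such a binary comparison could be carried out, e.g., with a chi-square test of homogeneity; the counts below are invented purely for illustration:

    from scipy.stats import chi2_contingency

    # Hypothetical counts per device (rows: Monitor, SBP, HMD;
    # columns: "unusual", "not unusual" observations).
    observed = [[ 4, 86],
                [ 7, 83],
                [15, 75]]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
    # A small p-value suggests the devices differ in their rate of
    # unusual values.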

The question of whether one individual should participate in a test series that involves a single device or multiple devices cannot be answered in general. A priori, we have to assume that there exists some learning effect. Also, persons might remember how they solved exactly the same problem with a different VR device. Furthermore, what does the questionnaire reveal about the people who are expected to take the tests? Questions such as "experience with VR", "overall attitude towards computers", "education", etc. might be important. If the population is quite homogeneous (and large enough), one might randomly assign one person to one VR device. Otherwise, if the population is heterogeneous (or small), it is more reasonable to assign one person to all devices and handle the learning effect in the analysis. Alternatively, if the population is heterogeneous but large, we can use those factors as additional explanatory variables in a controlled manner, e.g., for each person with "experience with VR", there is also one person without "experience with VR". Factorial designs might be a solution for incorporating a larger number of these factors, as sketched below.
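The following sketch illustrates such a controlled assignment, balancing "experience with VR" across devices; the grouping and subject names are hypothetical.

    import random

    DEVICES = ("Monitor", "SBP", "HMD")

    def balanced_assignment(experienced, novices):
        # Assign one experienced and one inexperienced subject to each
        # device, so that VR experience is balanced across devices.
        experienced, novices = list(experienced), list(novices)
        random.shuffle(experienced)
        random.shuffle(novices)
        return {dev: (experienced[i], novices[i])
                for i, dev in enumerate(DEVICES)}

    print(balanced_assignment(["E1", "E2", "E3"], ["N1", "N2", "N3"]))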

 

3.3 Conclusion

The tests show that the SBP gives inexperienced users the best feeling of immersion. This environment was also liked most by all subjects because it is the most natural virtual environment. The monitor enjoys greater acceptance among users because they are used to working with monitors in normal working environments. From the measured data, one could conclude that the monitor was the easiest to use, but with all the disadvantages of a fishtank-like VE.

The HMD used was not an up-to-date model. Current HMDs with much better resolution are available on the market, albeit as high-priced devices. The resolution of the LCDs used is a very important factor: the test showed that the participants were not able to read the numbers on the objects from far away. This disadvantage led to more time being spent on navigation than is normally needed with the monitor or the SBP.

 

3.4 Future Work

Based on these preliminary results, we plan to conduct further experiments comparing human performance with different types of VR devices. We are aware of issues that make it difficult to obtain significant statistical results, but we hope to have learned through these experiments how to reduce some of these problems.

 

References:

[1] Deisinger J., Riedel O.: Ergonomic Issues of Virtual Reality Systems: Head Mounted Displays. Virtual Reality World 1996. Conference documentation. Hudak Druck München, 1996.
[2] Dix A., Finlay J.: Human-Computer Interaction. Prentice Hall Int. Ltd., Cambridge, 1993.
[3] Cruz-Neira C., Sandin D.J., DeFanti T.A.: Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE. Proceedings of SIGGRAPH 93, Anaheim. In: Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, 1993, pp. 135-142.
[4] Card S.K., Moran T.P., Newell A.: The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates Pub., New Jersey USA, 1983.
[5] Becker R. A., Chambers J. M., Wilks A. R.: The New S Language, Wadsworth & Brooks/Cole, Pacific Grove, California USA, 1988.
[6] Chambers, J. M. and Hastie, T. J.: Statistical Models in S, Chapman & Hall, New York, London, 1993.
[7] Venables W. N., Ripley B. D.: Modern Applied Statistics with S-Plus, Springer, New York, Berlin, Heidelberg, 1994.