Beyond the Bezel: Utilizing Multiple Monitor High-Resolution Displays for Viewing Geospatial Data

CANDICE RAE LUEBBERING

Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of

Master of Science
In Geography

Laurence W. Carstensen, Jr., Ph.D., Committee Chair
James B. Campbell, Ph.D.
Lawrence S. Grossman, Ph.D.

April 13, 2007
Blacksburg, Virginia

Keywords: visualization, resolution, cartography, map reading, map size, human computer interaction
Abstract

Computers have vastly expanded capabilities for storing, creating, and manipulating spatial data, yet viewing area is still generally constrained to a single monitor. With this viewing window limitation, panning and zooming are required to see the full details of a map or image and, because of the large sizes of typical databases, usually only small portions can be viewed at once. Multiple monitor configurations provide an attainable, low-cost way for individuals to create large, high-resolution desktop displays. This increased screen real estate is particularly useful for viewing and interpreting rich and complex geospatial datasets because both context and amount of detail can be simultaneously increased, reducing reliance on virtual navigation to obtain the desired balance between context and scale. To evaluate the utility of multiple monitor displays for geospatial data, this experiment involved a variety of map and image reading tasks using both raster and vector data under three different monitor conditions: one monitor (1280 x 1024 pixels), four monitors (2560 x 2048 pixels), and nine monitors (3840 x 3072 pixels). Fifty-seven subjects took the test on one of the three display configurations. A computer program captured each subject's performance by recording answers, mouse click locations, viewing areas, tool usage, and elapsed time. A post-experiment questionnaire obtained additional qualitative feedback about subjects' experience with the tasks and display configuration. Overall, subjects performed more efficiently on the larger display configurations, as evidenced by reductions in test completion time and in the amount of virtual navigation (mouse clicks) used to finish the test.
Tool usage also differed among monitor conditions, with navigation tools (zooming and panning) dominating on the single monitor while selecting tools (tools used to provide answers) predominated on the nine monitor display. While overall test results indicated the effectiveness of the larger displays, task-level analyses showed that performance varied considerably from task to task. The larger displays were the most efficient on some tasks, while other tasks showed similar results among all displays or even the single monitor as the most efficient. The greatest performance improvements occurred between the one and four monitor conditions, with the
nine monitor condition mostly providing only modest additional improvement. Subjects rated the four monitor display as the ideal size.
Attribution

Dr. Carstensen is my primary advisor for this research study and is Co-PI of the grant that funded this work. He helped design and implement the testing program, supervised experiment development, and oversaw data analysis. Dr. Campbell is a member of my thesis committee. With his experience with geospatial data, he helped to design the data visualizations and tasks used in the experiment and to create effective figures. Dr. Grossman is a member of my thesis committee. He helped to develop the post-experiment questionnaire and provided expertise to capture the pertinent subject demographics and experience needed to achieve the research objective.
Acknowledgements

First, I would like to thank my committee chair and advisor, Dr. Bill Carstensen, for his guidance, support, and encouragement throughout the process of this research. Dr. Jim Campbell and Dr. Larry Grossman, my committee members, also provided helpful insights and a diversity of perspectives to keep my work in check. Technical assistance and research suggestions were kindly offered by research associates in the computer science department, Dr. Chris North, Dr. Robert Ball, and Beth Yost. Special thanks to the Center for Geospatial Information Technology at Virginia Tech for housing the subject testing site, and especially to the reliable James Dunson for his technical knowledge and assistance. I must also thank all of my subject volunteers for their participation that made this research possible. Finally, I want to acknowledge the continuing love and support of my family (Mom, Dad, Nikki, and Justin) and the encouragement of fellow graduate students (Amos, Arvind, Becky, and Ben) that truly helped me see this work to its completion.
Table of Contents

                                                                           Page
Abstract ... ii
Attribution ... iv
Acknowledgements ... v
Table of Contents ... vi
List of Figures ... viii
List of Tables ... ix
Chapter 1: Introduction and Statement of Purpose ... 1
  1.1 Introduction ... 1
  1.2 Statement of Purpose ... 2
  References ... 4
Chapter 2: Literature Review ... 8
  2.1 Introduction ... 8
  2.2 Limitations of Single Monitor Displays for Maps and Imagery ... 8
  2.3 Visual Fields and Implications for Large Displays ... 9
  2.4 Cartographic Research on Visual Search and Visual Field Size ... 10
  2.5 Computer Science Research with Large Displays ... 12
    2.5.1 Introduction ... 12
    2.5.2 Larger Monitors and Projectors ... 12
    2.5.3 Progression to Multiple Monitor Displays ... 12
    2.5.4 Difference from Projectors ... 13
    2.5.5 Multiple Monitor Research with Non-Geospatial Tasks ... 13
    2.5.6 Multiple Monitors and Map Reading ... 15
    2.5.7 Potential Problems with Using Multiple Monitors ... 15
  2.6 Summary ... 16
  References ... 17
Chapter 3: Beyond the Bezel: Utilizing Multiple Monitor High-Resolution Displays for Viewing Geospatial Data ... 20
  Abstract ... 20
  3.1 Introduction ... 21
  3.2 Related Work ... 23
  3.3 Methods ... 26
    3.3.1 Display ... 26
    3.3.2 Testing Software ... 27
    3.3.3 Test Content ... 27
    3.3.4 Subjects ... 27
    3.3.5 Experiment Format ... 28
  3.4 Results ... 28
    3.4.1 Accuracy ... 28
    3.4.2 Elapsed Time ... 29
    3.4.3 Virtual Navigation ... 29
    3.4.4 Tool Usage ... 30
    3.4.5 Subject Responses about Displays ... 31
  3.5 Discussion ... 32
    3.5.1 Subject Pool ... 32
    3.5.2 Elapsed Time ... 32
    3.5.3 Virtual Navigation ... 33
    3.5.4 Tool Usage ... 34
    3.5.5 Subject Responses ... 35
    3.5.6 Four Monitor Versus Nine Monitor Performance ... 35
    3.5.7 Effect of Bezels ... 36
    3.5.8 Configuration Issues ... 36
    3.5.9 Further Research ... 37
  3.6 Conclusion ... 37
  3.7 Acknowledgements ... 38
  References ... 39
Appendix 1: Subject handout explaining testing program response formats ... 55
Appendix 2: Subject handout describing testing program toolbar features ... 56
Appendix 3: IRB Approval Letter ... 57
Vita ... 58
List of Figures

                                                                           Page
Figure 1.1. A) Tradeoff relationships in map design. B) Tradeoff relationships for map perception (modeled after Carstensen 2005) ... 6
Figure 1.2. Multiple monitor display constructed of nine 17-inch flat screen LCDs (3840 x 3072 pixels) ... 7
Figure 3.1. A) Tradeoff relationships in map design. B) Tradeoff relationships for map perception (modeled after Carstensen 2005) ... 41
Figure 3.2. Multiple monitor display constructed of nine 17-inch flat screen LCDs (3840 x 3072 pixels) ... 42
Figure 3.3. Three display size conditions used in the experiment ... 43
Figure 3.4. Centroids of viewing areas for all subject mouse clicks on a task comparing literacy rates of CUB (Cuba) and MDV (Maldives) ... 44
Figure 3.5. Usage distribution of zoom/pan tools versus select/digitize tools for the entire test by monitor condition ... 45
Figure 3.6. A) Zoom/pan versus select/digitize tool usage among monitor conditions on an aerial photograph digitizing task. B) Zoom/pan versus select/digitize tool usage among monitor conditions on a size comparison task that crosses bezels on the larger monitor conditions ... 46
Figure 3.7. Example of zooming tactics used to avoid bezels ... 47
List of Tables

                                                                           Page
Table 3.1. Test map and image characteristics and associated questions and response types ... 48
Table 3.2. Elapsed time by monitor condition in seconds for entire test session ... 49
Table 3.3. Question level elapsed time by monitor condition in seconds ... 50
Table 3.4. Virtual navigation by monitor condition in mouse clicks for entire test session ... 51
Table 3.5. Question level virtual navigation by monitor condition in mouse clicks ... 52
Table 3.6. Percentage of zoom/pan tool use out of total tool usage for the entire test by monitor condition ... 53
Table 3.7. Percentage distribution of display ratings by monitor condition ... 54