Direct-touch vs. mouse input for navigation modes of the web map
Abstract
Nowadays the web map (e-map) has become a widely used wayfinding tool. However, its performance is affected by the input device with which it is operated. To investigate its functional performance under various navigation modes, two input devices were employed: the mouse and the touch screen. Map websites on the Internet were surveyed and examined, and three dominant navigation modes in current use were identified: (1) continuous control and continuous display (CCCD), (2) discrete control and continuous display (DCCD), and (3) discrete control and discrete display (DCDD). Experimental interfaces were then designed, and simulated tests were conducted separately with the mouse and the touch screen to evaluate performance. Thirty-six volunteers participated in the experiment; their task completion times and user interface actions (the total number of clicks on arrow keys) were analyzed through a two-way analysis of variance (ANOVA) to compare operational performance across the six conditions. It was found that, in all of the navigation modes, the mouse performed markedly better than the touch screen in terms of task completion time (F(1,35) = 50.02, p < .001). Moreover, the participants did much better in the CCCD mode than in the other modes, whether they used the mouse or the touch screen. Our research team will use these findings as a stepping stone toward the development of a navigation mode compatible with both the mouse and the touch screen; they will also serve as a reference for further study and practical design of web maps.

Keywords: Web map; Input device; Navigation mode

1 Introduction

With the new operating systems Windows 7 and iPhone OS introduced to the market, it is expected that user interfaces which previously relied on the mouse as the input device will gradually shift toward the touch screen.
Some studies confirm that the touch screen is characterized by intuitive input, which makes it much easier for the user to learn and operate the device [1]. To date, the touch screen has been widely used in kiosks, ticket machines, automatic teller machines (ATMs), and so forth. Thanks to these features, kiosks are used by the public more and more frequently [2]. Kiosks mainly provide convenient, instant services, such as web maps, cash withdrawals, museum sitemaps, and self-service gas stations [3,4]. In a new environment, people will often turn to a kiosk, accessing the web map to get familiar with the vicinity. The usability and functionality of the web map have therefore become an important issue. As a web map is browsed, its navigation, or how it is presented, is a key factor influencing the user's viewing and operation [5]. A well-designed navigation technique can successfully lead the user through the information space of a webpage; furthermore, the user can explore its content by activating various functions [6]. Some studies indicate that users unfamiliar with the conceptual model are inclined to commit operational errors; as a result, they easily become frustrated and lose interest in the web map [7]. In view of the above, a designer of web maps is confronted with a momentous task: whether the navigation will effectively provide the user with correct cognitive guidance and feedback. For most people, the mouse is the most common input device. Consequently, nearly all user interfaces, including web maps, base their navigation on the mouse and are designed and operated correspondingly. If the mouse is replaced by another input device, such as the touch screen, the user may have difficulty operating the interface and suffer lower efficiency.
Previous studies have compared the functionality of different input devices [8–13]. Nevertheless, few studies applied those input devices to in-field, simulated tests on the navigation modes of web maps. In the first phase of this research, the tasks were intended to establish which navigation modes of web maps were in concurrent use. For that purpose, map websites were searched, the available web maps were operated and examined, and, based on previous studies, the navigation modes adopted by most web maps were identified. The second phase was then launched: the collected navigation modes were analyzed, experimental interfaces were designed, simulated tests were conducted with two input devices (the mouse and the touch screen), and the operational performances were compared. Operational performance consisted of task completion times and user interface actions recorded while testing the web maps. This research was aimed at the two targets explained below. (1) In each navigation mode, the mouse and the touch screen were used separately as the input device, and their performance results were compared. (2) With each input device, the navigation modes were tested and the performance results of the different navigation modes were compared. In the near future, the touch screen is likely to become the mainstream input device. This research therefore offers insight into how input devices and navigation modes influence the operational performance of the web map, as discussed in the following sections.

1.1 Input devices

Input devices, including touch screens, mice, styli, touchpads, pointing sticks and joysticks, function as the communication media between users and machines. At present, the mouse remains the most common among them.
Many earlier studies compared and analyzed different input devices. One study compared the text-input performance of four devices (the mouse, joystick, step keys and text keys); the findings indicated that the mouse performed better than the other three in terms of positioning time, error rate, and movement speed [8]. To enhance the usability of the input device, some researchers proposed the newly designed Fluid DTMouse, which improves switching between fixed modes, stabilizes the cursor, and enables accurate input [9]. The touch screen has become the dominant trend among input devices, offering the following advantages. (1) Because its control interface overlays the monitor, no extra device such as the mouse is needed, which requires a space-occupying carrier and operating environment. (2) Compared with other mobile input devices, the touch screen is much more robust and durable [2]. Despite these advantages, the touch screen is not completely superior to the mouse in terms of operational performance. One study compared the mouse with the touch screen in the single-touch mode, with targets of 1, 4, 16, and 32 pixels per side. When the target was larger than 4 pixels per side, the selection time needed by the mouse was the same as that needed by the touch screen; when the target was smaller than 4 pixels per side, the mouse needed a shorter selection time than the touch screen [10]. While the touch screen did worse than the mouse in the single-touch mode, it did better in the double-touch or multi-touch mode [11,12].
To exploit the advantages of the touch screen, a new user interface is being developed that operates in the multi-touch mode and enables the user to select smaller targets easily through a menu [13].

1.2 Navigation

Navigation can be described as the task of determining one's position within the information space and finding a course to the envisaged information and other related information. Navigation is made up of two elements: wayfinding, a cognitive decision-making process, and travel, moving from one place to another. While navigating in the real or virtual world, people constantly collect information, make plans and move from place to place; in the process of navigation, therefore, wayfinding and travel are inseparable [14]. Wayfinding means that in a large-scale virtual environment, the user can move from the present position to another with the aid of familiar landmarks [15]. Generally speaking, people's spatial knowledge is founded on their daily living environment, including cities and buildings. Such knowledge provides both wayfinding guidance and directional guidance so that suitable spatial behaviors may be performed [16]. Wayfinding is therefore regarded by some researchers as intelligent navigation; in this perspective, wayfinding is the cognitive element of navigation, while the strategic and tactical elements are responsible for the navigational behavior. Regardless of the differing definitions, most researchers hold that wayfinding is problematic for users of large-scale virtual environments [15,17–19]. Under such circumstances, navigation is intended to help the user explore information spaces that are too large to be conveniently displayed in a single window [6]. With the aid of the user interface, the information space can be browsed by the user.
The user interface (UI) provides the user with various functions, such as moving the visible range over the information space to view a selected part. The UI components of the web map include such interactive elements as icons, buttons, and menus [20]. For the spatial navigation of a two-dimensional (2D) map, the operation functions mainly comprise panning, zooming, scrolling, and moving [6,20]. Analyses of overview-and-detail presentation modes show that panning, zooming, and scrolling not only enable the user to view the overview and details of the information space but also offer interface operations on different levels [6,21]. Depending on the control function, panning and zooming fall into either continuous or discrete control. Continuous control means that the user must pass through every step of translocation before reaching the destination; discrete control means that the user can jump immediately to a newly-emerging zoom level or position within the information space [6,21,22]. Generally speaking, if the user is familiar with the structure and content of the particular information space, discrete control works faster than continuous control; if the user is unfamiliar with the information space, continuous control does better [22]. As a rule, when the user turns to the information space for navigational assistance, the destination is not known in advance. Consequently, he/she has to start from the starting point, pass all the transitional points, and reach the destination to complete the search task. As for the navigational control functions, most previous studies centered on the operations themselves, i.e., zooming, panning and moving; the effects of continuous versus discrete control on zooming, panning and moving were not explored [14,15,20,21].
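The distinction between continuous and discrete control described above can be sketched in a minimal, hypothetical one-dimensional form. The function names and step logic below are ours, not taken from any cited system; they only illustrate the contrast in what the user sees:

```python
def continuous_pan(start: float, target: float, step: float) -> list[float]:
    """Continuous control: the viewport passes through every intermediate
    position on its way to the target, so the user sees the whole traversal."""
    positions = []
    pos = start
    direction = 1 if target >= start else -1
    while abs(target - pos) > step:
        pos += direction * step
        positions.append(pos)
    positions.append(target)  # final settling step onto the target
    return positions

def discrete_pan(start: float, target: float, step: float) -> list[float]:
    """Discrete control: the viewport jumps straight to the target position,
    with no intermediate frames (start and step kept for signature symmetry)."""
    return [target]
```

For a pan from 0 to 10 with a step of 3, continuous control renders the frames [3, 6, 9, 10], while discrete control renders only [10]; the intermediate frames are precisely the orientation cues that make continuous control faster in an unfamiliar information space.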
In view of this, the main concern of this research is the navigational control functions; its purpose is to determine the effect of different control functions on navigational performance. Meanwhile, different input devices may affect the performance of the navigational control functions, so the interaction between the input device and the navigational control function is also studied.

2 Investigation and analysis of map websites

2.1 Investigation of map websites

In July 2008, the authors used the major search engines Google and Yahoo with the keyword "web map" to search both Chinese and English map websites. Based on the search results, the websites ranked in the top 80 were arranged in descending order of relevance. Web maps that adopted either continuous or discrete control were then singled out. Websites that were out of service or had an unstable connection speed were rejected, and only one website was chosen from among those providing similar services. Moreover, to facilitate comparisons, the domestic maps were confined to those of Taiwan while the foreign maps were limited to those of the United States proper. In the end, eight web maps were selected for this research, as shown in Table 1.

2.2 The result of investigation and analysis

After the eight experimental samples were selected, their functions were tested, especially zooming and panning. Searching for targets was also practiced, and the navigation modes of current web maps were identified. It was discovered that the navigation mode of a web map includes two main functions: control and display. The control function refers to the functionality of the press button, which is classified into continuous control (Fig. 1) and discrete control (Fig. 2). The display function refers to the presentation mode while an image action is being executed.
Similarly, there are two display modes: continuous display (Fig. 3) and discrete display (Fig. 4). From the samples studied, it was concluded that the control and display functions combine into at most three navigation modes: (1) continuous control and continuous display (CCCD), (2) discrete control and continuous display (DCCD), and (3) discrete control and discrete display (DCDD), as shown in Table 2. Because continuous control and discrete display cannot coexist, this unfeasible combination is not included in the scope of this research. The map websites investigated and analyzed by the researchers are shown in Table 3.

3 Methodology

This research was aimed at evaluating the operational performances of the three navigation modes combined with two input devices, i.e., the mouse and the touch screen. Regarding the strategies and tasks of wayfinding, previous studies discovered that wayfinding performance varies with task difficulty and wayfinding strategy, and decreases as an environment's complexity increases [23,24]. The type of technology used to navigate [25], the goal associated with a wayfinding task [26,27], spatial knowledge [16], visuospatial working memory in map learning [28,29] and the travel techniques used by a traveler [14,15,20,21] also affect wayfinding performance. In the process of navigation, the confounds of memorized spatial knowledge and individual differences should be ruled out [15]. Because of its simplicity [23,24] and unfamiliarity [15], a virtual map environment was drawn and adopted as the experiment tool; in this way, the effects on navigational performance of individual differences and of visuospatial working memory in map learning were minimized. An artificial campus map with a limited area was adopted for the experimental task.
In the experiment, none of the participants had spatial knowledge of the campus. Each participant was required to manipulate the navigation technique and locate the designated red points on the map; for that reason, neither routes nor landmarks had to be memorized in the process of wayfinding. Combined panning arrow keys and hierarchical zooming were employed as the navigation technique for the experiment.

3.1 Participants

Volunteers were recruited from the students of National Cheng Kung University (NCKU). There were 36 participants in total, males and females in equal numbers, aged 18 to 27 (mean age = 23, SD = 3.3). The participants were neither color-blind nor affected by other eye diseases, and their natural or corrected eyesight was above 0.8. All had considerable computer experience and generally used the computer very often (0 rare, 4 often, 32 very often). Their frequency of using the web map ranged from rare to very often (19 rare, 12 often, 5 very often). Most of them frequently used the mouse (1 rare, 10 often, 25 very often). They rarely used large-sized touch screens (34 rare, 2 often, 0 very often) but occasionally used small-sized touch screens (28 rare, 6 often, 2 very often), such as those of cell phones or PDAs. Although current touch screens can be operated in the multi-touch mode with two hands, the web map interface used in the simulated experiment had to be operated in the single-touch mode: the touch screen was to be compared with the mouse, which is operated with a single hand, and the panning and moving of the web map likewise had to be operated in the single-touch mode.
Thus, throughout the experiment, the participants were required to operate the mouse or the touch screen with only their habitually-used hand, in whatever way they felt comfortable.

3.2 Materials and stimuli

The experimental equipment was divided into the two categories set forth below.

3.2.1 Hardware

A desktop computer was accompanied by a touch screen (3M M170). The specifications of the touch screen were as follows: viewable size (H × V) 337.9 × 270.3 mm (17 in. diagonal), maximum resolution 1280 × 1024 pixels, frame rate 70 Hz, contrast ratio 450:1, brightness 260 cd/m², and response time 16 ms. When the touch screen was used as the input device, it was placed on the central line 10 cm from the front edge of the desk and slanted at an angle of 40° so that the user's muscles would not tire easily [30,31]. The desk was 1100 mm long, 700 mm wide and 720 mm high, and the chair 420 mm high; the height of the desk surface agreed with that recommended for a workstation [32]. Alternatively, the input device could be replaced by a wireless mouse (Logitech LX8).

3.2.2 Software

The sample web map was simulated with Macromedia Flash and played with Flash Player. The experimental data were organized in Microsoft Excel and then analyzed with statistical software (SPSS for Windows).

3.3 Design of the experiment

During the experiment, the simulated web maps were manipulated within a small range with the mouse or touch screen. For zooming, the ratio between two consecutive zoom levels was set at 1.4:1 (about √2:1) and the simulated operation was conducted accordingly. For panning, each press of an arrow key moved the view by one-third of the image width. The experimental interface was composed of multi-level zooming and a combined set of arrow keys, as shown in Fig. 5 [20].
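The zoom and pan parameters above can be expressed as a small sketch. The constant and function names are ours; only the values (a 1.4:1 zoom ratio and a one-third-width pan step) come from the text:

```python
ZOOM_RATIO = 1.4      # scale factor between two consecutive zoom levels (about sqrt(2))
PAN_FRACTION = 1 / 3  # fraction of the image width moved per arrow-key press

def zoom_scale(level: int) -> float:
    """Magnification relative to the base level after `level` zoom-in steps."""
    return ZOOM_RATIO ** level

def pan_step_px(image_width_px: int) -> float:
    """Pixels the view shifts for one arrow-key press at a given image width."""
    return image_width_px * PAN_FRACTION
```

Two zoom-in steps thus magnify the view by 1.4² ≈ 1.96, close to the exact doubling that two √2 steps would give.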
After the control and display functions were combined, three navigation modes were available: (1) continuous control and continuous display (CCCD), (2) discrete control and continuous display (DCCD), and (3) discrete control and discrete display (DCDD). Each participant had to employ the mouse and the touch screen separately, undergoing all three navigation modes; in other words, six experimental results were ultimately obtained. The image area of the map was 923 × 987 pixels; the panning arrow keys were 27 × 27 pixels; the zooming keys were 20 × 28 pixels. The target was 6.6 pixels, reaching 29 pixels at maximum zoom-in [10].

3.4 Experimental procedures

The experiment was a within-subject, or repeated-measures, design. The mouse and the touch screen were operated separately by each participant in the three navigation modes, i.e., CCCD, DCCD, and DCDD; in other words, a total of six experiments were designed, the task of each to be performed on the same map. The experimental order followed the principle of counterbalancing; that is, the order in which the interfaces were operated varied across participants. The experimental order of each participant was encoded as M123, M213, M312, …, T123, T213, T312, where the first letter stood for the input device (M for the mouse and T for the touch screen) and the numbers 1, 2, and 3 stood for the three navigation modes respectively. There was a 5 min break between two successive experiments. The participants had to pan, zoom in, or zoom out the map in order to locate the designated targets, which were represented by red points. There were 10 red points in each experiment. During the experiment, the number of unsolved points was displayed at the bottom right corner of the map, counting down from 10 to 1.
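One plausible enumeration of such counterbalanced order codes can be sketched as follows. This assumes, for illustration only, that every permutation of the three mode digits was used for each device; the paper's own list is elided with "…", so the exact set is not stated:

```python
from itertools import permutations

def counterbalance_codes(devices=("M", "T"), modes=("1", "2", "3")):
    """Enumerate order codes: a device letter followed by one permutation of
    the mode digits, e.g. 'M123' or 'T312'."""
    return [d + "".join(p) for d in devices for p in permutations(modes)]
```

Under this assumption there are 2 × 3! = 12 distinct order codes, enough to assign each of the 36 participants one of the orderings three times over.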
During the experiment, the participants were required to use the navigation techniques to locate the red points on the map. To make the participants use the techniques to the full, the red points appeared at ten randomly-selected sites, only one at a time. After a red point was found, the map had to be zoomed in to the maximum and the red point clicked; right after being clicked, the red point disappeared. Then the next red point had to be searched for, until all ten points were located and the task was accomplished. The experimental procedures were as follows: (1) The purpose, methods and procedures of the experiment were explained to the participant. (2) Personal information, such as gender and age, was filled out by the participant. (3) The written instructions for the experiment were read by the participant. (4) After the experiment started, the participant conducted one of the six simulated tests in the predetermined order. In each test, he/she was required to locate ten designated points, marked in red, on the map; when a designated point was found, the map image had to be zoomed in to the maximum and the point clicked, whereupon the point disappeared and the next red point was to be located. (5) After a simulated test was completed, the participant continued with the next test, repeating procedures 3–4 until all six tests were carried out.

3.5 Analysis of the collected data

Each of the 36 participants was required to employ the mouse and the touch screen separately, operating the web map in the three navigation modes. By the end of testing, each participant had performed a total of six experimental tasks. As each participant was observed repeatedly in six trials, six task completion times and six user interface actions were obtained.
In other words, the mean completion times for the different experimental conditions came from the same group of participants, i.e., a repeated-measures design, and the two variables, the input device and the navigation mode, were both within-subject factors. Consequently, an analysis of variance (ANOVA) was adopted to determine whether there was an interaction between the two variables, and the least significant difference (LSD) method was then used to compare the differences between them.

4 Result

In this research, two input devices, the mouse and the touch screen, were employed by the participants to conduct simulated tests in three navigation modes, i.e., CCCD, DCCD, and DCDD. The operational performances, namely task completion times and user interface actions, were collected and compared through a two-way ANOVA, as shown in Table 4. Regarding task completion times, the interaction between the input device and the navigation mode reached statistical significance (F(2,70) = 3.28, p < .05); that is, in terms of task completion times, the input device and the navigation mode interacted. The task completion times needed with the mouse and the touch screen also differed significantly (F(1,35) = 50.02, p < .001), which means that the task completion time varied with the input device. Similarly, different navigation modes resulted in significantly different task completion times (F(2,70) = 14.80, p < .001), which means that the task completion time varied with the navigation mode.
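As a sketch of the arithmetic behind such repeated-measures F ratios and LSD follow-ups (not the authors' actual SPSS procedure, which was a two-way analysis), a one-way repeated-measures F and an uncorrected paired t can be computed in plain Python:

```python
import math

def rm_anova_f(data: list[list[float]]) -> tuple[float, int, int]:
    """One-way repeated-measures F ratio for data[subject][condition].
    Returns (F, df_conditions, df_error). Between-subject variance is
    partitioned out of the error term, as a repeated-measures design requires."""
    n = len(data)       # subjects
    k = len(data[0])    # conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

def lsd_paired_t(a: list[float], b: list[float]) -> float:
    """LSD-style follow-up: an uncorrected paired t statistic between two
    conditions measured on the same participants."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)
```

For a toy data set of three subjects by three conditions, e.g. [[2, 4, 6], [3, 3, 6], [1, 5, 6]], the sketch yields F(2,4) = 12.0; the LSD step then compares condition pairs one at a time with no correction for multiple comparisons, which is what distinguishes LSD from stricter post-hoc tests.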
Concerning user interface actions, the interaction between the input device and the navigation mode was not statistically significant; nor was there a statistically significant difference between the input devices. In other words, the input device, whether mouse or touch screen, did not exert a significant effect on the user interface actions. The navigation mode, however, did have a significant effect on the user interface actions (F(2,70) = 81.74, p < .001), which means that the participants' interface actions varied with the navigation mode. The task completion times consumed with the different input devices in the different navigation modes differed significantly (Table 5); the least significant difference (LSD) comparisons showed that the mouse performed better than the touch screen in every one of the three modes. As shown in Table 6, the effect of the navigation mode on task completion times was statistically significant. With the touch screen, the three navigation modes exerted a significant effect on the task completion times (F(2,70) = 10.14, p < .001); the LSD comparisons revealed that DCDD took longer than either CCCD or DCCD. Similarly, with the mouse, the effect of the three navigation modes on task completion times was statistically significant (F(2,70) = 5.43, p < .01), and the LSD comparisons again revealed that DCDD took the longest completion time. The effect of the navigation mode on user interface actions was also statistically significant. With the touch screen, the three navigation modes exerted a significant effect on the user interface actions (F(2,70) = 87.66, p < .001).
The LSD comparisons showed that DCDD required the most actions, DCCD ranked second and CCCD needed the fewest. Similarly, with the mouse, the three navigation modes exerted a significant effect on user interface actions (F(2,70) = 28.02, p < .001), and the LSD comparisons again showed that DCDD required the most actions, DCCD ranked second and CCCD needed the fewest.

5 Discussion

This research operated the mouse and the touch screen separately in the three navigation modes of web maps and explored the performance differences in task completion times and user interface actions. Regarding task completion times, the touch screen took more time than the mouse. This agrees with the conclusion of other researchers, whose single-touch operation test reported that the touch screen did worse than the mouse [11]. As observed in this research, when a participant pressed an arrow key on the touch screen, the finger or hand often spanned the screen and occluded the key from the participant's visual angle. To click the occluded arrow key precisely, he/she would first move the hand or finger away in order to see exactly where the key was before clicking it; this act lengthened the task completion time. With the mouse, in contrast, the cursor could be moved onto the arrow key as soon as it was seen. As a result, the mouse performed better than the touch screen in terms of task completion time. It was also observed that, if the combined arrow keys were arranged too close together, the user would worry about committing an error when pressing an arrow key or switching to another; the resulting mental pressure could cause the user's finger to press the key more slowly than normal.
When the mouse was used instead of the touch screen, however, the cursor did not occlude the arrow key and the wrong key was unlikely to be pressed; consequently, the same task was completed in a much shorter time. Furthermore, regarding the input devices, the mouse was used more frequently by most of the participants, whereas they were rather unfamiliar with operating the large-sized touch screen. Familiarity played an important role in the user's performance in operating the interface. Other researchers have held that people face a series of learning processes when operating an unfamiliar interface, but if the interface is often used in daily life, memories of it are called to mind when it is used again, and the reaction sometimes becomes automatic [33]. For those who often used the mouse, its operation was so familiar that their reaction was almost automatic; the mouse therefore performed better than the touch screen, which was rather unfamiliar to the participants. Following the wayfinding strategy of this research, a simple and unfamiliar map was adopted for the task, so that the effects of individual differences and of visuospatial working memory in map learning on the performance of the navigation techniques were minimized. The experimental result (Table 6) clearly indicates that, whether the touch screen or the mouse is employed, mode 1 (CCCD) performs best in terms of task completion time, with mode 2 (DCCD) coming next and mode 3 (DCDD) coming last. This conclusion agrees with the findings of previous studies, which indicated that continuous navigation works faster than discrete navigation in an unfamiliar information space [22]. DCDD can easily confuse the user in virtual environments.
To be exact, the user often feels as if he/she were moving the whole map image instead of the map frame, which causes the cognitive direction to be opposite to the operational direction [20]; as a result, the participants in the experiment had much difficulty operating properly. When repetitive and simple movements are performed, the participant tends to spend more time thinking if a choice or judgment has to be made [33], and the task completion time increases. In the navigation mode DCDD, the panning is displayed in an intermittent manner: because the continuous movement of the image is invisible, the user must spend a longer time thinking, so the task completion time needed by DCDD is longer than that needed by the continuous display modes. Comparatively speaking, CCCD and DCCD do better because the two modes let the participant view the changes of image movement clearly, make instant judgments, and reduce thinking time. In terms of user interface actions, CCCD performs best (with the fewest clicks), with DCCD coming next and DCDD coming last (with the most clicks). In the DCDD mode, the user tends to mistake the movement of the map frame for that of the map image, which increases the number of user interface actions. Moreover, whether the touch screen or the mouse is used, CCCD performs better than the other two modes in terms of user interface actions. This result agrees with the theory of user-defined map browsing presented by other researchers [22]. While a long-distance map is browsed, the discrete control mode restricts the distance of movement, inconveniences the user, and lowers the operational performance. Concerning image control, the continuous control mode allows the user to press an arrow key only once to move over a long distance.
The discrete control mode, by contrast, requires the user to press the arrow key several times before reaching the destination, which explains why CCCD performs best among the three navigation modes. So far, operational experiments with computers have shown no difference between single clicks and double clicks in the injuries they cause [34]. However, frequent repetitive mouse movements are a risk factor for musculoskeletal disorders of the upper arm, elbow, wrist, and fingers on the operating side [35,36]. From an ergonomic standpoint, hand-controlled devices should therefore involve as few repetitive finger movements as possible to minimize muscular injury [32]. Because CCCD reduces repeated clicking, it also serves the user better in this respect.

6 Conclusion

Using a simulated web-map interface, two input devices were tested in three navigation modes to investigate the differences in their operational performance. The main findings are as follows: (1) Among the navigation modes of the web map, CCCD proves the most desirable, because it keeps the user well aware of the moving direction; conversely, DCDD displays the map image intermittently, forcing the participant to spend more time judging and choosing, and thus performs worst of the three modes. (2) CCCD outperforms the other two modes in both task completion time and user interface actions, so it is highly recommended for web maps and for devices with small screens, such as digital cameras, cell phones, and PDAs. (3) Even though the touch screen is easier to learn and operate, it performs considerably worse than the mouse in all three navigation modes.
This is mainly because the participant's fingers occlude the arrow keys, which increases the task completion time. By contrast, when the mouse is used, the cursor moves in synchrony with the user's eyes, which is why the mouse beats the touch screen in task completion time. Furthermore, most of the participants habitually use the mouse, so it performs better than the less familiar touch screen. This research chiefly targeted college students and graduates; in the future, other groups will be included as participants, and the experimental results will be carefully compared and analyzed. In addition, our findings will be applied to designing user interfaces compatible with the touch screen, followed by further study and analysis.

References

[1] Y. Lu, Y. Xiao, A. Sears, J. Jacko, A review and a framework of handheld computer adoption in healthcare, Int. J. Med. Inform. 74 (2005) 409–422.
[2] P. Albinsson, S. Zhai, High precision touch screen interaction, ACM, New York, NY, USA, 2003, pp. 105–112.
[3] B. Kules, H. Kang, C. Plaisant, A. Rose, B. Shneiderman, Immediate usability: kiosk design principles from the CHI 2001 photo library, citeseer.csail.mit.edu/571542.html (Last accessed 22, 2005).
[4] W. Cartwright, J. Crampton, G. Gartner, S. Miller, K. Mitchell, E. Siekierska, J. Wood, Geospatial information visualization user interface issues, Cartogr. Geogr. Inform. Sci. 28 (2001) 45–60.
[5] C. Gutwin, C. Fedak, Interacting with big interfaces on small screens: a comparison of fisheye, zoom, and panning techniques, Canadian Human-Computer Communications Society, 2004, pp. 152.
[6] A. Neumann, Navigation in space, time and topic, International Cartographic Conference, 2005, pp. 11–16.
[7] D. Norman, The Psychology of Everyday Things, Basic Books, New York, 1988.
[8] S. Card, W. English, B. Burr, Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a CRT, Ergonomics 21 (1978) 601–613.
[9] A. Esenther, K. Ryall, Fluid DTMouse: better mouse support for touch-based interactions, ACM, 2006, pp. 115.
[10] A. Sears, B. Shneiderman, High precision touchscreens: design strategies and comparisons with a mouse, Int. J. Man Mach. Stud. 34 (1991) 593–613.
[11] C. Forlines, D. Wigdor, C. Shen, R. Balakrishnan, Direct-touch vs. mouse input for tabletop displays, ACM, 2007, pp. 656.
[12] K. Kin, M. Agrawala, T. DeRose, Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation, Proceedings of Graphics Interface 2009, 2009, pp. 119–124.
[13] H. Benko, A. Wilson, P. Baudisch, Precise selection techniques for multi-touch screens, Google Patents, 2006.
[14] D. Bowman, E. Davis, L. Hodges, A. Badre, Maintaining spatial orientation during travel in an immersive virtual environment, Presence 8 (1999) 618–631.
[15] K. Booth, B. Fisher, S. Page, C. Ware, S. Widen, Wayfinding in a virtual environment, Citeseer, 2000.
[16] D. Montello, H. Pick, Integrating knowledge of vertically aligned large-scale spaces, Environ. Behav. 25 (1993) 457.
[17] R. Conroy, Spatial navigation in immersive virtual environments, University College London, 2001.
[18] R. Darken, J. Sibert, Navigating in large virtual worlds, Int. J. Human-Comput. Inter. 8 (1996) 49–72.
[19] R. Darken, J. Sibert, Wayfinding strategies and behaviors in large virtual worlds, ACM, 1996, pp. 142–149.
[20] M. You, C. Chen, H. Liu, H. Lin, A usability evaluation of web map zoom and pan functions, Int. J. Des. 1 (2007) 15–25.
[21] S. Burigat, L. Chittaro, S. Gabrielli, Navigation techniques for small-screen devices: an evaluation on maps and web pages, Int. J. Hum. Comput. Stud. 66 (2008) 78–97.
[22] M. Harrower, B. Sheesley, Designing better map interfaces: a framework for panning and zooming, Trans. GIS 9 (2005) 77–89.
[23] P. Sadeghian, M. Kantardzic, O. Lozitskiy, W. Sheta, The frequent wayfinding-sequence (FWS) methodology: finding preferred routes in complex virtual environments, Int. J. Hum. Comput. Stud. 64 (2006) 356–374.
[24] B. Stankiewicz, G. Legge, J. Mansfield, E. Schlicht, Lost in virtual space: studies in human and ideal spatial navigation, J. Exp. Psychol. 32 (2006) 688–704.
[25] B. Peterson, M. Wells, T. Furness, E. Hunt, The effects of the interface on navigation in virtual environments, Human Factors and Ergonomics Society, 1998, pp. 1496–1500.
[26] C. Lawton, Strategies for indoor wayfinding: the role of orientation, J. Environ. Psychol., 1996.
[27] J. Magliano, R. Cohen, G. Allen, J. Rodrigue, The impact of a wayfinder's goal on learning a new environment: different types of spatial knowledge as goals, J. Environ. Psychol. 15 (1995) 65–75.
[28] E. Coluccia, A. Bosco, M. Brandimonte, The role of visuo-spatial working memory in map learning: new findings from a map drawing paradigm, Psychol. Res. 71 (2007) 359–372.
[29] C. Cornoldi, T. Vecchi, Visuo-spatial Working Memory and Individual Differences, Psychology Pr, 2003.
[30] K. Schultz, D. Batten, T. Sluchak, Optimal viewing angle for touch-screen displays: is there such a thing?, Int. J. Ind. Ergon. 22 (1998) 343–350.
[31] A. Sears, Improving touchscreen keyboards: design issues and a comparison with other devices, Interact. Comput. 3 (1991) 253–269.
[32] M. Sanders, E. McCormick, Human Factors in Engineering and Design, McGraw-Hill, 1987.
[33] K. Lim, I. Benbasat, P. Todd, An experimental investigation of the interactive effects of interface style, instructions, and task familiarity on user performance, ACM Trans. Comput.-Hum. Inter. (TOCHI) 3 (1996) 1–37.
[34] S. Thorn, M. Forsman, S. Hallbeck, A comparison of muscular activity during single and double mouse clicks, Eur. J. Appl. Physiol. 94 (2005) 158–167.
[35] A. Kilbom, Repetitive work of the upper extremity: Part I – guidelines for the practitioner, Int. J. Ind. Ergon. 14 (1994) 51–57.
[36] C. Jensen, V. Borg, L. Finsen, K. Hansen, B. Juul-Kristensen, H. Christensen, Job demands, muscle activity and musculoskeletal symptoms in relation to work with the computer mouse, Scand. J. Work Environ. Health 24 (1998) 418–424.
Year: 2011
DOI: 10.1016/j.displa.2011.05.004
Venue: Displays (Journal), Volume 32, Issue 5, ISSN 0141-9382
Keywords: Web map, Input device, Navigation mode
Authors: Fong-Gong Wu, Hsuan Lin, Manlai You