Summary16


During physical tasks in large workspaces, people use focus and context regions: human vision senses at high resolution only in the centre of the visual field, with the periphery perceived at much lower resolution.

Accordingly, people place primary objects and information in a small area in front of them while using the surrounding area for supporting items.

As displays become larger, it becomes important to consider focus-plus-context techniques in computer interfaces. As a display grows in size and pixel count, navigating around it with a mouse and selecting graphical items becomes more difficult.

With large displays, techniques that make it easier to move the mouse pointer to items, or items to the mouse pointer, become useful.

When working with multiple displays, users tend to think of the displays as separate; they might, for instance, position a window within a particular monitor and avoid placing it across the boundary between two monitors.

Attentive interfaces use sensing techniques such as computer vision to generate implicit inputs from measurements of the user's attention, for instance, where the user is looking.

With a single monitor, combining the mouse with eye tracking can reduce the amount of mouse movement required. With multiple monitors, however, the head moves through a large range of positions and angles, which makes eye tracking difficult.
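A well-known example of this combination is MAGIC pointing (Zhai et al., CHI 1999), in which the pointer warps to the vicinity of the gaze point when mouse motion begins, leaving only a short corrective movement. Below is a sketch of the idea; `gaze_position` and `warp_pointer` are hypothetical hooks, not a real API.

```python
# A sketch of gaze-assisted pointing in the spirit of MAGIC pointing:
# when mouse motion starts, the pointer is warped near the current gaze
# point so only a short corrective movement remains.
# gaze_position() and warp_pointer() are hypothetical platform hooks.

MOTION_THRESHOLD = 4  # pixels of motion that count as "starting to move"

def on_mouse_moved(dx, dy, state):
    moved = (dx * dx + dy * dy) ** 0.5
    if not state["warped"] and moved > MOTION_THRESHOLD:
        gx, gy = gaze_position()   # where the eye tracker says the user looks
        warp_pointer(gx, gy)       # jump the pointer to the gaze point
        state["warped"] = True     # warp at most once per pointing gesture
    # normal relative mouse motion then continues from the warped position

def on_mouse_idle(state):
    state["warped"] = False        # the next gesture may warp again

def gaze_position():
    return (960, 540)              # stub: centre of a 1920x1080 screen

def warp_pointer(x, y):
    print(f"pointer warped to ({x}, {y})")

# Example: small jitter does not warp; a real movement does
state = {"warped": False}
for dx, dy in [(1, 0), (6, 2), (3, 1)]:
    on_mouse_moved(dx, dy, state)
```

Warping once per pointing gesture mirrors the "liberal" MAGIC variant; a conservative variant waits for more evidence of an intended move before warping.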

The authors used a head-tracking system developed in their lab to build a system that moves the mouse pointer between monitors and switches the active application.

They had previously experimented with head tracking for switching between windows on a single-monitor system, for zooming and scrolling a map, and for a prototype focus-plus-context system on a large projected display.


In the multi-monitor system they used head tracking to detect which monitor the user was looking at.

When the monitor being focused on changes, the mouse pointer jumps to the new monitor, and the top window on that monitor is activated so it can receive keyboard events without any explicit action to switch applications.

To move a window between monitors, the user can start dragging it with the mouse, then look at another monitor to have it jump there.
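The switching behaviour can be sketched in code. Below is a minimal sketch, not the authors' implementation: the `Monitor` layout, `warp_pointer`, and `activate_top_window` are hypothetical stand-ins for the real head tracker and platform calls.

```python
# A minimal sketch of attention-driven monitor switching: when the monitor
# of regard changes, warp the pointer there and activate its top window.

from dataclasses import dataclass

@dataclass
class Monitor:
    index: int
    x: int        # left edge in desktop coordinates
    width: int
    height: int

    def centre(self):
        return (self.x + self.width // 2, self.height // 2)

def on_head_sample(monitor_of_regard, state):
    """Called once per head-tracker sample with the monitor the user faces."""
    if monitor_of_regard.index != state["active"]:
        state["active"] = monitor_of_regard.index
        # jump the pointer to the newly focused monitor...
        warp_pointer(*monitor_of_regard.centre())
        # ...and give its top window keyboard focus, with no explicit switch
        activate_top_window(monitor_of_regard)

def warp_pointer(x, y):
    print(f"pointer warped to ({x}, {y})")   # platform call in a real system

def activate_top_window(monitor):
    print(f"top window on monitor {monitor.index} activated")

# Example: the user looks from monitor 0 to monitor 1
monitors = [Monitor(0, 0, 1920, 1080), Monitor(1, 1920, 1920, 1080)]
state = {"active": 0}
on_head_sample(monitors[1], state)
```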

They tested eight users on a window-management task and found that the system required significantly less mouse movement than the conventional setup, and that users preferred it to the conventional one; however, task time actually increased.

The main problem was moving between nearby points on adjacent monitors; in this case, the system did not always select the correct monitor.

To address this, they are implementing fixation detection for head motion, analogous to the fixation detection used for eye motion.
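Fixation detection for eye movements is often done with a dispersion-threshold algorithm such as I-DT; the sketch below applies the same idea to head-orientation samples. The window length and dispersion threshold are illustrative values, not parameters from the paper.

```python
# Dispersion-threshold fixation detection (I-DT style) applied to head
# orientation: a fixation is a run of samples whose total angular spread
# stays below a threshold for at least a minimum duration.

def dispersion(window):
    yaws = [s[0] for s in window]
    pitches = [s[1] for s in window]
    return (max(yaws) - min(yaws)) + (max(pitches) - min(pitches))

def detect_fixations(samples, max_dispersion=2.0, min_samples=15):
    """samples: list of (yaw, pitch) head angles in degrees, in time order.
    Returns (start_index, end_index) pairs for detected head fixations."""
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while the head stays within the threshold
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```

A monitor switch would then be triggered only when a fixation lands on a new monitor, rather than on every raw tracker sample.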

Three important criteria that should be considered during the design of interfaces based on sensing techniques such as eye or head tracking are:

- the distinction between implicit and explicit inputs: implicit inputs are generated automatically in response to sensing of the user's inadvertent movements, while explicit inputs are consciously performed.

- whether tracking data are treated as continuous or discrete: the continuous stream of data generated by a tracking algorithm can either be used to continuously adjust parameters of the application, or converted into a series of discrete events, for example by applying a threshold or using a hidden Markov model (see the sketch after this list).

- the cost of mistakes: the cost of mistakes should be kept low so that the user can easily recover from their own errors or from errors of the tracking system.
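To illustrate the second criterion, the sketch below converts a continuous head-yaw stream into discrete monitor-selection events using a threshold with a small hysteresis band; the angle boundaries and three-monitor layout are illustrative, not taken from the paper.

```python
# Converting a continuous tracking signal to discrete events: head yaw is
# thresholded into a monitor index, and a hysteresis band keeps noisy
# samples near a boundary from flipping the selection back and forth.

def yaw_to_monitor(yaw_deg, current, boundary=15.0, hysteresis=3.0):
    """Map head yaw to a monitor index (0 = left, 1 = centre, 2 = right).
    A sample must cross a boundary by `hysteresis` degrees before the
    discrete selection changes."""
    edges = [-boundary, boundary]              # boundaries between 3 monitors
    target = sum(yaw_deg > e for e in edges)   # raw (noisy) monitor index
    if target != current:
        # the distance past the nearest boundary must exceed the dead band
        nearest = min(edges, key=lambda e: abs(yaw_deg - e))
        if abs(yaw_deg - nearest) < hysteresis:
            return current                     # inside the dead band: keep old
    return target

# Example: a noisy stream near the +15 degree boundary stays on monitor 1
# until the head clearly crosses over
stream = [14.0, 15.5, 14.8, 16.2, 19.5]
monitor = 1
for yaw in stream:
    monitor = yaw_to_monitor(yaw, monitor)
    print(yaw, "->", monitor)
```

The dead band means jitter near a boundary cannot trigger spurious switches, which also keeps the cost of tracking mistakes low (the third criterion).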