Summary17

http://www.eecs.tufts.edu/~jacob/papers/hot.txt

The paper studies relatively unused methods by which users and computers can communicate information, focusing on obtaining input from the user's eye movements. The computer identifies the point on its display screen at which the user is looking and uses that information as part of its dialogue with the user.

For example, if a display shows several icons, a user might request additional information about one of them. Instead of requiring the user to indicate the desired icon by pointing at it with a mouse or entering its name with a keyboard, the computer can determine which icon the user is looking at and give the information on it immediately.
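
A minimal sketch of that idea, assuming the tracker reports a screen coordinate and each icon has a known bounding box; the icon names and layout below are made up for illustration.

```python
# Hypothetical icons with (left, top, right, bottom) screen rectangles.
ICONS = {
    "mailbox": (40, 40, 104, 104),
    "printer": (160, 40, 224, 104),
    "trash":   (280, 40, 344, 104),
}

def icon_under_gaze(gaze_x, gaze_y):
    """Return the name of the icon the user is looking at, or None."""
    for name, (left, top, right, bottom) in ICONS.items():
        if left <= gaze_x <= right and top <= gaze_y <= bottom:
            return name
    return None

# When the user asks for more information, the current gaze point, not a
# mouse click, decides which icon they mean.
print(icon_under_gaze(185, 72))   # -> "printer"
```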

A user interface based on eye movement inputs has the potential for faster and more effortless interaction than current interfaces, because people can move their eyes extremely rapidly and with little conscious effort.

The main obstacle is the Midas Touch problem. At first, it is empowering to be able simply to look at what you want and have it happen, rather than having to look at it and then point and click it with the mouse. Before long, though, it becomes like the Midas Touch: everywhere you look, another command is activated; you cannot look anywhere without issuing a command. The challenge in building a useful eye movement interface is to avoid this Midas Touch problem.

Another problem: the eyes continually dart from point to point in rapid and sudden "saccades," so the raw gaze data is jumpy and noisy. Their approach is to obtain information from the natural movements of the user's eye while viewing the display, rather than requiring the user to make specific trained eye movements to actuate the system.

They partitioned the problem of using eye movement data into two stages. First, they process the raw data from the eye tracker in order to filter noise, recognize fixations, compensate for local calibration errors, and try to reconstruct the user's more conscious intentions from the available information. This stage uses a model of eye motions (fixations separated by saccades) to drive a fixation recognition algorithm that converts the continuous stream of raw eye position reports into discrete tokens representing the user's intentional fixations.
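
The summary does not spell out the recognition algorithm itself, so the following is only a minimal sketch of one common approach, a dispersion-threshold recognizer; the sample format, the pixel-based spread window, and the 100 ms minimum duration are assumptions rather than values taken from the paper.

```python
def recognize_fixations(samples, max_spread_px=30, min_duration_ms=100):
    """Convert raw (x, y, t_ms) gaze samples into fixation tokens.

    A fixation token is emitted whenever the gaze stays within a small
    spatial window for at least min_duration_ms; everything else is
    treated as saccade or noise and discarded.
    """
    def spread(pts):
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def token(pts):
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        return {"x": sum(xs) / len(xs), "y": sum(ys) / len(ys),
                "start_ms": pts[0][2], "duration_ms": pts[-1][2] - pts[0][2]}

    fixations, window = [], []   # window holds the current candidate fixation
    for sample in samples:
        window.append(sample)
        # The newest sample may have broken the spatial window.
        while len(window) > 1 and spread(window) > max_spread_px:
            if window[-2][2] - window[0][2] >= min_duration_ms:
                fixations.append(token(window[:-1]))  # emit completed fixation
                window = [window[-1]]                 # start over from the new sample
            else:
                window.pop(0)                         # too short: drop as noise
    if len(window) > 1 and window[-1][2] - window[0][2] >= min_duration_ms:
        fixations.append(token(window))               # flush a trailing fixation
    return fixations
```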

The tokens are passed to the user interface management system, along with tokens generated by other input devices being used simultaneously, such as the keyboard or mouse.
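
For concreteness, one way such tokens might be represented and merged into a single input stream; the field names and the queue are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass
import queue

@dataclass
class Token:
    device: str        # "eye", "keyboard", "mouse", ...
    kind: str          # e.g. "fixation", "key_press", "button_down"
    x: float = 0.0     # screen position, where meaningful
    y: float = 0.0
    time_ms: int = 0

# One queue into which every device driver posts its tokens; the user
# interface management system pulls them off in time order.
input_queue: "queue.Queue[Token]" = queue.Queue()
input_queue.put(Token("eye", "fixation", x=412, y=305, time_ms=10_030))
input_queue.put(Token("keyboard", "key_press", time_ms=10_180))
```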

The second stage is to design generic interaction techniques based on these tokens as inputs. The first interaction technique they developed is object selection: the task is to select one object from among several displayed on the screen.

With the eye tracker, there is no natural counterpart of the button press. They rejected using a blink as the signal because it detracts from the naturalness possible with just an eye movement by requiring the person to think about blinking.

They tested two alternatives. In the first, the user looks at the desired object and then presses a button on a keypad to indicate the choice. The second alternative uses dwell time: if the user continues to look at the object for a sufficiently long time, it is selected without further operations.

In practice, the dwell time alternative was much more convenient. But how long should the dwell time be? Too long is bad because it takes away the speed advantage of using the eyes.

If selecting the wrong object can be undone trivially, simply by selecting the right one immediately afterwards, then a very short dwell time can be used.

They found excellent results using a 150-250 ms dwell time. The lag between eye movement and system response was hardly detectable to the user, yet long enough to accumulate sufficient data for fixation recognition and processing.

For situations where selecting an object is more difficult to undo, button confirmation is used rather than a longer dwell time.
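
Putting the selection rule together, here is a rough sketch driven by the fixation tokens from the first-stage sketch above; the 200 ms value sits inside the reported 150-250 ms range, but the function and object representation are assumptions for illustration.

```python
DWELL_MS = 200  # inside the 150-250 ms range reported to work well

def select_by_dwell(fixation, objects, button_pressed=False, needs_confirmation=False):
    """Return the name of the object selected by this fixation, or None.

    `fixation` is a token like {"x": ..., "y": ..., "duration_ms": ...};
    `objects` maps names to (left, top, right, bottom) screen rectangles.
    Easily undone selections fire once the fixation has lasted DWELL_MS;
    harder-to-undo selections require a button press instead of a longer
    dwell time.
    """
    target = None
    for name, (left, top, right, bottom) in objects.items():
        if left <= fixation["x"] <= right and top <= fixation["y"] <= bottom:
            target = name
            break
    if target is None:
        return None
    if needs_confirmation:
        return target if button_pressed else None
    return target if fixation["duration_ms"] >= DWELL_MS else None
```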

Other interaction techniques they have developed and are studying in their laboratory include:

* continuous display of attributes of the eye-selected object, instead of explicit user commands to request the display
* moving an object by selecting it with the eye, then pressing a button down, "dragging" the object by moving the eye, and releasing the button to stop dragging (a rough sketch of this one follows the list)
* moving an object by eye selection, then dragging it with the mouse
* pull-down menu commands, using dwell time to select or looking away to cancel the menu, plus an optional accelerator button
* forward and backward eye-controlled text scrolling
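
The eye-only drag technique amounts to a small state machine. This is only a sketch under assumed event names and a made-up object representation, not the authors' implementation.

```python
class EyeDrag:
    """Look at an object, hold a button to pick it up, the object follows
    the gaze while the button is held, and releasing the button drops it."""

    def __init__(self, objects):
        self.objects = objects   # name -> [x, y] centre position on screen
        self.gazed = None        # object currently under the gaze
        self.dragging = None     # object currently being dragged

    def on_fixation(self, x, y, radius=32):
        if self.dragging:
            # Button is held: the dragged object follows each new fixation.
            self.objects[self.dragging][:] = [x, y]
        else:
            self.gazed = next((name for name, (ox, oy) in self.objects.items()
                               if abs(x - ox) <= radius and abs(y - oy) <= radius),
                              None)

    def on_button_down(self):
        self.dragging = self.gazed   # pick up whatever is being looked at

    def on_button_up(self):
        self.dragging = None         # drop the object where it is
```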