
Monday, March 17, 2008

Inspiration: Takehiko Ohno

Takehiko Ohno works at NTT Cyber Solutions Laboratories in Japan, and his interests lie mainly in eye tracking technology and human-computer interaction. He has published several papers on eye tracking technology and interaction methods. His QuickGlance selection method aims to solve the well-known Midas touch problem. The interface contains a specific selection area next to each choice/item which must be fixated to activate the function. There are two major advantages to this. First, the user can look around at the menu items without worrying about accidentally activating something. Second, advanced users can go for the activation area directly without even reading the menu text, just as most people know the order/location of items on the Windows Start menu. On the downside, this means that all options have to be displayed on the screen all the time.

Takehiko has additionally published several articles on FreeGaze, a remote system that allows the user to move his head freely. The FreeGaze eye tracker at NTT has an accuracy of 0.28 degrees and is based on a rather wide stereoscopic corneal reflection setup, using several image processing algorithms described in his papers, which are well written and worth reading.

His research highlights the importance of providing feedback to the user as the major method of reducing error rates, something that I've taken to heart.

Monday, March 3, 2008

Zooming and Expanding Interfaces / Custom components

The papers I reviewed on zooming interaction styles inspired me to develop a set of zoom-based interface components. The interaction style is well suited to gaze, since it helps overcome the inaccuracy and jitter of eye movements. My intention is that the interface components should be completely standalone, customizable and straightforward to use, ideally included in new projects by importing one file and writing one line of code.
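To illustrate the intended usage (the names GazeComponents, DwellMenuButton and DwellTime below are hypothetical placeholders), adding one of the components to a project should look roughly like this:

<!-- Hypothetical usage sketch: one namespace import for the component
     library, one line of markup for the actual control. -->
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:gaze="clr-namespace:GazeComponents;assembly=GazeComponents">
  <gaze:DwellMenuButton DwellTime="0:0:0.2"/>
</Canvas>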

The first component is a dwell-based menu button that, on fixation, will a) provide a dwell-time indicator by animating a small glow effect surrounding the button image and b) after 200 ms expand an ellipse that houses the menu options. This produces a two-step dwell activation while making use of the display area in a much more dynamic way. The animation is there to keep the user's fixation on the button for the duration of the dwell time. The items in the menu are displayed when the ellipse has reached its full size.

This gives the user feedback in the parafoveal region, while at the same time the glow of the button icon stops, indicating that the dwell time has run its course. (A bit hard to describe in words; perhaps easier to understand from the images below.) The parafoveal region of our visual field lies just outside the foveal region (where full-resolution vision takes place). The foveal area is about the size of a thumbnail at arm's length; items in the parafoveal region can still be seen, but with reduced resolution/sharpness. We do see them, but have to make a short saccade to bring them into full resolution. In other words, the menu items pop out at a distance that attracts a short saccade, which is easily discriminated by the eye tracker. (Just for fun: test your visual field.)

Before the button has received focus


Upon fixation the button image displays an animated glow effect indicating the dwell process. The image above illustrates how the menu items pop out on the ellipse surface at the end of the dwell. Note that the ellipse grows in size over a 300 ms period; the exact timing is configurable by passing a parameter in the XAML design page.
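Below is a simplified sketch of how such a button can be put together in XAML, assuming the eye tracker drives the mouse cursor so that a fixation raises MouseEnter. The image source, sizes and colors are placeholders, and the reset on MouseLeave is left out:

<!-- Two-step dwell sketch: glow during the first 200 ms, then the menu
     ellipse expands over 300 ms. Assumes gaze is mapped to the cursor. -->
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <!-- The ellipse that will house the menu items; starts collapsed. -->
  <Ellipse x:Name="MenuEllipse" Canvas.Left="32" Canvas.Top="32"
           Width="0" Height="0" Fill="#33000000"/>
  <!-- The button image, with an outer glow used as the dwell-time indicator. -->
  <Image x:Name="ButtonImage" Source="button.png" Width="64" Height="64">
    <Image.BitmapEffect>
      <OuterGlowBitmapEffect GlowColor="Orange" GlowSize="0"/>
    </Image.BitmapEffect>
    <Image.Triggers>
      <EventTrigger RoutedEvent="Image.MouseEnter">
        <BeginStoryboard>
          <Storyboard>
            <!-- Step 1: animate the glow during the 200 ms dwell. -->
            <DoubleAnimation Storyboard.TargetName="ButtonImage"
                             Storyboard.TargetProperty="(Image.BitmapEffect).(OuterGlowBitmapEffect.GlowSize)"
                             To="12" Duration="0:0:0.2"/>
            <!-- Step 2: expand the menu ellipse over 300 ms. -->
            <DoubleAnimation Storyboard.TargetName="MenuEllipse"
                             Storyboard.TargetProperty="Width"
                             To="300" BeginTime="0:0:0.2" Duration="0:0:0.3"/>
            <DoubleAnimation Storyboard.TargetName="MenuEllipse"
                             Storyboard.TargetProperty="Height"
                             To="200" BeginTime="0:0:0.2" Duration="0:0:0.3"/>
          </Storyboard>
        </BeginStoryboard>
      </EventTrigger>
    </Image.Triggers>
  </Image>
</Canvas>

In the actual component the timing is, as mentioned above, exposed as a parameter rather than hard-coded.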

The second prototype I have been working on is also inspired by the use of expanding surfaces. The purpose is a gaze-driven photo gallery where thumbnail-sized image previews become enlarged upon glancing at them. The enlarged view displays an icon which can be fixated to make the photo appear in full size.

Displaying all the images in the user's "My Pictures" folder.


Second step: glancing at the photos, which are dynamically resized and can optionally be enlarged further.

Upon glancing at the thumbnails they become enlarged, which activates the icon at the bottom of each photo. A second fixation on this icon brings the photo into a large view. The large view has two icons for navigating back and forth (next photo). By fixating outside the photo, the view returns to the overview.
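A minimal sketch of the thumbnail behaviour, again assuming the gaze position is mapped to the mouse cursor: the style below simply scales a preview up while it is being glanced at. The sizes, scale factor and style name are illustrative, and the full-size icon is left out.

<!-- Thumbnails using this style grow to double size on a glance. -->
<Style x:Key="GazeThumbnail" TargetType="Image"
       xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
       xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Setter Property="Width" Value="96"/>
  <Setter Property="RenderTransformOrigin" Value="0.5,0.5"/>
  <Setter Property="RenderTransform">
    <Setter.Value>
      <ScaleTransform ScaleX="1" ScaleY="1"/>
    </Setter.Value>
  </Setter>
  <Style.Triggers>
    <!-- A glance (IsMouseOver) enlarges the preview; a second fixation on
         its icon (not shown) would bring up the full-size view. -->
    <Trigger Property="IsMouseOver" Value="True">
      <Setter Property="RenderTransform">
        <Setter.Value>
          <ScaleTransform ScaleX="2" ScaleY="2"/>
        </Setter.Value>
      </Setter>
    </Trigger>
  </Style.Triggers>
</Style>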

Wednesday, February 6, 2008

Better feedback!

Since having a pointer representing the gaze position on the screen is distracting, some other form of feedback is needed. As mentioned before, a visible pointer causes your eyes to more or less automatically fixate on the moving object. Remember, the eyes are never completely still, and the eye tracker adds further jitter on top of that.

What we need is a subtle way of showing the user that the eye tracker has mapped the gaze coordinates to the location he or she is looking at. It's time to make it look a bit nicer than the previous example, where the whole background color of the button changed on a gaze fixation.

The reason for choosing to work with Windows Presentation Foundation is that it provides rich functionality for building modern interfaces. For example, you can define a trigger for an event such as GazeEnter (i.e. MouseEnter) on a button and then apply a built-in graphical effect to the object. These effects are rendered in real time, for example a glowing shadow around the object or a Gaussian blur that gives the object an out-of-focus effect. Very useful for this project. Let's give it a try.

This is the "normal" button. Notice the out-of-focus effect on the globe in the center.




Upon receiving a glance, the "Image.IsMouseOver" trigger fires. This applies the built-in bitmap effect OuterGlowBitmapEffect, which generates a nice red glowing border around the button.




The XAML design code for the button (screenshot).
Notice the GlowColor and GlowSize attributes, which control how the effect is rendered.
To apply this to the button we set Style="{StaticResource GlowButton}" inside the button tag. Furthermore, the globe in the center can be brought back into focus and highlighted with a green glow surrounding it inside the button canvas.
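In rough outline, the GlowButton style and a button using it look something like this. The exact GlowColor and GlowSize values are placeholders, and the style would normally live in the resources of the page or window:

<!-- Sketch of the GlowButton style: a trigger on IsMouseOver (driven by
     the gaze position) applies the built-in red outer glow. -->
<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Page.Resources>
    <Style x:Key="GlowButton" TargetType="Button">
      <Style.Triggers>
        <Trigger Property="IsMouseOver" Value="True">
          <Setter Property="BitmapEffect">
            <Setter.Value>
              <!-- GlowColor and GlowSize control the rendered effect. -->
              <OuterGlowBitmapEffect GlowColor="Red" GlowSize="15"/>
            </Setter.Value>
          </Setter>
        </Trigger>
      </Style.Triggers>
    </Style>
  </Page.Resources>
  <!-- The style is applied inside the button tag. -->
  <Button Style="{StaticResource GlowButton}" Width="96" Height="96"/>
</Page>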


The globe is defined as an image object. Upon focus, the triggers set the Gaussian blur radius to zero, bringing the globe into focus, while the glow effect produces the green circle surrounding it.
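Something along these lines (the blur radius, glow size and image source are placeholders):

<!-- The globe: slightly blurred by default; a trigger removes the blur
     and adds the green outer glow while it is being looked at. -->
<Image Source="globe.png" Width="48" Height="48"
       xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
  <Image.Style>
    <Style TargetType="Image">
      <Setter Property="BitmapEffect">
        <Setter.Value>
          <BlurBitmapEffect Radius="4"/>
        </Setter.Value>
      </Setter>
      <Style.Triggers>
        <Trigger Property="IsMouseOver" Value="True">
          <Setter Property="BitmapEffect">
            <Setter.Value>
              <BitmapEffectGroup>
                <BlurBitmapEffect Radius="0"/>
                <OuterGlowBitmapEffect GlowColor="Green" GlowSize="10"/>
              </BitmapEffectGroup>
            </Setter.Value>
          </Setter>
        </Trigger>
      </Style.Triggers>
    </Style>
  </Image.Style>
</Image>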

Putting it all together in a nice-looking interface using the Glass Window style, it looks promising and is a real improvement over yesterday's boring interface. Providing a small surrounding glow, or bringing the image into focus upon fixation, is much better than changing the whole button color. The examples here are perhaps less subtle than they should be, just to demonstrate the effect.

Screenshots of the second prototype with the new UI effects and events.

The "normal" screen with no gaze input.

OnGazeEnter: a glowing border is generated around the button.

The other example with a green glow and the globe in full focus.