
Wednesday, July 13, 2011

LG introduces the world's first glasses-free 3D monitor with eye-tracking technology

Today LG announced a 20" LCD display with built-in "eye tracking" technology that enables glasses-free 3D imaging, which moves this technology closer to the consumer market. As far as I can tell, the image below does not reveal any infrared illuminators, a requirement for all known high-accuracy systems, so it is probably more of a rough estimation system than a full-blown remote eye tracker. The best known accuracy (in published research) under natural light is about 3-4 degrees of visual angle; with their financial resources they could potentially achieve better results.
Left: the "special" eye tracking camera sensor. It looks like a rather typical webcam CMOS sensor to me; unless they are doing some magic it will not allow accurate gaze estimation. Regardless, it makes me wonder whether 3D displays are the path by which eye tracking goes mainstream. Is this related to the collaboration between Seeing Machines and SuperD announced earlier this year, or is it a competing solution? Details are sparse; I'll keep you posted as more becomes available.


Official press release:


SEOUL, July 13, 2011 – LG Electronics (LG) today unveiled the world’s first glasses-free monitor utilizing eye-tracking technology to maintain an optimal 3D image from a range of viewing angles. The 20-inch D2000 (Korean model: DX2000) monitor was developed as a fully functional entertainment display capable of reproducing games, movies and images in all their realistic glory.

“With a full line-up of 3D TVs, laptops, projectors and smartphones, LG Electronics is by far and away the industry leader in all things 3D,” said Si-hwan Park, Vice President of the Monitor Division at LG’s Home Entertainment Company. “LG’s position has always been that 3D will and must eventually function without glasses. The D2000 is a look at what the future has in store.”

The D2000’s 3D effect comes courtesy of glasses-free parallax barrier 3D technology, and the application of the world’s first eye-tracking feature to the monitor. The combination of parallax barrier and eye-tracking in a single unit promises to open up new horizons for glasses-free 3D products.


Existing glasses-free 3D technologies generally require viewers to stay within a tightly restricted angle and distance to perceive the 3D images. However, the D2000 has done much to resolve this issue, allowing viewers much freer movement and more comfortable viewing. Eye tracking in the D2000 works via a special camera sensor attached to the monitor which detects changes in the user’s eye position in real-time. With this information, the monitor calculates the angle and position of the viewer and adjusts the displayed image for the optimal 3D effect.

In addition to playing back existing 3D content, the D2000 has a highly refined 2D to 3D conversion feature which adds a new dimension to existing movies and game playing.

The D2000, available in Korea this month, will be introduced in other markets around the world in the latter part of 2011.
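To make the mechanism described in the release a bit more concrete, below is a minimal sketch of how a tracked eye position could drive the view interleaving of a parallax-barrier display. This is my own illustration, not LG's implementation: the panel pitch, barrier gap and the simple two-view column scheme are assumed values.

```python
import math

def column_views(n_columns, eye_x_mm, eye_z_mm,
                 pixel_pitch_mm=0.23, barrier_gap_mm=4.0):
    """Return, for every pixel column, whether it should carry the left (0)
    or right (1) view, given the tracked eye position.

    eye_x_mm -- horizontal eye position relative to the screen centre
    eye_z_mm -- viewing distance reported by the eye/face tracker
    The pixel pitch and barrier gap are made-up numbers for illustration.
    """
    # A lateral head movement shifts the line of sight through the barrier
    # slits; compensate by sliding the column interleaving the other way.
    theta = math.atan2(eye_x_mm, eye_z_mm)
    shift_px = (barrier_gap_mm * math.tan(theta)) / pixel_pitch_mm
    offset = round(shift_px)
    # Alternate left/right columns, phase-shifted by the compensation offset.
    return [(col + offset) % 2 for col in range(n_columns)]

# Example: viewer 30 cm to the left of centre at a 60 cm viewing distance.
views = column_views(1600, eye_x_mm=-300, eye_z_mm=600)
```

The point of the sketch is simply that the tracker only needs to supply a coarse eye position; the display then re-phases the interleaving so each eye keeps seeing its intended view.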

Wednesday, February 16, 2011

A self-calibrating, camera-based eye tracker for the recording of rodent eye movements (Zoccolan et al, 2010)

Came across an interesting methods article in Frontiers in Neuroscience, published in late November last year, which describes the development of a fully automated eye-tracking system that is calibrated without requiring co-operation from the subject. This is done by fixing the location of the eye and moving the camera to establish a geometric model (also see Stahl et al., 2000, 2004). Apparently they first attempted to use a commercial EyeLink II device but found it unsuitable for rodent eye tracking due to its thresholding implementation, the illumination conditions, and corneal reflection tracking failing when the rodent was chewing. So the authors built their own solution using a Prosilica camera and a set of algorithms (depicted below). Read the paper for implementation details. I find it to be a wonderful piece of work; different from human eye tracking for sure, but still relevant and fascinating.

Schematic diagram of the eye-tracking system

Illustration of the algorithm to track the eye’s pupil and corneal reflection spot.

Eye coordinate system and measurements


Horizontal and vertical alignment of the eye with the center of the camera’s sensor.


Abstract:

"Much of neurophysiology and vision science relies on careful measurement of a human or animal subject’s gaze direction. Video-based eye trackers have emerged as an especially popular option for gaze tracking, because they are easy to use and are completely non-invasive. However, video eye trackers typically require a calibration procedure in which the subject must look at a series of points at known gaze angles. While it is possible to rely on innate orienting behaviors for calibration in some non-human species, other species, such as rodents, do not reliably saccade to visual targets, making this form of calibration impossible. To overcome this problem, we developed a fully automated infrared video eye-tracking system that is able to quickly and accurately calibrate itself without requiring co-operation from the subject. This technique relies on the optical geometry of the cornea and uses computer-controlled motorized stages to rapidly estimate the geometry of the eye relative to the camera. The accuracy and precision of our system was carefully measured using an artificial eye, and its capability to monitor the gaze of rodents was verified by tracking spontaneous saccades and evoked oculomotor reflexes in head-fixed rats (in both cases, we obtained measurements that are consistent with those found in the literature). Overall, given its fully automated nature and its intrinsic robustness against operator errors, we believe that our eye-tracking system enhances the utility of existing approaches to gaze-tracking in rodents and represents a valid tool for rodent vision studies."


  • Zoccolan DF, Graham BJ, Cox DD (2010) A self-calibrating, camera-based eye tracker for the recording of rodent eye movements. Frontiers in Neuroscience Methods. doi:10.3389/fnins.2010.00193 [link]

Thursday, January 13, 2011

Face tracking for 3D displays without glasses.

A number of manufacturers and research institutes have presented 3D display systems that utilize real-time face and eye-region tracking in order to adjust the stereoscopic display on the fly. This means that viewers don't have to wear any funky glasses to see the 3D content, which has been a limiting factor for these displays. Some prototypes and OEM solutions were introduced at CeBIT last year. At CES 2011 Toshiba presented a 3D-equipped laptop that uses the built-in webcam to track the position of the user's face (it appears to be built around Seeing Machines' faceAPI). It's an interesting development; we're seeing more and more computer vision applications in the consumer space. Recently Microsoft announced that they've sold 8 million Kinect devices in the first 60 days, while Sony shipped 4.1 million PlayStation Move controllers in the first two months.


3D displays sans glasses at CeBIT 2010


Toshiba's 3D laptop sans glasses at CES 2011.

Obviously, these systems differ from eye tracking systems but still share many concepts. So what's the limiting factor for consumer eye tracking then? 1) Lack of applications: there isn't a clear, compelling reason for most consumers to get an eye tracker. It has to provide a new experience with a clear advantage and value, doing something faster, easier, or in a way that couldn't be done before. 2) Expensive hardware: these are professional devices manufactured in low volume using high-quality, expensive components. 3) No guarantees: the technology doesn't work for all customers in all environments. How do you sell something that only works under specific conditions for, say, 90% of the customers?

Monday, September 28, 2009

Wearable Augmented Reality System using Gaze Interaction (Park, Lee & Choi)

Came across this paper on a wearable system that employs a small eye tracker and a head-mounted display for augmented reality. I've previously posted a video of the same system. It's a future technology with great potential; only imagination sets the limit here. There is a lot of progress in image/object recognition and location awareness taking place right now (with all the associated non-trivial problems to solve!)


Abstract
"Undisturbed interaction is essential to provide immersive AR environments. There have been a lot of approaches to interact with VEs (virtual environments) so far, especially in hand metaphor. When the user‟s hands are being used for hand-based work such as maintenance and repair, necessity of alternative interaction technique has arisen. In recent research, hands-free gaze information is adopted to AR to perform original actions in concurrence with interaction. [3, 4]. There has been little progress on that research, still at a pilot study in a laboratory setting. In this paper, we introduce such a simple WARS(wearable augmented reality system) equipped with an HMD, scene camera, eye tracker. We propose „Aging‟ technique improving traditional dwell-time selection, demonstrate AR gallery – dynamic exhibition space with wearable system."
  • Park, H. M., Lee, S. H., and Choi, J. S. (2008). Wearable augmented reality system using gaze interaction. In Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR 2008), September 15-18, 2008. IEEE Computer Society, Washington, DC, 175-176. DOI= http://dx.doi.org/10.1109/ISMAR.2008.4637353

Wednesday, July 22, 2009

Gaze Interaction in Immersive Virtual Reality - 3D Eye Tracking in Virtual Worlds

Thies Pfeiffer (blog), working in the A.I. group at the Faculty of Technology, Bielefeld University in Germany, has presented some interesting research on 3D gaze interaction in virtual environments. As the video demonstrates, they have achieved high accuracy for gaze-based pointing and selection. This opens up a wide range of interesting man-machine interactions where digital avatars may mimic natural human behavior. Impressive.



Publications
  • Pfeiffer, T. (2008). Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tracking Set-Up. In Virtuelle und Erweiterte Realität - Fünfter Workshop der GI-Fachgruppe VR/AR, 81-92. Aachen: Shaker Verlag GmbH. [Abstract] [BibTeX] [PDF]
  • Pfeiffer, T., Latoschik, M.E. & Wachsmuth, I. (2008). Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments. Journal of Virtual Reality and Broadcasting, 5 (16), dec. [Abstract] [BibTeX] [URL] [PDF]
  • Pfeiffer, T., Donner, M., Latoschik, M.E. & Wachsmuth, I. (2007). 3D fixations in real and virtual scenarios. Journal of Eye Movement Research, Special issue: Abstracts of the ECEM 2007, 13.
  • Pfeiffer, T., Donner, M., Latoschik, M.E. & Wachsmuth, I. (2007). Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie. In Vierter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, 113-124. Aachen: Shaker. [Abstract] [BibTeX] [PDF]
List of all publications available here.

Tuesday, May 12, 2009

BBC News: The future of gadget interaction

Dan Simmons at the BBC reports on future technologies from the Science Beyond Fiction 2009 conference in Prague. The news segment includes a section on the GazeCom project, which won the 2nd prize for its exhibit "Gaze-contingent displays and interaction". Their website hosts additional demonstrations.

"Gaze tracking is well-established and has been used before now by online advertisers who use it to decide the best place to put an advert. A novel use of the system tracks someone's gaze and brings into focus the area of a video being watched by blurring their peripheral vision.In the future, the whole image could also be panned left or right as the gaze approaches the edge of the screen. Film producers are interested in using the system to direct viewers to particular parts within a movie. However, interacting with software through simply looking will require accurate but unobtrusive eye tracking systems that, so far, remain on the drawing board... The European Commission (EC) is planning to put more cash into such projects. In April it said it would increase its investment in this field from 100m to 170m euros (£89m-£152m) by 2013. " (BBC source ) More information about the EC CORDIS : ICT program.

External link: the BBC's Dan Simmons tests a system designed to use a driver's peripheral vision to flag up potential dangers on the road. It was recorded at the Science Beyond Fiction conference in Prague.

The GazeCom project involves the following partners:

Sunday, May 3, 2009

Laval VRchive @ Tokyo Metropolitan University

Hidenori Watanave at Tokyo Metropolitan University has released a brief video demonstrating gaze interaction for the Laval VRchive. The VRchive is a virtual reality environment for navigating media content. The setup uses a standalone Tobii 1750 tracker and a projector. The simple interface allows gaze control by looking at the top, bottom, left or right areas of the display, as well as winking to perform clicks. Although it is an early version, the initial experiments were successful; the software is still unstable and needs further improvement.


Monday, November 3, 2008

Gaze and Voice Based Game Interaction (Wilcox et al., 2008)

"We present a 3rd person adventure puzzle game using a novel combination of non intrusive eyetracking technology and voice recognition for game communication. Figure 1 shows the game, and its first person sub games that make use of eye tracker functionality in contrasting ways: a catapult challenge (a) and a staring competition(b)."


"There are two different modes of control in the main game. The user can select objects by looking at them and perform ’look’, ’pickup’, ’walk’, ’speak’, ’use’ and other commands by vocalizing there respective words. Alternatively, they can perform each command by blinking and winking at objects. To play the catapult game for example, the user must look at the target and blink, wink or drag to fire a projectile towards the object under the crosshair. "

Their work was presented at the ACM SIGGRAPH 2008 with the associated poster:

Saturday, August 23, 2008

GaCIT in Tampere, day 3.

In the morning Howell Istance of De Montfort University, currently at the University of Tampere, gave a very interesting lecture on gaze interaction. It was divided into three parts: 1) games, 2) mobile devices, and 3) stereoscopic displays.

Games
This is an area of gaze interaction with high potential, and since the gaming industry has grown into a huge industry it may help make eye trackers accessible and affordable. The development would also benefit users with motor impairments. A couple of example implementations were then introduced. The first one was a first-person shooter running on an Xbox 360:
The evaluation contained 10 repeated trials to look at learning effects (6 subjects). Three different configurations were used: 1) gamepad controller for moving and aiming (no gaze), 2) gamepad controller for moving and gaze for aiming, and 3) gamepad controller for moving forward only, with gaze for aiming and steering of the movement.
Results:
However, twice as many shots that missed were fired in the gaze condition, which can be described as a "machine gun" approach. It is noteworthy that no filtering was applied to the gaze position.
Howell has conducted an analysis of common tasks in gaming; below is a representation of the number of actions in the game Guild Wars. The two bars indicate 1) novices and 2) experienced users.

Controlling all of these different actions requires switching of task mode. This is very challenging considering only one input modality (gaze) with no method of "clicking".

There are several ways a gaze interface can be constructed, from the bottom up: first, the position of gaze can be used to emulate the mouse cursor (on a system level); second, a transparent overlay can be placed on top of the application; third, a specific gaze interface can be developed (which has been my own approach), although this requires a modification of the original application, which is not always possible.

The Snap/Clutch interaction method, developed by Stephen Vickers who is working with Howell, operates on the system level to emulate the mouse. This allows specific gaze gestures to be interpreted and used to switch mode. For example, a quick glance off the left of the screen activates a left-mouse-button click mode; when an eye fixation is then detected in a specific region, a left mouse click is issued to that area.
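A minimal sketch of this idea as I understand it from the paper and the demo: an off-screen glance arms a mode, and a dwell-detected fixation then issues the corresponding mouse event. The dwell threshold, fixation radius and the use of pyautogui for the emulated click are my own assumptions, not the authors' actual implementation.

```python
import time
import pyautogui  # assumed here for issuing the emulated mouse events

DWELL_TIME = 0.5       # assumed dwell threshold in seconds
FIX_RADIUS = 40        # assumed fixation radius in pixels

mode = "idle"          # current clutch mode: "idle" or "left_click"
fix_start, fix_pos = None, None

def on_gaze_sample(x, y):
    """Feed one gaze sample (screen pixels). A glance off the left edge arms
    left-click mode; a dwell while armed issues the click at the fixation."""
    global mode, fix_start, fix_pos
    # A quick glance off the left edge of the screen switches mode.
    if x < 0:
        mode = "left_click"
        fix_start, fix_pos = None, None
        return
    # Track whether gaze has stayed within a small radius (a fixation).
    if fix_pos and abs(x - fix_pos[0]) < FIX_RADIUS and abs(y - fix_pos[1]) < FIX_RADIUS:
        if mode == "left_click" and time.time() - fix_start >= DWELL_TIME:
            pyautogui.click(x, y)          # emulate the left click at the fixation
            mode, fix_start, fix_pos = "idle", None, None
    else:
        fix_start, fix_pos = time.time(), (x, y)
```

Because everything happens at the level of emulated mouse events, an approach like this works with unmodified applications, which is exactly what makes the system-level strategy attractive.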

When this is applied to games such as World of Warcraft (demo), specific regions of the screen can be used to issue movement actions in the corresponding direction. The image below illustrates these regions overlaid on the screen. When a fixation is detected in the A region, an action to move in that direction is issued to the game itself.

Stephen Vickers' gaze-driven World of Warcraft interface.

After lunch we had a hands-on session with the Snap/Clutch interaction method where eight Tobii eye trackers were used for a multiplayer round of WoW! Very different from a traditional mouse/keyboard setup, and it takes some time to get used to.

  • Istance, H.O., Bates, R., Hyrskykari, A. and Vickers, S. Snap Clutch, a Moded Approach to Solving the Midas Touch Problem. Proceedings of the 2008 symposium on Eye Tracking Research & Applications; ETRA 2008. Savannah, GA. 26th-28th March 2008. Download
  • Bates, R., Istance, H.O., and Vickers, S. Gaze Interaction with Virtual On-Line Communities: Levelling the Playing Field for Disabled Users. Proceedings of the 4th Cambridge Workshop on Universal Access and Assistive Technology; CWUAAT 2008. University of Cambridge, 13th-16th April 2008. Download


The second part of the lecture concerned gaze interaction for mobile phones. This allows for ubiquitous computing where the eye tracker is integrated with a wearable display. As a new field it is surrounded by certain issues (stability, processing power, variation in lighting, etc.), but all of these will be solved over time. The big question is what the "killer application" will be (entertainment?). A researcher from Nokia attended the lecture and introduced a prototype system. Luckily I had the chance to visit their research department the following day to get hands-on with their head-mounted display with an integrated eye tracker (more on this in another post).

The third part was about stereoscopic displays, which add a third dimension (depth) to the traditional X and Y axes. There are several projects around the world working towards making this an everyday reality. However, tracking the depth of gaze fixation is limited: the vergence eye movements (as seen in the distance between the two pupils) are hard to measure when the distance to the object exceeds about two meters.

Calculating convergence angles (assuming half the interpupillary distance is 3.3 cm):
d = 100 cm: tan θ = 3.3 / 100, so θ ≈ 1.89°
d = 200 cm: tan θ = 3.3 / 200, so θ ≈ 0.95°
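For reference, the same calculation in a few lines, together with its inverse (estimating fixation depth from a measured vergence half-angle); the 6.6 cm interpupillary distance is an assumed typical value.

```python
import math

IPD_CM = 6.6  # assumed typical interpupillary distance

def convergence_angle_deg(distance_cm, ipd_cm=IPD_CM):
    """Half-angle of convergence for a fixation at the given distance."""
    return math.degrees(math.atan((ipd_cm / 2) / distance_cm))

def fixation_depth_cm(angle_deg, ipd_cm=IPD_CM):
    """Invert the relation: estimate fixation distance from the half-angle."""
    return (ipd_cm / 2) / math.tan(math.radians(angle_deg))

print(convergence_angle_deg(100))   # ~1.89 degrees
print(convergence_angle_deg(200))   # ~0.95 degrees
```

The inverse function makes the measurement problem obvious: beyond a couple of meters the angle changes so little with distance that small tracking errors translate into very large depth errors.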


Related papers on stereoscopic eye tracking:
The afternoon was spent on a guided tour around Tampere followed by a splendid dinner at a "viking"-themed restaurant.

Tuesday, July 15, 2008

Sébastien Hillaire at IRISA Rennes, France

Sébastien Hillaire is a Ph.D. student at IRISA Rennes in France, and a member of BUNRAKU and France Telecom R&D. His work centers on using eye trackers to improve the depth-of-field rendering of the visual scene in 3D environments. He has published two papers on the topic:

Automatic, Real-Time, Depth-of-Field Blur Effect for First-Person Navigation in Virtual Environment (2008)

"We studied the use of visual blur effects for first-person navigation in virtual environments. First, we introduce new techniques to improve real-time Depth-of-Field blur rendering: a novel blur computation based on the GPU, an auto-focus zone to automatically compute the user’s focal distance without an eye-tracking system, and a temporal filtering that simulates the accommodation phenomenon. Secondly, using an eye-tracking system, we analyzed users’ focus point during first-person navigation in order to set the parameters of our algorithm. Lastly, we report on an experiment conducted to study the influence of our blur effects on performance and subjective preference of first-person shooter gamers. Our results suggest that our blur effects could improve fun or realism of rendering, making them suitable for video gamers, depending however on their level of expertise."

Screenshot from the algorithm implemented in Quake 3 Arena.

  • Sébastien Hillaire, Anatole Lécuyer, Rémi Cozot, Géry Casiez
    Automatic, Real-Time, Depth-of-Field Blur Effect for First-Person Navigation in Virtual Environment. To appear in IEEE Computer Graphics and Applications (CG&A), 2008, pp. ??-??
    Source code (please refer to my IEEE VR 2008 publication)

Using an Eye-Tracking System to Improve Depth-of-Field Blur Effects and Camera Motions in Virtual Environments (2008)

"
We describes the use of user’s focus point to improve some visual effects in virtual environments (VE). First, we describe how to retrieve user’s focus point in the 3D VE using an eye-tracking system. Then, we propose the adaptation of two rendering techniques which aim at improving users’ sensations during first-person navigation in VE using his/her focus point: (1) a camera motion which simulates eyes movement when walking, i.e., corresponding to vestibulo-ocular and vestibulocollic reflexes when the eyes compensate body and head movements in order to maintain gaze on a specific target, and (2) a Depth-of-Field (DoF) blur effect which simulates the fact that humans perceive sharp objects only within some range of distances around the focal distance.

Second, we describe the results of an experiment conducted to study users’ subjective preferences concerning these visual effects during first-person navigation in VE. It showed that participants globally preferred the use of these effects when they are dynamically adapted to the focus point in the VE. Taken together, our results suggest that the use of visual effects exploiting users’ focus point could be used in several VR applications involving firstperson navigation such as the visit of architectural site, training simulations, video games, etc."
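To make the depth-of-field idea concrete, here is a minimal sketch of the kind of per-pixel computation such an effect could use: the focal depth is read from the depth buffer at the gaze point, and blur grows with the depth difference from that focal plane. The function names and the linear falloff are my own simplification, not Hillaire et al.'s GPU implementation.

```python
import numpy as np

def dof_blur_amount(depth_buffer, gaze_px, focus_range=2.0, max_depth_diff=10.0):
    """Per-pixel blur factor in [0, 1] for a gaze-driven depth-of-field effect.

    depth_buffer   -- H x W array of scene depths (e.g. in metres)
    gaze_px        -- (x, y) gaze position in pixels; its depth sets the focal plane
    focus_range    -- depth band around the focal plane kept fully sharp (assumed)
    max_depth_diff -- depth difference at which blur saturates (assumed)
    """
    focal_depth = depth_buffer[gaze_px[1], gaze_px[0]]
    diff = np.abs(depth_buffer - focal_depth)
    # Inside the focal band nothing is blurred; beyond it blur ramps up linearly.
    return np.clip((diff - focus_range) / (max_depth_diff - focus_range), 0.0, 1.0)
```

In a real renderer this factor would drive the blur kernel size per fragment on the GPU; the Python version is only meant to show the relationship between gaze, depth and blur.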



  • Sébastien Hillaire, Anatole Lécuyer, Rémi Cozot, Géry Casiez
    Using an Eye-Tracking System to Improve Depth-of-Field Blur Effects and Camera Motions in Virtual Environments. Proceedings of IEEE Virtual Reality (VR), Reno, Nevada, USA, 2008, pp. 47-51. Download paper as PDF.

QuakeIII DoF&Cam sources (depth-of-field, auto-focus zone and camera motion algorithms are under GPL with APP protection)

Wednesday, February 20, 2008

Inspiration: StarGazer (Skovsgaard et al, 2008)

A major area of research for the COGAIN network is enabling communication for the disabled. The Innovative Communications group at the IT University of Copenhagen works continuously on making gaze-based interaction technology more accessible, especially in the field of assistive technology.

The ability to enter text into the system is crucial for communication; without hands or speech this is somewhat problematic. The StarGazer software aims at solving this by introducing a novel 3D approach to text entry. In December I had the opportunity to visit ITU and try StarGazer (among other things) myself, and it is astonishingly easy to use. Within just a minute I was typing with my eyes. Rather than describing what it looks like, see the video below.
The associated paper is to be presented at the ETRA08 conference in March.



This introduces an important solution to the problem of eye tracker inaccuracy, namely zooming interfaces. Fixating on a specific region of the screen displays an enlarged version of that area, where objects can be more easily discriminated and selected.

The eyes are incredibly fast but, from the perspective of eye trackers, not really precise. This is due to the physiological properties of our visual system, specifically the foveal region of the retina. This retinal area produces the sharp, detailed region of our visual field, which in practice covers about the size of a thumbnail at arm's length. To bring another area into focus a saccade takes place which moves the pupil, and thus our gaze; this is what is registered by the eye tracker. Hence the resolution of most eye trackers is in the 0.5-1 degree range (in theory, that is).
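To put that 0.5-1 degree figure in context, here is a quick back-of-the-envelope conversion from angular error to on-screen pixels; the roughly 96 DPI pixel density and the 60 cm viewing distance are assumed values for illustration.

```python
import math

def error_radius_px(error_deg, viewing_distance_cm, pixels_per_cm=38.0):
    """On-screen radius (in pixels) corresponding to an angular tracking error.

    pixels_per_cm is an assumed ~96 DPI display; adjust for the actual screen.
    """
    radius_cm = viewing_distance_cm * math.tan(math.radians(error_deg))
    return radius_cm * pixels_per_cm

# Example: a 1-degree error at a 60 cm viewing distance is roughly 40 pixels,
# larger than many on-screen targets, which is why plain gaze pointing needs help.
print(error_radius_px(1.0, 60))   # ~39.8
```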

A feasible solution to deal with this limitation in accuracy is to use the display space dynamically and zoom into the area of interest upon glancing at it. The zooming interaction style solves some of the issues with the inaccuracy and jitter of eye trackers, but it has to be carefully balanced so that it still provides a quick and responsive interface.
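As a minimal sketch of the zoom-on-fixation idea: after a dwell, the region around the gaze point is magnified so that targets become larger than the tracker's error, and a selection made in the magnified view is mapped back to true screen coordinates. The magnification factor and the helper names are assumptions for illustration, not StarGazer's implementation.

```python
def zoom_region(gaze_xy, screen_wh, magnification=4.0):
    """Return the (left, top, width, height) of the screen region to enlarge
    around a fixation, clamped to the screen edges.

    gaze_xy       -- fixation position in pixels
    screen_wh     -- (width, height) of the display
    magnification -- how much the region is enlarged when shown full screen
    """
    w = screen_wh[0] / magnification
    h = screen_wh[1] / magnification
    left = min(max(gaze_xy[0] - w / 2, 0), screen_wh[0] - w)
    top = min(max(gaze_xy[1] - h / 2, 0), screen_wh[1] - h)
    return left, top, w, h

def magnified_to_screen(point_in_zoom, region, magnification=4.0):
    """Map a selection made in the magnified view back to true screen
    coordinates, which is where the actual action is issued."""
    left, top, _, _ = region
    return (left + point_in_zoom[0] / magnification,
            top + point_in_zoom[1] / magnification)
```

The balance mentioned above lives in the magnification and dwell parameters: too little magnification and the tracker error still swallows the targets, too much (or too long a dwell) and the interface stops feeling quick and responsive.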

However, to me the novelty of the StarGazer is the notion of traveling through a 3D space; the sensation of movement really catches one's attention and streamlines the interaction. Since text entry is linear, character by character, flying through space from character to character is a suitable interaction style. Since the interaction is nowhere near the speed of two-handed keyboard entry, the use of linguistic probability algorithms such as those found in cell phones would be very beneficial (i.e., type two or three letters and the most likely words are displayed in a list). Overall, I find the spatial arrangement of gaze interfaces to be a somewhat unexplored area. Our eyes are made to navigate a three-dimensional world, while traditional desktop interfaces mainly contain a flat 2D view. This is something I intend to investigate further.