
Wednesday, August 4, 2010

EOG used to play Super Mario

Came across some fun work by Waterloo Labs that demonstrates how to use a set of electrodes and a custom processing board to do signal analysis and estimate eye movement gestures through measuring EOG. It means you'll have to glance at the ceiling or the floor to issue commands (there is no gaze point-of-regard estimation). The good news is that the technology doesn't suffer from the issues with light, optics and sensors that often make video-based eye tracking and gaze point-of-regard estimation complex. The bad news is that it requires custom hardware, mounted electrodes and wires, and that the interaction style appears to involve looking away from what you are really interested in.
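For readers curious what such gesture estimation might boil down to, here is a minimal, hypothetical sketch: a vertical EOG channel is thresholded to distinguish look-up from look-down commands. The threshold value and units are illustrative assumptions, not Waterloo Labs' actual implementation.

```csharp
// Hypothetical sketch only; Waterloo Labs' processing-board logic is not public.
// A vertical EOG channel is thresholded to classify look-up / look-down gestures.
enum EogGesture { None, LookUp, LookDown }

static class EogClassifier
{
    const double Threshold = 150.0; // assumed deflection threshold, in microvolts

    public static EogGesture Classify(double verticalChannel)
    {
        if (verticalChannel > Threshold) return EogGesture.LookUp;
        if (verticalChannel < -Threshold) return EogGesture.LookDown;
        return EogGesture.None;
    }
}
```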

Monday, March 29, 2010

Low-cost eye tracking and pong gaming from Imperial College London

A group of students at Imperial College London has developed a low-cost head-mounted tracker which they use to play Pong. The work is carried out under the supervision of Aldo Faisal in his lab.

"
We built an eyetracking system using mass-marketed off-the shelf components at 1/1000 of that cost, i.e. for less then 30 GBP. Once we made such a system that cheap we started thinking of it as a user interface for everyday use for impaired people.. The project was enable by realising that certain mass-marketed web cameras for video game consoles offer impressive performance approaching that of much more expensive research grade cameras.



"From this starting point research in our group has focussed on two parts so far:


1. The TED software, which is composed of two components which can run on two different computers (connected by wireless internet) or run on the same computer. The first component is the TED server (Linux-based), which interfaces directly with the cameras, processes the high-speed video feed and makes the data available (over the internet) to the client software. The client forms the second component; it is written in Java (i.e. it runs on any computer: Windows, Mac, Unix, ...) and provides the mouse-control-via-eye-movements, the “Pong” video game as well as configuration and calibration functions.

This two-part solution allows the cameras to be connected to a cost-effective netbook (e.g. on a wheelchair) and allows control of other computers over the internet (e.g. in the living room, office and kitchen). This software suite, as well as part of the low-level camera driver, was implemented by Ian Beer, Aaron Berk, Oliver Rogers and Timothy Treglown for their undergraduate project in the lab.

Note: the “Pong” video game has a two-player mode, allowing two people to play against each other using two eye trackers, or eye tracker vs. keyboard. It is very easy to use, just look where you want the pong paddle to move...

2. The camera-spectacles (visible in most press photos), as well as two-camera software (Windows-based) able to track eye movements in 3D (i.e. direction and distance) for wheelchair control. These have been built and developed by William Abbott (Dept. of Bioengineering)."
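The server/client split is easy to picture with a small sketch. The fragment below is not TED's actual code (that is Linux- and Java-based) but illustrates the idea of a client reading gaze samples from a remote server; the host, port and comma-separated wire format are invented for illustration.

```csharp
// Illustrative only: a minimal client in the spirit of TED's two-part design,
// reading "x,y" gaze samples line by line from a server over TCP.
// The endpoint and wire format are assumptions, not the actual TED protocol.
using System;
using System.IO;
using System.Net.Sockets;

class GazeClient
{
    static void Main()
    {
        using var client = new TcpClient("localhost", 5555); // assumed endpoint
        using var reader = new StreamReader(client.GetStream());
        string? line;
        while ((line = reader.ReadLine()) != null)
        {
            var parts = line.Split(',');
            double x = double.Parse(parts[0]);
            double y = double.Parse(parts[1]);
            Console.WriteLine($"Gaze at ({x}, {y})"); // drive cursor, game, etc.
        }
    }
}
```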

Further reading:

Imperial College London press release: Playing “Pong” with the blink of an eye
The Engineer: Eye-movement game targets disabled
Engadget (German): Neurotechnology: Pong played with eye blinks in London

Tuesday, August 18, 2009

COGAIN Student Competition Results

Lasse Farnung Laursen, a Ph.D. student with the Department of Informatics and Mathematical Modeling at the Technical University of Denmark, won this year's COGAIN student competition with the leisure application GazeTrain.

"GazeTrain (illustrated in the screenshot below) is an action oriented puzzle game, that can be controlled by eye movements. In GazeTrain you must guide a train by placing track tiles in front of it. As you guide the train, you must collect various cargo and drop them off at the nearest city thereby earning money. For further details regarding how to play the game, we encourage you to read the tutorial accessible from the main menu. The game is quite customizable as the dwell time and several other parameters can be adjusted to best suit your play-style." (Source)

The GazeTrain game.

The runners-up, sharing second place, were:

Music Editor, developed by Ainhoa Yera Gil, Public University of Navarre, Spain. Music Editor is a gaze-operated application that allows the user to compose, edit and play music by eye movements. The reviewers appreciated that "a user can not only play but can actually create something" and that "Music Editor is well suited for gaze control".

Gaze Based Sudoku, developed by Juha Hjelm and Mari Pesonen, University of Tampere, Finland. The game can be operated by eye movements and it has three difficulty levels. Reviewers especially appreciated how "the separation between viewing and controlling and between sudoku grid and number selection panel is solved" and that the game "has no time constraints" so it is "relaxing" to play.

Wednesday, July 15, 2009

Gaze & Voice recognition game development blog

Jonathan O'Donovan, a master's student in Interactive Entertainment Technology at Trinity College Dublin, has recently started a blog for his thesis. It will combine gaze and voice recognition in developing a new video game. So far the few posts available have mainly concerned the underlying framework, but a proof-of-concept combining gaze and voice is demonstrated. The project will be developed on a Microsoft Windows based platform and utilizes the XNA game development framework for graphics and the Microsoft Speech SDK for voice input. The eye tracker of choice is a Tobii T60 provided by Acuity ETS (Reading, UK). The thesis will be supervised by Veronica Sundstedt at the Trinity College Computer Science department.
Keep us posted, Jonathan, excited to see what you'll come up with!





Update: 
The project resulted in the Rabbit Run game which is documented in the following publication:

  • J. O’Donovan, J. Ward, S. Hodgins, V. Sundstedt (2009) Rabbit Run: Gaze and Voice Based Game Interaction (PDF). 

Wednesday, May 6, 2009

The Dias Eye Tracker (Mardanbeigi, 2009)

Diako Mardanbeigi at the Iran University of Science & Technology introduces the Dias Eye Tracking suite. It is a low-cost solution employing a head-mounted setup and comes with a rather extensive suite of applications. The software offers gaze control for playing games and music, viewing images, and text-to-speech using a dwell keyboard. It also offers basic eye movement recording and visualization, such as scanpaths. The software is built using Visual Basic 6 and implements various algorithms for eye tracking, including a rectangular method and RANSAC or least-squares (LSQ) ellipse/circle fitting. Additionally, there is support for tracking one or two glints. The following video demonstrates the hardware and software. Congratulations Diako on this great work!
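The least-squares circle fitting mentioned above can be sketched quite compactly. Below is a minimal Kåsa-style algebraic fit; the Dias software itself is written in Visual Basic 6, so this is only an illustration of the technique, not its code.

```csharp
// A sketch of a least-squares (Kåsa) circle fit, one of the fitting approaches
// mentioned above. Fits x^2 + y^2 + D*x + E*y + F = 0 to pupil-contour points.
using System;

static class CircleFit
{
    public static (double cx, double cy, double r) Fit(double[] xs, double[] ys)
    {
        int n = xs.Length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0, sxz = 0, syz = 0, sz = 0;
        for (int i = 0; i < n; i++)
        {
            double x = xs[i], y = ys[i], z = x * x + y * y;
            sx += x; sy += y; sxx += x * x; syy += y * y; sxy += x * y;
            sxz += x * z; syz += y * z; sz += z;
        }
        // Solve the 3x3 normal equations for (D, E, F) with Cramer's rule.
        double det = Det(sxx, sxy, sx, sxy, syy, sy, sx, sy, n);
        double d = Det(-sxz, sxy, sx, -syz, syy, sy, -sz, sy, n) / det;
        double e = Det(sxx, -sxz, sx, sxy, -syz, sy, sx, -sz, n) / det;
        double f = Det(sxx, sxy, -sxz, sxy, syy, -syz, sx, sy, -sz) / det;
        // Center and radius follow from completing the square.
        return (-d / 2, -e / 2, Math.Sqrt(d * d + e * e - 4 * f) / 2);
    }

    static double Det(double a, double b, double c,
                      double d, double e, double f,
                      double g, double h, double i) =>
        a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
}
```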


Tuesday, November 11, 2008

Gaze vs. Mouse in Games: The Effects on User Experience (Gowases T, Bednarik R, Tukiainen M)

Tersia Gowases, Roman Bednarik (blog) and Markku Tukiainen at the Department of Computer Science and Statistics, University of Joensuu, Finland had a paper published in the proceedings of the 16th International Conference on Computers in Education (ICCE).

"We did a simple questionnaire-based analysis. The results of the analysis show some promises for implementing gaze-augmented problem-solving interfaces. Users of gaze-augmented interaction felt more immersed than the users of other two modes - dwell-time based and computer mouse. Immersion, engagement, and user-experience in general are important aspects in educational interfaces; learners engage in completing the tasks and, for example, when facing a difficult task they do not give up that easily. We also did analysis of the strategies, and we will report on those soon. We could not attend the conference, but didn’t want to disappoint eventual audience. We thus decided to send a video instead of us. " (from Romans blog)




Abstract
"The possibilities of eye-tracking technologies in educational gaming are seemingly endless. The question we need to ask is what the effects of gaze-based interaction on user experience, strategy during learning and problem solving are. In this paper we evaluate the effects of two gaze based input techniques and mouse based interaction on user experience and immersion. In a between-subject study we found that although mouse interaction is the easiest and most natural way to interact during problemsolving, gaze-based interaction brings more subjective immersion. The findings provide a support for gaze interaction methods into computer-based educational environments." Download paper as PDF.


Some of this research has also been presented within the COGAIN association, see:
  • Gowases Tersia (2007) Gaze vs. Mouse: An evaluation of user experience and planning in problem solving games. Master’s thesis May 2, 2007. Department of Computer Science, University of Joensuu, Finland. Download as PDF

Monday, November 3, 2008

Gaze and Voice Based Game Interaction (Wilcox et al., 2008)

"We present a 3rd person adventure puzzle game using a novel combination of non intrusive eyetracking technology and voice recognition for game communication. Figure 1 shows the game, and its first person sub games that make use of eye tracker functionality in contrasting ways: a catapult challenge (a) and a staring competition(b)."


"There are two different modes of control in the main game. The user can select objects by looking at them and perform ’look’, ’pickup’, ’walk’, ’speak’, ’use’ and other commands by vocalizing there respective words. Alternatively, they can perform each command by blinking and winking at objects. To play the catapult game for example, the user must look at the target and blink, wink or drag to fire a projectile towards the object under the crosshair. "

Their work was presented at ACM SIGGRAPH 2008 with an associated poster.

Saturday, August 23, 2008

GaCIT in Tampere, day 3.

In the morning Howell Istance of De Montfort University, currently at the University of Tampere, gave a very interesting lecture on gaze interaction. It was divided into three parts: 1) games, 2) mobile devices, and 3) stereoscopic displays.

Games
This is an area of gaze interaction with high potential, and since the gaming industry has grown into a huge industry it may help make eye trackers accessible and affordable. The development would also be beneficial for users with motor impairments. A couple of example implementations were then introduced. The first one was a first-person shooter running on an Xbox 360:
The experimental evaluation contained 10 repeated trials to look at learning effects (6 subjects). Three different configurations were used: 1) gamepad controller for moving and aiming (no gaze), 2) gamepad controller for moving and gaze for aiming, and 3) gamepad controller for moving forward only, with gaze for aiming and steering of the movement.
Results: in the gaze condition, however, twice as many shots were fired that missed, which can be described as a "machine gun" approach. Noteworthy is that no filtering was applied to the gaze position.
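For illustration, here is a minimal sketch of what such filtering could look like: simple exponential smoothing of the gaze samples before they drive the crosshair. The alpha value is an assumed tuning parameter, not something from the study.

```csharp
// Minimal exponential smoothing of raw gaze samples. Lower alpha means a
// steadier but laggier crosshair; the value here is an assumed default.
class GazeSmoother
{
    private double _x, _y;
    private bool _initialized;
    private readonly double _alpha;

    public GazeSmoother(double alpha = 0.3) => _alpha = alpha;

    public (double x, double y) Smooth(double rawX, double rawY)
    {
        if (!_initialized) { _x = rawX; _y = rawY; _initialized = true; }
        else
        {
            _x += _alpha * (rawX - _x); // move a fraction toward the new sample
            _y += _alpha * (rawY - _y);
        }
        return (_x, _y);
    }
}
```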
Howell has conducted an analysis of common tasks in gaming; below is a representation of the number of actions in the Guild Wars game. The two bars indicate 1) novices and 2) experienced users.

Controlling all of these different actions requires switching of task mode. This is very challenging considering only one input modality (gaze) with no method of "clicking".

There are several ways a gaze interface can be constructed, seen from a bottom-up approach. First, the position of gaze can be used to emulate the mouse cursor (on a system level). Second, a transparent overlay can be placed on top of the application. Third, a specific gaze interface can be developed (which has been my own approach); this requires a modification of the original application, which is not always possible.
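The first approach can be as simple as pushing gaze coordinates into the OS cursor. A hedged sketch for Windows, assuming gaze samples already mapped to screen pixels (a real gaze mouse would add smoothing and some clicking mechanism):

```csharp
// System-level mouse emulation from gaze: each sample moves the Windows
// cursor via the Win32 API. Assumes gaze is already in screen coordinates.
using System.Runtime.InteropServices;

static class GazeMouse
{
    [DllImport("user32.dll")]
    private static extern bool SetCursorPos(int x, int y);

    public static void MoveTo(double gazeX, double gazeY) =>
        SetCursorPos((int)gazeX, (int)gazeY);
}
```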

The Snap/Clutch interaction method, developed by Stephen Vickers who is working with Howell, operates on the system level to emulate the mouse. This allows specific gaze gestures to be interpreted and used to switch mode. For example, a quick glance to the left of the screen will activate a left mouse button click mode. When an eye fixation is then detected in a specific region, a left mouse click is issued to that area.
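In the spirit of Snap/Clutch (though not Vickers' actual code), the mode-switching idea can be sketched like this; the edge margin and the single mode are illustrative assumptions:

```csharp
// Illustrative Snap/Clutch-style mode switching: a glance to the left screen
// edge arms a left-click mode, and the next fixation issues the click there.
enum ClickMode { None, LeftClick }

class ModeSwitcher
{
    public ClickMode Mode { get; private set; } = ClickMode.None;

    public void OnGazeSample(double x, double screenWidth)
    {
        // A quick glance to (or beyond) the left edge switches mode.
        if (x <= 0.02 * screenWidth)
            Mode = ClickMode.LeftClick;
    }

    public void OnFixation(double x, double y)
    {
        if (Mode == ClickMode.LeftClick)
        {
            // Issue a left mouse click at (x, y) here, then reset the mode.
            Mode = ClickMode.None;
        }
    }
}
```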

When this is applied to games such as World of Warcraft (demo), specific regions of the screen can be used to issue movement actions in that direction. The image below illustrates these regions overlaid on the screen. When a fixation occurs in the A region, an action to move in that direction is issued to the game itself.

Stephen Vickers' gaze-driven World of Warcraft interface.

After lunch we had a hands-on session with the Snap/Clutch interaction method where eight Tobii eye trackers were used for a multiplayer round of WoW! Very different from a traditional mouse/keyboard setup, and it takes some time to get used to.

  • Istance, H.O., Bates, R., Hyrskykari, A. and Vickers, S. Snap Clutch, a Moded Approach to Solving the Midas Touch Problem. Proceedings of the 2008 symposium on Eye Tracking Research & Applications; ETRA 2008. Savannah, GA, 26th-28th March 2008. Download
  • Bates, R., Istance, H.O., and Vickers, S. Gaze Interaction with Virtual On-Line Communities: Levelling the Playing Field for Disabled Users. Proceedings of the 4th Cambridge Workshop on Universal Access and Assistive Technology; CWUAAT 2008. University of Cambridge, 13th-16th April 2008. Download


The second part of the lecture concerned gaze interaction for mobile phones. This allows for ubiquitous computing where the eye tracker is integrated with a wearable display. As a new field it is surrounded by certain issues (stability, processing power, variation in lighting etc.), but all of these will be solved over time. The big question is what the "killer application" will be (entertainment?). A researcher from Nokia attended the lecture and introduced a prototype system. Luckily I had the chance to visit their research department the following day to get hands-on with their head-mounted display with an integrated eye tracker (more on this in another post).

The third part was about stereoscopic displays, which add a third dimension (depth) to the traditional X and Y axes. There are several projects around the world working towards making this an everyday reality. However, tracking the depth of gaze fixation is limited. Vergence eye movements (as seen by the distance between both pupils) are hard to measure when the distance to the object exceeds two meters.

Calculating convergence angles (3.3 cm being half the interocular distance):
d = 100 cm: tan θ = 3.3 / 100 → θ = 1.89°
d = 200 cm: tan θ = 3.3 / 200 → θ = 0.96°
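The same arithmetic, generalized to any viewing distance; the 3.3 cm figure is taken from the example above.

```csharp
using System;

class Vergence
{
    // 3.3 cm = half the interocular distance, as in the example above.
    static double AngleDegrees(double distanceCm) =>
        Math.Atan(3.3 / distanceCm) * 180.0 / Math.PI;

    static void Main()
    {
        Console.WriteLine(AngleDegrees(100)); // ≈ 1.89 degrees
        Console.WriteLine(AngleDegrees(200)); // ≈ 0.95 degrees
    }
}
```

Note how quickly the angle shrinks with distance, which is why vergence-based depth estimation becomes unreliable beyond a couple of meters.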


Related papers on stereoscopic eye tracking:

The afternoon was spent on a guided tour around Tampere, followed by a splendid dinner at a "viking"-themed restaurant.

Tuesday, July 15, 2008

Sébastien Hillaire at IRISA Rennes, France

Sébastien Hillaire is a Ph.D. student at IRISA Rennes in France, a member of BUNRAKU and France Telecom R&D. His work revolves around using eye trackers to improve depth-of-field rendering of the visual scene in 3D environments. He has published two papers on the topic:

Automatic, Real-Time, Depth-of-Field Blur Effect for First-Person Navigation in Virtual Environment (2008)

"We studied the use of visual blur effects for first-person navigation in virtual environments. First, we introduce new techniques to improve real-time Depth-of-Field blur rendering: a novel blur computation based on the GPU, an auto-focus zone to automatically compute the user’s focal distance without an eye-tracking system, and a temporal filtering that simulates the accommodation phenomenon. Secondly, using an eye-tracking system, we analyzed users’ focus point during first-person navigation in order to set the parameters of our algorithm. Lastly, we report on an experiment conducted to study the influence of our blur effects on performance and subjective preference of first-person shooter gamers. Our results suggest that our blur effects could improve fun or realism of rendering, making them suitable for video gamers, depending however on their level of expertise."

Screenshot from the algorithm implemented in Quake 3 Arena.
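The core intuition can be sketched with a crude fragment-level blur model: pixels near the focal distance stay sharp, and blur grows with distance from it. This is a simplified illustration with assumed constants, not the paper's GPU implementation.

```csharp
// Simplified depth-of-field model: zero blur inside a focus zone around the
// (tracked or auto-focused) focal distance, linear falloff outside it.
// The sharpRange and falloff constants are assumptions for illustration.
using System;

static class DepthOfField
{
    // Returns a blur amount in [0, 1] for a fragment at 'depth' when the
    // user's focal distance is 'focus' (both in scene units).
    public static double BlurAmount(double depth, double focus,
                                    double sharpRange = 1.0, double falloff = 5.0)
    {
        double distanceFromFocus = Math.Abs(depth - focus);
        if (distanceFromFocus <= sharpRange) return 0.0; // inside the focus zone
        return Math.Min(1.0, (distanceFromFocus - sharpRange) / falloff);
    }
}
```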

  • Sébastien Hillaire, Anatole Lécuyer, Rémi Cozot, Géry Casiez
    Automatic, Real-Time, Depth-of-Field Blur Effect for First-Person Navigation in Virtual Environment. To appear in IEEE Computer Graphics and Applications (CG&A), 2008, pp. ??-??
    Source code (please refer to my IEEE VR 2008 publication)

Using an Eye-Tracking System to Improve Depth-of-Field Blur Effects and Camera Motions in Virtual Environments (2008)

"
We describes the use of user’s focus point to improve some visual effects in virtual environments (VE). First, we describe how to retrieve user’s focus point in the 3D VE using an eye-tracking system. Then, we propose the adaptation of two rendering techniques which aim at improving users’ sensations during first-person navigation in VE using his/her focus point: (1) a camera motion which simulates eyes movement when walking, i.e., corresponding to vestibulo-ocular and vestibulocollic reflexes when the eyes compensate body and head movements in order to maintain gaze on a specific target, and (2) a Depth-of-Field (DoF) blur effect which simulates the fact that humans perceive sharp objects only within some range of distances around the focal distance.

Second, we describe the results of an experiment conducted to study users’ subjective preferences concerning these visual effects during first-person navigation in VE. It showed that participants globally preferred the use of these effects when they are dynamically adapted to the focus point in the VE. Taken together, our results suggest that the use of visual effects exploiting users’ focus point could be used in several VR applications involving firstperson navigation such as the visit of architectural site, training simulations, video games, etc."



Sébastien Hillaire, Anatole Lécuyer, Rémi Cozot, Géry Casiez
Using an Eye-Tracking System to Improve Depth-of-Field Blur Effects and Camera Motions in Virtual Environments. Proceedings of IEEE Virtual Reality (VR) Reno, Nevada, USA, 2008, pp. 47-51. Download paper as PDF.

QuakeIII DoF&Cam sources (depth-of-field, auto-focus zone and camera motion algorithms are under GPL with APP protection)

Passive eye tracking while playing Civilization IV

While the SMI iView X RED eye tracker used in this video is not used to drive the interaction, it showcases how eye tracking can be used for usability evaluations in interaction design. (Civilization does steal my attention on occasion; Sid Meier is just a brilliant game designer.)

Tuesday, May 6, 2008

Gaze interaction hits mainstream news

The New Scientist technology section posted an article on Stephen Vickers' work at De Montfort University on the eye-controlled version of World of Warcraft, which I wrote about two months ago (see post).
Update: The New Scientist post caused rather extensive discussions on Slashdot, with more than 140 entries.

Great to see mainstream interest in gaze-driven interaction. Gaming is truly one area with huge potential, but it also depends on more accessible eye trackers. There is a movement for open-source eye tracking, but the robustness needed for everyday usage still remains at large. The system Stephen Vickers has developed uses the Tobii X120 eye tracker, which is clearly out of range for all but the small group of users who are granted financial support for their much-needed assistive technology.

Have faith
In general, all new technology initially comes at a high cost due to intensive research and development, but over time it becomes accessible to the larger population. As an example, a decade or two ago not many could imagine that satellite GPS navigation would become commonplace and really cheap. Today, mass collaboration on the net is really happening, making the rate of technology development exponential. Make sure to watch the Google Tech Talk by Don Tapscott on Wikinomics.

Thursday, March 27, 2008

RApid GAze-Based Interaction Techniques (RAGABITS)


Stephen Vickers at the Computer Human Interaction Research Group at De Montfort University, UK has developed interaction techniques that allow gaze-based control of several popular online virtual worlds such as World of Warcraft or Second Life. This research will be presented at ETRA 2008, US under the title RAGABITS (RApid GAze-Based Interaction Techniques) and is especially intended for users with severe motor impairments.

Selection method seems stable. None of the usual jitter can be seen. Nice!




Quote from http://www.ioct.dmu.ac.uk/projects/eyegaze.html

"Online virtual worlds and games (MMORPG's) have much to offer users with severe motor disabilities. It gives this user group the opportunity as entirely able-bodied to others in the virtual world. if they so wish. The extent to which a user has to reveal their disability becomes a privacy issue. Many of the avatars in Second Life appear as stylized versions of the users that control them and that stylization is the choice of the user. This choice is equally appropriate for disabled users. While the appearance of the user's avatar may not reveal the disability of the person that controls it, the behavior and speed or interaction in the world may do.

Many users with severe motor impairments may not be able to operate a keyboard or hand mouse and may also struggle with speech and head movement. Eye gaze is one method of interaction that has been used successfully in enabling access to desktop environments. However, simply emulating a mouse using eye gaze is not sufficient for interaction in online virtual worlds, and the user's privacy can be exposed unless efficient gaze-based interaction techniques, appropriate to activities in online worlds and games, can be provided.

This genre of gaming (MMORPGs) is constantly evolving, and regardless of the aim of the game they all involve common tasks such as avatar creation, social interaction (chatting, IM), interaction with in-world objects (pick up, open, shoot etc.), and navigating and walking around the environment. Our research involves analyzing these common tasks so that suitable gaze-based interaction techniques to support them can be used in place of a mouse and keyboard. These will have different performance/effort trade-offs, and will include extended mouse/joystick emulation, gaze gestures, toolglasses and gaze-aware in-world objects. These techniques need to be integrated into a coherent and efficient user interface suited to the needs of an individual user with a particular disability. The research aims to model tasks inherent in using these worlds so that predictions can be made about the most appropriate gaze-based interaction techniques to use. When these have been identified, they can be assembled into a front end or user interface. One possible outcome could be a software device for automatic configuration of a gaze-control interface for new games, which could use knowledge of a specific user's disability and the eye tracking equipment that they have."

Monday, February 11, 2008

GazeMemory v0.1a on its way

The extra time spent on developing the Custom Controls for Windows Presentation Foundation (WPF) paid off. What before took days to develop can now be built within hours. Today I put together a gaze version of the classic game Memory, controlled by dwell time (prolonged fixation). The "table" contains 36 cards, i.e. 18 unique pairs. When you fixate on a card, a smooth animation makes the globe on the front of the card light up, and after fixating long enough (500 ms) it will show the symbol (flags in the first version). After selecting the first card, a second one is fixated and the two are compared. If they contain the same symbol, they are removed from the table; if not, they are turned back over again. The interface provides several feedback mechanisms. Upon glancing at a card, the border around the button begins to shine; when fixating long enough, the dwell-time function is activated, illustrated by a white glow that smoothly fades up around the globe.
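For the curious, the dwell logic boils down to something like the following sketch. It is simplified compared to the actual WPF control (no animations or hit-testing), but the 500 ms constant matches the description above.

```csharp
// Minimal dwell-time selection: a card is selected once gaze has stayed on
// it for 500 ms. Simplified compared to the animated WPF GazeButton.
using System;

class DwellDetector
{
    private static readonly TimeSpan DwellTime = TimeSpan.FromMilliseconds(500);
    private object? _target;      // the card currently under the gaze
    private DateTime _enteredAt;  // when gaze entered that card

    // Call for every gaze sample; returns the card to select, or null.
    public object? Update(object? cardUnderGaze)
    {
        if (cardUnderGaze != _target)
        {
            _target = cardUnderGaze;      // gaze moved to a new card
            _enteredAt = DateTime.UtcNow; // restart the dwell clock
            return null;
        }
        if (_target != null && DateTime.UtcNow - _enteredAt >= DwellTime)
        {
            var selected = _target;
            _target = null; // reset so the card is not re-selected immediately
            return selected;
        }
        return null;
    }
}
```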

The Custom Control, which will be named GazeButton, is to be further developed to support more features such as a configurable dwell time and feedback such as animations, colors etc. The time spent will be returned tenfold later on. I plan to release these components as open source as soon as they reach better and more stable performance (i.e. production quality with documentation).

Lessons learned so far involve dependency properties, which are very important if you'd like to develop custom controls in WPF, animation control and triggers, and getting more into data binding, which looks very promising so far.
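As an example of the dependency property pattern, here is how a configurable dwell time might be exposed on a GazeButton-style control. This is a sketch of the standard WPF idiom, not necessarily how the final GazeButton will look.

```csharp
// Standard WPF dependency property idiom: exposing DwellTime this way makes
// it bindable, animatable and stylable from XAML.
using System;
using System.Windows;
using System.Windows.Controls;

public class GazeButton : Button
{
    public static readonly DependencyProperty DwellTimeProperty =
        DependencyProperty.Register(
            nameof(DwellTime), typeof(TimeSpan), typeof(GazeButton),
            new PropertyMetadata(TimeSpan.FromMilliseconds(500)));

    public TimeSpan DwellTime
    {
        get => (TimeSpan)GetValue(DwellTimeProperty);
        set => SetValue(DwellTimeProperty, value);
    }
}
```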

Links:
Recommended guidelines for WPF custom controls
Three ways to build an image button
Karl on WPF


Screenshot of the second prototype of my GazeMemory game.