
Monday, November 15, 2010

Exploring the potential of context-sensitive CADe in screening mammography (Tourassi et al., 2010)

Georgia D. Tourassi, Maciej A. Mazurowski, and Brian P. Harrawood at the Duke University Ravin Advanced Imaging Laboratories, in collaboration with Elizabeth A. Krupinski, present a novel method of combining eye gaze data with computer-assisted detection (CADe) algorithms to improve detection rates for malignant masses in mammography. This contextualized method holds potential for personalized diagnostic support.

Purpose: Conventional computer-assisted detection (CADe) systems in screening mammography provide the same decision support to all users. The aim of this study was to investigate the potential of a context-sensitive CADe system that provides decision support guided by each user's focus of attention during visual search and by the user's reporting patterns for a specific case.

Methods: An observer study for the detection of malignant masses in screening mammograms was conducted in which six radiologists evaluated 20 mammograms while wearing an eye-tracking device. Eye-position data and diagnostic decisions were collected for each radiologist and each case they reviewed. The cases were subsequently analyzed with an in-house knowledge-based CADe system operating in two different modes: a conventional mode with a globally fixed decision threshold, and a context-sensitive mode with a location-variable decision threshold based on the radiologists' eye-dwelling data and reporting information.
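To make the distinction between the two modes concrete, here is a minimal, hypothetical sketch in Python of what a location-variable decision rule could look like. The paper does not publish its actual threshold mapping; the function name, dwell cutoff, and delta values below are illustrative assumptions only.

```python
# Hypothetical sketch of a context-sensitive CADe decision threshold.
# The actual dwell/report-to-threshold mapping in Tourassi et al.
# (2010) is not reproduced here; all names and values are assumptions.

def context_sensitive_threshold(base, dwell_s, reported,
                                long_dwell_s=1.0, delta=0.1):
    """Return a per-region decision threshold for the CADe system.

    base      -- global threshold a conventional CADe would apply
    dwell_s   -- radiologist's total gaze dwell on the region (seconds)
    reported  -- True if the radiologist flagged the region
    """
    if reported:
        # Region already reported by the reader: keep the
        # conventional, globally fixed threshold.
        return base
    if dwell_s >= long_dwell_s:
        # Long dwell without a report hints at a cognitive miss:
        # lower the threshold so the CADe prompts more readily.
        return base - delta
    # Little or no dwell hints at a perceptual miss: also relax the
    # threshold, but more conservatively to limit false positives.
    return base - delta / 2
```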

Results: The CADe system operating in conventional mode had 85.7% per-image malignant mass sensitivity at 3.15 false positives per image (FPsI). The same system operating in context-sensitive mode provided personalized decision support at 85.7%–100% sensitivity and 0.35–0.40 FPsI for all six radiologists. Furthermore, the context-sensitive CADe system could improve the radiologists' sensitivity and reduce their performance gap more effectively than conventional CADe.



Conclusions: Context-sensitive CADe support shows promise in delineating and reducing the radiologists’ perceptual and cognitive errors in the diagnostic interpretation of screening mammograms more effectively than conventional CADe.
  • G. D. Tourassi, M. A. Mazurowski, B. P. Harrawood, and E. A. Krupinski, "Exploring the potential of context-sensitive CADe in screening mammography," Medical Physics 37, 5728–5736 (2010). Online, PDF

Wednesday, October 21, 2009

Medical Image Perception Society 2009 - Day three

Session 10. Displays and Tools. Chair: Kevin Berbaum, PhD
  • Objective methodology to compare clinical value of computed tomography artifact reduction algorithms. G Spalla, C Marchessoux, M Vaz, A Ricker, & T Kimpe
  • LCD Spatial Noise Suppression: Large-field vs. ROI Image Processing. WJ Dallas, H Roehrig, J Fan, EA Krupinski, & J Johnson
Session 11. Displays and Tools. Chair: Miguel Eckstein, PhD
  • Stereoscopic Digital Mammography: Improved Accuracy of Lesion Detection in Breast Cancer Screening. DJ Getty, CJ D’Orsi, & RM Pickett
  • Detectability in tomosynthesis projections, slices and volumes: Comparison of human observer performance in a SKE detection task. I Reiser, K Little, & RM Nishikawa
Thanks Craig, Miguel, and Elizabeth for a wonderful event; I learned so much in just three days. Plenty of inspiration for future research.

Tuesday, October 20, 2009

Medical Image Perception Society 2009 - Day one

The first day of the Medical Image Perception Society conference, held biennially and this year in Santa Barbara, was filled with interesting talks, with plenty of research utilizing eye tracking as a means of obtaining data. The conference is hosted by Craig Abbey and Miguel Eckstein at the Department of Psychology at the University of California, Santa Barbara, in cooperation with Elizabeth Krupinski (book1, book2) from the University of Arizona, who has performed extensive research on eye movements (among other things) in relation to medical imaging and radiology.

Session 1. Visual Search. Chair: Claudia Mello-Thoms, PhD
Session 2. Visual Search. Chair: Elizabeth Krupinski, PhD
  • Visual Search Characteristics of Pathology Residents Reading Dermatopathology Slides. J Law & C Mello-Thoms
  • Are you a good eye-witness? Perceptual differences between physicians and lay people. C Mello-Thoms
  • Eye movements and computer-based mammographic interpretation training. Y Chen & A Gale
Session 3. Perceptual Effects. Chair: David Manning, PhD
  • Nuisance Levels of Noise Affect Radiologists’ Performance. MF McEntee, A O'Beirne, J Ryan, R Toomey, M Evanoff, D Chakraborty, D Manning, & PC Brennan
  • Observer Performance in Stroke Interpretation: The Influence of Experience and Clinical Information in Multidimensional Magnetic Resonance Imaging. L Cooper, A Gale, J Saada, S Gedela, H Scott, & A Toms
  • Interpretation of wrist radiographs: A comparison between final year medical and radiography students. L Hutchinson, P Brennan & L Rainford
  • Tumor measurement for revised TNM staging of lung cancer. FL Jacobson, A Sitek, D Getty, & SE Seltzer
  • Does Reader Visual Fatigue Impact Performance? EA Krupinski & KS Berbaum
  • Ambient Temperature is an Important Consideration in the Radiology Reading Room. MF McEntee & S Gafoor
Session 4. Performance Measurement I. Chair: Dev Chakraborty, PhD
  • Perceptual indicators of the holistic view in pulmonary nodule detection. MW Pietrzyk, DJ Manning, T Donovan, & A Dix
  • An e-learning tutorial demonstrates significant improvements in ROC performance amongst naive observers in breast image interpretation. PBL Soh, PC Brennan, A Poulos, W Reed
  • Is an ROC-type Response Truly Always Better than a Binary Response? D Gur, AI Bandos, HE Rockette, ML Zuley, CM Hakim, DM Chough, & MA Ganott
  • Recognition of Images in Reader Studies: How Well Can We Predict Which Will Be Remembered? T Miner Haygood, P O’Sullivan, J Ryan, E Galvan, J-M Yamal, M Evanoff, M McEntee, J Madewell, C Sandler, E Lano, & P Brennan
Session 5. Performance Measurement II. Chair: Alastair Gale, PhD
  • New classes of models with monotonic likelihood ratios. F Samuelson
  • Sample size estimation procedure for free-response (FROC) studies. DP Chakraborty & M Båth
  • Comparison of Four Methods (ROC, JAFROC, IDCA, and ROI) for Analysis of Free Response Clinical Data. F Zanca, DP Chakraborty, J Jacobs, G Marchal, & H Bosmans
Feel free to post additional links in the comments. Slides will be posted as they become available.

Tuesday, May 19, 2009

Hands-free Interactive Image Segmentation Using Eyegaze (Sadeghi et al., 2009)

Maryam Sadeghi, a Master's student at the Medical Image Analysis Lab at Simon Fraser University in Canada, presents an interesting paper on using eye tracking for gaze-driven image segmentation. The research was performed in cooperation with Geoff Tien (Ph.D. student) and principal investigators Ghassan Hamarneh and Stella Atkins. More information is to be published on this page. Geoff Tien completed his M.Sc. thesis on gaze interaction in March under the title "Building Interactive Eyegaze Menus for Surgery" (abstract); unfortunately, I have not been able to locate an electronic copy of that document.

Abstract
"This paper explores a novel approach to interactive user-guided image segmentation, using eyegaze information as an input. The method includes three steps: 1) eyegaze tracking for providing user input, such as setting object and background seed pixel selection; 2) an optimization method for image labeling that is constrained or affected by user input; and 3) linking the two previous steps via a graphical user interface for displaying the images and other controls to the user and for providing real-time visual feedback of eyegaze and seed locations, thus enabling the interactive segmentation procedure. We developed a new graphical user interface supported by an eyegaze tracking monitor to capture the user's eyegaze movement and fixations (as opposed to traditional mouse moving and clicking). The user simply looks at different parts of the screen to select which image to segment, to perform foreground and background seed placement and to set optional segmentation parameters. There is an eyegaze-controlled "zoom" feature for difficult images containing objects with narrow parts, holes or weak boundaries. The image is then segmented using the random walker image segmentation method. We performed a pilot study with 7 subjects who segmented synthetic, natural and real medical images. Our results show that getting used the new interface takes about only 5 minutes. Compared with traditional mouse-based control, the new eyegaze approach provided a 18.6% speed improvement for more than 90% of images with high object-background contrast. However, for low contrast and more difficult images it took longer to place seeds using the eyegaze-based "zoom" to relax the required eyegaze accuracy of seed placement." Download paper as pdf.

The custom interface is used to place background (red) and object (green) seeds, which are used in the segmentation process. The custom fixation detection algorithm triggers a mouse click at the gaze position if 20 of the previous 30 gaze samples lie within a 50-pixel radius.
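As a rough Python sketch of that trigger rule (assuming the radius is measured from the centroid of the recent samples, which the paper's description leaves unspecified):

```python
import math

WINDOW = 30     # number of recent gaze samples to inspect
REQUIRED = 20   # samples that must fall inside the radius
RADIUS = 50.0   # pixels, per the description above

def fixation_click(samples):
    """Return the (x, y) point to click if the last WINDOW gaze
    samples form a fixation, else None. `samples` is a sequence of
    (x, y) tuples, most recent last."""
    if len(samples) < WINDOW:
        return None
    recent = samples[-WINDOW:]
    cx = sum(x for x, _ in recent) / WINDOW  # centroid (assumption:
    cy = sum(y for _, y in recent) / WINDOW  # radius is centroid-based)
    inside = sum(1 for x, y in recent
                 if math.hypot(x - cx, y - cy) <= RADIUS)
    return (cx, cy) if inside >= REQUIRED else None
```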


The results indicate a certain degree of feasibility for gaze-assisted segmentation; however, real-life situations often contain more complex images where the borders of objects are less well defined. This is also indicated in the results, where the CT brain scan represents the difficult category. For an initial study the results are interesting, and it is likely that we will see more gaze interaction within domain-specific applications in the near future.


  • Maryam Sadeghi, Geoff Tien, Ghassan Hamarneh, and Stella Atkins, "Hands-free Interactive Image Segmentation Using Eyegaze," in SPIE Medical Imaging 2009: Computer-Aided Diagnosis, Proceedings of the SPIE, Volume 7260 (pdf)