Keynote Speakers

  

 


J. K. Aggarwal

IAPR K.S. Fu Prize Winner, 2004

Department of Electrical and Computer Engineering
The University of Texas at Austin

 

Human Activity Recognition – A Grand Challenge

Motion is an important cue for the human visual system. Mobiles have always fascinated children, Zeno (circa 500 B.C.) studied moving arrows to pose a paradox, and Zeki is investigating the areas of the human brain devoted to the understanding of motion. In computer vision research, motion has played an important role for the past thirty years. A major goal of current computer vision research is to recognize and understand human motion and activities, including continuous activity. Initially, we focused on tracking a single person; today we focus on tracking, recognizing and understanding interactions among several people, for example at an airport or at a subway station. Interpreting such a scene is complex, because similar configurations may have different contexts and meanings. In addition, occlusion and correspondence of body parts in an interaction present serious difficulties in understanding the activity.

Prof. Aggarwal’s interest in motion started with the study of the motion of rigid planar objects and gradually progressed to the study of human motion. The current work includes the study of interactions at the gross (blob) level and at the detailed (head, torso, arms and legs) level. The two levels present different problems in terms of observation and analysis. For blob-level analysis, we use a modified Hough transform called the Temporal Spatio-Velocity transform to isolate pixels with similar velocity profiles. For detailed-level analysis, we employ a multi-target, multi-assignment strategy to track blobs in consecutive frames. An event hierarchy consisting of pose, gesture, action and interaction is used to describe human-human interaction. A methodology is developed to describe the interaction at the semantic level.
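
The following is a minimal Python sketch, not Prof. Aggarwal’s implementation, of the Hough-style voting idea behind a temporal spatio-velocity transform: each candidate velocity is scored, pixel by pixel, by how consistently the foreground reappears along that velocity’s trajectory through a short stack of binary masks, so pixels sharing a dominant velocity can then be grouped into blobs. The function name, the velocity grid and the toy data are illustrative assumptions.

    import numpy as np

    def velocity_profiles(masks, velocities):
        """masks: (T, H, W) binary foreground stack; velocities: list of (vy, vx).
        Returns a (len(velocities), H, W) accumulator of per-pixel velocity support."""
        T, H, W = masks.shape
        acc = np.zeros((len(velocities), H, W), dtype=np.float32)
        for k, (vy, vx) in enumerate(velocities):
            for t in range(T):
                # Shift frame t back along the candidate velocity so that a pixel
                # moving with (vy, vx) votes at its position in the first frame.
                shifted = np.roll(masks[t], shift=(-vy * t, -vx * t), axis=(0, 1))
                acc[k] += shifted
        return acc / T

    if __name__ == "__main__":
        # Toy example: a 3x3 blob translating one pixel to the right per frame.
        T, H, W = 5, 32, 32
        masks = np.zeros((T, H, W), dtype=np.float32)
        for t in range(T):
            masks[t, 10:13, 5 + t:8 + t] = 1.0
        velocities = [(0, 0), (0, 1), (1, 0), (0, -1)]
        acc = velocity_profiles(masks, velocities)
        dominant = np.argmax(acc, axis=0)
        print(dominant[11, 6])  # expect 1, i.e. velocity (0, 1): one pixel right per frame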

The recognition of human activities will lead to a number of applications, including personal assistants, virtual reality, smart monitoring and surveillance systems, and motion analysis in sports, medicine and choreography. Professor Aggarwal will present analysis and results, and discuss the applications of the research.

J. K. Aggarwal has served on the faculty of The University of Texas at Austin College of Engineering since 1964 and is currently a Cullen Professor of Electrical and Computer Engineering and Director of the Computer and Vision Research Center. His research interests include image processing, pattern recognition and computer vision, focusing on human motion. A Fellow of the IEEE since 1976 and of the IAPR since 1998, he received the Senior Research Award of the American Society for Engineering Education in 1992, the 1996 Technical Achievement Award of the IEEE Computer Society, and the graduate teaching award of The University of Texas at Austin in 1992. More recently, he received the 2004 K.S. Fu Prize of the International Association for Pattern Recognition and the 2005 Kirchmayer Graduate Teaching Award of the IEEE. He is a Life Fellow of the IEEE and a Golden Core member of the IEEE Computer Society. He has authored and edited a number of books, chapters, conference proceedings, and papers.

 

 

Simon K. Warfield

 Associate Professor of Radiology, Harvard Medical School

Director, Computational Radiology Laboratory at Children’s Hospital and Brigham and Women’s Hospital

 

Medical Image Analysis for Image Guided Therapy

Image guided therapy is the suite of techniques by which surgeons and interventional radiologists utilize preoperative and intraoperative imaging to support therapeutic interventions. These techniques have been enabled by the availability of a variety of imaging modalities in the operating room, and have led to less invasive interventions and improved outcomes for patients.

Image analysis has a crucial role to play in preoperative planning, and intraoperative image guidance, utilizing multiple modalities, provides critical information for targeting, navigation and visualization during procedures. Particular challenges include overcoming the effects of image acquisition artifacts, imaging system noise, and patient-specific normal and pathological variability, in order to achieve robust performance while meeting the time constraints of the operating room.

Advances in image acquisition and computational modeling now enable patient-specific simulation of biomechanical and electromagnetic properties. We will present recent work in developing and evaluating patient-specific biomechanical simulations of soft-tissue deformation to improve guidance during neurosurgery and during prostate brachytherapy and biopsy. These methods enable us to create fused visualizations of critical healthy structures and tumor margins, despite rigid motion due to patient placement and nonrigid deformations occurring as a consequence of the interventional procedure. We will also discuss recent advances that enable realistic patient-specific electromagnetic simulations, with application to planning for epilepsy surgery.
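
As an illustration only, and not the laboratory’s actual pipeline, the short Python sketch below shows the coordinate bookkeeping involved in such a fused visualization: a preoperative image is resampled onto the intraoperative grid by composing a rigid transform (accounting for patient placement) with a nonrigid displacement field of the kind a biomechanical simulation might supply. The array shapes, the rigid parameters and the zero displacement field are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def fuse_preop_to_intraop(preop, rotation, translation, displacement):
        """preop: (H, W) preoperative slice; rotation: 2x2 matrix; translation: (2,);
        displacement: (2, H, W) nonrigid field defined on the intraoperative grid.
        Returns the preoperative slice resampled onto the intraoperative grid."""
        H, W = preop.shape
        yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        grid = np.stack([yy, xx]).astype(np.float64)   # intraoperative coordinates
        warped = grid + displacement                   # nonrigid (soft-tissue) correction
        # Pull back through the inverse rigid transform into preoperative coordinates.
        flat = warped.reshape(2, -1) - translation[:, None]
        preop_coords = np.linalg.inv(rotation) @ flat
        return map_coordinates(preop, preop_coords.reshape(2, H, W), order=1, mode="nearest")

    if __name__ == "__main__":
        preop = np.random.rand(64, 64)                 # stand-in for a preoperative slice
        theta = np.deg2rad(5.0)
        rotation = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
        translation = np.array([2.0, -1.5])
        displacement = np.zeros((2, 64, 64))           # e.g. output of a biomechanical solve
        fused = fuse_preop_to_intraop(preop, rotation, translation, displacement)
        print(fused.shape)                             # (64, 64), aligned to the intraoperative grid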

Simon Warfield is the Director of the Computational Radiology Laboratory (CRL) in the Departments of Radiology at Brigham and Women’s Hospital and Children’s Hospital, a Research Affiliate of the Computer Science and Artificial Intelligence Laboratory at MIT, and Associate Professor of Radiology at Harvard Medical School. His research in the field of medical image analysis has focused on methods for quantitative image analysis through novel segmentation and registration approaches, and on real-time image analysis, enabled by high-performance computing technology, in support of surgery.

 

Bernard Buxton

Professor of Information Processing Systems
University College London

Flexible Template and Model Matching Using Image Intensity

Intensity-based image and template matching is briefly reviewed, with particular emphasis on the problems that arise when flexible templates or models are used. Use of such models and templates may often lead to a very small basin of attraction in the error landscape surrounding the desired solution, and also to spurious, trivial solutions. Simple examples are studied in order to illustrate these problems, which may arise from photometric transformations of the template, from geometric transforms of it, or from internal parameters of the template that allow similar types of variation. It is pointed out that these problems are, from a probabilistic point of view, exacerbated by a failure to model the whole image, i.e. both the foreground object or template and the image background, as a Bayesian approach strictly requires. Some general remarks are made about the form of the error landscape to be expected in object recognition applications, and suggestions are made as to optimisation techniques that may prove effective in locating a correct match. These suggestions are illustrated by example.
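
As a concrete illustration of the error landscape described above, offered only as a sketch rather than as the techniques of the talk, the Python fragment below evaluates a sum-of-squared-differences cost for a translation-only template at every placement in an image. The resulting surface typically has a narrow basin of attraction around the true location and many shallow or spurious minima elsewhere, a difficulty that flexible templates only worsen. The image, template and function name are illustrative assumptions.

    import numpy as np

    def ssd_landscape(image, template):
        """Return the sum-of-squared-differences cost of every valid template placement."""
        H, W = image.shape
        h, w = template.shape
        cost = np.empty((H - h + 1, W - w + 1))
        for y in range(cost.shape[0]):
            for x in range(cost.shape[1]):
                diff = image[y:y + h, x:x + w] - template
                cost[y, x] = np.sum(diff * diff)
        return cost

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        image = rng.normal(size=(80, 80))
        template = image[30:40, 50:60].copy()      # true location at offset (30, 50)
        cost = ssd_landscape(image, template)
        y, x = np.unravel_index(np.argmin(cost), cost.shape)
        print("best match:", (y, x))               # expect (30, 50)
        # A local optimiser started far from (30, 50) would usually settle in a
        # spurious minimum, illustrating the small basin of attraction.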

Bernard Buxton is the Professor of Information Processing Systems in the Department of Computer Science, University College London. He is leader of the Vision, Imaging, Virtual Environments and Simulation research group. His research interests focus on the development and application of statistical and physically-based methods in image processing, computer vision and pattern recognition. Recent work includes: the development and evaluation of statistical representations of colour and chromaticity (with D. C. Alexander); the development, implementation and evaluation of Kalman filter-based algorithms for the guidance and control of an agricultural robot (with B. J. Southall and the Silsoe Research Institute); the application of statistics, machine learning, genetic programming and intelligent systems techniques to fraud detection (with P. J. Bentley and P. C. Treleaven); and numerous other projects.

 

Charles Falco

Chair of Condensed Matter Physics
College of Optical Sciences
University of Arizona

Quantitative Analysis of Qualitative Images

Recently, the renowned artist David Hockney observed that certain drawings and paintings from as early as the Renaissance seemed almost "photographic" in detail.[1] Following an extensive visual investigation of western art of the past 1000 years, he made the revolutionary claim that artists even of the prominence of van Eyck and Bellini must have used optical aids. However, many art historians insisted there was no supporting evidence for such a remarkable assertion. In this talk I will show a range of optical evidence for his claim that Hockney and I subsequently discovered during an unusual, and remarkably productive, collaboration between an artist and a scientist.[2,3] These discoveries convincingly demonstrate that optical instruments were in use (by artists, not scientists) nearly 200 years earlier than previously thought possible, and they account for the remarkable transformation in the reality of portraits that occurred early in the 15th century.

As the examples in my talk will show, paintings are much more complex than if projected images simply had been traced.  The new image analysis insights Hockney and I developed in our collaboration enabled us to overcome this complexity, allowing us to extract information that had eluded generations of scholars.[4]  Because of this, these discoveries have significant implications for the fields of machine vision and computerized image analysis as well as for the histories of art and science.   

Acknowledgments:  My work was done in collaboration with David Hockney.

Research supported by DARPA/ONR.

[1] David Hockney, Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters (Thames & Hudson, 2001).
[2] David Hockney and Charles M. Falco, “Optical Insights into Renaissance Art,” Optics and Photonics News 11, 52 (2000).
[3] Additional information, and PDF files of Ref. [2] and our other publications, can be found at: http://www.optics.arizona.edu/ssd/FAQ.html
[4] “David Hockney and Charles M. Falco dissected with precision the workings of paintings that deceived so many experts in visual composition for so long.” Eduardo Neiva, in ‘Eyes, Mirror, Light: History's Other Lenses’, Semiotica (2005, in press).

Charles Falco is a Professor of Optical Sciences at the University of Arizona, where he holds the UA Chair of Condensed Matter Physics. He is a Fellow of both the American Physical Society and the Optical Society of America, has published more than 250 scientific manuscripts, has co-edited two books, and holds seven U.S. patents.