
CSLS Workshop on Computational Vision and Image Analysis

Module by: Pascal Vontobel

Workshop Overview

Great advances have been made in the acquisition of image data, from conventional photography, CT scanning, and satellite imaging to the now ubiquitous digital cameras embedded in cell phones and other wireless devices. Although the semantic understanding of the shapes and other objects appearing in images is effortless for human beings, the corresponding problem in machine perception - namely, automatic interpretation via computer programs - remains a major open challenge in modern science. In fact, there are very few systems whose value derives from the analysis rather than collection of image data, and this "semantic gap" impedes scientific and technological advances in many areas, including automated medical diagnosis, robotics, industrial automation, and effective security and surveillance. In this CSLS Workshop, three distinguished experts in the field of Computational Vision and Image Analysis share their thoughts on the current state of the art and future directions in the field.

Remark: This workshop was held on October 30, 2003 as part of the Computational Sciences Lecture Series (CSLS) at the University of Wisconsin-Madison.

Hierarchical Designs for Pattern Recognition

By Prof. Donald Geman (Dept. of Applied Mathematics and Statistics and Center for Imaging Science, Johns Hopkins University, USA)

Slides of talk [PDF] (Not yet available.) | Video [WMV] | Video [MPG]

ABSTRACT: It is unlikely that complex problems in machine perception, such as scene interpretation, will yield directly to improved methods of statistical learning. Some organizational framework is needed to confront the small amount of data relative to the large number of possible explanations, and to make sure that intensive computation is restricted to genuinely ambiguous regions. As an example, I will present a "twenty questions" approach to pattern recognition. The object of analysis is the computational process itself rather than probability distributions (Bayesian inference) or decision boundaries (statistical learning). Under mild assumptions, optimal strategies exhibit a steady progression from broad scope coupled with low power to high power coupled with dedication to specific explanations. Several theoretical results will be mentioned (joint work with Gilles Blanchard) as well as experiments in object detection (joint work with Yali Amit and Francois Fleuret).
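The progression from broad, inexpensive tests to powerful, highly specific ones can be pictured with a short sketch. The following Python fragment is only a generic illustration of a coarse-to-fine testing strategy, not Prof. Geman's actual algorithm; every name in it is hypothetical.

```python
# Illustrative sketch only (not the speaker's actual algorithm): a coarse-to-fine
# "twenty questions" strategy in which cheap, broad-scope tests are asked first and
# expensive, highly specific tests are evaluated only on regions that the coarse
# tests could not rule out.  All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height) of a candidate window


@dataclass
class Test:
    """A binary test applied to an image region, with a rough cost/specificity rating."""
    predicate: Callable[[object, Region], bool]
    cost: float          # computational cost of evaluating the test
    specificity: float   # higher = more dedicated to one specific explanation


def coarse_to_fine_detect(image, regions: Sequence[Region], tests: List[Test]) -> List[Region]:
    """Return the regions that survive every test, broadest/cheapest tests first."""
    # Order the tests from broad scope and low power toward high power and high
    # specificity, mirroring the progression described in the abstract.
    ordered = sorted(tests, key=lambda t: (t.cost, t.specificity))
    candidates = list(regions)
    for t in ordered:
        # Intensive computation is restricted to regions that remain ambiguous.
        candidates = [r for r in candidates if t.predicate(image, r)]
        if not candidates:
            break
    return candidates
```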

Modeling and Inference of Dynamic Visual Processes

By Prof. Stefano Soatto (Department of Computer Science, University of California Los Angeles, USA)

Slides of talk [PDF] (Not yet available.) | Video [WMV]

ABSTRACT: "We see in order to move, and we move in order to see." In this expository talk, I will explore the role of vision as a sensor for interaction with physical space. Since the complexity of the physical world is far superior to that of its measured images, inferring a generic representation of the scene is an intrinsically ill-posed problem. However, the task becomes well-posed within the context of a specific control task. I will display recent results in the inference of dynamical models of visual scenes for the purpose of motion control, shape visualization, rendering, and classification.

Computational Anatomy and Models for Image Analysis

By Prof. Michael Miller (Director of the Center for Imaging Science, the Seder Professor of Biomedical Engineering, and Professor of Electrical and Computer Engineering, Johns Hopkins University, USA)

Slides of talk [PDF] (Not yet available.) | Video [WMV]

ABSTRACT: Recent years have seen rapid advances in the mathematical specification of models for image analysis of human anatomy. As first described in "Computational Anatomy: An Emerging Discipline" (Grenander and Miller, Quarterly of Applied Mathematics, Vol. 56, 617-694, 1998), human anatomy is modelled as a deformable template: an orbit under the group action of infinite-dimensional diffeomorphisms. In this talk, we will describe recent advances in computational anatomy (CA), specifying a metric on the ensemble of images and examining distances between elements of the orbits; see "Group Actions, Homeomorphisms, and Matching: A General Framework" (Miller and Younes, Int. J. Comp. Vision, Vol. 41, 61-84, 2001) and "On the Metrics and Euler-Lagrange Equations of Computational Anatomy" (Annu. Rev. Biomed. Eng., Vol. 4, 375-405, 2002). Numerous results will be shown comparing shapes through this metric formulation of the deformable template, including results from disease testing on the hippocampus and from cortical structural and functional mapping.
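For orientation, one common way the metric alluded to above is written (a sketch; notation and technical conditions differ across the cited papers) is as the length of the shortest flow of smooth velocity fields deforming one image onto the other:

```latex
% Sketch of the diffeomorphic metric between images (details vary across the cited papers):
\[
  d(I_0, I_1)^2 \;=\; \inf_{v}\; \int_0^1 \lVert v_t \rVert_V^2 \, dt
  \qquad \text{subject to} \qquad
  \dot{\varphi}_t = v_t(\varphi_t), \quad \varphi_0 = \mathrm{id}, \quad I_0 \circ \varphi_1^{-1} = I_1 ,
\]
% i.e., the distance between two anatomical images in the same orbit of the
% diffeomorphism group is the minimal kinetic energy of a time-varying smooth
% velocity field whose flow carries one image onto the other.
```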
