OpenStax-CNX: Rice University ELEC 301 Projects, Fall 2003
Content-Based Image Querying with Complex Wavelets

Module by: Tom Mowad, Venkat Chandrasekaran

Summary: An introduction to our ELEC 301 project.

Introduction

Thanks to the growth of the World Wide Web over the past decade or so, vast amounts of information are available to anyone in possession of a personal computer with a modem and an Internet connection. Tasks such as finding a favorite poem have been made easy by search engines like Google. One can simply type in a few lines from the poem, and then it’s just a matter of sorting through a few top matches before one has the entire poem on the screen.

While searching textual media is fairly trivial, looking for an image that you have seen before can be a real problem. Suppose you remember seeing an interesting painting, say Leonardo da Vinci's Mona Lisa, while walking through a museum, and you would like to find information about it online. Unless you can recall a word or phrase associated with the painting, such as "da Vinci" or "Mona Lisa", it is difficult to find anything about that particular work of art. You might be able to find the painting in a subject-specific database such as an online art gallery; however, such databases are fairly uncommon for most subjects.

Example 1

Figure 1: Mona Lisa (monalisa.jpg)

When searching for such a work of art, one may lack textual information about the painting, but one usually does have some information about the image itself: a coarse-scale idea of what the Mona Lisa, for instance, looks like. This information should be quite useful for finding an actual image of the Mona Lisa, but with current techniques, searches for visual data cease to be effective once the database grows to even a small fraction of the number of images on the World Wide Web.

Our Goal

We would like to develop a scheme that allows a user to search through a large database of images. The system would likely work by having the user enter a query image, a low-detail, coarse-scale version of the image he or she would like to find, and then returning small thumbnails of several matching images for the user to skim. Ideally, we would like such a system to satisfy several properties.

First, our algorithm should be reasonably fast and efficient. This property is desirable for any algorithm, but it is especially important in our case: such a system would likely run on a search engine like Google, where thousands, if not millions, of query images could be entered every minute.

Our algorithm should also be well suited to matching coarse-scale versions of images to high-detail versions of the same image. Users should be able to sketch a query image in a simple drawing application, where adding much detail is not easy. They should also be able to enter images digitized with a scanner, which we assume introduces blurriness and additional noise (scratches, dust, etc.), to the extent that a search for a higher-resolution version of the image online would be highly useful.
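The coarse-versus-fine matching requirement can be illustrated with a small sketch. The following numpy-only example (not from the original project; the random arrays and the block-averaging function are hypothetical stand-ins for wavelet approximation coefficients) simulates a noisy scanned query and shows that comparing coarse-scale versions suppresses the noise:

```python
import numpy as np

def coarse(img, factor=4):
    """Block-average an image down by `factor` -- a crude stand-in for the
    coarse-scale (approximation) coefficients a wavelet transform provides."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
original = rng.random((32, 32))
# Simulate a noisy, scanned query: the original plus additive noise.
query = original + 0.1 * rng.normal(size=original.shape)

# Fine-scale comparison is dominated by the noise; coarse-scale comparison is not.
fine_err = np.mean((original - query) ** 2)
coarse_err = np.mean((coarse(original) - coarse(query)) ** 2)
print(coarse_err < fine_err)  # True: averaging suppresses the noise
```

In an actual system the coarse representation would come from a wavelet decomposition rather than simple block averaging, but the principle is the same: noise and fine detail average out at coarse scales.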

Ideally, we would also like our algorithm to handle affine transformations such as translation, rotation, and scaling. It is unreasonable to expect a user to draw parts of an image in exactly the same region in which they appear in the original image. While all three transformations are important components of an image querying system, we decided to focus on translation because it seems like the most likely type of error a user would make.
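To see why translation is so troublesome for direct matching, consider this small numpy sketch (the 8×8 random array is a hypothetical stand-in for an image; this is illustration only, not part of the original system):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                 # stand-in for a query image
shifted = np.roll(img, shift=2, axis=1)  # translate 2 pixels (circularly)

# Direct pixel-by-pixel comparison: the shifted copy looks like a poor match.
mse_self = np.mean((img - img) ** 2)
mse_shift = np.mean((img - shifted) ** 2)
print(mse_self, mse_shift)  # 0.0 vs. a large error despite identical content
```

The two arrays contain exactly the same values, yet a pixel-by-pixel error metric treats them as a poor match; this is the failure mode a shift-invariant representation is meant to avoid.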

Past Work

We structure our approach after that of Jacobs, Finkelstein, and Salesin, who, while at the University of Washington, published a paper on Fast Multiresolution Image Querying. Their method uses a wavelet decomposition to produce a low-resolution version of an image that is highly effective for matching. Its primary drawback is that it cannot detect shifts of an image, since the separable discrete wavelet basis is not shift-invariant. We therefore propose the use of the complex discrete wavelet basis, whose magnitude possesses a high degree of shift-invariance. When coupled appropriately with the two-dimensional Discrete Fourier Transform, the two-dimensional Complex Discrete Wavelet Transform allows us to match shifted versions of an image with a significantly higher degree of certainty than the approach of Jacobs et al.
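The shift-variance problem described above is easy to demonstrate. The sketch below (numpy only; it uses a one-level 1-D real Haar transform as a stand-in, not the 2-D complex wavelet transform the project actually uses) shows that real DWT coefficients change under a one-sample shift, while a transform magnitude, here the Fourier magnitude, the kind of invariance sought in the complex wavelet magnitude, does not:

```python
import numpy as np

def haar_dwt(x):
    """One level of the (real) Haar DWT: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

sig = np.array([0., 0., 1., 1., 0., 0., 0., 0.])
shifted = np.roll(sig, 1)                 # same signal, moved one sample

# Real DWT coefficients change completely under a one-sample shift ...
a0, d0 = haar_dwt(sig)
a1, d1 = haar_dwt(shifted)
print(np.allclose(d0, d1))                # False: the DWT is not shift-invariant

# ... while the Fourier magnitude is exactly invariant to circular shifts.
m0 = np.abs(np.fft.fft(sig))
m1 = np.abs(np.fft.fft(shifted))
print(np.allclose(m0, m1))                # True
```

In the original signal every Haar pair is constant, so all detail coefficients are zero; after the shift the edge straddles a pair boundary and nonzero detail coefficients appear, even though the signal content is unchanged. The near-shift-invariance of the complex wavelet magnitude is what makes it preferable to the real DWT for this application.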
