OpenStax-CNX: ELEC 301 Projects Fall 2006 (Rice University)

Exploring High Dynamic Range Imaging: §3.1 HDR Image Creation

Module by: Taylor Johnson, Sarah McGee, Robert Ortman, Tianhe Yang. E-mail the authors

Summary: Consumer-grade digital cameras produce images with at most about 12 bits per color channel, and 8 bits per channel is more common. To manipulate HDR images (of, say, 32 bits per color channel), one must therefore find a way to estimate the missing dynamic range, extending the 8-bit data up to 32 bits. Such an HDR image can be created from multiple images of the same scene taken at different exposure levels (stops).

Since no consumer-grade digital cameras can produce HDR images (of, say, 32 bits per color channel), how can one get an HDR image to manipulate?

One can estimate an HDR image by combining multiple images spanning a wide dynamic range. For example, by averaging an overexposed and an underexposed image of the exact same scene, one gets a low-contrast image containing more information about the scene than either image alone. This is because each image adds information: the overexposed image adds detail in the shadows, while the underexposed image adds detail in the highlights.

Example 1

These example images would be combined to generate the HDR image by some method, as described below:

Figure 1: LDR Image to be Used in Composition of HDR Image
Figure 1 (memorial004s.jpg)
Figure 2: LDR Image to be Used in Composition of HDR Image
Figure 2 (memorial009s.jpg)
Figure 3: LDR Image to be Used in Composition of HDR Image
Figure 3 (memorial012s.jpg)
Figure 4: LDR Image to be Used in Composition of HDR Image
Figure 4 (memorial017s.jpg)
Figure 5: LDR Image to be Used in Composition of HDR Image
Figure 5 (memorial020s.jpg)

The simplest algorithm for generating an HDR image is a straight average of all the source LDR images, each scaled by the factor relating the LDR range to the HDR range. This can be written as the averaging function:

$$\frac{1}{N}\sum_{i=0}^{N-1} A_i \, 2^{H-L}$$
(1)

where A_i is the ith input color matrix, H is the HDR bit depth (say 32, giving a 2^32 range), L is the LDR bit depth (say 8, giving a 2^8 range), and N is the number of input images. Note also that bringing the division by N inside the summation is necessary in a real implementation, as otherwise the running sum could overflow and produce inaccurate results.
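The averaging in Equation 1 can be sketched in a few lines. Python and NumPy are used here for illustration only; the function and parameter names are assumptions, not the project's actual code.

```python
import numpy as np

def hdr_average(ldr_images, ldr_bits=8, hdr_bits=32):
    """Estimate an HDR image by straight-averaging LDR exposures.

    ldr_images: list of equally sized arrays with values in
    [0, 2**ldr_bits - 1]. Each image is stretched to the HDR range
    by 2**(hdr_bits - ldr_bits); dividing by N inside the loop keeps
    the running sum from overflowing, as the text notes.
    """
    scale = 2.0 ** (hdr_bits - ldr_bits)
    n = len(ldr_images)
    hdr = np.zeros(ldr_images[0].shape, dtype=np.float64)
    for img in ldr_images:
        hdr += (img.astype(np.float64) * scale) / n
    return hdr
```

With three constant 8-bit frames of values 0, 128, and 255, every output pixel is their mean scaled by 2^24.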

Now, while we cannot view the result directly, since conventional displays cannot render an HDR image, we can use a simple tone-mapping operator to map the HDR image back down to LDR. Using the Quantizing Operator on the averaged input images, we get this result:

Figure 6: HDR Creation via Averaging (tone-mapped using Quantizing Operator)
Figure 6 (average.jpg)
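This module does not define the Quantizing Operator itself; assuming it simply rescales the HDR values back down by 2^(H−L) and rounds to the nearest LDR code value, a minimal sketch might look like the following (the function name and the exact rounding rule are assumptions):

```python
import numpy as np

def quantize_tonemap(hdr, ldr_bits=8, hdr_bits=32):
    """Map an HDR array back to LDR by linear rescaling and quantization.

    Assumed behavior: divide by 2**(hdr_bits - ldr_bits), round to the
    nearest integer, and clip into the valid LDR code range.
    """
    scale = 2.0 ** (hdr_bits - ldr_bits)
    ldr = np.round(hdr / scale)
    return np.clip(ldr, 0, 2 ** ldr_bits - 1).astype(np.uint8)
```

Applied to the output of the averaging step, this recovers ordinary 8-bit values for display.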

While this produces decent results given the input images, we suspected we could help the tone-mapping operators by generating the HDR image with the largest possible dynamic range, and thus the most information. The next approach was an upsampling technique in which interpolated values were randomly perturbed. That is, a similar averaging technique was used, but instead of using the raw input color data directly, each value was shifted randomly higher or lower in brightness: after multiplying by the scale factor between the HDR and LDR ranges, a random number of the same order was added or subtracted. This can be written as:

$$\frac{1}{N}\sum_{i=0}^{N-1}\Bigl( A_i \, 2^{H-L} \pm \mathrm{rand}\bigl(x \, 2^{H-L},\ y \, 2^{H-L}\bigr)\Bigr)$$
(2)

where A_i is the ith input color matrix, H is the HDR bit depth (say 32), L is the LDR bit depth (say 8), x is a lower-bound coefficient (0.75), y is an upper-bound coefficient (1.25), rand generates a random integer between its two inputs, and N is the number of input images.
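Equation 2 can be sketched as follows. The per-pixel sign choice for the "±" and the particular random number generator are implementation assumptions; x = 0.75 and y = 1.25 are the bound coefficients given above.

```python
import numpy as np

def hdr_upsample(ldr_images, ldr_bits=8, hdr_bits=32,
                 x=0.75, y=1.25, seed=0):
    """Average LDR exposures with a random brightness jitter per pixel.

    Each scaled pixel receives a random integer offset drawn between
    x * 2**(H-L) and y * 2**(H-L), added or subtracted with equal
    probability (the "±" in Equation 2).
    """
    rng = np.random.default_rng(seed)
    scale = 2.0 ** (hdr_bits - ldr_bits)
    n = len(ldr_images)
    hdr = np.zeros(ldr_images[0].shape, dtype=np.float64)
    for img in ldr_images:
        jitter = rng.integers(int(x * scale), int(y * scale),
                              size=img.shape)
        sign = rng.choice([-1.0, 1.0], size=img.shape)
        hdr += (img.astype(np.float64) * scale + sign * jitter) / n
    return hdr
```

Because each offset is at most y·2^(H−L) in magnitude, the result never strays from the plain average by more than that amount per pixel.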

Again using the Quantizing Operator, we get this result for using upsampling of the input images:

Figure 7: HDR Creation via Upsampling (tone-mapped using Quantizing Operator)
Figure 7 (upsample.jpg)

This led to the final algorithm, a weighted average that puts more emphasis on input images whose average luminances fall into certain ranges (with the upsampling from above also included). For example, if an image is pure white, it probably does not contain much useful information, so it should have less impact on the final HDR image. Conversely, middle-tone images probably contain the most overall information, so they should be weighted more heavily. This algorithm is piecewise and thus harder to write in a standard form; see the source code section for the details.
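Since the actual piecewise weighting lives in the project's source code, the sketch below substitutes a hypothetical tent-shaped weight on mean luminance purely to illustrate the idea: near-white or near-black frames get little weight, and middle-tone frames dominate.

```python
import numpy as np

def luminance_weight(mean_lum, ldr_bits=8):
    """Hypothetical tent weight: peaks at mid-gray, falls toward extremes.

    This is only an illustrative stand-in for the project's actual
    piecewise weighting; a small floor keeps every frame from being
    discarded entirely.
    """
    mid = (2 ** ldr_bits - 1) / 2.0
    return max(1e-3, 1.0 - abs(mean_lum - mid) / mid)

def hdr_weighted(ldr_images, ldr_bits=8, hdr_bits=32):
    """Weighted average of scaled LDR frames, normalized over weights."""
    scale = 2.0 ** (hdr_bits - ldr_bits)
    weights = [luminance_weight(img.mean(), ldr_bits)
               for img in ldr_images]
    total = sum(weights)
    hdr = np.zeros(ldr_images[0].shape, dtype=np.float64)
    for w, img in zip(weights, ldr_images):
        hdr += (w / total) * img.astype(np.float64) * scale
    return hdr
```

Given one nearly pure-white frame and one mid-gray frame, the white frame's weight collapses toward zero and the result sits close to the mid-gray value scaled into the HDR range.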

Again using the Quantizing Operator, we get this result for using a weighted average of the input images:

Figure 8: HDR Creation via Weighting (tone-mapped using Quantizing Operator)
Figure 8 (weighted.jpg)

While these images all look very similar, the weighted average produces the best results in terms of the histogram. Visually, the weighted average generally handled the shadows better while blowing out a few more highlights than the others, but a histogram comparison clearly shows it is superior overall. Still, given how similar the results of the creation algorithms are, the choice of algorithm made little difference when tone-mapping the HDR image back down to a displayable LDR range. So, while the goal was to help the tone-mapping algorithms by producing the best HDR image possible, the creation method had little visible effect on any resulting algorithm.

Now we should explore where the real work happens: pulling more information out of the HDR image to create the best LDR image possible, that is, the one with the highest apparent dynamic range.
