Collection: Digital Signal Processing Laboratory (ECE 420)

# Introduction to the IDK

Module by: Mark Butala

## Introduction

The purpose of this lab is to acquaint you with the TI Image Developers Kit (IDK). The IDK contains a floating-point C6711 DSP, along with other hardware that enables real-time video processing. In addition to the IDK, the video processing lab bench is equipped with an NTSC camera and a standard color computer monitor.

You will complete an introductory exercise to gain familiarity with the IDK programming environment. In the exercise, you will modify a C skeleton to horizontally flip and invert video input from the camera. The output of your video processing algorithm will appear in the top right quadrant of the monitor. In addition, you will analyze existing C code that implements filtering and edge detection algorithms to gain insight into IDK programming methods. The output of these "canned" algorithms, along with the unprocessed input, appears in the other quadrants of the monitor.

An additional goal of this lab is to give you the opportunity to discover tools for developing an original project using the IDK.

## Video Processing Setup

The camera on the video processing lab bench generates an analog video signal in NTSC format. NTSC is a standard for transmitting and displaying video that is used in television. The signal from the camera is connected to the "composite input" on the IDK board (the yellow plug). This is illustrated in Figure 2-1 on page 2-3 of the IDK User's Guide [link]. Notice that the IDK board is actually two boards stacked on top of each other. The bottom board contains the C6711 DSP, where your image processing algorithms will run. The top board is the daughterboard, which contains hardware for interfacing with the camera input and monitor output. For future video processing projects, you may connect a video input other than the camera, such as the output from a DVD player. The output signal from the IDK is in RGB format, so that it may be displayed on a computer monitor.

At this point, a description of the essential terminology of the IDK environment is in order. The video input is first decoded and then sent to the FPGA, which resides on the daughterboard. The FPGA is responsible for video capture and for filling the frame buffer. For a detailed description of the FPGA and its functionality, we advise you to read Chapter 2 of the IDK User's Guide [link].

The Chip Support Library (CSL) is an abstraction layer that allows the IDK daughterboard to be used with the entire family of TI C6000 DSPs (not just the C6711 that we're using); it takes care of what is different from chip to chip.

The Image Data Manager (IDM) is a set of routines responsible for moving data between on chip internal memory and external memory on the board during processing. The IDM helps the programmer by taking care of the pointer updates and buffer management involved in transferring data. Your DSP algorithms will read and write to internal memory, and the IDM will transfer this data to and from external memory. Examples of external memory include temporary "scratch pad" buffers, the input buffer containing data from the camera, and the output buffer with data destined for the RGB output.

The TI C6711 DSP uses a different instruction set than the 5400 DSPs you are familiar with from lab. The IDK environment was designed with high-level programming in mind, so that programmers would be isolated from the intricacies of assembly programming. Therefore, we strongly suggest that you do all your programming in C. Programs on the IDK typically consist of a main program that calls an image processing routine. The image processing routine may make several calls to specialized functions. These specialized functions consist of an outer wrapper and an inner component. The component performs processing on one line of an image. The wrapper oversees the processing of the entire image, using the IDM to move data back and forth between internal memory and external memory. In this lab, you will modify a component to implement the flipping and inverting algorithm.

In addition, the version of Code Composer that the IDK uses is different from the one you have used previously. The IDK uses Code Composer Studio v2.1. It is similar to the other version, but the process of loading code is slightly different.

## Code Overview

The program flow for these image processing applications may be a bit different from your previous experiences in C programming. In most C programs, the main function is where program execution starts and ends. In this real-time application, the main function serves only to perform initializations for the cache, the CSL, and the DMA channel. When it exits, the main task, tskMainFunc(), will execute automatically, starting the DSP/BIOS. This is where our image processing application begins.

The tskMainFunc(), in main.c, opens handles to the board for image capture (VCAP_open()) and to the display (VDIS_open()) and calls the grayscale function. Here, several data structures are instantiated that are defined in the file img_proc.h. The IMAGE structures will point to the data that is captured by the FPGA and the data that will be output to the display. The SCRATCH_PAD structure points to our internal and external memory buffers used for temporary storage during processing. LPF_PARAMS is used to store filter coefficients for the low pass filter.

The call to img_proc() takes us to the file img_proc.c. First, several variables are declared and defined. The variable quadrant will denote on which quadrant of the screen we currently want output; out_ptr will point to the current output spot in the output image; and pitch refers to the byte offset between two lines. This function is the high level control for our image-processing algorithm. See algorithm flow.

The first function called is the pre_scale_image function in the file pre_scale_image.c. The purpose of this function is to take the 640x480 image and scale it down to a quarter of its size by first downsampling the input rows by two and then averaging every two pixels horizontally. The internal and external memory spaces in the scratch pad are used for this task. The vertical downsampling will occur when only every other line is read into the internal memory from the input image. Within internal memory, we will operate on two lines of data (640 columns/line) at a time, averaging every two pixels (horizontal neighbors) and producing two lines of output (320 columns/line) that are stored in the external memory.

To accomplish this, we will need to take advantage of the IDM by initializing the input and output streams. At the start of the function, two instantiations of a new structure dstr_t are declared. You can view the structure contents of dstr_t on p. 2-11 of the IDK Programmer's Guide [link]. The structure contents are defined with calls to dstr_open(). This data flow for the pre-scale is shown in data flow.

To give you a better understanding of how these streams are created, let's analyze the parameters passed in the first call to dstr_open():

### External address: in_image->data

This is a pointer to the place in memory serving as the source of our input data (it is the source because the last function parameter is set to DSTR_INPUT).

### External size: (rows + num_lines) * cols = (240 + 2) * 640

This is the total size of our input data. We will only be taking every other line from in_image->data, so only 240 rows. The extra two rows are for buffering.

### Internal address: scratchpad->int_data

This is a pointer to an 8x640 lexicographic array, specifically scratchpad->int_data. This is where the data will be placed on each call to dstr_get().

### Internal size: 2 * num_lines * cols = 2 * 2 * 640

The size of space available for data to be input into int_mem from in_image->data. Because double buffering is used, num_lines is set to 2.

### Number of bytes/line: cols = 640, Number of lines: num_lines = 2

Each time dstr_get() is called, it will return a pointer to 2 lines of data, 640 bytes in length.

### External memory increment/line: stride*cols = 1*640

Left as an exercise.

### Window size: 1 for double buffered

The need for the window size is not really apparent here. It will become apparent when we do the 3x3 block convolution. Then, the window size will be set to 3. This tells the IDM to send a pointer to 3 lines of data when dstr_get() is called, but only increment the stream's internal pointer by 1 (instead of 3) the next time dstr_get() is called. This is not a parameter when setting up an output stream.

### Direction of input: DSTR_INPUT

Sets the direction of data flow. If it had been set to DSTR_OUTPUT (as done in the next call to dstr_open()), we would be setting the data to flow from the Internal Address to the External Address.

Once our data streams are set up, we can begin processing by calling the component function pre_scale() (in pre_scale.c) to operate on one block of data at a time. This function performs the horizontal scaling by averaging every two pixels, operating on four pixels at a time. The component is iterated within pre_scale_image() 120 times, which is the number of rows in each quadrant. Before pre_scale_image() exits, the data streams are closed, and one line is added to the top and bottom of the image to provide the context necessary for the next processing steps.

Now that the input image has been scaled to a quarter of its initial size, we will proceed with the four image processing algorithms. In img_proc.c, the set_ptr() function is called to set the variable out_ptr to point to the correct quadrant of the 640x480 output image. Then copy_image(), in copy_image.c, is called, performing a direct copy of the scaled input image into the lower right quadrant of the output.

Next we will set out_ptr to point to the upper right quadrant of the output image and call conv3x3_image() in conv3x3_image.c. As with pre_scale_image(), the _image suffix indicates that this is only the wrapper function for the ImageLIB component, conv3x3(). As before, we must set up our input and output streams. This time, however, data will be read from external memory into internal memory for processing, and then written to the output image. Iterating over each row, we compute one line of data by calling the component function conv3x3() in conv3x3.c.

In conv3x3(), you will see that we perform a 3x3 block convolution, computing one line of data with the low pass filter mask. Note here that the variables IN1[i], IN2[i], and IN3[i] all grab only one pixel at a time. This is in contrast to the operation of pre_scale(), where the variable in_ptr[i] grabbed 4 pixels at a time. This is because in_ptr was of type unsigned int, which means it points to four bytes of data at a time; IN1, IN2, and IN3 are all of type unsigned char, which means they point to a single byte of data. In block convolution, we compute the value of one pixel by placing weights on a 3x3 block of pixels in the input image and computing the sum. What happens when we try to compute the rightmost pixel in a row? The 3x3 mask would extend past the edge of the image, making the result bogus. That is why the wrapper function copies the last good column of data into the two rightmost columns. You should also note that the component function ensures output pixels lie between 0 and 255.

Back in img_proc.c, we can begin the edge detection algorithm, sobel_image(), for the lower left quadrant of the output image. This wrapper function, located in sobel_image.c, performs edge detection by utilizing the assembly-written component function sobel() in sobel.asm. The wrapper function is very similar to the others you have seen and should be straightforward to understand. Understanding the assembly file is considerably more difficult, since you are not familiar with the assembly language for the C6711 DSP. As you'll see in the assembly file, the comments are very helpful, since an "equivalent" C program is given there.

The Sobel algorithm convolves two masks with a 3x3 block of data and sums the results to produce a single pixel of output. This algorithm approximates a 3x3 nonlinear edge enhancement operator. The brightest edges in the result represent a rapid transition (well-defined features), and darker edges represent smoother transitions (blurred or blended features).

## Using the IDK Environment

This section provides a hands-on introduction to the IDK environment that will prepare you for the lab exercise. First, connect the power supply to the IDK module. Two green lights on the IDK board should be illuminated when the power is connected properly.

You will need to create a directory img_proc for this project in your home directory. Enter this new directory, and then copy the files from the following locations (be sure you're in the directory img_proc when you do this):

```
copy V:\ece320\idk\c6000\IDK\Examples\NTSC\img_proc
copy V:\ece320\idk\c6000\IDK\Drivers\include
copy V:\ece320\idk\c6000\IDK\Drivers\lib
```

After the IDK is powered on, open Code Composer 2 by clicking on the "CCS 2" icon on the desktop. From the "Project" menu, select "Open," and then open img_proc.pjt. You should see a new icon appear at the menu on the left side of the Code Composer window with the label img_proc.pjt. Double click on this icon to see a list of folders. There should be a folder labeled "Source." Open this folder to see a list of program files.

The main.c program calls the img_proc.c function that displays the output of four image processing routines in four quadrants on the monitor. The other files are associated with the four image processing routines. If you open the "Include" folder, you will see a list of header files. To inspect the main program, double click on the main.c icon. A window with the C code will appear to the right.

Scroll down to the tskMainFunc() in the main.c code. A few lines into this function, you will see the line LOG_printf(&trace,"Hello\n");. This line prints a message to the message log, which can be useful for debugging. Change the message "Hello\n" to "Your Name\n" (the "\n" is a newline character). Save the file by clicking the floppy disk icon at the top left corner of the Code Composer window.

To compile all of the files when the ".out" file has not yet been generated, you need to use the "Rebuild All" command. The rebuild all command is accomplished by clicking the button displaying three little red arrows pointing down on a rectangular box. This will compile every file the main.c program uses. If you've only changed one file, you only need to do an "Incremental Build," which is accomplished by clicking on the button with two little blue arrows pointing into a box (immediately to the left of the "Rebuild All" button). Click the "Rebuild All" button to compile all of the code. A window at the bottom of Code Composer will tell you the status of the compilation (i.e., whether there were any errors or warnings). You might notice some warnings after compilation - don't worry about these.

Click on the "DSP/BIOS" menu, and select "Message Log." A new window should appear at the bottom of Code Composer. Assuming the code has compiled correctly, select "File" -> "Load Program" and load img_proc.out (the same procedure as on the other version of Code Composer). Now select "Debug" -> "Run" to run the program (if you have problems, you may need to select "Debug" -> "Go Main" before running). You should see image processing routines running on the four quadrants of the monitor. The upper left quadrant (quadrant 0) displays a low pass filtered version of the input. The low pass filter "passes" the smooth features and attenuates the detail in the image, resulting in a slightly blurred image. The operation of the low pass filter code, and how data is moved to and from the filtering routine, was described in detail in the previous section. The lower left quadrant (quadrant 2) displays the output of an edge detection algorithm. The top right and bottom right quadrants (quadrants 1 and 3, respectively) show the original input displayed unprocessed. At this point, you should notice your name displayed in the message log.

## Implementation

You will create the component code flip_invert.c to implement an algorithm that horizontally flips and inverts the input image. The code in flip_invert.c will operate on one line of the image at a time. The copyim.c wrapper will call flip_invert.c once for each row of the prescaled input image. The flip_invert function call should appear as follows:

```c
flip_invert(in_data, out_data, cols);
```

where in_data and out_data are pointers to the input and output buffers in internal memory, and cols is the number of pixels in each line of the prescaled image.

The img_proc.c function should call the copyim.c wrapper so that the flipped and inverted image appears in the top right (first) quadrant. The call to copyim is as follows: copyim(scratch_pad, out_img, out_ptr, pitch);

This call is commented out in the img_proc.c code. The algorithm that copies the image (unprocessed) to the screen is currently displayed in quadrant 1, so you will need to comment out its call and replace it with the call to copyim.

Your algorithm should flip the input picture horizontally, such that someone on the left side of the screen looking left in quadrant 3 will appear on the right side of the screen looking right. This is similar to putting a slide in a slide projector backwards. The algorithm should also invert the picture, so that something white appears black and vice versa. The inversion portion of the algorithm is like looking at the negative for a black and white picture. Thus, the total effect of your algorithm will be that of looking at the wrong side of the negative of a picture.

### Hint:

Pixel values are represented as integers between 0 and 255.

To create a new component file, write your code in a file called "flip_invert.c". You may find the component code for the low pass filter in "conv3x3_c.c" helpful in giving you an idea of how to get started. To compile this code, you must include it in the "img_proc" project, so that it appears as an icon in Code Composer. To include your new file, right click on the "img_proc.pjt" icon in the left window of Code Composer, and select "Add Files."
