
# The Prime Factor and Winograd Fourier Transform Algorithms

Module by: C. Sidney Burrus

The prime factor algorithm (PFA) and the Winograd Fourier transform algorithm (WFTA) are methods for efficiently calculating the DFT which use, and in fact depend on, the Type-1 index map from Multidimensional Index Mapping: Equation 10 and Multidimensional Index Mapping: Equation 6. The use of this index map preceded Cooley and Tukey's paper [6], [18], but its full potential was not realized until it was combined with Winograd's short DFT algorithms. The modern PFA was first presented in [12] and a program given in [1]. The WFTA was first presented in [20] and programs given in [15], [5].

The number theoretic basis for the indexing in these algorithms may, at first, seem more complicated than in the Cooley-Tukey FFT; however, if approached from the general index mapping point of view of Multidimensional Index Mapping, it is straightforward, and part of a common approach to breaking large problems into smaller ones. The development in this section will parallel that in The Cooley-Tukey Fast Fourier Transform Algorithm.

The general index maps of Multidimensional Index Mapping: Equation 6 and Multidimensional Index Mapping: Equation 12 must satisfy the Type-1 conditions of Multidimensional Index Mapping: Equation 7 and Multidimensional Index Mapping: Equation 10 which are

$$K_1 = a N_2 \quad \text{and} \quad K_2 = b N_1 \quad \text{with} \quad (K_1, N_1) = (K_2, N_2) = 1$$
(1)
$$K_3 = c N_2 \quad \text{and} \quad K_4 = d N_1 \quad \text{with} \quad (K_3, N_1) = (K_4, N_2) = 1$$
(2)

The row and column calculations in Multidimensional Index Mapping: Equation 15 are uncoupled by Multidimensional Index Mapping: Equation 16 which for this case are

$$((K_1 K_4))_N = ((K_2 K_3))_N = 0$$
(3)

In addition, to make each short sum a DFT, the $K_i$ must also satisfy

$$((K_1 K_3))_N = N_2 \quad \text{and} \quad ((K_2 K_4))_N = N_1$$
(4)

In order to have the smallest values for the $K_i$, the constants in Equation 1 are chosen to be

$$a = b = 1, \quad c = ((N_2^{-1}))_{N_1}, \quad d = ((N_1^{-1}))_{N_2}$$
(5)

which gives for the index maps in Equation 1

$$n = ((N_2 n_1 + N_1 n_2))_N$$
(6)
$$k = ((K_3 k_1 + K_4 k_2))_N$$
(7)
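As a concrete check of the conditions and maps above, the following Python sketch verifies Equations 1 through 4 and the bijectivity of the index maps of Equations 6 and 7 for the illustrative choice $N_1 = 3$, $N_2 = 5$ (these particular values are assumptions for the example, not part of the derivation):

```python
from math import gcd

# Illustrative values; any relatively prime pair N1, N2 works.
N1, N2 = 3, 5
N = N1 * N2

a = b = 1
c = pow(N2, -1, N1)          # ((N2^-1)) mod N1
d = pow(N1, -1, N2)          # ((N1^-1)) mod N2
K1, K2 = a * N2, b * N1
K3, K4 = c * N2, d * N1

# Type-1 conditions (Equations 1 and 2)
assert gcd(K1, N1) == gcd(K2, N2) == 1
assert gcd(K3, N1) == gcd(K4, N2) == 1
# Uncoupling condition (Equation 3) and short-DFT condition (Equation 4)
assert (K1 * K4) % N == 0 and (K2 * K3) % N == 0
assert (K1 * K3) % N == N2 and (K2 * K4) % N == N1

# The input and output maps (Equations 6 and 7) are both bijections.
n_map = {(N2 * n1 + N1 * n2) % N for n1 in range(N1) for n2 in range(N2)}
k_map = {(K3 * k1 + K4 * k2) % N for k1 in range(N1) for k2 in range(N2)}
assert n_map == k_map == set(range(N))
print(K1, K2, K3, K4)  # 5 3 10 6
```

For this pair the map constants come out to $K_1 = 5$, $K_2 = 3$, $K_3 = 10$, $K_4 = 6$, which is the length-15 case used as the running example in this module.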

The frequency index map is a form of the Chinese remainder theorem. Using these index maps, the DFT in Multidimensional Index Mapping: Equation 15 becomes

$$X(k_1, k_2) = \sum_{n_2=0}^{N_2-1} \sum_{n_1=0}^{N_1-1} x(n_1, n_2)\, W_{N_1}^{n_1 k_1}\, W_{N_2}^{n_2 k_2}$$
(8)

which is a pure two-dimensional DFT with no twiddle factors, and the summations can be done in either order. Choices other than Equation 5 could be used. For example, $a = b = c = d = 1$ will cause the input and output index maps to be the same and, therefore, there will be no scrambling of the output order. The short summations in Equation 8, however, will no longer be short DFT's [1].
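The claim that Equation 8 computes the DFT exactly, with no twiddle factors between the stages, can be checked numerically. The sketch below is an illustration under the assumed factors $N_1 = 3$, $N_2 = 5$ with arbitrary test data; it evaluates the nested short sums over the index maps of Equations 6 and 7 and compares against a direct length-15 DFT:

```python
import cmath

# Assumed example factors; the data values are arbitrary test input.
N1, N2 = 3, 5
N = N1 * N2
K3 = N2 * pow(N2, -1, N1)    # output (CRT) map constants
K4 = N1 * pow(N1, -1, N2)

x = [complex(i, -i) for i in range(N)]

def W(M, e):
    # W_M^e = exp(-j 2 pi e / M)
    return cmath.exp(-2j * cmath.pi * e / M)

# Equation 8: nested short DFT sums with no twiddle factors between them.
X_pfa = [0j] * N
for k1 in range(N1):
    for k2 in range(N2):
        X_pfa[(K3 * k1 + K4 * k2) % N] = sum(
            x[(N2 * n1 + N1 * n2) % N] * W(N1, n1 * k1) * W(N2, n2 * k2)
            for n1 in range(N1) for n2 in range(N2))

# Direct length-N DFT for comparison.
X_direct = [sum(x[n] * W(N, n * k) for n in range(N)) for k in range(N)]
assert all(abs(p - q) < 1e-9 for p, q in zip(X_pfa, X_direct))
```

The cross terms of the exponent vanish modulo $N$ because of the uncoupling condition of Equation 3, which is why no twiddle factors appear between the row and column transforms.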

An important feature of the short Winograd DFT's described in Winograd’s Short DFT Algorithms that is useful for both the PFA and WFTA is the fact that the multiplier constants in Winograd’s Short DFT Algorithms: Equation 6 or Winograd’s Short DFT Algorithms: Equation 8 are either real or imaginary, never a general complex number. For that reason, multiplication by complex data requires only two real multiplications, not four. That is a very significant feature. It is also true that the $j$ multiplier can be commuted from the $D$ operator to the last part of the $A^T$ operator. This means the $D$ operator has only real multipliers and the calculations on real data remain real until the last stage. This can be seen by examining the short DFT modules in [3], [11] and in the appendices.

## The Prime Factor Algorithm

If the DFT is calculated directly using Equation 8, the algorithm is called a prime factor algorithm [6], [18] and was discussed in Winograd’s Short DFT Algorithms and Multidimensional Index Mapping: In-Place Calculation of the DFT and Scrambling. When the short DFT's are calculated by the very efficient algorithms of Winograd discussed in Factoring the Signal Processing Operators, the PFA becomes a very powerful method that is as fast or faster than the best Cooley-Tukey FFT's [1], [12].

A flow graph is not as helpful with the PFA as it was with the Cooley-Tukey FFT; however, the representation in Figure 1, which combines Multidimensional Index Mapping: Figure 1 and Winograd’s Short DFT Algorithms: Figure 2, gives a good picture of the algorithm with the example of Multidimensional Index Mapping: Equation 25.

If $N$ is factored into three factors, the DFT of Equation 8 would have three nested summations and would be a three-dimensional DFT. This principle extends to any number of factors; however, recall that the Type-1 map requires that all the factors be relatively prime. A very simple three-loop indexing scheme has been developed [1] which gives a compact, efficient PFA program for any number of factors. The basic program structure is illustrated in Code 1 with the short DFT's omitted for clarity. Complete programs are given in [3] and in the appendices.

```fortran
C---------------PFA INDEXING LOOPS--------------
      DO 10 K = 1, M
        N1 = NI(K)
        N2 = N/N1
        I(1) = 1
        DO 20 J = 1, N2
          DO 30 L = 2, N1
            I(L) = I(L-1) + N2
            IF (I(L).GT.N) I(L) = I(L) - N
   30     CONTINUE
          GOTO (20,102,103,104,105), N1
          I(1) = I(1) + N1
   20   CONTINUE
   10 CONTINUE
      RETURN
C----------------MODULE FOR N=2-----------------
  102 R1 = X(I(1))
      X(I(1)) = R1 + X(I(2))
      X(I(2)) = R1 - X(I(2))
      R1 = Y(I(1))
      Y(I(1)) = R1 + Y(I(2))
      Y(I(2)) = R1 - Y(I(2))
      GOTO 20
C----------------OTHER MODULES------------------
  103 Length-3 DFT
  104 Length-4 DFT
  105 Length-5 DFT
      etc.
```

As in the Cooley-Tukey program, the DO 10 loop steps through the M stages (factors of N) and the DO 20 loop calculates the N/N1 length-N1 DFT's. The input index map of Equation 6 is implemented in the DO 30 loop and the statement just before label 20. In the PFA, each stage or factor requires a separately programmed module or butterfly. This lengthens the PFA program but an efficient Cooley-Tukey program will also require three or more butterflies.

Because the PFA is calculated in-place using the input index map, the output is scrambled. There are five approaches to dealing with this scrambled output. First, there are some applications where the output does not have to be unscrambled, as in the case of high-speed convolution. Second, an unscrambler can be added after the PFA to give the output in correct order, just as the bit-reversed counter is used for the Cooley-Tukey FFT. A simple unscrambler is given in [3], [1] but it is not in place. The third method does the unscrambling in the modules while they are being calculated. This is probably the fastest method but the program must be written for a specific length [3], [1]. A fourth method is similar and achieves the unscrambling by choosing the multiplier constants in the modules properly [11]. The fifth method uses a separate indexing method for the input and output of each module [3], [17].
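As an illustration of the second approach, the scrambling produced by an in-place, input-map calculation is a fixed permutation, so a separate (not in-place) unscrambler only needs a table relating storage addresses to true frequency indices. The following is a hypothetical Python sketch for $N = 15$; the address convention below is an assumption for illustration, not the actual program of [3], [1]:

```python
# Hypothetical sketch for N = 15: with in-place computation, bin
# k = ((K3*k1 + K4*k2))_N is assumed stored at the input-map address
# ((N2*k1 + N1*k2))_N; the unscrambler permutes it back to natural order.
N1, N2 = 3, 5
N = N1 * N2
K3 = N2 * pow(N2, -1, N1)
K4 = N1 * pow(N1, -1, N2)

perm = [0] * N                # perm[address] = true frequency index
for k1 in range(N1):
    for k2 in range(N2):
        addr = (N2 * k1 + N1 * k2) % N
        perm[addr] = (K3 * k1 + K4 * k2) % N

scrambled = list(range(100, 100 + N))    # stand-in for a PFA output array
unscrambled = [0] * N
for addr, k in enumerate(perm):
    unscrambled[k] = scrambled[addr]     # separate output array: not in place
```

Because both maps are bijections on the same index set, `perm` is always a permutation of 0 through N-1, so the unscrambler is a single data pass.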

## The Winograd Fourier Transform Algorithm

The Winograd Fourier transform algorithm (WFTA) uses a very powerful property of the Type-1 index map and the DFT to give a further reduction of the number of multiplications in the PFA. Using an operator notation where $F_1$ represents taking row DFT's and $F_2$ represents column DFT's, the two-factor PFA of Equation 8 is represented by

$$X = F_2 F_1 x$$
(9)

It has been shown [21], [9] that if each operator represents identical operations on each row or column, they commute. Since $F_1$ and $F_2$ represent length-$N_1$ and length-$N_2$ DFT's, they commute and Equation 9 can also be written

$$X = F_1 F_2 x$$
(10)
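The commuting of $F_1$ and $F_2$ is easy to verify numerically. In the sketch below (illustrative, with assumed factors $N_1 = 3$, $N_2 = 5$ and arbitrary data), applying the length-3 DFT to the columns and the length-5 DFT to the rows of a $3 \times 5$ array gives the same result in either order:

```python
import cmath

N1, N2 = 3, 5

def dft(v):
    # Direct DFT of a short vector.
    M = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * n * k / M)
                for n in range(M)) for k in range(M)]

x = [[complex(r + 1, c) for c in range(N2)] for r in range(N1)]

def rows(a):
    # F2: length-N2 DFT of each row.
    return [dft(r) for r in a]

def cols(a):
    # F1: length-N1 DFT of each column (transpose, DFT, transpose back).
    t = [dft(col) for col in zip(*a)]
    return [list(r) for r in zip(*t)]

A = rows(cols(x))   # X = F2 F1 x  (Equation 9)
B = cols(rows(x))   # X = F1 F2 x  (Equation 10)
assert all(abs(A[i][j] - B[i][j]) < 1e-9
           for i in range(N1) for j in range(N2))
```

The two orders agree because each row operation touches only row entries and each column operation only column entries, which is exactly the "identical operations on each row or column" condition cited above.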

If each short DFT in $F$ is expressed by three operators as in Winograd’s Short DFT Algorithms: Equation 8 and Winograd’s Short DFT Algorithms: Figure 2, $F$ can be factored as

$$F = A^T D A$$
(11)

where $A$ represents the set of additions done on each row or column that performs the residue reduction as in Winograd’s Short DFT Algorithms: Equation 30. Because of the appearance of the flow graph of $A$ and because it is the first operator on $x$, it is called a preweave operator [15]. $D$ is the set of $M$ multiplications, and $A^T$ (or $B^T$ or $C^T$) from Winograd’s Short DFT Algorithms: Equation 5 or Winograd’s Short DFT Algorithms: Equation 6 is the reconstruction operator, called the postweave. Applying Equation 11 to Equation 9 gives

$$X = A_2^T D_2 A_2 A_1^T D_1 A_1 x$$
(12)

This is the PFA of Equation 8 and Figure 1 where $A_1^T D_1 A_1$ represents the row DFT's on the array formed from $x$. Because these operators commute, Equation 12 can also be written as

$$X = A_2^T A_1^T D_2 D_1 A_2 A_1 x$$
(13)

or

$$X = A_1^T A_2^T D_2 D_1 A_2 A_1 x$$
(14)

but the two adjacent multiplication operators can be premultiplied and the result represented by one operator $D = D_2 D_1$, which is no longer the same for each row or column. Equation 14 becomes

$$X = A_1^T A_2^T D A_2 A_1 x$$
(15)

This is the basic idea of the Winograd Fourier transform algorithm. The commuting of the multiplication operators together in the center of the algorithm is called nesting and it results in a significant decrease in the number of multiplications that must be done at the execution of the algorithm. Pictorially, the PFA of Figure 1 becomes [12] the WFTA in Figure 2.
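The algebra behind nesting is the mixed-product property of Kronecker products: the two-factor PFA operator $(A_2^T D_2 A_2) \otimes (A_1^T D_1 A_1)$ equals $(A_2^T \otimes A_1^T)(D_2 \otimes D_1)(A_2 \otimes A_1)$, which is Equation 15 with a single nested multiplier operator. The sketch below checks this identity with small arbitrary integer matrices standing in for the short-DFT factors (the matrix values are made up for illustration; the rectangular $A$ factors mimic the data expansion discussed next):

```python
def matmul(A, B):
    # Plain-Python matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    # Kronecker product of two matrices.
    return [[A[i][j] * B[p][q]
             for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

def T(A):
    # Transpose.
    return [list(r) for r in zip(*A)]

A1 = [[1, 1], [1, -1], [0, 1]]           # rectangular "preweave" (3x2)
D1 = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]   # diagonal multipliers (3x3)
A2 = [[1, 0], [1, 1], [1, -1]]
D2 = [[7, 0, 0], [0, 1, 0], [0, 0, 2]]

F1 = matmul(T(A1), matmul(D1, A1))       # A1^T D1 A1  (2x2)
F2 = matmul(T(A2), matmul(D2, A2))       # A2^T D2 A2  (2x2)

lhs = kron(F2, F1)                       # un-nested PFA form (Eq. 12)
rhs = matmul(kron(T(A2), T(A1)),
             matmul(kron(D2, D1), kron(A2, A1)))  # nested form (Eq. 15)
assert lhs == rhs
```

All the multiplications end up in the single central factor `kron(D2, D1)`, which is exactly the nesting that reduces the multiplication count at execution time.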

The rectangular structure of the preweave addition operators causes an expansion of the data in the center of the algorithm. The 15 data points in Figure 2 become 18 intermediate values. This expansion is a major problem in programming the WFTA because it prevents a straightforward in-place calculation and causes an increase in the number of required additions and in the number of multiplier constants that must be precalculated and stored.

From Figure 2 and the idea of premultiplying the individual multiplication operators, it can be seen why the multiplications by unity had to be considered in Winograd’s Short DFT Algorithms: Table 1. Even if a multiplier in $D_1$ is unity, it may not be in $D_2 D_1$. In Figure 2 with factors of three and five, there appear to be 18 multiplications required because of the expansion of the length-5 preweave operator, $A_2$; however, one of the multipliers in each of the length-3 and length-5 operators is unity, so one of the 18 multipliers in the product is unity. This gives 17 required multiplications, a rather impressive reduction from the $15^2 = 225$ multiplications required by direct calculation. These 17 complex multiplications require only 34 real multiplications because, as mentioned earlier, the multiplier constants are purely real or imaginary, while the 225 complex multiplications are general and therefore require four times as many real multiplications.

The number of additions depends on the order of the pre- and postweave operators. For example, in the length-15 WFTA in Figure 2, if the length-5 operators had been done first and last, there would have been six row addition preweaves in the preweave operator rather than the five shown. It is difficult to illustrate the algorithm for three or more factors of $N$, but the ideas apply to any number of factors. Each length has an optimal ordering of the pre- and postweave operators that will minimize the number of additions.

A program for the WFTA is not as simple as for the FFT or PFA because of the very characteristic that reduces the number of multiplications, the nesting. A simple two-factor example program is given in [3] and a general program can be found in [15], [5]. The same lengths are possible with the PFA and WFTA and the same short DFT modules can be used, however, the multiplies in the modules must occur in one place for use in the WFTA.

## Modifications of the PFA and WFTA Type Algorithms

In the previous section it was seen how using the permutation property of the elementary operators in the PFA allowed the nesting of the multiplications to reduce their number. It was also seen that a proper ordering of the operators could minimize the number of additions. These ideas have been extended into a more general algorithm-optimization problem. If the DFT operator $F$ in Equation 11 is expressed in a still more factored form obtained from Winograd’s Short DFT Algorithms: Equation 30, a greater variety of orderings can be optimized. For example, if the $A$ operators have two factors

$$F_1 = A_1^T A_1'^T D_1 A_1' A_1$$
(16)

The DFT in Equation 10 becomes

$$X = A_2^T A_2'^T D_2 A_2' A_2 A_1^T A_1'^T D_1 A_1' A_1 x$$
(17)

The operator notation is very helpful in understanding the central ideas, but may hide some important facts. It has been shown [21], [11] that operators in different $F_i$ commute with each other, but the order of the operators within an $F_i$ cannot be changed. They represent the matrix multiplications in Winograd’s Short DFT Algorithms: Equation 30 or Winograd’s Short DFT Algorithms: Equation 8, which do not commute.

This formulation allows a very large set of possible orderings; in fact, the number is so large that some automatic technique must be used to find the "best". It is possible to set up a criterion of optimality that includes not only the number of multiplications but the number of additions as well. The effects of relative multiply-add times, data transfer times, CPU register and memory sizes, and other hardware characteristics can be included in the criterion. Dynamic programming can then be applied to derive an optimal algorithm for a particular application [9]. This is a very interesting idea, as there is no longer a single algorithm, but a class and an optimizing procedure. The challenge is to generate a class broad enough that its solution is close to a global optimum and to have a practical scheme for finding that solution.

Results obtained applying the dynamic programming method to the design of fairly long DFT algorithms gave algorithms that had fewer multiplications and additions than either a pure PFA or WFTA [9]. It seems that some nesting is desirable but not total nesting for four or more factors. There are also some interesting possibilities in mixing the Cooley-Tukey with this formulation. Unfortunately, the twiddle factors are not the same for all rows and columns, therefore, operations cannot commute past a twiddle factor operator. There are ways of breaking the total algorithm into horizontal paths and using different orderings along the different paths [16], [11]. In a sense, this is what the split-radix FFT does with its twiddle factors when compared to a conventional Cooley-Tukey FFT.

There are other modifications of the basic structure of the Type-1 index map DFT algorithm. One is to use the same index structure and conversion of the short DFT's to convolution as the PFA, but to use some other method for the high-speed convolution. Table look-up of partial products based on distributed arithmetic to eliminate all multiplications [4] looks promising for certain very specific applications, perhaps for specialized VLSI implementation. Another possibility is to calculate the short convolutions using number-theoretic transforms [2], [15], [16]. This would also require special hardware. Direct calculation of short convolutions is faster on certain pipelined processors such as the TMS-320 microprocessor [13].

## Evaluation of the PFA and WFTA

As for the Cooley-Tukey FFT's, the first evaluation of these algorithms will be on the number of multiplications and additions required. The number of multiplications to compute the PFA in Equation 8 is given by Multidimensional Index Mapping: Equation 3. Using the notation that $T(N)$ is the number of multiplications or additions necessary to calculate a length-N DFT, the total number for a four-factor PFA of length-N, where $N = N_1 N_2 N_3 N_4$, is

$$T(N) = N_1 N_2 N_3\, T(N_4) + N_2 N_3 N_4\, T(N_1) + N_3 N_4 N_1\, T(N_2) + N_4 N_1 N_2\, T(N_3)$$
(18)

The counts of multiplies and adds in Table 1 are calculated from Equation 18 with the counts of the factors taken from Winograd’s Short DFT Algorithms: Table 1. The lengths listed are those possible with modules in the program of length 2, 3, 4, 5, 7, 8, 9 and 16, as is true for the PFA in [3], [1] and the WFTA in [15], [5]. A maximum of four relatively prime lengths can be used from this group, giving 59 different lengths over the range from 2 to 5040. The radix-2 or split-radix FFT allows 12 different lengths over the same range. If modules of length 11 and 13 from [8] are added, the maximum length becomes 720720 and the number of different lengths becomes 239. Adding modules for 17, 19 and 25 from [8] gives a maximum length of 1163962800 and a very large and dense set of possible lengths. The code for the longer modules becomes excessively long, however, and they should not be included unless needed.
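As a sketch of how the PFA entries of Table 1 arise, the two- and three-factor versions of Equation 18 can be evaluated with real-arithmetic module counts. The module counts below ($T(2) = 0$, $T(3) = 4$, $T(5) = 10$ multiplications; $TA(2) = 4$, $TA(3) = 12$, $TA(5) = 34$ additions) are assumed values, chosen to be consistent with the table rows for $N = 15$ and $N = 30$:

```python
def pfa_count(factors, T):
    """Equation 18 generalized: T(N) = sum over factors of (N/Ni)*T(Ni),
    valid when the factors are relatively prime."""
    N = 1
    for f in factors:
        N *= f
    return sum((N // f) * T[f] for f in factors)

# Assumed real-arithmetic counts for the short Winograd modules.
mults = {2: 0, 3: 4, 5: 10}
adds  = {2: 4, 3: 12, 5: 34}

print(pfa_count([3, 5], mults), pfa_count([3, 5], adds))        # 50 162
print(pfa_count([2, 3, 5], mults), pfa_count([2, 3, 5], adds))  # 100 384
```

Both output lines match the PFA columns of Table 1 for lengths 15 and 30.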

The number of multiplications necessary for the WFTA is simply the product of those necessary for the required modules, including multiplications by unity. The total number may contain some unity multipliers but it is difficult to remove them in a practical program. Table 1 contains both the total number (MULTS) and the number with the unity multiplies removed (RMULTS).

Calculating the number of additions for the WFTA is more complicated than for the PFA because of the expansion of the data moving through the algorithm. For example, the number of additions, $TA$, for the length-15 example in Figure 2 is given by

$$TA(N) = N_2\, TA(N_1) + TM_1\, TA(N_2)$$
(19)

where $N_1 = 3$, $N_2 = 5$, and $TM_1$ is the number of multiplies for the length-3 module and hence the expansion factor. As mentioned earlier, there is an optimum ordering to minimize additions. The ordering used to calculate Table 1 is the ordering used in [15], [5], which is optimal in most cases and close to optimal in the others.
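The length-15 WFTA counts in Table 1 can be reproduced from Equation 19 together with the nesting product. The module counts used below (three multipliers and 12 real additions for the length-3 module, six multipliers and 34 real additions for the length-5 module, one unity multiplier in each) are assumptions consistent with the discussion above:

```python
N1, N2 = 3, 5
TM1, TM2 = 3, 6      # multipliers per module; TM1 is the expansion factor
TA1, TA2 = 12, 34    # real additions per module

adds = N2 * TA1 + TM1 * TA2          # Equation 19: 5*12 + 3*34
cmults = TM1 * TM2                   # nested multipliers, unity included
mults = 2 * cmults                   # two real mults per constant
rmults = 2 * (cmults - 1)            # the single unity product removed
print(adds, mults, rmults)           # 162 36 34
```

These agree with the WFTA columns of Table 1 for length 15. Reversing the ordering in Equation 19 gives $N_1\,TA(N_2) + TM_2\,TA(N_1) = 3 \cdot 34 + 6 \cdot 12 = 174$, illustrating (under these assumed counts) why the ordering shown in Figure 2 is the one that minimizes additions.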

Table 1: Number of multiplications and additions for the PFA and WFTA.

| Length N | PFA Mults | PFA Adds | WFTA Mults | WFTA RMults | WFTA Adds |
|---|---|---|---|---|---|
| 10 | 20 | 88 | 24 | 20 | 88 |
| 12 | 16 | 96 | 24 | 16 | 96 |
| 14 | 32 | 172 | 36 | 32 | 172 |
| 15 | 50 | 162 | 36 | 34 | 162 |
| 18 | 40 | 204 | 44 | 40 | 208 |
| 20 | 40 | 216 | 48 | 40 | 216 |
| 21 | 76 | 300 | 54 | 52 | 300 |
| 24 | 44 | 252 | 48 | 36 | 252 |
| 28 | 64 | 400 | 72 | 64 | 400 |
| 30 | 100 | 384 | 72 | 68 | 384 |
| 35 | 150 | 598 | 108 | 106 | 666 |
| 36 | 80 | 480 | 88 | 80 | 488 |
| 40 | 100 | 532 | 96 | 84 | 532 |
| 42 | 152 | 684 | 108 | 104 | 684 |
| 45 | 190 | 726 | 132 | 130 | 804 |
| 48 | 124 | 636 | 108 | 92 | 660 |
| 56 | 156 | 940 | 144 | 132 | 940 |
| 60 | 200 | 888 | 144 | 136 | 888 |
| 63 | 284 | 1236 | 198 | 196 | 1394 |
| 70 | 300 | 1336 | 216 | 212 | 1472 |
| 72 | 196 | 1140 | 176 | 164 | 1156 |
| 80 | 260 | 1284 | 216 | 200 | 1352 |
| 84 | 304 | 1536 | 216 | 208 | 1536 |
| 90 | 380 | 1632 | 264 | 260 | 1788 |
| 105 | 590 | 2214 | 324 | 322 | 2418 |
| 112 | 396 | 2188 | 324 | 308 | 2332 |
| 120 | 460 | 2076 | 288 | 276 | 2076 |
| 126 | 568 | 2724 | 396 | 392 | 3040 |
| 140 | 600 | 2952 | 432 | 424 | 3224 |
| 144 | 500 | 2676 | 396 | 380 | 2880 |
| 168 | 692 | 3492 | 432 | 420 | 3492 |
| 180 | 760 | 3624 | 528 | 520 | 3936 |
| 210 | 1180 | 4848 | 648 | 644 | 5256 |
| 240 | 1100 | 4812 | 648 | 632 | 5136 |
| 252 | 1136 | 5952 | 792 | 784 | 6584 |
| 280 | 1340 | 6604 | 864 | 852 | 7148 |
| 315 | 2050 | 8322 | 1188 | 1186 | 10336 |
| 336 | 1636 | 7908 | 972 | 956 | 8508 |
| 360 | 1700 | 8148 | 1056 | 1044 | 8772 |
| 420 | 2360 | 10536 | 1296 | 1288 | 11352 |
| 504 | 2524 | 13164 | 1584 | 1572 | 14428 |
| 560 | 3100 | 14748 | 1944 | 1928 | 17168 |
| 630 | 4100 | 17904 | 2376 | 2372 | 21932 |
| 720 | 3940 | 18276 | 2376 | 2360 | 21132 |
| 840 | 5140 | 23172 | 2592 | 2580 | 24804 |
| 1008 | 5804 | 29100 | 3564 | 3548 | 34416 |
| 1260 | 8200 | 38328 | 4752 | 4744 | 46384 |
| 1680 | 11540 | 50964 | 5832 | 5816 | 59064 |
| 2520 | 17660 | 82956 | 9504 | 9492 | 99068 |
| 5040 | 39100 | 179772 | 21384 | 21368 | 232668 |

From Table 1 we see that, compared to the PFA or any of the Cooley-Tukey FFT's, the WFTA has significantly fewer multiplications. For the shorter lengths, the WFTA and the PFA have approximately the same number of additions; however, for longer lengths the PFA has fewer, and the Cooley-Tukey FFT's always have the fewest. If the total arithmetic, the number of multiplications plus the number of additions, is compared, the split-radix FFT, PFA and WFTA all have about the same count. Special versions of the PFA and WFTA have been developed for real data [7], [19].

The Cooley-Tukey program is the smallest, the PFA next, and the WFTA the largest. The PFA requires the smallest number of stored constants, the Cooley-Tukey or split-radix FFT next, and the WFTA the largest number. For a DFT of length approximately 1000, the PFA stores 28 constants, the FFT 2048, and the WFTA 3564. Both the FFT and PFA can be calculated in-place; the WFTA cannot. The PFA can be calculated in-order without an unscrambler. The radix-2 FFT can also, but it requires additional indexing overhead [10]. The indexing and data-transfer overhead is greatest for the WFTA because the separate preweave and postweave sections each require their own indexing and pass through the complete data. The shorter modules in the PFA and WFTA and the butterflies in the radix-2 and radix-4 FFT's are more efficient than the longer ones because intermediate calculations can be kept in CPU registers rather than in general memory [14]. However, the shorter modules and radices require more passes through the data for a given approximate length. A proper comparison will require actual programs to be compiled and run on a particular machine. There are many open questions about the relationship of algorithms and hardware architecture.

## References

1. Burrus, C. S. and Eschenbacher, P. W. (1981, August). An In–Place, In–Order Prime Factor FFT Algorithm. [Reprinted in DSP Software, by L.R. Morris, 1983]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 29(4), 806–817.
2. Blahut, Richard E. (1985). Fast Algorithms for Digital Signal Processing. Reading, Mass.: Addison-Wesley.
3. Burrus, C. S. and Parks, T. W. (1985). DFT/FFT and Convolution Algorithms. New York: John Wiley & Sons.
4. Chu, Shuni and Burrus, C. S. (1982, April). A Prime Factor FFT Algorithm using Distributed Arithmetic. IEEE Transactions on Acoustics, Speech, and Signal Processing, 30(2), 217–227.
5. DSP Committee, (Ed.). (1979). Digital Signal Processing II, selected reprints. New York: IEEE Press.
6. Good, I. J. (1958). Interaction Algorithm and Practical Fourier Analysis. [Addendum: vol. 22, 1960, pp 372–375]. J. Royal Statist. Soc., B, 20, 361–372.
7. Heideman, M. T. and Johnson, H. W. and Burrus, C. S. (1984, March). Prime Factor FFT Algorithms for Real–Valued Series. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. 28A.7.1–4). San Diego, CA
8. Johnson, H. W. and Burrus, C. S. (1981). Large DFT Modules: N = 11, 13, 17, 19, and 25. (8105). Technical report. Department of Electrical Engineering, Rice University, Houston, TX 77251–1892.
9. Johnson, H. W. and Burrus, C. S. (1982, May). The Design of Optimal DFT Algorithms using Dynamic Programming. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. 20–23). Paris
10. Johnson, H. W. and Burrus, C. S. (1984, March). An In-Order, In-Place Radix-2 FFT. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. 28A.2.1–4). San Diego, CA
11. Johnson, Howard W. and Burrus, C. S. (1985, February). On the Structure of Efficient DFT Algorithms. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(1), 248–254.
12. Kolba, D. P. and Parks, T. W. (1977, August). A Prime Factor FFT Algorithm Using High Speed Convolution. IEEE Trans. on ASSP, 25, 281–294.
13. Li, Z. and Sorensen, H. V. and Burrus, C. S. (1986, April). FFT and Convolution Algorithms for DSP Microprocessors. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. (p. 284–292). Tokyo, Japan
14. Morris, L. R. (1982, 1983). Digital Signal Processing Software. Toronto, Canada: DSPSW, Inc.
15. McClellan, J. H. and Rader, C. M. (1979). Number Theory in Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
16. Nussbaumer, H. J. (1981, 1982). Fast Fourier Transform and Convolution Algorithms. (Second). Heidelberg, Germany: Springer-Verlag.
17. Rothweiler, J. H. (1982, February). Implementation of the In-Order Prime Factor FFT Algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 30, 105–107.
18. Rabiner, L. R. and Rader, C. M. (Eds.). (1972). Digital Signal Processing, selected reprints. New York: IEEE Press.
19. Sorensen, H. V. and Jones, D. L. and Heideman, M. T. and Burrus, C. S. (1987, June). Real Valued Fast Fourier Transform Algorithms. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(6), 849–863.
20. Winograd, S. (1976, April). On Computing the Discrete Fourier Transform. Proc. National Academy of Sciences, USA, 73, 1005–1006.
21. Winograd, S. (1978, January). On Computing the Discrete Fourier Transform. Mathematics of Computation, 32, 175–199.
