This section comes from a note describing results on efficient algorithms to calculate the discrete Fourier transform (DFT) that were collected over the years. Perhaps the most interesting is the discovery that the Cooley-Tukey FFT was described by Gauss in 1805 [46]. That gives some indication of the age of research on the topic, and the fact that a 1995 compiled bibliography [88] on efficient algorithms contains over 3400 entries indicates its volume. Three IEEE Press reprint books contain papers on the FFT [74], [24], [25]. An excellent general-purpose FFT program is described in [34], [35]; it is used in Matlab and is available over the internet.

In addition to this book, there are several others [60], [65], [10], [44], [102], [64], [11], [9], [97] that give a good modern theoretical background for the FFT, one book [12] that gives the basic theory plus both FORTRAN and TMS 320 assembly language programs, and other books [57], [84], [15] that contain chapters on advanced FFT topics. A good up-to-date, on-line reference with both theory and programming techniques is given in [3]. The history of the FFT is outlined in [21], [46], and excellent survey articles can be found in [30], [20]. The foundation of much of the modern work on efficient algorithms was done by S. Winograd. These results can be found in [115], [116], [117]. An outline and discussion of his theorems can be found in [57] as well as [60], [65], [10], [44].

Efficient FFT algorithms for length-2^M DFTs are the well-known Cooley-Tukey family discussed above.

The “other” FFT is the prime factor algorithm (PFA), which uses an index map originally developed by Thomas and by Good. The theory of the PFA was derived in [55] and further developed, with an efficient in-order, in-place program, in [5], [12]. More results on the PFA are given in [105], [106], [107], [108], [98]. A method has been developed that uses dynamic programming to design optimal FFT programs minimizing the number of additions and data transfers as well as multiplications [49]. This approach designs custom algorithms for a particular computer architecture. An efficient and practical development of Winograd's ideas has given a design method for short prime-length FFT's that does not require the rather difficult Chinese remainder theorem [57], [51]. These ideas have been used to design modules of length 11, 13, 17, 19, and 25 [48]. Other methods for designing short DFT's can be found in [104], [56]. A use of these ideas with distributed arithmetic and table look-up rather than multiplication is given in [18]. A program that implements the nested Winograd Fourier transform algorithm (WFTA) is given in [60], but it has not proven as fast or as versatile as the PFA [5]. An interesting use of the PFA was announced [19] in searching for large prime numbers.
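
The Good-Thomas index map behind the PFA can be sketched in a few lines. The following is an illustrative sketch only (the function names are mine, and direct O(N^2) DFTs stand in for optimized short-length modules); it is not the in-order, in-place program cited above. The key property it demonstrates is that, unlike Cooley-Tukey, the two stages need no twiddle factors between them:

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT, standing in for an optimized short-length module."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def pfa_dft(x, N1, N2):
    """Length N = N1*N2 DFT via the Good-Thomas prime factor index map.
    Requires gcd(N1, N2) == 1.  No twiddle factors appear between stages."""
    N = N1 * N2
    mu = pow(N2, -1, N1)   # N2^{-1} mod N1 (Python 3.8+ modular inverse)
    nu = pow(N1, -1, N2)   # N1^{-1} mod N2
    # Input map: a[n1][n2] = x[(N2*n1 + N1*n2) mod N]
    a = [[x[(N2 * n1 + N1 * n2) % N] for n2 in range(N2)] for n1 in range(N1)]
    # Length-N2 DFTs along rows, then length-N1 DFTs along columns
    a = [dft(row) for row in a]
    cols = [dft([a[n1][k2] for n1 in range(N1)]) for k2 in range(N2)]
    # Output map (Chinese remainder theorem): k = (N2*mu*k1 + N1*nu*k2) mod N
    X = [0j] * N
    for k1 in range(N1):
        for k2 in range(N2):
            X[(N2 * mu * k1 + N1 * nu * k2) % N] = cols[k2][k1]
    return X
```

For N = 15 = 3 x 5, `pfa_dft(x, 3, 5)` agrees with the direct DFT; the index maps have simply turned the one-dimensional DFT into a true two-dimensional 3 x 5 transform.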

These efficient algorithms can be used not only on DFT's but also on other transforms with a similar structure. They have been applied to the discrete Hartley transform [93], [13] and the discrete cosine transform [112], [118], [76].
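
One standard way DFT algorithms carry over to the cosine transform is through a symmetric extension: the (unnormalized) DCT-II of a length-N sequence falls out of a length-2N DFT of the sequence followed by its reversal, up to a phase rotation. A minimal sketch, with a direct DFT standing in for a fast one and my own function name:

```python
import cmath

def dct2_via_dft(x):
    """Unnormalized DCT-II computed from a length-2N DFT of the
    even-symmetric extension [x, reversed(x)]."""
    N = len(x)
    y = list(x) + list(reversed(x))          # even-symmetric extension
    Y = [sum(y[n] * cmath.exp(-2j * cmath.pi * n * k / (2 * N))
             for n in range(2 * N))
         for k in range(N)]                  # only the first N bins are needed
    # The phase rotation makes each bin real:
    #   C[k] = 0.5 * Re( e^{-i*pi*k/(2N)} * Y[k] )
    return [0.5 * (cmath.exp(-1j * cmath.pi * k / (2 * N)) * Y[k]).real
            for k in range(N)]
```

Replacing the inner sum with any fast length-2N DFT gives a fast DCT at the cost of one FFT of double length.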

The fast Hartley transform has been proposed as a superior method for real-data analysis, but that has been shown not to be the case: a well-designed real-data FFT [94] is always as good as or better than a well-designed Hartley transform [93], [29], [68], [109], [80]. The Bruun algorithm [14], [101] also looks promising for real-data applications, as does the Rader-Brenner algorithm [70], [23], [109]. A novel approach to calculating the inverse DFT is given in [27].

General length algorithms include [91], [40], [32]. For lengths that are not highly composite or prime, the chirp z-transform is a good candidate [12], [75] for longer lengths, and an efficient order-N^2 algorithm for shorter ones.
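
The chirp z-transform reformulation (often credited to Bluestein) rests on the identity nk = (n^2 + k^2 - (k-n)^2)/2, which turns a DFT of any length into a convolution with a chirp; that convolution can then be computed with a convenient power-of-two FFT. The sketch below shows only the identity, evaluating the convolution directly rather than by FFT, and the function name is mine:

```python
import cmath

def chirp_dft(x):
    """DFT of arbitrary length via the chirp (Bluestein) reformulation.
    Using nk = (n^2 + k^2 - (k-n)^2)/2, the DFT becomes a convolution with
    the chirp b[m] = exp(+i*pi*m^2/N).  In practice the convolution is done
    with a zero-padded power-of-two FFT; here it is done directly."""
    N = len(x)
    # Pre-multiply by the conjugate chirp: a[n] = x[n] * exp(-i*pi*n^2/N)
    a = [x[n] * cmath.exp(-1j * cmath.pi * n * n / N) for n in range(N)]
    X = []
    for k in range(N):
        # Convolve with the chirp, then post-multiply by exp(-i*pi*k^2/N)
        conv = sum(a[n] * cmath.exp(1j * cmath.pi * (k - n) ** 2 / N)
                   for n in range(N))
        X.append(cmath.exp(-1j * cmath.pi * k * k / N) * conv)
    return X
```

Because the length of the chirp convolution can be padded freely, this gives an O(N log N) DFT for any N, including awkward lengths that are neither highly composite nor prime.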

The use of the FFT to calculate discrete convolution was one of its earliest applications. Although the more direct rectangular transform [2] would seem to be more efficient, use of the FFT or PFA is still probably the fastest method on a general-purpose computer or DSP chip [66], [94], [29], [61]. On special-purpose hardware or special architectures, the use of distributed arithmetic [18] or number-theoretic transforms [1] may be even faster. Special algorithms for use with the short-time Fourier transform [81], for the calculation of a few DFT values [89], [72], [83], and for recursive implementation [113], [35] have also been developed. An excellent analysis of efficient FFT programming on DSP microprocessors is given in [63], [62]. Formulations of the DFT in terms of tensor or Kronecker products look promising for developing algorithms for parallel and vector computer architectures [95], [102], [53], [110], [103], [39], [38].
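
The convolution application rests on the cyclic convolution property: transform both sequences, multiply bin by bin, and transform back. A minimal sketch, with a direct O(N^2) DFT standing in for the FFT or PFA that would be used in practice:

```python
import cmath

def dft(x, inverse=False):
    """Direct DFT/IDFT; a fast algorithm would replace this in practice."""
    N = len(x)
    s = 1j if inverse else -1j
    X = [sum(x[n] * cmath.exp(s * 2 * cmath.pi * n * k / N) for n in range(N))
         for k in range(N)]
    return [v / N for v in X] if inverse else X

def fast_cyclic_convolution(x, h):
    """Cyclic convolution of equal-length sequences by pointwise
    multiplication in the transform domain."""
    X, H = dft(x), dft(h)
    return dft([a * b for a, b in zip(X, H)], inverse=True)
```

For linear (non-cyclic) convolution, both inputs are first zero-padded to at least the sum of their lengths minus one, so that the cyclic wrap-around does not alias the result.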

Various approaches to calculating approximate DFTs have been based on CORDIC methods, short word lengths, or some form of pruning. A new method that uses the characteristics of the signals being transformed combines the discrete wavelet transform (DWT) with the DFT to give an approximate FFT at reduced arithmetic cost for certain signal classes.

The study of efficient algorithms not only has a long history and large bibliography, it is still an exciting research field where new results are used in practical applications.

More information can be found on the Rice DSP Group's web page.
