Summary: Compares the efficiency of frequency domain and time domain filtering.

To determine for what signal and filter durations a time- or
frequency-domain implementation would be the most efficient, we
need only count the computations required by each. For the
time-domain, difference-equation approach, each output value of an
order-q FIR filter requires q + 1 multiplications and q additions:
2q + 1 computations per output, regardless of the signal's duration.
The frequency-domain approach computes the output with FFTs, and its
per-output cost grows only logarithmically with the transform length.
Consequently, the frequency-domain approach is the more efficient for
all but the shortest filters, *so long as the FFT's power-of-two
constraint is advantageous*.

The frequency-domain approach is not yet viable;
what will we do when the input signal is infinitely long? The
difference equation scenario fits perfectly with the envisioned
digital filtering structure, but so far we have required
the input to have limited duration (so that we could calculate
its Fourier transform). The solution to this problem is quite
simple: Section the input into frames, filter each, and add the
results together. To section a signal means expressing it as a
linear combination of length-Ns non-overlapping segments: the m-th
segment equals the signal over samples mNs, ..., (m+1)Ns - 1 and is
zero elsewhere. Because the filter is linear, filtering this sum of
segments is equivalent to summing the filtered segments.
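This section-and-add procedure (commonly called overlap-add) can be sketched as follows; the function name, the default section length, and the use of NumPy's FFT routines are illustrative choices, not the text's.

```python
import numpy as np

def overlap_add_filter(x, h, Ns=64):
    """Filter x with FIR unit-sample response h by sectioning x into
    length-Ns frames, filtering each with FFTs, and adding the results."""
    q = len(h) - 1
    K = Ns + q                         # FFT length: each section's output lasts Ns + q
    H = np.fft.fft(h, K)               # filter's frequency response, computed once
    y = np.zeros(len(x) + q)
    for start in range(0, len(x), Ns):
        frame = x[start:start + Ns]
        L = len(frame)                 # the final frame may be shorter than Ns
        seg = np.real(np.fft.ifft(np.fft.fft(frame, K) * H))[:L + q]
        y[start:start + L + q] += seg  # overlapping tails add together
    return y
```

Each section's output overhangs its neighbor by q samples; the additions inside the loop implement the "add the results together" step.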

Computational considerations reveal a substantial advantage for
a frequency-domain implementation over a time-domain one. The number
of computations for a time-domain implementation essentially remains
constant whether we section the input or not: 2q + 1 computations
for each output value. For the frequency-domain implementation, each
length-Ns section is filtered with FFTs of length Ns + q, so the
number of computations for each output is proportional to
log2(Ns + q) rather than to the filter's duration.
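A rough comparison of the two counts can be scripted; the 5K log2 K cost assumed for a length-K FFT and the 6K-operation spectral multiply are common bookkeeping conventions, not figures taken from the text.

```python
import math

def time_domain_ops_per_output(q):
    # q + 1 multiplications plus q additions for each difference-equation output
    return 2 * q + 1

def freq_domain_ops_per_output(q, Ns):
    # Per section: one forward FFT, one inverse FFT (the filter's own FFT is
    # computed once and amortized away), and a length-K spectral multiplication.
    K = Ns + q
    per_section = 2 * 5 * K * math.log2(K) + 6 * K
    return per_section / Ns            # each section yields Ns new output values

# e.g., a length-101 filter (q = 100) with sections chosen so the FFT length
# is 1024 makes the frequency-domain count well below the time-domain count
```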

Show that as the section length increases, the frequency domain approach becomes increasingly more efficient.

Let Ns denote the section length, so that the FFT length is Ns + q.
The number of computations for each output is *again* proportional to
((Ns + q)/Ns) log2(Ns + q). The overlap factor (Ns + q)/Ns = 1 + q/Ns
*decreases* toward 1
as Ns grows, spreading the fixed length-q overhead of each section
over more output values, while the logarithmic term grows only
slowly. The time-domain cost, by contrast, is fixed at 2q + 1 per
output, so lengthening the sections makes the frequency-domain
approach increasingly more efficient.

Note that the choice of section duration is arbitrary. Once the
filter is chosen, we should section so that the required FFT length is
precisely a power of two: Choose Ns so that Ns + q = 2^J for some
integer J.
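This choice can be computed directly; the function name and the minimum-section-length argument below are illustrative.

```python
import math

def section_length(q, min_Ns):
    """Smallest section length Ns >= min_Ns such that the FFT length
    Ns + q is an exact power of two."""
    J = math.ceil(math.log2(min_Ns + q))
    return 2 ** J - q

# e.g., a filter with q = 17 and sections of at least 100 samples
# gives Ns = 111, so the FFT length is Ns + q = 128 = 2**7
```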

Implementing the digital filter shown in the A/D block
diagram with a frequency-domain implementation requires
some additional signal management not required by time-domain
implementations. Conceptually, a real-time, time-domain filter
could accept each sample as it becomes available, calculate the
difference equation, and produce the output value, all in less
than the sampling interval. A frequency-domain implementation,
however, operates on entire sections at once, so while one section
is being filtered, the incoming samples must be accumulated into the
*next* section to be filtered. In
programming, the operation of building up sections while
computing on previous ones is known as buffering.
Buffering can also be used in time-domain filters but
isn't required.
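A minimal buffering sketch, assuming a callback-style `process` function (a hypothetical name) that stands in for the section filter:

```python
def buffered_sections(samples, Ns, process):
    """Accumulate incoming samples into length-Ns sections, handing each
    completed section to `process` while the next one builds up."""
    buf = []
    for s in samples:
        buf.append(s)
        if len(buf) == Ns:
            process(buf)       # in a real-time system this runs while new
            buf = []           # samples continue to arrive
    if buf:
        process(buf)           # flush the final, possibly short section
```

A real-time system would run `process` concurrently with sample collection (double buffering); the sequential loop above only illustrates the bookkeeping.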

We want to lowpass filter a signal that contains a sinusoid and a significant amount of noise. Figure 1 shows a portion of the noisy signal's waveform. If it weren't for the overlaid sinusoid, discerning the sine wave in the signal would be virtually impossible. One of the primary applications of linear filters is noise removal: preserve the signal by matching the filter's passband to the signal's spectrum, and greatly reduce all other frequency components that may be present in the noisy signal.

A smart Rice engineer has selected an FIR filter having a unit-sample
response corresponding to a period-17 sinusoid.

We note that the noise has been dramatically reduced, with a sinusoid now clearly visible in the filtered output. Some residual noise remains because noise components within the filter's passband appear in the output along with the signal.

Note that when compared to the input signal's sinusoidal component, the output's sinusoidal component seems to be delayed. What is the source of this delay? Can it be removed?

The delay is *not* computational delay here--the
plot shows the first output value aligned with the filter's first
input--although in real systems this is an important
consideration. Rather, the delay is due to the filter's phase shift: A
phase-shifted sinusoid is equivalent to a time-delayed one:
cos(2*pi*f*n - phi) = cos(2*pi*f*(n - phi/(2*pi*f))).
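The equivalence of a phase shift and a time delay can be checked numerically; the frequency and phase values below are arbitrary illustrations.

```python
import numpy as np

f, phi = 0.05, np.pi / 4               # frequency (cycles/sample) and phase shift
d = phi / (2 * np.pi * f)              # the equivalent delay in samples
n = np.arange(200)
phase_shifted = np.cos(2 * np.pi * f * n - phi)
time_delayed = np.cos(2 * np.pi * f * (n - d))
# the two waveforms coincide: a phase shift of phi delays the sinusoid
# by phi / (2*pi*f) samples
```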
