Inside Collection: Analog-to-Digital Conversion

Summary: A brief introduction to filtering analog signals digitally

Because of the Sampling Theorem, we can process, and in particular filter, analog signals "with a computer" by constructing the system shown in Figure 1. To use this system, we assume that the input signal has a lowpass spectrum and can be bandlimited without losing important signal features. Bandpass signals can also be filtered digitally, but they require a more complicated system. Highpass signals cannot be filtered digitally at all: their spectra are not bandlimited, so they cannot be sampled without aliasing. Note that the input and output filters must be analog filters; omitting them can make the digitization grossly inaccurate.
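To see why the analog anti-aliasing filter matters, here is a minimal numpy sketch (the sampling rate and tone frequency are assumed purely for illustration) of what happens when a component above the Nyquist frequency is sampled without one:

```python
import numpy as np

# Assumed parameters: a 700 Hz tone sampled at 1000 Hz, with no
# anti-aliasing filter in front of the sampler.
fs = 1000            # sampling rate (Hz)
f_in = 700           # input frequency, above the 500 Hz Nyquist limit
n = np.arange(1000)  # one second of samples
x = np.cos(2 * np.pi * f_in * n / fs)

# Locate the strongest spectral component of the sampled signal.
spectrum = np.abs(np.fft.rfft(x))
f_apparent = np.argmax(spectrum) * fs / len(n)
print(f_apparent)  # 300.0 -- the 700 Hz tone masquerades as 300 Hz
```

The sampled data are indistinguishable from samples of a 300 Hz tone, which is exactly the inaccuracy the input analog filter prevents.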

Another implicit assumption is that the digital filter can operate
in *real time*: The computer and the filtering
algorithm must be sufficiently fast so that outputs are computed
faster than input values arrive. The sampling interval, which is
determined by the analog signal's bandwidth, thus determines how long
our program has to compute *each* output
value. A frequency-domain implementation thus
requires collecting a block of input samples before any output can be
produced: the Fourier transform of the block, the multiplication by the
filter's transfer function, and the inverse transform must all finish
within the time the block spans.
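To make the real-time budget concrete, consider this small sketch; the bandwidth figure is assumed purely for illustration:

```python
# Assumed numbers: for a signal bandlimited to W = 4 kHz, the Nyquist
# rate fixes the sampling interval, and hence the time budget our
# program has for computing each output sample.
W = 4000        # signal bandwidth in Hz (assumed)
fs = 2 * W      # minimum (Nyquist) sampling rate
Ts = 1 / fs     # sampling interval = per-output time budget
print(Ts)       # 0.000125 s: each output must be ready within 125 microseconds
```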

It could well be that in some problems the time-domain version
is more efficient (more easily satisfies the real time
requirement), while in others the frequency domain approach is
faster. In the latter situations, it is the FFT algorithm for
computing the Fourier transforms that enables the
superiority of frequency-domain implementations. Because complexity
considerations only express how algorithm running-time increases
with system parameter choices, we need to detail both
implementations to determine which will be more suitable for any
given filtering problem. Filtering with a difference equation is
straightforward: for the general difference equation
y(n) = a1 y(n-1) + ... + ap y(n-p) + b0 x(n) + ... + bq x(n-q),
the number of computations that must be
made for each output value is 2(p+q)+1.
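As a concrete sketch of the frequency-domain route (the length-8 signal and the 3-point averaging filter are assumed purely for illustration), FFT, multiply, and inverse FFT can be checked against direct time-domain convolution:

```python
import numpy as np

# Assumed example: filter a short block with a 3-point averager.
x = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0])
h = np.array([1 / 3, 1 / 3, 1 / 3])   # FIR impulse response

N = len(x) + len(h) - 1               # pad to avoid circular wrap-around
X = np.fft.rfft(x, N)                 # transform the input block
H = np.fft.rfft(h, N)                 # filter's frequency response
y_freq = np.fft.irfft(X * H, N)       # multiply and invert

y_time = np.convolve(x, h)            # direct time-domain filtering
print(np.allclose(y_freq, y_time))    # True: the two implementations agree
```

For blocks this small the direct method wins; the FFT's advantage appears only as the block and filter lengths grow.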

Derive this value for the number of computations for the general difference equation.

We have p+q+1 multiplications (one per coefficient) and p+q additions to sum the resulting p+q+1 terms, for a total of 2(p+q)+1 computations per output value.
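This count can be checked mechanically with a small sketch; the coefficient values below are assumed purely for illustration:

```python
import numpy as np

# Evaluate one output of the general difference equation
#   y(n) = a1 y(n-1) + ... + ap y(n-p) + b0 x(n) + ... + bq x(n-q)
# and tally the arithmetic operations it takes.
def output_and_op_count(a, b, y_past, x_recent):
    """a: p feedback coefficients; b: q+1 feedforward coefficients."""
    p, q = len(a), len(b) - 1
    mults = p + (q + 1)      # one multiply per coefficient
    adds = p + q             # summing p+q+1 products needs p+q additions
    y = np.dot(a, y_past) + np.dot(b, x_recent)
    return y, mults + adds

# Illustrative case with p = 2, q = 2: expect 2*(2+2)+1 = 9 operations.
_, ops = output_and_op_count([0.5, -0.25], [1.0, 2.0, 1.0],
                             [0.0, 0.0], [1.0, 0.0, 0.0])
print(ops)  # 9
```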
