Compressive sensor networks

Module by: Marco F. Duarte. E-mail the author

Summary: This module provides an overview of applications of compressive sensing in the context of distributed sensor networks.

Sparse and compressible signals arise in many sensor network applications, such as environmental monitoring, signal field recording, and vehicle surveillance. Compressive sensing (CS) has many properties that make it attractive in this setting, such as its low-complexity sensing and compression, its universality, and its graceful degradation. CS is robust to noise and allows the decoder to query more nodes to obtain further detail on signals as they become interesting. Packet drops also do not harm the network nearly as much as in many other protocols, causing only a marginal loss for each measurement not obtained by the receiver. As the network becomes more congested, the data rate can be scaled back smoothly.

Thus CS can enable the design of generic compressive sensors that perform random or incoherent projections.

Several methods for using CS in sensor networks have been proposed. Decentralized methods pass data throughout the network, from neighbor to neighbor, and allow the decoder to probe any subset of nodes. In contrast, centralized methods require all information to be transmitted to a centralized data center, but reduce either the amount of information that must be transmitted or the power required to do so. We briefly summarize each class below.

Decentralized algorithms

Decentralized algorithms enable the calculation of compressive measurements at each sensor in the network, thus being useful for applications where monitoring agents traverse the network during operation.

Randomized gossiping

In randomized gossiping [4], each sensor communicates M random projections of its data sample to a random set of nodes, at each stage aggregating and forwarding the observations received to a new set of random nodes. In essence, a spatial dot product is being performed as each node collects and aggregates information, compiling a sum of weighted samples to obtain M CS measurements that become more accurate as more rounds of random gossiping occur. To recover the data, a basis that provides data sparsity (or at least compressibility) is required, as well as the random projections used; however, this information does not need to be known while the data is being passed.
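
The spatial dot product described above can be sketched in a few lines of NumPy. This is an illustrative simulation, not the protocol of [4]: the network sizes, seed, and round count are arbitrary choices, and the random set of contacts is reduced to repeated pairwise exchanges. Each node starts with its weighted sample, and pairwise averaging preserves the network-wide mean, so every node converges to (Phi x)/n; scaling by n recovers the M CS measurements at any single node.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 50, 10                       # number of sensors, number of CS measurements
x = rng.standard_normal(n)          # one data sample per sensor
Phi = rng.standard_normal((M, n))   # sensor i only needs its own column Phi[:, i]

# Each node starts from its locally weighted sample Phi[:, i] * x[i].
state = (Phi * x).T.copy()          # shape (n, M): node i holds state[i]

# Randomized gossip: a random pair of nodes repeatedly averages its states.
# Pairwise averaging preserves the network-wide sum, so every node's state
# converges to (Phi @ x) / n as the rounds accumulate.
for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    avg = (state[i] + state[j]) / 2
    state[i] = avg
    state[j] = avg

y_gossip = n * state[0]             # any single node can now report the measurements
y_exact = Phi @ x
```

Note that the decoder only needs Phi and a sparsifying basis at reconstruction time, consistent with the observation that this information is not needed while the data is being passed.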

The method can also be applied when each sensor observes a compressible signal. In this case, each sensor computes multiple random projections of the data and transmits them using randomized gossiping to the rest of the network. A potential drawback of this technique is the amount of storage required per sensor, which could be considerable for large networks. In this case, each sensor can store the data from only a subset of the sensors, where each group of sensors of a certain size is known to contain CS measurements for all the data in the network. To maintain a constant error as the network size grows, the number of transmissions becomes Θ(kMn²), where k is the number of groups into which the data is partitioned, M is the number of values desired from each sensor, and n is the number of nodes in the network. The results can be improved by using geographic gossiping algorithms [2].

Distributed sparse random projections

A second method modifies the randomized gossiping approach by limiting the number of communications each node must perform, in order to reduce overall power consumption [5]. Each data node takes M projections of its data, passing the values along to a small set of L neighbors, which sum the observations; the resulting CS measurement matrix is sparse, since N − L of each row's entries are zero. Nonetheless, these projections can still be used as CS measurements with quality similar to that of full random projections. Since the CS measurement matrix formed by the data nodes is sparse, a relatively small amount of communication is performed by each encoding node and the overall power required for transmission is reduced.
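
The structure of such a sparse measurement matrix can be illustrated directly; the sketch below is a simplified stand-in for the construction in [5], with N, M, and L picked arbitrarily. Each measurement row has exactly L nonzero entries, reflecting that the encoding node only aggregates values from L neighbors:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, L = 40, 12, 5                 # nodes, measurements, neighbors per measurement
x = rng.standard_normal(N)          # one data value per node

# Build a sparse measurement matrix: each row touches only L of the N nodes,
# so N - L entries in every row are zero and communication stays local.
Phi = np.zeros((M, N))
for m in range(M):
    support = rng.choice(N, size=L, replace=False)
    Phi[m, support] = rng.standard_normal(L)

y = Phi @ x                         # M CS measurements from sparse projections
```

Because only L of the N entries in each row are nonzero, the per-measurement communication cost scales with L rather than N, which is the source of the power savings described above.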

Centralized algorithms

Centralized algorithms are used when the sensed data must be routed to a single location; this architecture is common in sensor networks where low-power, simple nodes perform sensing and a powerful central location performs data processing.

Compressive wireless sensing

Compressive wireless sensing (CWS) emphasizes the use of synchronous communication to reduce the transmission power of each sensor [1]. In CWS, each sensor calculates a noisy projection of its data sample and transmits the calculated value by analog modulation of a communication waveform. The projections are aggregated over the air at the receiving antenna of the central location, with further noise being added in the process. In this way, the fusion center receives the CS measurements, from which it can perform reconstruction using knowledge of the random projections.
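
The over-the-air aggregation can be mimicked numerically. This is a toy model, not the physical-layer scheme of [1]: the noise levels and sizes are arbitrary, and analog superposition is reduced to a sum over per-sensor contributions with noise injected both at the sensors and at the receiver.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 30, 8
x = rng.standard_normal(n)
Phi = rng.standard_normal((M, n))

# Each sensor i computes a noisy projection Phi[m, i] * x[i] of its sample.
sensor_noise = 0.01 * rng.standard_normal((M, n))
transmissions = Phi * x + sensor_noise        # one contribution per sensor

# Synchronous analog transmission: the channel sums the contributions
# coherently, and the receiving antenna adds its own noise.
channel_noise = 0.01 * rng.standard_normal(M)
y = transmissions.sum(axis=1) + channel_noise  # measurements at the fusion center
```

The fusion center thus sees a noisy version of Phi x; reconstruction then proceeds from y using knowledge of Phi, exactly as in standard CS.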

A drawback of this method is the need for accurate synchronization. Although CWS constrains the power of each node, it also relies on constructive interference to increase the power received at the data center, so the nodes must be accurately synchronized to know when to transmit their data. In addition, CWS assumes that the nodes are all at approximately equal distances from the fusion center, an assumption that is acceptable only when the receiver is far away from the sensor network. Mobile nodes would also increase the complexity of the transmission protocols, and interference or propagation issues would have a large effect on CWS, limiting its applicability.

If these limitations are addressed for a suitable application, CWS offers significant power benefits when very little is known about the data beyond sparsity in a fixed basis. The distortion will be proportional to M^(−2α/(2α+1)), where α is a positive constant determined by the network structure. With much more a priori information about the sensed data, other methods can achieve distortion proportional to M^(−2α).

Distributed compressive sensing

Distributed Compressive Sensing (DCS) provides several models for combining neighboring sparse signals, relying on the fact that such sparse signals may be similar to each other, a concept that is termed joint sparsity [3]. In an example model, each signal has a common component and a local innovation, with the commonality only needing to be encoded once while each innovation can be encoded at a lower measurement rate. Three different joint sparsity models (JSMs) have been developed:

  1. Both common signal and innovations are sparse;
  2. Sparse innovations with shared sparsity structure;
  3. Sparse innovations and dense common signal.
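
The first model in the list can be made concrete with a small simulation. The sketch below generates JSM-1-style signals, each the sum of a shared sparse common component and a per-sensor sparse innovation; the lengths and sparsity levels are arbitrary illustrative choices, not values from [3]:

```python
import numpy as np

rng = np.random.default_rng(3)
N, J = 100, 4             # signal length, number of sensors
K_common, K_innov = 5, 2  # sparsity of the common part and of each innovation

# Shared sparse common component, identical across all sensors.
z_common = np.zeros(N)
idx = rng.choice(N, size=K_common, replace=False)
z_common[idx] = rng.standard_normal(K_common)

# Each sensor adds its own sparse innovation: z_j = z_common + innovation_j.
signals = []
for _ in range(J):
    innov = np.zeros(N)
    idx = rng.choice(N, size=K_innov, replace=False)
    innov[idx] = rng.standard_normal(K_innov)
    signals.append(z_common + innov)
```

Subtracting any two signals cancels the common component, leaving at most 2·K_innov nonzeros; this is the redundancy that lets the common part be encoded once while each innovation is encoded at a lower measurement rate.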

Although JSM 1 would seem preferable due to the relatively limited amount of data, only JSM 2 is computationally feasible for large sensor networks; it has been used in many applications [3]. JSMs 1 and 3 can be solved using a linear program, which has cubic complexity in the number of sensors in the network.

DCS, however, does not address the communication or networking necessary to transmit the measurements to a central location; it relies on standard communication and networking techniques for measurement transmission, which can be tailored to the specific network topology.

References

  1. Bajwa, W. and Haupt, J. and Sayeed, A. and Nowak, R. (2006, Apr.). Compressive Wireless Sensing. In Proc. Int. Symp. Inform. Processing in Sensor Networks (IPSN). Nashville, TN
  2. Dimakis, A. and Sarwate, A. and Wainwright, M. (2006, Apr.). Geographic gossip: Efficient aggregation for sensor networks. In Proc. Int. Symp. Inform. Processing in Sensor Networks (IPSN). Nashville, TN
  3. Duarte, M. F. and Wakin, M. B. and Baron, D. and Baraniuk, R. G. (2006, Apr.). Universal distributed sensing via random projections. In Proc. Int. Symp. Inform. Processing in Sensor Networks (IPSN). (p. 177–185). Nashville, TN
  4. Rabbat, M. and Haupt, J. and Singh, A. and Nowak, R. (2006, Apr.). Decentralized Compression and Predistribution via Randomized Gossiping. In Proc. Int. Symp. Inform. Processing in Sensor Networks (IPSN). Nashville, TN
  5. Wang, W. and Garofalakis, M. and Ramchandran, K. (2007, Apr.). Distributed Sparse Random Projections for Refinable Approximation. In Proc. Int. Symp. Inform. Processing in Sensor Networks (IPSN). Cambridge, MA
