Define $H$ as the set of all discrete-valued functions $h : \{1,\ldots,N\} \rightarrow \{1,\ldots,m\}$. Note that $H$ is a finite set of size $m^N$. Each function $h \in H$ can be specified by a binary characteristic matrix $\phi(h)$ of size $m \times N$, whose $i$-th column is a binary vector with exactly one 1, at location $j = h(i)$. To construct the overall sampling matrix $\Phi$, we choose $d$ functions $h_1,\ldots,h_d$ independently from the uniform distribution defined on $H$, and vertically concatenate their characteristic matrices. Thus, if $M = md$, $\Phi$ is a binary matrix of size $M \times N$ with each column containing exactly $d$ ones.
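As a concrete illustration, the construction above can be sketched in Python. The function names here are my own, and each $h_l$ is represented simply as a lookup table of length $N$:

```python
import random

def draw_hash_functions(N, m, d, seed=0):
    # Each h_l is one uniform draw from H, stored as a lookup table:
    # h_l[i] gives the row (within its block) holding the single 1 of column i.
    rng = random.Random(seed)
    return [[rng.randrange(m) for _ in range(N)] for _ in range(d)]

def build_sampling_matrix(hashes, N, m):
    # Vertically concatenate the d characteristic matrices phi(h_l):
    # Phi has size (m*d) x N, with exactly d ones per column.
    d = len(hashes)
    Phi = [[0] * N for _ in range(m * d)]
    for l, h in enumerate(hashes):
        for i in range(N):
            Phi[l * m + h[i]][i] = 1
    return Phi
```

In practice $\Phi$ need never be materialized; the lookup tables alone suffice, which is part of the appeal of this construction.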
Now, given any signal $x$, we acquire linear measurements $y = \Phi x$. It is easy to visualize the measurements via the following two properties. First, the coefficients of the measurement vector $y$ are naturally grouped according to the “mother” binary functions $h_1,\ldots,h_d$. Second, consider the $i$-th coefficient of the measurement vector $y$, which corresponds to the mother binary function $h$. Then, the expression for $y_i$ is simply given by:
$$
y_i = \sum_{j \,:\, h(j) = i} x_j. \qquad (1)
$$

In other words, for a fixed signal coefficient index $j$, each measurement $y_i$ as expressed above consists of an observation of $x_j$ corrupted by the other signal coefficients mapped to the same $i$ by the function $h$. Signal recovery essentially consists of estimating the signal values from these “corrupted” observations.
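The measurement process of (1) can be sketched bucket-wise, without forming $\Phi$ explicitly; the function name is hypothetical, and each hash function is again a lookup table:

```python
def measure(x, hashes, m):
    # y = Phi x, computed per hash function: coefficient y[l*m + i]
    # accumulates every x_j with h_l(j) = i, exactly as in Eq. (1).
    d = len(hashes)
    y = [0.0] * (m * d)
    for l, h in enumerate(hashes):
        for j, xj in enumerate(x):
            y[l * m + h[j]] += xj
    return y
```

For example, with a single hash function mapping indices 0 and 2 to the same bucket, the coefficients $x_0$ and $x_2$ collide and are summed in that bucket.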
The count-min algorithm is useful in the special case where the entries of the original signal are positive. Given measurements $y$ obtained using the sampling matrix $\Phi$ constructed above, the estimate of the $j$-th signal entry is given by:
$$
\hat{x}_j = \min_l \,\{\, y_i : h_l(j) = i \,\}. \qquad (2)
$$

Intuitively, this means that the estimate of $x_j$ is formed by simply looking at all $d$ measurements that consist of $x_j$ corrupted by other signal values, and picking the one with the smallest value.
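A minimal sketch of the recovery rule (2), under the paper's assumption of positive signal entries (function name and bucket layout are my own, matching the measurement sketch convention `y[l*m + i]`):

```python
def countmin_estimate(y, hashes, m, N):
    # Eq. (2): for each index j, take the minimum over the d buckets that
    # contain x_j. With nonnegative entries, collisions can only add mass,
    # so every bucket overestimates x_j and the smallest bucket is the
    # least-corrupted observation.
    d = len(hashes)
    return [min(y[l * m + hashes[l][j]] for l in range(d)) for j in range(N)]
```

Note how a collision in one hash function is harmless as long as some other hash function isolates the coefficient: the minimum discards the corrupted bucket.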
Despite the simplicity of this algorithm, it is accompanied by an arguably powerful instance-optimality guarantee: if $d = C \log N$ and $m = (4/\alpha)K$, then with high probability, the recovered signal $\hat{x}$ satisfies:
$$
\| x - \hat{x} \|_\infty \;\leq\; \frac{\alpha}{K} \, \| x - x^* \|_1, \qquad (3)
$$

where $x^*$ represents the best $K$-term approximation of $x$ in the $\ell_1$ sense.
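To make the right-hand side of (3) concrete, it can be computed directly for any signal: $x^*$ keeps the $K$ largest-magnitude entries, so $\|x - x^*\|_1$ is the $\ell_1$ mass of the remaining tail. A small sketch (the function name is my own; keep in mind that (3) holds with high probability over the draw of the hash functions, so this is the target error level, not a certainty for any single draw):

```python
def l1_tail_bound(x, K, alpha):
    # Right-hand side of Eq. (3): drop the K largest-magnitude entries
    # (those form the best K-term approximation x*), sum the l1 mass of
    # the tail, and scale by alpha / K.
    tail = sorted((abs(v) for v in x), reverse=True)[K:]
    return (alpha / K) * sum(tail)
```

In particular, an exactly $K$-sparse signal has zero tail, so the guarantee implies exact recovery with high probability in that case.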