A Matrix Times a Vector

Module by: C. Sidney Burrus

Summary: One can look at the operation of a matrix times a vector as changing the basis set for the vector or as changing the vector with the same basis description. Many signal and systems problems can be posed in this form.

A Matrix Times a Vector

In this chapter we consider the first problem posed in the introduction

$$ A x = b \tag{1} $$

where the matrix $A$ and vector $x$ are given and we want to interpret and give structure to the calculation of the vector $b$. Equation 1 has a variety of special cases. The matrix $A$ may be square or may be rectangular. It may have full column or row rank or it may not. It may be symmetric or orthogonal or non-singular, or have many other characteristics that would be interesting properties as an operator. If we view the vectors as signals and the matrix as an operator or processor, there are two interesting interpretations.

  • The operation in Equation 1 is a change of basis or coordinates for a fixed signal. The signal stays the same; the basis (or frame) changes.
  • The operation in Equation 1 alters the characteristics of the signal (processes it) but within a fixed basis system. The basis stays the same; the signal changes.

An example of the first would be the discrete Fourier transform (DFT), where one calculates frequency components of a signal which are coordinates in a frequency space for a given signal. The definition of the DFT from [3] can be written as the matrix-vector operation $c = W x$ which, for $w = e^{-j 2\pi/N}$ and $N = 4$, is

$$ \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} w^0 & w^0 & w^0 & w^0 \\ w^0 & w^1 & w^2 & w^3 \\ w^0 & w^2 & w^4 & w^6 \\ w^0 & w^3 & w^6 & w^9 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} \tag{2} $$

An example of the second might be convolution where you are processing or filtering a signal and staying in the same space or coordinate system.

$$ \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} h_0 & 0 & 0 \\ h_1 & h_0 & 0 \\ h_2 & h_1 & h_0 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix}. \tag{3} $$

A particularly powerful sequence of operations is to first change the basis for a signal, then process the signal in this new basis, and finally return to the original basis. For example, the discrete Fourier transform (DFT) of a signal is taken, some of the Fourier coefficients are set to zero, and the inverse DFT is taken, as sketched below.
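
A minimal sketch of this decompose-modify-recompose idea, using NumPy's FFT as the change of basis; the signal length and the bin that is zeroed are illustrative assumptions, not part of the original module:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

c = np.fft.fft(x)        # decomposition: time -> frequency
c[3] = 0.0               # modification in the new basis
c[-3] = 0.0              # conjugate bin of 3 for a length-8 real signal, keeps y real
y = np.fft.ifft(c).real  # recomposition: frequency -> time
```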

Another application of Equation 1 is linear regression, where the input signals are the rows of $A$, the unknown weights of the hypothesis are in $x$, and the outputs are the elements of $b$.
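
A sketch of this regression view with np.linalg.lstsq; the observation matrix and outputs below are made-up illustrative values:

```python
import numpy as np

# Rows of A are input signals (observations), x holds the unknown weights,
# and b holds the outputs; lstsq finds the x minimizing ||A x - b||.
A = np.array([[1.0, 0.5],
              [1.0, 1.5],
              [1.0, 3.0]])
b = np.array([1.1, 2.0, 3.2])

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```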

Change of Basis

Consider the two views:

  1. The operation given in Equation 1 can be viewed as $x$ being a set of weights so that $b$ is a weighted sum of the columns of $A$. In other words, $b$ will lie in the space spanned by the columns of $A$ at a location determined by $x$. This view is a composition of a signal from a set of weights as in Equation 6 and Equation 8 below. If the vector $a_i$ is the $i$th column of $A$, it is illustrated by
     $$ A x = x_1 a_1 + x_2 a_2 + x_3 a_3 = b. \tag{4} $$
  2. An alternative view has $x$ being a signal vector, with $b$ being a vector whose entries are inner products of $x$ and the rows of $A$. In other words, the elements of $b$ are the projection coefficients of $x$ onto the coordinates given by the rows of $A$. The multiplication of a signal by this operator decomposes the signal and gives the coefficients of the decomposition. If $\bar{a}_j$ is the $j$th row of $A$, we have
     $$ b_1 = \bar{a}_1 x, \quad b_2 = \bar{a}_2 x, \quad \text{etc.} \tag{5} $$
     Regression can be posed from this view with the input signals being the rows of $A$.

These two views of the operation as a decomposition of a signal or the recomposition of the signal to or from a different basis system are extremely valuable in signal analysis. The ideas from linear algebra of subspaces, inner product, span, orthogonality, rank, etc. are all important here. The dimensions of the domain and range of the operators may or may not be the same. The matrices may or may not be square and may or may not be of full rank [10], [18].
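
As a quick numerical illustration of the two views (the matrix and signal below are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
x = np.array([3.0, -1.0, 2.0])

# View 1: b is a weighted sum of the columns of A (Equation 4).
b_columns = sum(x[i] * A[:, i] for i in range(A.shape[1]))

# View 2: each b_j is the inner product of row j of A with x (Equation 5).
b_rows = np.array([A[j, :] @ x for j in range(A.shape[0])])

assert np.allclose(b_columns, A @ x) and np.allclose(b_rows, A @ x)
```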

A Basis and Dual Basis

A set of linearly independent vectors $x_n$ forms a basis for a vector space if every vector $x$ in the space can be uniquely written

$$ x = \sum_n a_n x_n \tag{6} $$

and the dual basis is defined as a set of vectors $\tilde{x}_n$ in that space that allows a simple inner product (denoted by parentheses: $(x, y)$) to calculate the expansion coefficients as

$$ a_n = (x, \tilde{x}_n) = x^T \tilde{x}_n \tag{7} $$

A basis expansion has enough vectors but none extra. It is efficient in that no fewer expansion vectors will represent all the vectors in the space, but it is fragile in that losing one coefficient or one basis vector destroys the ability to exactly represent the signal by Equation 6. The expansion in Equation 6 can be written as a matrix operation

$$ F a = x \tag{8} $$

where the columns of $F$ are the basis vectors $x_n$ and the vector $a$ has the expansion coefficients $a_n$ as entries. Equation 7 can also be written as a matrix operation

$$ \tilde{F} x = a \tag{9} $$

which has the dual basis vectors as rows of $\tilde{F}$. From Equation 8 and Equation 9, we have

$$ F \tilde{F} x = x \tag{10} $$

Since this is true for all $x$,

$$ F \tilde{F} = I \tag{11} $$

or

$$ \tilde{F} = F^{-1} \tag{12} $$

which states that the dual basis vectors are the rows of the inverse of the matrix whose columns are the basis vectors (and vice versa). When the vector set is a basis, $F$ is necessarily square and, from Equation 8 and Equation 9, one can show

$$ F \tilde{F} = \tilde{F} F. \tag{13} $$

Because this system requires two basis sets, the expansion basis and the dual basis, it is called biorthogonal.
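
A small numerical sketch of Equations 8 through 12; the non-orthogonal basis of $R^3$ below is an arbitrary illustrative choice:

```python
import numpy as np

F = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns are the basis vectors x_n
F_tilde = np.linalg.inv(F)        # rows are the dual basis vectors (Equation 12)

x = np.array([2.0, -1.0, 0.5])
a = F_tilde @ x                   # expansion coefficients (Equation 9)

assert np.allclose(F @ a, x)                # reconstruction (Equation 8)
assert np.allclose(F @ F_tilde, np.eye(3))  # Equation 11
```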

Orthogonal Basis

If the basis vectors are not only independent but orthonormal, the basis set is its own dual and the inverse of $F$ is simply its transpose.

$$ F^{-1} = \tilde{F} = F^T \tag{14} $$

When done in Hilbert spaces, this decomposition is sometimes called an abstract Fourier expansion [10], [9], [22].

Parseval's Theorem

Because many signals are digital representations of voltage, current, force, velocity, pressure, flow, etc., the inner product of the signal with itself (the norm squared) is a measure of the signal energy $q$.

$$ q = (x, x) = ||x||^2 = x^T x = \sum_{n=0}^{N-1} x_n^2 \tag{15} $$

Parseval's theorem states that if the basis system is orthogonal, then the norm squared (or “energy”) is invariant across a change of basis. If a change of basis is made with

$$ c = A x \tag{16} $$

then

$$ q = (x, x) = ||x||^2 = x^T x = \sum_{n=0}^{N-1} x_n^2 = K (c, c) = K ||c||^2 = K c^T c = K \sum_{k=0}^{N-1} c_k^2 \tag{17} $$

for some constant $K$ which can be made unity by normalization if desired.

For the discrete Fourier transform (DFT) of $x_n$, which is

$$ c_k = \frac{1}{N} \sum_{n=0}^{N-1} x_n e^{-j 2\pi n k / N} \tag{18} $$

the energy calculated in the time domain, $q = \sum_n x_n^2$, is equal to the norm squared of the frequency coefficients, $q = \sum_k c_k^2$, within a multiplicative constant of $1/N$. This is because the basis functions of the Fourier transform are orthogonal: “the sum of the squares is the square of the sum,” which means the energy calculated in the time domain is the same as that calculated in the frequency domain. The energy of the signal (the square of the sum) is the sum of the energies at each frequency (the sum of the squares). Because of the orthogonal basis, the cross terms are zero. Although one seldom directly uses Parseval's theorem, its truth is what makes it meaningful to talk about frequency-domain filtering of a time-domain signal. A more general form is known as the Plancherel theorem [4].
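
This can be checked numerically. The sketch below assumes the DFT normalization of Equation 18 (NumPy's fft omits the $1/N$ factor, so it is divided out), with an arbitrary test signal:

```python
import numpy as np

N = 16
x = np.random.default_rng(1).standard_normal(N)

c = np.fft.fft(x) / N              # c_k as defined in Equation 18
q_time = np.sum(x**2)              # energy in the time domain (Equation 15)
q_freq = N * np.sum(np.abs(c)**2)  # energy from the coefficients; K = N in Eq. 17

assert np.isclose(q_time, q_freq)
```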

If a transformation is made on the signal with a non-orthogonal basis system, then Parseval's theorem does not hold and the concept of energy does not move back and forth between domains. We can get around some of these restrictions by using frames rather than bases.

Frames and Tight Frames

In order to look at a more general expansion system than a basis, and to generalize the ideas of orthogonality and of energy being calculated in the original expansion system or the transformed system, the concept of a frame is defined. A frame decomposition or representation is generally more robust and flexible than a basis decomposition or representation, but it requires more computation and memory [11], [20], [4]. Sometimes a frame is called a redundant basis, or is described as representing an underdetermined or underspecified set of equations.

If a set of vectors $f_k$ spans a vector space (or subspace) but the vectors are not necessarily independent or orthogonal, bounds on the energy in the transform can still be defined. A set of vectors that spans a vector space is called a frame if two constants $A$ and $B$ exist such that

$$ 0 < A ||x||^2 \le \sum_k |(f_k, x)|^2 \le B ||x||^2 < \infty \tag{19} $$

and the two constants are called the frame bounds for the system. This can be written

$$ 0 < A ||x||^2 \le ||c||^2 \le B ||x||^2 < \infty \tag{20} $$

where

$$ c = F x \tag{21} $$

If the $f_k$ are linearly independent but not orthogonal, then the frame is a non-orthogonal basis. If the $f_k$ are not independent, the frame is called redundant since there are more than the minimum number of expansion vectors that a basis would have. If the frame bounds are equal, $A = B$, the system is called a tight frame and it has many of the features of an orthogonal basis. If the bounds are equal to each other and to one, $A = B = 1$, then the frame is a basis and is tight. It is, therefore, an orthogonal basis.

So a frame is a generalization of a basis, and a tight frame is a generalization of an orthogonal basis. If $A = B$, the frame is tight and we have a scaled Parseval's theorem:

$$ A ||x||^2 = \sum_k |(f_k, x)|^2 \tag{22} $$

If $A = B > 1$, then the number of expansion vectors is more than needed for a basis and $A$ is a measure of the redundancy of the system (for normalized frame vectors). For example, if there are three frame vectors in a two-dimensional vector space, $A = 3/2$.

A finite-dimensional matrix version of the redundant case would have $F$ in Equation 8 with more columns than rows but with full row rank. For example,

$$ \begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} \tag{23} $$

has three frame vectors as the columns of $A$ but in a two-dimensional space.

The prototypical example is called the Mercedes-Benz tight frame, where three frame vectors that are $120^\circ$ apart are used in a two-dimensional plane and look like the Mercedes car hood ornament. These three frame vectors must be as far apart from each other as possible to be tight, hence the $120^\circ$ separation. But they can be rotated any amount and remain tight [20], [13] and, therefore, are not unique.

$$ \begin{bmatrix} 1 & -0.5 & -0.5 \\ 0 & 0.866 & -0.866 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} \tag{24} $$
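
A quick numerical check that this frame is tight with $A = 3/2$; the exact value $\sqrt{3}/2$ is used in place of the rounded 0.866:

```python
import numpy as np

s = np.sqrt(3) / 2
F = np.array([[1.0, -0.5, -0.5],
              [0.0,    s,   -s]])   # the three frame vectors as columns

# Tightness: F F^T = A I, with A = 3/2 for these normalized frame vectors.
assert np.allclose(F @ F.T, 1.5 * np.eye(2))

# Tight-frame expansion (compare Equation 26): x = (1/A) F F^T x.
x = np.array([0.7, -1.2])
assert np.allclose((1 / 1.5) * F @ F.T @ x, x)
```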

In the next section, we will use the pseudo-inverse of $A$ to find the optimal $x$ for a given $b$.

So the frame bounds $A$ and $B$ in Equation 19 are an indication of the redundancy of the expansion system $f_k$ and of how close it is to being orthogonal or tight. Indeed, Equation 19 is a sort of approximate Parseval's theorem [21], [5], [16], [12], [4], [20], [8], [14].

The dual frame vectors are also not unique, but a set can be found such that Equation 9 and, therefore, Equation 10 hold (but Equation 13 does not). A set of dual frame vectors could be found by adding a set of arbitrary but independent rows to $F$ until it is square, inverting it, then taking the first $N$ columns to form $\tilde{F}$ whose rows will be a set of dual frame vectors. This method of construction shows the non-uniqueness of the dual frame vectors. This non-uniqueness is often resolved by optimizing some other parameter of the system [5].
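
The construction just described can be sketched numerically. Here $F$ is the frame matrix of Equation 24, and the padding row is random; any independent choice works, which is exactly the non-uniqueness noted above:

```python
import numpy as np

N, k = 2, 3
F = np.array([[1.0, -0.5, -0.5],
              [0.0, 0.866, -0.866]])     # N x k frame matrix

rng = np.random.default_rng(2)
R = rng.standard_normal((k - N, k))      # arbitrary (generically independent) extra rows
G = np.vstack([F, R])                    # padded to k x k

F_tilde = np.linalg.inv(G)[:, :N]        # first N columns; rows are dual frame vectors
assert np.allclose(F @ F_tilde, np.eye(N))  # Equation 11 holds; Equation 13 does not
```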

If the matrix operations are implementing a frame decomposition and the rows of $F$ are orthonormal, then $\tilde{F} = F^T$ and the vector set is a tight frame [21], [5]. If the frame vectors are normalized to $||x_k|| = 1$, the decomposition in Equation 6 becomes

$$ x = \frac{1}{A} \sum_n (x, \tilde{x}_n) x_n \tag{25} $$

where the constant $A$ is a measure of the redundancy of the expansion, which has more expansion vectors than necessary [5].

The matrix form is

$$ x = \frac{1}{A} F F^T x \tag{26} $$

where $F$ has more columns than rows. Examples can be found in [1].

Sinc Expansion as a Tight Frame

The Shannon sampling theorem [2] can be viewed as an infinite-dimensional signal expansion where the sinc functions are an orthogonal basis. The sampling theorem with critical sampling, i.e. at the Nyquist rate, is the expansion

$$ g(t) = \sum_n g(Tn) \, \frac{\sin\!\left(\frac{\pi}{T}(t - Tn)\right)}{\frac{\pi}{T}(t - Tn)} \tag{27} $$

where the expansion coefficients are the samples and where the sinc functions are easily shown to be orthogonal.

Oversampling is an example of an infinite-dimensional tight frame [15], [1]. If a function is oversampled but the sinc functions remain consistent with the upper spectral limit $W$, using $A$ as the amount of oversampling, the sampling theorem becomes

$$ A W = \frac{\pi}{T}, \quad \text{for } A \ge 1 \tag{28} $$

and we have

$$ g(t) = \frac{1}{A} \sum_n g(Tn) \, \frac{\sin\!\left(\frac{\pi}{AT}(t - Tn)\right)}{\frac{\pi}{AT}(t - Tn)} \tag{29} $$

where the sinc functions are no longer orthogonal. In fact, they are no longer a basis as they are not independent. They are, however, a tight frame and, therefore, have some of the characteristics of an orthogonal basis but with a “redundancy” factor $A$ as a multiplier in the formula [1] and a generalized Parseval's theorem. Here, moving from a basis to a frame (actually from an orthogonal basis to a tight frame) is almost invisible.
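
A numerical sketch of Equation 29; the test signal, oversampling factor, evaluation point, and truncation of the infinite sum are all illustrative assumptions:

```python
import numpy as np

W = np.pi             # upper spectral limit
A = 2.0               # oversampling factor
T = np.pi / (A * W)   # sample spacing from Equation 28

g = lambda t: np.cos(0.8 * W * t)   # a bandlimited test signal
n = np.arange(-2000, 2001)          # truncation of the infinite sum
t = 0.3

# np.sinc(u) = sin(pi u) / (pi u), so this matches the kernel of Equation 29.
terms = g(T * n) * np.sinc((t - T * n) / (A * T))
assert np.isclose(g(t), terms.sum() / A, atol=1e-2)  # loose tolerance: truncated sum
```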

Frequency Response of an FIR Digital Filter

The discrete-time Fourier transform (DTFT) of the impulse response of an FIR digital filter $h(n)$ is its frequency response. The discrete Fourier transform (DFT) of $h(n)$ gives samples of the frequency response [2]. This is a powerful analysis tool in digital signal processing (DSP) and suggests that an inverse (or pseudoinverse) method could be useful for design [2].
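
For example (the 3-tap filter and the 64-point DFT length below are arbitrary choices):

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])   # FIR impulse response h(n)
H = np.fft.fft(h, 64)             # zero-padded DFT: 64 samples of the frequency response

magnitude = np.abs(H)             # |H| at the frequencies w = 2*pi*k/64
phase = np.angle(H)
```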

Conclusions

Frames tend to be more robust than bases in tolerating errors and missing terms. They allow flexibility in designing wavelet systems [5], where frame expansions are often chosen.

In an infinite dimensional vector space, if basis vectors are chosen such that all expansions converge very rapidly, the basis is called an unconditional basis and is near optimal for a wide class of signal representation and processing problems. This is discussed by Donoho in [6].

Still another view of a matrix operator being a change of basis can be developed using the eigenvectors of an operator as the basis vectors. Then a signal can be decomposed into its eigenvector components, which are then simply multiplied by the scalar eigenvalues to accomplish the same task as a general matrix multiplication. This is an interesting idea but will not be developed here.

Change of Signal

If both $x$ and $b$ in Equation 1 are considered to be signals in the same coordinate or basis system, the matrix operator $A$ is generally square. It may or may not be of full rank, and it may or may not have a variety of other properties, but both $x$ and $b$ are viewed in the same coordinate system and therefore are the same size.

One of the most ubiquitous of these is convolution, where the output of a linear, shift-invariant system with impulse response $h(n)$ is calculated by Equation 1 if $A$ is the convolution matrix and $x$ is the input [2].

$$ \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} h_0 & 0 & 0 \\ h_1 & h_0 & 0 \\ h_2 & h_1 & h_0 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix}. \tag{30} $$

It can also be calculated if $A$ is an arrangement of the input and $x$ is the impulse response.

$$ \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} x_0 & 0 & 0 \\ x_1 & x_0 & 0 \\ x_2 & x_1 & x_0 \end{bmatrix} \begin{bmatrix} h_0 \\ h_1 \\ h_2 \end{bmatrix}. \tag{31} $$

If the signal is periodic or if the DFT is being used, then what is called a circulant matrix is used to represent cyclic convolution. An example for $N = 4$ is the Toeplitz system

$$ \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} h_0 & h_3 & h_2 & h_1 \\ h_1 & h_0 & h_3 & h_2 \\ h_2 & h_1 & h_0 & h_3 \\ h_3 & h_2 & h_1 & h_0 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}. \tag{32} $$

One method of understanding and generating matrices of this sort is to construct them as a product of first a decomposition operator, then a modification operator in the new basis system, followed by a recomposition operator. For example, one could first multiply a signal by the DFT operator which will change it into the frequency domain. One (or more) of the frequency coefficients could be removed (set to zero) and the remainder multiplied by the inverse DFT operator to give a signal back in the time domain but changed by having a frequency component removed. That is a form of signal filtering and one can talk about removing the energy of a signal at a certain frequency (or many) because of Parseval's theorem.

It would be instructive for the reader to make sense out of the cryptic statement “the DFT diagonalizes the cyclic convolution matrix” to add to the ideas in this note.
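
One way to unpack that statement numerically, with an illustrative $h$ (the DFT matrix is built here by transforming the identity):

```python
import numpy as np

h = np.array([1.0, 2.0, 0.5, -1.0])
N = len(h)

# Cyclic convolution (circulant) matrix laid out as in Equation 32.
C = np.array([[h[(i - j) % N] for j in range(N)] for i in range(N)])

W = np.fft.fft(np.eye(N))       # DFT matrix: entry (n, k) is w^{nk}
D = W @ C @ np.linalg.inv(W)    # change of basis by the DFT

# D is diagonal, and its diagonal entries are the DFT of h.
assert np.allclose(D, np.diag(np.fft.fft(h)))
```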

Factoring the Matrix A

For insight, algorithm development, and/or computational efficiency, it is sometimes worthwhile to factor $A$ into a product of two or more matrices. For example, the DFT matrix [3] illustrated in Equation 2 can be factored into a product of fairly sparse matrices. In fact, the fast Fourier transform (FFT) can be derived by factoring the DFT matrix into $\log(N)$ factors (if $N = 2^m$), each requiring order $N$ multiplies. This is done in [3].
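
As a sanity check rather than a derivation, the dense $N = 4$ DFT matrix of Equation 2 agrees with a library FFT (the test vector is arbitrary):

```python
import numpy as np

N = 4
w = np.exp(-2j * np.pi / N)
W = np.array([[w**(n * k) for k in range(N)] for n in range(N)])  # Equation 2

x = np.array([1.0, 2.0, -1.0, 0.5])
assert np.allclose(W @ x, np.fft.fft(x))   # dense DFT matrix vs. FFT
```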

Using eigenvalue theory [18], a square matrix with a full set of independent eigenvectors can be factored into a product

$$ A V = V \Lambda \tag{33} $$

where $V$ is a matrix whose columns are the eigenvectors of $A$ and $\Lambda$ is a diagonal matrix with the eigenvalues along the diagonal. The inverse is a method to “diagonalize” a matrix:

$$ \Lambda = V^{-1} A V \tag{34} $$

If a matrix has “repeated eigenvalues,” in other words, two or more of the $N$ eigenvalues have the same value but there are fewer than $N$ independent eigenvectors, it is not possible to diagonalize the matrix, but an “almost” diagonal form called the Jordan normal form can be achieved. Those details can be found in most books on matrix theory [17].

A more general decomposition is the singular value decomposition (SVD), which is similar to the eigenvalue problem but allows rectangular matrices. It is particularly valuable for expressing the pseudoinverse in a simple form and in making numerical calculations [19].
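
A sketch of Equations 33 and 34 together with the SVD, using small illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

lam, V = np.linalg.eig(A)        # eigenvalues and eigenvector matrix
L = np.diag(lam)
assert np.allclose(A @ V, V @ L)                  # Equation 33
assert np.allclose(np.linalg.inv(V) @ A @ V, L)   # Equation 34

# The SVD handles rectangular matrices: B = U diag(s) Vt.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
U, s, Vt = np.linalg.svd(B, full_matrices=False)
assert np.allclose((U * s) @ Vt, B)
```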

State Equations

If our matrix multiplication equation is a vector differential equation (DE) of the form

$$ \dot{x} = A x \tag{35} $$

or for difference equations and discrete-time signals or digital signals,

$$ x(n+1) = A x(n) \tag{36} $$

an inverse or even a pseudoinverse will not solve for $x$. A different approach must be taken [7] and different properties and tools from linear algebra will be used. The solution of this first-order vector DE is a coupled set of solutions of first-order DEs. If a change of basis is made so that $A$ is diagonal (or in Jordan form), Equation 35 becomes a set of uncoupled (or almost uncoupled in the Jordan form case) first-order DEs, and we know the solution of a first-order DE is an exponential. This requires consideration of the eigenvalue problem, diagonalization, and solution of scalar first-order DEs [7].

State equations are often used to model or describe a system such as a control system or a digital filter or a numerical algorithm [7], [23].
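
A minimal sketch of the diagonalization approach for Equation 35; the particular $A$, initial condition, and time are illustrative, and SciPy's matrix exponential is used only as a cross-check:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # eigenvalues -1 and -2: two uncoupled modes
x0 = np.array([1.0, 0.0])
t = 0.5

lam, V = np.linalg.eig(A)        # change of basis that uncouples the DEs
# x(t) = V e^{Lambda t} V^{-1} x(0): each mode is a scalar first-order DE.
x_t = (V * np.exp(lam * t)) @ np.linalg.solve(V, x0)

assert np.allclose(x_t, expm(A * t) @ x0)
```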

References

  1. Burrus, C. Sidney and Gopinath, Ramesh A. and Guo, Haitao. (1998). Introduction to Wavelets and the Wavelet Transform. [to appear on the web in Connexions: cnx.org]. Upper Saddle River, NJ: Prentice Hall.
  2. Burrus, C. Sidney. (2008). Digital Signal Processing and Digital Filter Design. [http://cnx.org/content/col10598/latest/]. Connexions, cnx.org.
  3. Burrus, C. Sidney. (2008). Fast Fourier Transforms. [http://cnx.org/content/col10550/latest/]. Connexions, cnx.org.
  4. Christensen, Ole. (2002). An Introduction to Frames and Riesz Bases. Birkhäuser.
  5. Daubechies, Ingrid. (1992). Ten Lectures on Wavelets. [Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA]. Philadelphia, PA: SIAM.
  6. Donoho, David L. (1993, December). Unconditional Bases are Optimal Bases for Data Compression and for Statistical Estimation. [Also Stanford Statistics Dept. Report TR-410, Nov. 1992]. Applied and Computational Harmonic Analysis, 1(1), 100–115.
  7. Dorf, Richard C. (1965). Time-Domain Analysis and Design of Control Systems. Addison-Wesley.
  8. Ferreira, Paulo J. S. G. (1999). Mathematics for Multimedia Signal Processing II: Discrete Finite Frames and Signal Reconstruction. [J. S. Byrnes (editor), IOS Press]. Signal Processing for Multimedia, 35–54.
  9. Halmos, Paul R. (1951). Introduction to Hilbert Space and the Theory of Spectral Multiplicity. [second edition 1957]. New York: Chelsea.
  10. Halmos, Paul R. (1958). Finite-Dimensional Vector Spaces. [Springer 1974]. Princeton, NJ: Van Nostrand.
  11. Heil, Christopher and Jorgensen, Palle E. T. and Larson, David R. (editors). (2004). Wavelets, Frames and Operator Theory. American Mathematical Society.
  12. Kovacevic, Jelena and Chebira, Amina. (2007, July). Life Beyond Bases: The Advent of Frames (Part I). IEEE Signal Processing Magazine, 24(4), 86–104.
  13. Kovacevic, Jelena and Chebira, Amina. (2007, September). Life Beyond Bases: The Advent of Frames (Part II). IEEE Signal Processing Magazine, 24(5), 115–125.
  14. Kovacevic, Jelena and Goyal, Vivek K. and Vetterli, Martin. (2012, March). Signal Processing: Foundations. [available online: http://fourierandwavelets.org/]. On Line publication by fourierandwavelets.org.
  15. Marks II, R. J. (1991). Introduction to Shannon Sampling and Interpolation Theory. New York: Springer-Verlag.
  16. Pei, Soo-Chang and Yeh, Min-Hung. (1997, November). An Introduction to Discrete Finite Frames. IEEE Signal Processing Magazine, 14(6), 84–96.
  17. Strang, Gilbert. (1976). Linear Algebra and Its Applications. [4th Edition, Brooks Cole, 2005]. New York: Academic Press.
  18. Strang, Gilbert. (1986). Introduction to Linear Algebra. [4th Edition, 2009]. New York: Wellesley Cambridge.
  19. Trefethen, Lloyd N. and Bau, David, III. (1997). Numerical Linear Algebra. SIAM.
  20. Waldron, Shayne F. D. (2010). An Introduction to Finite Tight Frames, Draft. [http://www.math.auckland.ac.nz/~waldron/Harmonic-frames/Tuan-Shayne/book.pdf]. Springer.
  21. Young, R. M. (1980). An Introduction to Nonharmonic Fourier Series. New York: Academic Press.
  22. Young, N. (1988). An Introduction to Hilbert Space. Cambridge Press.
  23. Zadeh, Lotfi A. and Desoer, Charles A. (1963, 2008). Linear System Theory: The State Space Approach. Dover.
