Inside collection: High Performance Computing (textbook), by Charles Severance and Kevin Dowd

# Parallel Virtual Machine

Module by: Charles Severance and Kevin Dowd

The idea behind PVM is to assemble a diverse set of network-connected resources into a “virtual machine.” A user could marshal the resources of 35 idle workstations on the Internet and have their own personal scalable processing system. The work on PVM started in the early 1990s at Oak Ridge National Labs. PVM was pretty much an instant success among computer scientists. It provided a rough framework in which to experiment with using a network of workstations as a parallel processor.

In PVM Version 3, your virtual machine can consist of single processors, shared-memory multiprocessors, and scalable multiprocessors. PVM attempts to knit all of these resources into a single, consistent, execution environment.

To run PVM, you simply need a login account on a set of network computers that have the PVM software installed. You can even install it in your home directory. To create your own personal virtual machine, you would create a list of these computers in a file:


% cat hostfile
frodo.egr.msu.edu
gollum.egr.msu.edu
mordor.egr.msu.edu
%


After some nontrivial machinations with paths and environment variables, you can start the PVM console:


% pvm hostfile
pvm> conf
3 hosts, 1 data format
HOST     DTID     ARCH   SPEED
frodo     40000 SUN4SOL2    1000
gollum    40001 SUN4SOL2    1000
mordor    40002 SUN4SOL2    1000
pvm> ps
HOST     TID   FLAG 0x COMMAND
frodo    40042     6/c,f pvmgs
pvm> reset
pvm> ps
HOST     TID   FLAG 0x COMMAND
pvm>


Many different users can be running virtual machines using the same pool of resources. Each user has their own view of an empty machine. The only way you might detect other virtual machines using your resources is in the percentage of the time your applications get the CPU.

There is a wide range of commands you can issue at the PVM console. The ps command shows the running processes in your virtual machine. It’s quite possible to have more processes than computer systems. Each process is time-shared on a system along with all the other load on the system. The reset command performs a soft reboot on your virtual machine. You are the virtual system administrator of the virtual machine you have assembled.

To execute programs on your virtual computer, you must compile and link your programs with the PVM library routines:1


% aimk mast slav
making in SUN4SOL2/ for SUN4SOL2
cc -O -I/opt/pvm3/include -DSYSVBFUNC -DSYSVSTR -DNOGETDTBLSIZ
-DSYSVSIGNAL -DNOWAIT3 -DNOUNIXDOM -o mast
../mast.c -L/opt/pvm3/lib/SUN4SOL2 -lpvm3 -lnsl -lsocket
mv mast ~crs/pvm3/bin/SUN4SOL2
cc -O -I/opt/pvm3/include -DSYSVBFUNC -DSYSVSTR -DNOGETDTBLSIZ
-DSYSVSIGNAL -DNOWAIT3 -DNOUNIXDOM -o slav
../slav.c -L/opt/pvm3/lib/SUN4SOL2 -lpvm3 -lnsl -lsocket
mv slav ~crs/pvm3/bin/SUN4SOL2
%


When the first PVM call is encountered, the application contacts your virtual machine and enrolls itself in the virtual machine. At that point it should show up in the output of the ps command issued at the PVM console.

From that point on, your application issues PVM calls to create more processes and interact with those processes. PVM takes the responsibility for distributing the processes on the different systems in the virtual machine, based on the load and your assessment of each system’s relative performance. Messages are moved across the network using the User Datagram Protocol (UDP) and delivered to the appropriate process.

Typically, the PVM application starts up some additional PVM processes. These can be additional copies of the same program or each PVM process can run a different PVM application. Then the work is distributed among the processes, and results are gathered as necessary.

There are several basic models of computing that are typically used when working with PVM:

• Master/Slave: When operating in this mode, one process (usually the initial process) is designated as the master and spawns some number of worker processes. Work units are sent to each worker process, and the results are returned to the master. Often the master maintains a queue of work to be done and, as a slave finishes, the master delivers a new work item to the slave. This approach works well when there is little data interaction and each work unit is independent. This approach has the advantage that the overall problem is naturally load-balanced even when there is some variation in the execution time of individual processes.
• Broadcast/Gather: This type of application is typically characterized by the fact that the shared data structure is relatively small and can easily be copied into every processor’s node. At the beginning of the time step, all the global data structures are broadcast from the master process to all of the processes. Each process then operates on its portion of the data. Each process produces a partial result that is sent back and gathered by the master process. This pattern is repeated for each time step.
• SPMD/Data decomposition: When the overall data structure is too large to have a copy stored in every process, it must be decomposed across multiple processes. Generally, at the beginning of a time step, all processes must exchange some data with each of their neighboring processes. Then, with their local data augmented by the necessary subset of the remote data, they perform their computations. At the end of the time step, necessary data is again exchanged between neighboring processes, and the process is restarted.

The most complicated applications have nonuniform data flows and data that migrates around the system as the application changes and the load changes on the system.

In this section, we have two example programs: one is a master-slave operation, and the other is a data decomposition-style solution to the heat flow problem.

In this example, one process (mast) creates five slave processes (slav) and doles out 20 work units (add one to a number). As a slave process responds, it’s given new work or told that all of the work units have been exhausted:


% cat mast.c
#include <stdio.h>
#include "pvm3.h"

#define MAXPROC 5
#define JOBS 20

main()
{
   int mytid, info, i, work;
   int tid, input, output;
   int tids[MAXPROC];

   mytid = pvm_mytid();
   info = pvm_spawn("slav", (char**)0, 0, "", MAXPROC, tids);

   /* Send out the first work */
   for(work=0; work<MAXPROC; work++) {
      pvm_initsend(PvmDataDefault);
      pvm_pkint(&work, 1, 1);
      pvm_send(tids[work], 1);      /* 1 = msgtype */
   }

   /* Send out the rest of the work requests */
   work = MAXPROC;
   for(i=0; i<JOBS; i++) {
      pvm_recv( -1, 2 );            /* -1 = any task  2 = msgtype */
      pvm_upkint( &tid, 1, 1 );
      pvm_upkint( &input, 1, 1 );
      pvm_upkint( &output, 1, 1 );
      printf("Thanks to %d 2*%d=%d\n", tid, input, output);
      pvm_initsend(PvmDataDefault);
      if ( work < JOBS ) {
         pvm_pkint(&work, 1, 1);
         work++;
      } else {
         input = -1;
         pvm_pkint(&input, 1, 1);   /* Tell them to stop */
      }
      pvm_send(tid, 1);
   }

   pvm_exit();
}
%


One of the interesting aspects of the PVM interface is the separation of calls to prepare a new message, pack data into the message, and send the message. This is done for several reasons. PVM has the capability to convert between different floating-point formats, byte orderings, and character formats. This also allows a single message to have multiple data items with different types.

The purpose of the message type in each PVM send or receive is to allow the sender to wait for a particular type of message. In this example, we use two message types. Type one is a message from the master to the slave, and type two is the response.

When performing a receive, a process can either wait for a message from a specific process or a message from any process.

In the second phase of the computation, the master waits for a response from any slave, prints the response, and then doles out another work unit to the slave or tells the slave to terminate by sending a message with a value of -1.

The slave code is quite simple — it waits for a message, unpacks it, checks to see if it is a termination message, returns a response, and repeats:


% cat slav.c
#include <stdio.h>
#include "pvm3.h"

/* A simple program to double integers */
main()
{
   int mytid;
   int input, output;
   mytid = pvm_mytid();

   while(1) {
      pvm_recv( -1, 1 );         /* -1 = any task  1 = msgtype */
      pvm_upkint(&input, 1, 1);
      if ( input == -1 ) break;  /* All done */

      output = input * 2;
      pvm_initsend(PvmDataDefault);
      pvm_pkint( &mytid, 1, 1 );
      pvm_pkint( &input, 1, 1 );
      pvm_pkint( &output, 1, 1 );
      pvm_send( pvm_parent(), 2 );
   }
   pvm_exit();
}
%


When the master program is executed, it produces the following output:


% mast
Thanks to 262204 2*0=0
Thanks to 262205 2*1=2
Thanks to 262206 2*2=4
Thanks to 262207 2*3=6
Thanks to 262204 2*5=10
Thanks to 262205 2*6=12
Thanks to 262206 2*7=14
Thanks to 262207 2*8=16
Thanks to 262204 2*9=18
Thanks to 262205 2*10=20
Thanks to 262206 2*11=22
Thanks to 262207 2*12=24
Thanks to 262205 2*14=28
Thanks to 262207 2*16=32
Thanks to 262205 2*17=34
Thanks to 262207 2*18=36
Thanks to 262204 2*13=26
Thanks to 262205 2*19=38
Thanks to 262206 2*15=30
Thanks to 262208 2*4=8
%


Clearly the processes are operating in parallel, and the order of execution is somewhat random. This code is an excellent skeleton for handling a wide range of computations. In the next example, we perform an SPMD-style computation to solve the heat flow problem using PVM.

## Heat Flow in PVM

This next example is a rather complicated application that implements the heat flow problem in PVM. In many ways, it gives some insight into the work that is performed by the HPF environment. We will solve for the heat flow in a two-dimensional plate with four heat sources and the edges in zero-degree water, as shown in Figure 1.

The data will be spread across all of the processes using a (*, BLOCK) distribution. Columns are distributed to processes in contiguous blocks, and all the row elements in a column are stored on the same process. As with HPF, the process that “owns” a data cell performs the computations for that cell after retrieving any data necessary to perform the computation.

We use a red-black approach but for simplicity, we copy the data back at the end of each iteration. For a true red-black, you would perform the computation in the opposite direction every other time step.

Note that instead of spawning slave processes, the parent process spawns additional copies of itself. This is typical of SPMD-style programs. Once the additional processes have been spawned, all the processes wait at a barrier before they look up the process numbers of the members of the group. Once the processes have arrived at the barrier, they all retrieve a list of the different process numbers:


% cat pheat.f
      PROGRAM PHEAT
      INCLUDE '../include/fpvm3.h'
      INTEGER NPROC,ROWS,COLS,TOTCOLS,OFFSET
      PARAMETER(NPROC=4,MAXTIME=200)
      PARAMETER(ROWS=200,TOTCOLS=200)
      PARAMETER(COLS=(TOTCOLS/NPROC)+3)
      REAL*8 RED(0:ROWS+1,0:COLS+1), BLACK(0:ROWS+1,0:COLS+1)
      LOGICAL IAMFIRST,IAMLAST
      INTEGER INUM,INFO,TIDS(0:NPROC-1),IERR
      INTEGER I,R,C
      INTEGER TICK,MAXTIME
      CHARACTER*30 FNAME

*     Get the SPMD thing going - Join the pheat group
      CALL PVMFJOINGROUP('pheat', INUM)

*     If we are the first in the pheat group, make some helpers
      IF ( INUM.EQ.0 ) THEN
         DO I=1,NPROC-1
            CALL PVMFSPAWN('pheat', 0, 'anywhere', 1, TIDS(I), IERR)
         ENDDO
      ENDIF

*     Barrier to make sure we are all here so we can look them up
      CALL PVMFBARRIER( 'pheat', NPROC, INFO )

*     Find my pals and get their TIDs - TIDS are necessary for sending
      DO I=0,NPROC-1
         CALL PVMFGETTID('pheat', I, TIDS(I))
      ENDDO

At this point in the code, we have NPROC processes executing in an SPMD mode. The next step is to determine which subset of the array each process will compute. This is driven by the INUM variable, which ranges from 0 to 3 and uniquely identifies these processes.

We decompose the data and store only one quarter of the data on each process. Using the INUM variable, we choose our contiguous set of columns to store and compute. The OFFSET variable maps between a “global” column in the entire array and a local column in our local subset of the array. Figure 2 shows a map that indicates which processors store which data elements. The values marked with a B are boundary values and won’t change during the simulation. They are all set to 0. This code is often rather tricky to figure out. Performing a (BLOCK, BLOCK) distribution requires a two-dimensional decomposition and exchanging data with the neighbors above and below, in addition to the neighbors to the left and right:


*     Compute my geometry - What subset do I process? (INUM=0 values)
*     Actual Column = OFFSET + Column (OFFSET = 0)
*     Column 0 = neighbors from left
*     Column 1 = send to left
*     Columns 1..mylen = My cells to compute
*     Column mylen = Send to right (mylen=50)
*     Column mylen+1 = Neighbors from Right (Column 51)

      IAMFIRST = (INUM .EQ. 0)
      IAMLAST  = (INUM .EQ. NPROC-1)
      OFFSET = (ROWS/NPROC * INUM )
      MYLEN = ROWS/NPROC
      IF ( IAMLAST ) MYLEN = TOTCOLS - OFFSET
      PRINT *,'INUM:',INUM,' Local',1,MYLEN,
     +        ' Global',OFFSET+1,OFFSET+MYLEN

*     Start Cold
      DO C=0,COLS+1
         DO R=0,ROWS+1
            BLACK(R,C) = 0.0
         ENDDO
      ENDDO


Now we run the time steps. The first act in each time step is to reset the heat sources. In this simulation, we have four heat sources placed near the middle of the plate. We must restore all the values each time through the simulation as they are modified in the main loop:


*     Begin running the time steps
      DO TICK=1,MAXTIME

*        Set the persistent heat sources
         CALL STORE(BLACK,ROWS,COLS,OFFSET,MYLEN,
     +        ROWS/3,TOTCOLS/3,10.0,INUM)
         CALL STORE(BLACK,ROWS,COLS,OFFSET,MYLEN,
     +        2*ROWS/3,TOTCOLS/3,20.0,INUM)
         CALL STORE(BLACK,ROWS,COLS,OFFSET,MYLEN,
     +        ROWS/3,2*TOTCOLS/3,-20.0,INUM)
         CALL STORE(BLACK,ROWS,COLS,OFFSET,MYLEN,
     +        2*ROWS/3,2*TOTCOLS/3,20.0,INUM)


Now we perform the exchange of the “ghost values” with our neighboring processes. For example, Process 0 contains the elements for global column 50. To compute the next time step values for column 50, we need column 51, which is stored in Process 1. Similarly, before Process 1 can compute the new values for column 51, it needs Process 0’s values for column 50.

Figure 3 shows how the data is transferred between processors. Each process sends its leftmost column to the left and its rightmost column to the right. Because the first and last processes border unchanging boundary values on the left and right respectively, this is not necessary for columns one and 200. If all is done properly, each process can receive its ghost values from its left and right neighbors.

The net result of all of the transfers is that each cell that must be computed is surrounded by one layer of either boundary values or ghost values from the right or left neighbors:


*        Send left and right
         IF ( .NOT. IAMFIRST ) THEN
            CALL PVMFINITSEND(PVMDEFAULT,TRUE)
            CALL PVMFPACK( REAL8, BLACK(1,1), ROWS, 1, INFO )
            CALL PVMFSEND( TIDS(INUM-1), 1, INFO )
         ENDIF
         IF ( .NOT. IAMLAST ) THEN
            CALL PVMFINITSEND(PVMDEFAULT,TRUE)
            CALL PVMFPACK( REAL8, BLACK(1,MYLEN), ROWS, 1, INFO )
            CALL PVMFSEND( TIDS(INUM+1), 2, INFO )
         ENDIF
         IF ( .NOT. IAMLAST ) THEN
            CALL PVMFRECV( TIDS(INUM+1), 1, BUFID )
            CALL PVMFUNPACK( REAL8, BLACK(1,MYLEN+1), ROWS, 1, INFO )
         ENDIF
         IF ( .NOT. IAMFIRST ) THEN
            CALL PVMFRECV( TIDS(INUM-1), 2, BUFID )
            CALL PVMFUNPACK( REAL8, BLACK(1,0), ROWS, 1, INFO )
         ENDIF


This next segment is the easy part. All the appropriate ghost values are in place, so we must simply perform the computation in our subspace. At the end, we copy back from the RED to the BLACK array; in a real simulation, we would perform two time steps, one from BLACK to RED and the other from RED to BLACK, to save this extra copy:


*        Perform the flow
         DO C=1,MYLEN
            DO R=1,ROWS
               RED(R,C) = ( BLACK(R,C) +
     +              BLACK(R,C-1) + BLACK(R-1,C) +
     +              BLACK(R+1,C) + BLACK(R,C+1) ) / 5.0
            ENDDO
         ENDDO

*        Copy back - Normally we would do a red and black version of the loop
         DO C=1,MYLEN
            DO R=1,ROWS
               BLACK(R,C) = RED(R,C)
            ENDDO
         ENDDO
      ENDDO


Now we find the center cell and send it to the master process (if necessary) so it can be printed out. We also dump the data into files for debugging or later visualization of the results. Each file is made unique by appending the instance number to the filename. Then the program terminates:


      CALL SENDCELL(RED,ROWS,COLS,OFFSET,MYLEN,INUM,TIDS(0),
     +     ROWS/2,TOTCOLS/2)

*     Dump out data for verification
      IF ( ROWS .LE. 20 ) THEN
         FNAME = '/tmp/pheatout.' // CHAR(ICHAR('0')+INUM)
         OPEN(UNIT=9,NAME=FNAME,FORM='formatted')
         DO C=1,MYLEN
            WRITE(9,100)(BLACK(R,C),R=1,ROWS)
 100        FORMAT(20F12.6)
         ENDDO
         CLOSE(UNIT=9)
      ENDIF

*     Let's all go together
      CALL PVMFBARRIER( 'pheat', NPROC, INFO )
      CALL PVMFEXIT( INFO )

      END


The SENDCELL routine finds a particular cell and prints it out on the master process. This routine is called in an SPMD style: all the processes enter this routine, although not all at precisely the same time. Depending on the INUM and the cell that we are looking for, each process may do something different.

If the cell in question is in the master process, and we are the master process, print it out. All other processes do nothing. If the cell in question is stored in another process, the process with the cell sends it to the master process. The master process receives the value and prints it out. All the other processes do nothing.

This is a simple example of the typical style of SPMD code. All the processes execute the code at roughly the same time, but, based on information local to each process, the actions performed by different processes may be quite different:


      SUBROUTINE SENDCELL(RED,ROWS,COLS,OFFSET,MYLEN,INUM,PTID,R,C)
      INCLUDE '../include/fpvm3.h'
      INTEGER ROWS,COLS,OFFSET,MYLEN,INUM,PTID,R,C
      REAL*8 RED(0:ROWS+1,0:COLS+1)
      REAL*8 CENTER

*     Compute the local column number to determine if it is ours
      I = C - OFFSET
      IF ( I .GE. 1 .AND. I .LE. MYLEN ) THEN
         IF ( INUM .EQ. 0 ) THEN
            PRINT *,'Master has', RED(R,I), R, C, I
         ELSE
            CALL PVMFINITSEND(PVMDEFAULT,TRUE)
            CALL PVMFPACK( REAL8, RED(R,I), 1, 1, INFO )
            PRINT *,'INUM:',INUM,' Returning',R,C,RED(R,I),I
            CALL PVMFSEND( PTID, 3, INFO )
         ENDIF
      ELSE
         IF ( INUM .EQ. 0 ) THEN
            CALL PVMFRECV( -1, 3, BUFID )
            CALL PVMFUNPACK( REAL8, CENTER, 1, 1, INFO )
            PRINT *,'Master received', R, C, CENTER
         ENDIF
      ENDIF
      RETURN
      END


Like the previous routine, the STORE routine is executed on all processes. The idea is to store a value into a global row and column position. First, we must determine if the cell is even in our process. If the cell is in our process, we must compute the local column (I) in our subset of the overall matrix and then store the value:


      SUBROUTINE STORE(RED,ROWS,COLS,OFFSET,MYLEN,R,C,VALUE,INUM)
      REAL*8 RED(0:ROWS+1,0:COLS+1)
      REAL VALUE
      INTEGER ROWS,COLS,OFFSET,MYLEN,R,C,I,INUM
      I = C - OFFSET
      IF ( I .LT. 1 .OR. I .GT. MYLEN ) RETURN
      RED(R,I) = VALUE
      RETURN
      END


When this program executes, it has the following output:


% pheat
INUM: 0 Local 1 50 Global 1 50
%


We see two lines of print. The first line indicates the values that Process 0 used in its geometry computation. The second line is the output from the master process of the temperature at cell (100,100) after 200 time steps.

One interesting technique that is useful for debugging this type of program is to change the number of processes that are created. If the program is not quite moving its data properly, you usually get different results when different numbers of processes are used. If you look closely, the above code performs correctly with one process or 30 processes.

Notice that there is no barrier operation at the end of each time step. This is in contrast to the way parallel loops operate on shared uniform memory multiprocessors that force a barrier at the end of each loop. Because we have used an “owner computes” rule, and nothing is computed until all the required ghost data is received, there is no need for a barrier. The receipt of the messages with the proper ghost values allows a process to begin computing immediately without regard to what the other processes are currently doing.

This example can be used either as a framework for developing other grid-based computations, or as a good excuse to use HPF and appreciate the hard work that the HPF compiler developers have done. A well-done HPF implementation of this simulation should outperform the PVM implementation because HPF can make tighter optimizations. Unlike us, the HPF compiler doesn’t have to keep its generated code readable.

## PVM Summary

PVM is a widely used tool because it affords portability across every architecture other than SIMD. Once the effort has been invested in making a code message passing, it tends to run well on many architectures.

The primary complaints about PVM include:

• The need for a pack step separate from the send step
• The fact that it is designed to work in a heterogeneous environment that may incur some overhead
• It doesn’t automate common tasks such as geometry computations

But all in all, for a certain set of programmers, PVM is the tool to use. If you would like to learn more about PVM, see PVM — A User’s Guide and Tutorial for Networked Parallel Computing, by Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, and Vaidy Sunderam (MIT Press). Information is also available at www.netlib.org/pvm3/.

## Footnotes

1. Note: the exact compilation may be different on your system.
