Multiprocessor Software Concepts

Module by: Charles Severance, Kevin Dowd.

Now that we have examined the way shared-memory multiprocessor hardware operates, we need to examine how software operates on these types of computers. We still have to wait until the next chapters to begin making our FORTRAN programs run in parallel. For now, we use C programs to examine the fundamentals of multiprocessing and multithreading. There are several techniques used to implement multithreading, so the topics we will cover include:

  • Operating system–supported multiprocessing
  • User space multithreading
  • Operating system-supported multithreading

The last of these is what we primarily will use to reduce the walltime of our applications.

Operating System–Supported Multiprocessing

Most modern general-purpose operating systems support some form of multiprocessing. Multiprocessing doesn’t require more than one physical CPU; it is simply the operating system’s ability to run more than one process on the system. The operating system context-switches between each process at fixed time intervals, or on interrupts or input-output activity. For example, in UNIX, if you use the ps command, you can see the processes on the system:


% ps -a
  PID TTY      TIME CMD
28410 pts/34   0:00 tcsh
28213 pts/38   0:00 xterm
10488 pts/51   0:01 telnet
28411 pts/34   0:00 xbiff
11123 pts/25   0:00 pine
 3805 pts/21   0:00 elm
 6773 pts/44   5:48 ansys
...
% ps -a | grep ansys
 6773 pts/44   6:00 ansys

For each process we see the process identifier (PID), the terminal that is executing the command, the amount of CPU time the command has used, and the name of the command. The PID is unique across the entire system. Most UNIX commands are executed in a separate process. In the above example, most of the processes are waiting for some type of event, so they are taking very few resources except for memory. Process 6773 seems to be executing and using resources. Running ps again confirms that the CPU time is increasing for the ansys1 process:


% vmstat 5
 procs   memory             page             disk        faults       cpu
 r b w  swap   free  re  mf pi po fr de sr f0 s0 -- --  in   sy   cs us sy id
 3 0 0 353624 45432   0   0  1  0  0  0  0  0  0  0  0 461 5626  354 91  9  0
 3 0 0 353248 43960   0  22  0  0  0  0  0  0 14  0  0 518 6227  385 89 11  0

Running the vmstat 5 command tells us many things about the activity on the system. First, there are three runnable processes. If we had one CPU, only one would actually be running at a given instant. To allow all three jobs to progress, the operating system time-shares between the processes. Assuming equal priority, each process executes about 1/3 of the time. However, this system is a two-processor system, so each process executes about 2/3 of the time. Looking across the vmstat output, we can see paging activity (pi, po), context switches (cs), overall user time (us), system time (sy), and idle time (id).

Each process can execute a completely different program. While most processes are completely independent, they can cooperate and share information using interprocess communication (pipes, sockets) or various operating system-supported shared-memory areas. We generally don’t use multiprocessing on these shared-memory systems as a technique to increase single-application performance.
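
To make the interprocess communication point a little more concrete, here is a minimal sketch (not part of the original example set) of a parent and child cooperating through a pipe. The message text and buffer size are invented purely for illustration:


#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* pipe, fork, read, write, close */
#include <sys/wait.h>   /* wait */

int main() {
  int fd[2];            /* fd[0] is the read end, fd[1] is the write end */
  int n;
  char buf[64];

  pipe(fd);             /* create the pipe before forking so both processes share it */
  if ( fork() == 0 ) {                       /* child: read what the parent sends */
    close(fd[1]);
    n = read(fd[0], buf, sizeof(buf)-1);
    buf[n] = '\0';
    printf("Child - received \"%s\"\n", buf);
  } else {                                   /* parent: send a message, then wait */
    close(fd[0]);
    write(fd[1], "hello from parent", strlen("hello from parent"));
    close(fd[1]);
    wait(NULL);
  }
  return 0;
}


Because the pipe is created before the fork( ), both processes inherit its file descriptors, which is what makes the cooperation possible.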

Multiprocessing software

In this section, we explore how programs access multiprocessing features.2 In this example, the program creates a new process using the fork( ) function. The new process (child) prints some messages and then changes its identity using exec( ) by loading a new program. The original process (parent) prints some messages and then waits for the child process to complete:


#include <stdio.h>      /* printf */
#include <unistd.h>     /* fork, sleep, execl */
#include <sys/wait.h>   /* wait */

int globvar;            /* A global variable */

main () {
  int pid,status,retval;
  int stackvar;         /* A stack variable */

  globvar = 1;
  stackvar = 1;
  printf("Main - calling fork globvar=%d stackvar=%d\n",globvar,stackvar);
  pid = fork();
  printf("Main - fork returned pid=%d\n",pid);
  if ( pid == 0 ) {
    printf("Child - globvar=%d stackvar=%d\n",globvar,stackvar);
    sleep(1);
    printf("Child - woke up globvar=%d stackvar=%d\n",globvar,stackvar);
    globvar = 100;
    stackvar = 100;
    printf("Child - modified globvar=%d stackvar=%d\n",globvar,stackvar);
    retval = execl("/bin/date", "date", (char *) 0 );   /* arg0, then NULL terminator */
    printf("Child - WHY ARE WE HERE retval=%d\n",retval);
  } else {
    printf("Parent - globvar=%d stackvar=%d\n",globvar,stackvar);
    globvar = 5;
    stackvar = 5;
    printf("Parent - sleeping globvar=%d stackvar=%d\n",globvar,stackvar);
    sleep(2);
    printf("Parent - woke up globvar=%d stackvar=%d\n",globvar,stackvar);
    printf("Parent - waiting for pid=%d\n",pid);
    retval = wait(&status);
    status = status >> 8; /* Return code in bits 15-8 */
    printf("Parent - status=%d retval=%d\n",status,retval);
  }
}

The key to understanding this code is to understand how the fork( ) function operates. The simple summary is that the fork( ) function is called once in a process and returns twice, once in the original process and once in a newly created process. The newly created process is an identical copy of the original process. All the variables (local and global) have been duplicated. Both processes have access to all of the open files of the original process. Figure 1 shows how the fork operation creates a new process.

The only difference between the processes is that the return value from the fork( ) function call is 0 in the new (child) process and the process identifier (shown by the ps command) in the original (parent) process. This is the program output:


recs % cc -o fork fork.c
recs % fork
Main - calling fork globvar=1 stackvar=1
Main - fork returned pid=19336
Main - fork returned pid=0
Parent - globvar=1 stackvar=1
Parent - sleeping globvar=5 stackvar=5
Child - globvar=1 stackvar=1
Child - woke up globvar=1 stackvar=1
Child - modified globvar=100 stackvar=100
Thu Nov 6 22:40:33
Parent - woke up globvar=5 stackvar=5
Parent - waiting for pid=19336
Parent - status=0 retval=19336
recs %

Tracing this through, first the program sets the global and stack variable to one and then calls fork( ). During the fork( ) call, the operating system suspends the process, makes an exact duplicate of the process, and then restarts both processes. You can see two messages from the statement immediately after the fork. The first line is coming from the original process, and the second line is coming from the new process. If you were to execute a ps command at this moment in time, you would see two processes running called “fork.” One would have a process identifier of 19336.

Figure 1: How a fork operates
(Figure shows three stages: before the fork, a single process with its global data, code, and stack is executing; during the fork, the parent is suspended and cloned; after the fork, the parent and child processes execute independently.)

As both processes start, they execute an IF-THEN-ELSE and begin to perform different actions in the parent and child. Notice that globvar and stackvar are set to 5 in the parent, and then the parent sleeps for two seconds. At this point, the child begins executing. The values for globvar and stackvar are unchanged in the child process. This is because these two processes are operating in completely independent memory spaces. The child process sleeps for one second and sets its copies of the variables to 100. Next, the child process calls the execl( ) function to overwrite its memory space with the UNIX date program. Note that the execl( ) never returns; the date program takes over all of the resources of the child process. If you were to do a ps at this moment in time, you would still see two processes on the system, but process 19336 would be called “date.” The date command executes, and you can see its output.3

The parent wakes up after a brief two-second sleep and notices that its copies of global and local variables have not been changed by the action of the child process. The parent then calls the wait( ) function to determine if any of its children exited. The wait( ) function returns which child has exited and the status code returned by that child process (in this case, process 19336).
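
One small portability note on the status handling above: the shift (status >> 8) pulls the exit code out by hand, which works on the system shown, but POSIX also defines macros for this purpose. A minimal sketch using them (the exit code of 42 is just a made-up value for illustration):


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
  int status;
  int pid, retval;

  pid = fork();
  if ( pid == 0 ) exit(42);                /* child exits immediately with a code */

  retval = wait(&status);
  if ( WIFEXITED(status) )                 /* true when the child exited normally */
    printf("Parent - pid=%d exit code=%d\n", retval, WEXITSTATUS(status));
  return 0;
}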

User Space Multithreading

A thread is different from a process. When you add threads, they are added to the existing process rather than starting in a new process. Processes start with a single thread of execution and can add or remove threads throughout the duration of the program. Unlike processes, which operate in different memory spaces, all threads in a process share the same memory space. Figure 2 shows how the creation of a thread differs from the creation of a process. Not all of the memory space in a process is shared between all threads. In addition to the global area that is shared across all threads, each thread has a thread private area for its own local variables. It’s important for programmers to know when they are working with shared variables and when they are working with local variables.

When attempting to speed up high performance computing applications, threads have the advantage over processes in that multiple threads can cooperate and work on a shared data structure to hasten the computation. By dividing the work into smaller portions and assigning each smaller portion to a separate thread, the total work can be completed more quickly.

Multiple threads are also used in high performance database and Internet servers to improve the overall throughput of the server. With a single thread, the program can either be waiting for the next network request or reading the disk to satisfy the previous request. With multiple threads, one thread can be waiting for the next network transaction while several other threads are waiting for disk I/O to complete.

The following is an example of a simple multithreaded application.4 It begins with a single master thread that creates three additional threads. Each thread prints some messages, accesses some global and local variables, and then terminates:


#define _REENTRANT                 /* basic lines for threads */
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>                /* sleep */

#define THREAD_COUNT 3
void *TestFunc(void *);
int globvar;                       /* A global variable */
int index[THREAD_COUNT];           /* Local zero-based thread index */
pthread_t thread_id[THREAD_COUNT]; /* POSIX Thread IDs */

main() {
  int i,retval;
  pthread_t tid;

  globvar = 0;
  printf("Main - globvar=%d\n",globvar);
  for(i=0;i<THREAD_COUNT;i++) {
    index[i] = i;
    retval = pthread_create(&tid,NULL,TestFunc,(void *) index[i]);
    printf("Main - creating i=%d tid=%d retval=%d\n",i,tid,retval);
    thread_id[i] = tid;
  }
  printf("Main thread - threads started globvar=%d\n",globvar);
  for(i=0;i<THREAD_COUNT;i++) {
    printf("Main - waiting for join %d\n",thread_id[i]);
    retval = pthread_join( thread_id[i], NULL ) ;
    printf("Main - back from join %d retval=%d\n",i,retval);
  }
  printf("Main thread - threads completed globvar=%d\n",globvar);
}

void *TestFunc(void *parm) {
  int me,self;

  me = (int) parm;         /* My own assigned thread ordinal */
  self = pthread_self();   /* The POSIX Thread library thread number */
  printf("TestFunc me=%d - self=%d globvar=%d\n",me,self,globvar);
  globvar = me + 15;
  printf("TestFunc me=%d - sleeping globvar=%d\n",me,globvar);
  sleep(2);
  printf("TestFunc me=%d - done param=%d globvar=%d\n",me,self,globvar);
}

Figure 2: Creating a thread
(Figure shows the process's global data, code, and stack; the new thread shares the global data and code with the original thread but executes on its own stack.)

The global shared areas in this case are those variables declared in the static area outside the main( ) code. The local variables are any variables declared within a routine. When threads are added, each thread gets its own function call stack. In C, the automatic variables that are declared at the beginning of each routine are allocated on the stack. As each thread enters a function, these variables are separately allocated on that particular thread’s stack. So these are the thread-local variables.
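
A minimal sketch (not from the original module) of that distinction, assuming a POSIX threads system like the one above: the global and the static local are each a single copy visible to every thread, while the automatic variable lives on each thread's private stack:


#include <stdio.h>
#include <pthread.h>

#define THREAD_COUNT 3

int counter;                        /* global: one copy shared by every thread */

void *Worker(void *parm) {
  static int calls = 0;             /* static local: also a single shared copy */
  int mine = 0;                     /* automatic: private to this thread's stack */

  counter++;                        /* all threads update the same location */
  calls++;                          /* (unsynchronized, so these updates can race) */
  mine++;                           /* each thread increments only its own copy */
  printf("counter=%d calls=%d mine=%d\n", counter, calls, mine);
  return NULL;
}

int main() {
  pthread_t tid[THREAD_COUNT];
  int i;

  for (i = 0; i < THREAD_COUNT; i++)
    pthread_create(&tid[i], NULL, Worker, NULL);
  for (i = 0; i < THREAD_COUNT; i++)
    pthread_join(tid[i], NULL);
  return 0;
}


Each thread prints mine=1, while counter and calls accumulate across all the threads (and, being unprotected, would need synchronization in real code).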

Unlike the fork( ) function, the pthread_create( ) function creates a new thread, and then control is returned to the calling thread. One of the parameters of the pthread_create( ) is the name of a function.
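
For reference, this is the POSIX prototype being called above (the example passes NULL for the attribute argument):


#include <pthread.h>

int pthread_create(pthread_t *thread,               /* receives the new thread's ID */
                   const pthread_attr_t *attr,      /* attributes, or NULL for the defaults */
                   void *(*start_routine)(void *),  /* function where the new thread begins */
                   void *arg);                      /* single argument handed to that function */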

New threads begin execution in the function TestFunc( ) and the thread finishes when it returns from this function. When this program is executed, it produces the following output:


recs % cc -o create1 -lpthread -lposix4 create1.c
recs % create1
Main - globvar=0
Main - creating i=0 tid=4 retval=0
Main - creating i=1 tid=5 retval=0
Main - creating i=2 tid=6 retval=0
Main thread - threads started globvar=0
Main - waiting for join 4
TestFunc me=0 - self=4 globvar=0
TestFunc me=0 - sleeping globvar=15
TestFunc me=1 - self=5 globvar=15
TestFunc me=1 - sleeping globvar=16
TestFunc me=2 - self=6 globvar=16
TestFunc me=2 - sleeping globvar=17
TestFunc me=2 - done param=6 globvar=17
TestFunc me=1 - done param=5 globvar=17
TestFunc me=0 - done param=4 globvar=17
Main - back from join 0 retval=0
Main - waiting for join 5
Main - back from join 1 retval=0
Main - waiting for join 6
Main - back from join 2 retval=0
Main thread - threads completed globvar=17
recs %

You can see the threads getting created in the loop. The master thread completes the pthread_create( ) loop, executes the second loop, and calls the pthread_join( ) function. This function suspends the master thread until the specified thread completes. The master thread is waiting for Thread 4 to complete. Once the master thread suspends, one of the new threads is started. Thread 4 starts executing. Initially the variable globvar is set to 0 from the main program. The self, me, and param variables are thread-local variables, so each thread has its own copy. Thread 4 sets globvar to 15 and goes to sleep. Then Thread 5 begins to execute and sees globvar set to 15 from Thread 4; Thread 5 sets globvar to 16, and goes to sleep. This activates Thread 6, which sees the current value for globvar and sets it to 17. Then Threads 6, 5, and 4 wake up from their sleep, all notice the latest value of 17 in globvar, and return from the TestFunc( ) routine, ending the threads.

All this time, the master thread is in the middle of a pthread_join( ) waiting for Thread 4 to complete. As Thread 4 completes, the pthread_join( ) returns. The master thread then calls pthread_join( ) repeatedly to ensure that all three threads have been completed. Finally, the master thread prints out the value for globvar that contains the latest value of 17.

To summarize, when an application is executing with more than one thread, there are shared global areas and thread private areas. Different threads execute at different times, and they can easily work together in shared areas.

Limitations of user space multithreading

Multithreaded applications were around long before multiprocessors existed. It is quite practical to have multiple threads with a single CPU. As a matter of fact, the previous example would run on a system with any number of processors, including one. If you look closely at the code, it performs a sleep operation at each critical point in the code. One reason to add the sleep calls is to slow the program down enough that you can actually see what is going on. However, these sleep calls also have another effect. When one thread enters the sleep routine, it causes the thread library to search for other “runnable” threads. If a runnable thread is found, it begins executing immediately while the calling thread is “sleeping.” This is called a user-space thread context switch. The process actually has one operating system thread shared among several logical user threads. When library routines (such as sleep) are called, the thread library5 jumps in and reschedules threads.

We can explore this effect by substituting the following SpinFunc( ) function, replacing the TestFunc( ) function in the pthread_create( ) call in the previous example:


void *SpinFunc(void *parm) {
  int me;

  me = (int) parm;
  printf("SpinFunc me=%d - sleeping %d seconds ...\n", me, me+1);
  sleep(me+1);
  printf("SpinFunc me=%d - wake globvar=%d...\n", me, globvar);
  globvar ++;
  printf("SpinFunc me=%d - spinning globvar=%d...\n", me, globvar);
  while(globvar < THREAD_COUNT ) ;
  printf("SpinFunc me=%d - done globvar=%d...\n", me, globvar);
  sleep(THREAD_COUNT+1);
}

If you look at the function, each thread entering this function prints a message and goes to sleep for 1, 2, and 3 seconds. Then the function increments globvar (initially set to 0 in main) and begins a while-loop, continuously checking the value of globvar. As time passes, the second and third threads should finish their sleep( ), increment the value for globvar, and begin the while-loop. When the last thread reaches the loop, the value for globvar is 3 and all the threads exit the loop. However, this isn’t what happens:


recs % create2 &
[1] 23921
recs % Main - globvar=0
Main - creating i=0 tid=4 retval=0
Main - creating i=1 tid=5 retval=0
Main - creating i=2 tid=6 retval=0
Main thread - threads started globvar=0
Main - waiting for join 4
SpinFunc me=0 - sleeping 1 seconds ...
SpinFunc me=1 - sleeping 2 seconds ...
SpinFunc me=2 - sleeping 3 seconds ...
SpinFunc me=0 - wake globvar=0...
SpinFunc me=0 - spinning globvar=1...

recs % ps
  PID TTY      TIME CMD
23921 pts/35   0:09 create2
recs % ps
  PID TTY      TIME CMD
23921 pts/35   1:16 create2
recs % kill -9 23921
[1]    Killed    create2
recs %

We run the program in the background6 and everything seems to run fine. All the threads go to sleep for 1, 2, and 3 seconds. The first thread wakes up and starts the loop waiting for globvar to be incremented by the other threads. Unfortunately, with user space threads, there is no automatic time sharing. Because we are in a CPU loop that never makes a system call, the second and third threads never get scheduled so they can complete their sleep( ) call. To fix this problem, we need to make the following change to the code:

while(globvar < THREAD_COUNT ) sleep(1) ;
    

With this sleep7 call, Threads 2 and 3 get a chance to be “scheduled.” They then finish their sleep calls, increment the globvar variable, and the program terminates properly.
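
As footnote 7 suggests, some thread libraries provide a sched_yield( ) routine (declared in <sched.h>) for exactly this situation. If your library supports it, a hedged alternative to the sleep-based fix is to yield inside the spin loop instead:

while(globvar < THREAD_COUNT ) sched_yield() ;

Each pass through the loop then hands the CPU to any other runnable thread, so the sleeping threads can finish their sleep( ) calls and increment globvar.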

You might ask the question, “Then what is the point of user space threads?” Well, when there is a high performance database server or Internet server, the multiple logical threads can overlap network I/O with database I/O and other background computations. This technique is not so useful when the threads all want to perform simultaneous CPU-intensive computations. To do this, you need threads that are created, managed, and scheduled by the operating system rather than a user library.

Operating System-Supported Multithreading

When the operating system supports multiple threads per process, you can begin to use these threads to do simultaneous computational activity. There is still no requirement that these applications be executed on a multiprocessor system. When an application that uses four operating system threads is executed on a single processor machine, the threads execute in a time-shared fashion. If there is no other load on the system, each thread gets 1/4 of the processor. While there are good reasons to have more threads than processors for noncompute applications, it’s not a good idea to have more active threads than processors for compute-intensive applications because of thread-switching overhead. (For more detail on the effect of too many threads, see Appendix D, How FORTRAN Manages Threads at Runtime.)

If you are using the POSIX threads library, it is a simple modification to request that your threads be created as operating-system threads rather than user threads, as the following code shows:


#define _REENTRANT                 /* basic 3-lines for threads */
#include <stdio.h>
#include <pthread.h>

#define THREAD_COUNT 2
void *SpinFunc(void *);
int globvar;                       /* A global variable */
int index[THREAD_COUNT];           /* Local zero-based thread index */
pthread_t thread_id[THREAD_COUNT]; /* POSIX Thread IDs */
pthread_attr_t attr;               /* Thread attributes NULL=use default */

main() {
  int i,retval;
  pthread_t tid;

  globvar = 0;
  pthread_attr_init(&attr);        /* Initialize attr with defaults */
  pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
  printf("Main - globvar=%d\n",globvar);
  for(i=0;i<THREAD_COUNT;i++) {
    index[i] = i;
    retval = pthread_create(&tid,&attr,SpinFunc,(void *) index[i]);
    printf("Main - creating i=%d tid=%d retval=%d\n",i,tid,retval);
    thread_id[i] = tid;
  }
  printf("Main thread - threads started globvar=%d\n",globvar);
  for(i=0;i<THREAD_COUNT;i++) {
    printf("Main - waiting for join %d\n",thread_id[i]);
    retval = pthread_join( thread_id[i], NULL ) ;
    printf("Main - back from join %d retval=%d\n",i,retval);
  }
  printf("Main thread - threads completed globvar=%d\n",globvar);
}

The code executed by the master thread is modified slightly. We create an “attribute” data structure and set the PTHREAD_SCOPE_SYSTEM attribute to indicate that we would like our new threads to be created and scheduled by the operating system. We use the attribute information on the call to pthread_create( ). None of the other code has been changed. The following is the execution output of this new program:


recs % create3
Main - globvar=0
Main - creating i=0 tid=4 retval=0
SpinFunc me=0 - sleeping 1 seconds ...
Main - creating i=1 tid=5 retval=0
Main thread - threads started globvar=0
Main - waiting for join 4
SpinFunc me=1 - sleeping 2 seconds ...
SpinFunc me=0 - wake globvar=0...
SpinFunc me=0 - spinning globvar=1...
SpinFunc me=1 - wake globvar=1...
SpinFunc me=1 - spinning globvar=2...
SpinFunc me=1 - done globvar=2...
SpinFunc me=0 - done globvar=2...
Main - back from join 0 retval=0
Main - waiting for join 5
Main - back from join 1 retval=0
Main thread - threads completed globvar=2
recs %

Now the program executes properly. When the first thread starts spinning, the operating system is context switching between all three threads. As the threads come out of their sleep( ), they increment their shared variable, and when the final thread increments the shared variable, the other two threads instantly notice the new value (because of the cache coherency protocol) and finish the loop. If there are fewer than three CPUs, a thread may have to wait for a time-sharing context switch to occur before it notices the updated global variable.

With operating-system threads and multiple processors, a program can realistically break up a large computation between several independent threads and compute the solution more quickly. Of course this presupposes that the computation could be done in parallel in the first place.
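
As a closing sketch (not from the original text), the following program splits a simple summation across operating-system threads in the style of the create3 example; the array size, thread count, and names such as SumChunk and partial are invented purely for illustration:


#include <stdio.h>
#include <pthread.h>

#define N            1000000
#define THREAD_COUNT 4

double a[N];
double partial[THREAD_COUNT];          /* one slot per thread: no sharing conflicts */

void *SumChunk(void *parm) {
  int me = (int)(long) parm;           /* this thread's ordinal */
  int lo = me * (N / THREAD_COUNT);
  int hi = (me == THREAD_COUNT-1) ? N : lo + N / THREAD_COUNT;
  int i;
  double sum = 0.0;

  for (i = lo; i < hi; i++)            /* each thread sums only its own slice */
    sum += a[i];
  partial[me] = sum;
  return NULL;
}

int main() {
  pthread_t tid[THREAD_COUNT];
  pthread_attr_t attr;
  double total = 0.0;
  int i;

  for (i = 0; i < N; i++) a[i] = 1.0;

  pthread_attr_init(&attr);
  pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);  /* operating-system threads */
  for (i = 0; i < THREAD_COUNT; i++)
    pthread_create(&tid[i], &attr, SumChunk, (void *)(long) i);
  for (i = 0; i < THREAD_COUNT; i++)   /* wait for the workers, then combine results */
    pthread_join(tid[i], NULL);
  for (i = 0; i < THREAD_COUNT; i++)
    total += partial[i];

  printf("total=%f\n", total);
  return 0;
}


Each thread works on its own slice of the array and writes to its own slot of partial, so no locking is needed; the master thread combines the partial sums after the joins.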

Footnotes

  1. ANSYS is a commonly used structural-analysis package.
  2. These examples are written in C using the POSIX 1003.1 application programming interface. This example runs on most UNIX systems and on other POSIX-compliant systems including OpenNT, OpenVMS, and many others.
  3. It’s not uncommon for a human parent process to “fork” and create a human child process that initially seems to have the same identity as the parent. It’s also not uncommon for the child process to change its overall identity to be something very different from the parent at some later point. Usually human children wait 13 years or so before this change occurs, but in UNIX, this happens in a few microseconds. So, in some ways, in UNIX, there are many parent processes that are “disappointed” because their children did not turn out like them!
  4. This example uses the IEEE POSIX standard interface for a thread library. If your system supports POSIX threads, this example should work. If not, there should be similar routines on your system for each of the thread functions.
  5. The pthreads library supports both user-space threads and operating-system threads, as we shall soon see. Another popular early threads package was called cthreads.
  6. Because we know it will hang and ignore interrupts.
  7. Some thread libraries support a call to a routine sched_yield( ) that checks for runnable threads. If it finds a runnable thread, it runs the thread. If no thread is runnable, it returns immediately to the calling thread. This routine allows a thread that has the CPU to ensure that other threads make progress during CPU-intensive periods of its code.
