A Real Example

Module by: Charles Severance, Kevin Dowd

In all of the above examples, we have focused on the mechanics of shared memory, thread creation, and thread termination. We have used the sleep( ) routine to slow things down sufficiently to see interactions between processes. But we want to go very fast, not just learn threading for threading’s sake.

The example code below uses the multithreading techniques described in this chapter to speed up a sum of a large array. The hpcwall routine is from (Reference).
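
The actual hpcwall source appears in (Reference); as a stand-in so the listing below is self-contained, a minimal wall-clock timer along the following lines (a sketch built on the standard gettimeofday call, not necessarily the book's actual routine) would serve:

#include <stdlib.h>
#include <sys/time.h>

/* Stand-in for hpcwall: return the current wall-clock time,
   in seconds, through the pointer argument. */
void hpcwall(double *retval)
{
  struct timeval tv;

  gettimeofday(&tv, NULL);
  *retval = (double) tv.tv_sec + (double) tv.tv_usec / 1000000.0;
}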

This code allocates a four-million-element double-precision array and fills it with random numbers between 0 and 1. Then using one, two, three, and four threads, it sums up the elements in the array:


#define _REENTRANT            /* basic 3-lines for threads */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define MAX_THREAD 4
void *SumFunc(void *);
int ThreadCount;                   /* Threads on this try */
double GlobSum;                    /* A global variable */
int index[MAX_THREAD];             /* Local zero-based thread index */
pthread_t thread_id[MAX_THREAD];   /* POSIX Thread IDs */
pthread_attr_t attr;               /* Thread attributes NULL=use default */
pthread_mutex_t my_mutex;          /* MUTEX data structure */

#define MAX_SIZE 4000000
double array[MAX_SIZE];            /* What we are summing... */

void hpcwall(double *);

main() {
  int i,retval;
  pthread_t tid;
  double single,multi,begtime,endtime;

  /* Initialize things */
  for (i=0; i<MAX_SIZE; i++) array[i] = drand48();
  pthread_attr_init(&attr);        /* Initialize attr with defaults */
  pthread_mutex_init (&my_mutex, NULL);
  pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

  /* Single threaded sum */
  GlobSum = 0;
  hpcwall(&begtime);
  for(i=0; i<MAX_SIZE;i++) GlobSum = GlobSum + array[i];
  hpcwall(&endtime);
  single = endtime - begtime;
  printf("Single sum=%lf time=%lf\n",GlobSum,single);

  /* Use different numbers of threads to accomplish the same thing */
  for(ThreadCount=2;ThreadCount<=MAX_THREAD; ThreadCount++) {
    printf("Threads=%d\n",ThreadCount);
    GlobSum = 0;
    hpcwall(&begtime);
    for(i=0;i<ThreadCount;i++) {
      index[i] = i;
      retval = pthread_create(&tid,&attr,SumFunc,(void *) index[i]);
      thread_id[i] = tid;
    }
    for(i=0;i<ThreadCount;i++) retval = pthread_join(thread_id[i],NULL);
    hpcwall(&endtime);
    multi = endtime - begtime;
    printf("Sum=%lf time=%lf\n",GlobSum,multi);
    printf("Efficiency = %lf\n",single/(multi*ThreadCount));
  } /* End of the ThreadCount loop */
}

void *SumFunc(void *parm){
  int i,me,chunk,start,end;
  double LocSum;

  /* Decide which iterations belong to me */
  me = (int) parm;
  chunk = MAX_SIZE / ThreadCount;
  start = me * chunk;
  end = start + chunk;   /* C-Style - actual element + 1 */
  if ( me == (ThreadCount-1) ) end = MAX_SIZE;
  printf("SumFunc me=%d start=%d end=%d\n",me,start,end);

  /* Compute sum of our subset */
  LocSum = 0;
  for(i=start;i<end;i++ ) LocSum = LocSum + array[i];

  /* Update the global sum and return to the waiting join */
  pthread_mutex_lock (&my_mutex);
  GlobSum = GlobSum + LocSum;
  pthread_mutex_unlock (&my_mutex);
}
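
On a typical POSIX system, a program like this would be compiled and linked against the pthread library, with something like cc -o addup addup.c -lpthread; the exact compiler and flags vary from platform to platform.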

First, the code performs the sum using a single thread and a simple for-loop. Then, for each of the parallel sums, it creates the appropriate number of threads that call SumFunc( ). Each thread starts in SumFunc( ) and begins by choosing the area of the shared array it will operate on. This “strip” is chosen by dividing the overall array evenly among the threads, with the last thread picking up the extra elements if the division has a remainder.
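
For example, with three threads the chunk size is 4,000,000 / 3 = 1,333,333 (integer division), so threads 0 and 1 each sum 1,333,333 elements and the last thread, thread 2, picks up the remaining 1,333,334.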

Then, each thread independently sums its own area of the array. When a thread has finished its computation, it uses a mutex to add its partial result to the global sum variable:


recs % addup
Single sum=7999998000000.000000 time=0.256624
Threads=2
SumFunc me=0 start=0 end=2000000
SumFunc me=1 start=2000000 end=4000000
Sum=7999998000000.000000 time=0.133530
Efficiency = 0.960923
Threads=3
SumFunc me=0 start=0 end=1333333
SumFunc me=1 start=1333333 end=2666666
SumFunc me=2 start=2666666 end=4000000
Sum=7999998000000.000000 time=0.091018
Efficiency = 0.939829
Threads=4
SumFunc me=0 start=0 end=1000000
SumFunc me=1 start=1000000 end=2000000
SumFunc me=2 start=2000000 end=3000000
SumFunc me=3 start=3000000 end=4000000
Sum=7999998000000.000000 time=0.107473
Efficiency = 0.596950
recs %

There are some interesting patterns in this output. Before interpreting them, you need to know that this system is a three-processor Sun Enterprise 3000. Note that as we go from one thread to two, the time is nearly cut in half, which is a good return given the cost of that extra CPU. To characterize how well the additional resources have been used, we compute an efficiency factor that would be 1.0 in the ideal case: multiply the multithreaded wall time by the number of threads, then divide the single-threaded time by that product. If you are using the extra processors perfectly, this evaluates to 1.0; if you are using them reasonably well, it comes out around 0.9; and if you had two threads and the computation did not speed up at all, you would get 0.5.
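
As a check, the two-thread numbers from the run above give

    efficiency = single / (multi * ThreadCount)
               = 0.256624 / (0.133530 * 2)
               ≈ 0.96

which matches the Efficiency = 0.960923 line in the output.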

At two and three threads, the wall time keeps dropping and the efficiency stays well over 0.9. However, at four threads, the wall time increases and the efficiency drops dramatically. This is because we now have more threads than processors. Even though we have four threads that could execute, they must be time-sliced among three processors.[1] This is even worse than it might seem. As threads are switched, they move from processor to processor, and the data they had cached must be reloaded into each new processor's cache, further slowing performance. This cache-thrashing effect is not very apparent in this example because the data structure is so large that most memory references are not to values already in cache.

It’s important to note that because of the nature of floating-point (see (Reference)), the parallel sum may not be the same as the serial sum. To perform a summation in parallel, you must be willing to tolerate these slight variations in your results.
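
A minimal illustration of why the order of additions matters (not from the original text; it assumes ordinary IEEE 754 double precision with the default round-to-nearest mode): adding a value that is tiny relative to the running total can be lost to rounding, so grouping the same additions differently can change the final sum.

#include <stdio.h>

int main()
{
  double big = 1.0e16, small = 1.0;

  /* Serial-style order: fold the small values into the large total one at a
     time. Each individual +1.0 is below the rounding granularity of 1.0e16
     and is lost. */
  double serial = (big + small) + small;

  /* Parallel-style order: combine the small values first, then add them to
     the total. The combined 2.0 is large enough to survive the rounding. */
  double parallel = big + (small + small);

  printf("serial   = %.1f\n", serial);     /* typically 10000000000000000.0 */
  printf("parallel = %.1f\n", parallel);   /* typically 10000000000000002.0 */
  return 0;
}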

Footnotes

  1. It is important to match the number of runnable threads to the available resources. In compute code, when there are more threads than available processors, the threads compete among themselves, causing unnecessary overhead and reducing the efficiency of your computation.
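
One way to act on this advice (a sketch, assuming a system where sysconf supports the widely available but not strictly standard _SC_NPROCESSORS_ONLN query) is to ask the operating system how many processors are online and size the thread count to match:

#include <stdio.h>
#include <unistd.h>

int main()
{
  /* _SC_NPROCESSORS_ONLN is supported on Linux, Solaris, and most other
     Unix-like systems, but is not required by POSIX; fall back to 1. */
  long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
  if (ncpus < 1) ncpus = 1;

  printf("Online processors: %ld\n", ncpus);
  /* A program like addup could then create at most ncpus threads. */
  return 0;
}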
