Blocking to Ease Memory Access Patterns

Module by: Charles Severance, Kevin Dowd.

Blocking is another kind of memory reference optimization. As with loop interchange, the challenge is to retrieve as much data as possible with as few cache misses as possible. We’d like to rearrange the loop nest so that it works on data in little neighborhoods, rather than striding through memory like a man on stilts. Given the following vector sum, how can we rearrange the loop?


      DO I=1,N
        DO J=1,N
          A(J,I) = A(J,I) + B(I,J)
        ENDDO
      ENDDO

This loop involves two vectors. One is referenced with unit stride, the other with a stride of N. We can interchange the loops, but one way or another we still have N-strided array references on either A or B, either of which is undesirable. The trick is to block references so that you grab a few elements of A, and then a few of B, and then a few of A, and so on — in neighborhoods. We make this happen by combining inner and outer loop unrolling:


      DO I=1,N,2
        DO J=1,N,2
          A(J,I)     = A(J,I)     + B(I,J)
          A(J+1,I)   = A(J+1,I)   + B(I,J+1)
          A(J,I+1)   = A(J,I+1)   + B(I+1,J)
          A(J+1,I+1) = A(J+1,I+1) + B(I+1,J+1)
        ENDDO
      ENDDO

To see why this helps, you need to use your imagination a little. Usually, when we think of a two-dimensional array, we picture a rectangle or a square (see Figure 1). To make programming easier, the compiler maintains the illusion that the two-dimensional arrays A and B are rectangular plots of memory, as in Figure 1. In reality, memory is sequential storage. In FORTRAN, a two-dimensional array is constructed in memory by logically lining memory “strips” up against each other, like the pickets of a cedar fence. (It’s the other way around in C: rows are stacked on top of one another.) Array storage starts at the upper left, proceeds down to the bottom of the column, and then starts over at the top of the next column. Stepping through the array with unit stride traces out the shape of a backwards “N,” repeated over and over, moving to the right.

Figure 1: Arrays A and B. Two boxes, labeled Array A and Array B, each drawn as a dashed rectangular grid.
Figure 2: How array elements are stored. A backwards-“N” zig-zag arrow traces storage order down column 1 and then down column 2; a larger box labeled “array” repeats the pattern across all columns, with dashed horizontal lines marking cache line boundaries.
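To make this storage order concrete, here is a small stand-alone FORTRAN 77 sketch (not part of the original module) that uses EQUIVALENCE to overlay a one-dimensional view A1 on a two-dimensional array A, fills the storage in memory order, and then prints A row by row, showing that consecutive memory locations run down the columns:

      PROGRAM LAYOUT
      INTEGER N
      PARAMETER (N = 4)
      REAL A(N,N), A1(N*N)
C     A1 is a one-dimensional alias for the same storage as A.
      EQUIVALENCE (A, A1)
      INTEGER I, J, K

C     Fill the storage with 1, 2, 3, ... in memory order.
      DO K = 1, N*N
         A1(K) = K
      ENDDO

C     Print A one row at a time.  Each column holds consecutive
C     values, so A(J,I) lives at storage position J + (I-1)*N.
      DO J = 1, N
         WRITE (*,'(4F5.0)') (A(J,I), I = 1, N)
      ENDDO
      END

The printed rows are 1 5 9 13, 2 6 10 14, 3 7 11 15, and 4 8 12 16: stepping J with I fixed is unit stride, while stepping I with J fixed jumps by the column length N.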

Imagine that the thin horizontal lines of Figure 2 cut memory storage into pieces the size of individual cache entries. Picture how the loop will traverse them. Because of their index expressions, references to A go from top to bottom (in the backwards “N” shape), consuming every bit of each cache line, but references to B dash off to the right, using one piece of each cache entry and discarding the rest (see Figure 3, top). This low usage of cache entries will result in a high number of cache misses.

If we could somehow rearrange the loop so that it consumed the arrays in small rectangles, rather than strips, we could conserve some of the cache entries that are being discarded. This is exactly what we accomplished by unrolling both the inner and outer loops, as in the example above. Array A is referenced in several strips side by side, from top to bottom, while B is referenced in several strips side by side, from left to right (see Figure 3, bottom). This improves cache performance and lowers runtime.

For really big problems, more than cache entries are at stake. On virtual memory machines, memory references have to be translated through a TLB. If you are dealing with large arrays, TLB misses, in addition to cache misses, are going to add to your runtime.
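As a rough worked example (the page and element sizes here are assumptions, not figures from the original text): with 8-byte elements and 4 KB virtual memory pages, a stride of N = 512 means successive references to B are 512 × 8 = 4096 bytes apart, so every reference can land on a different page and consume its own TLB entry, while unit-stride references to A move to a new page only once every 512 elements.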

Figure 3: 2×2 squares. Top: unblocked references; arrows in Array A run down the full height of the leftmost columns, while arrows in Array B run across the topmost rows. Bottom: blocked references; short arrows cover A and B in small neighborhoods of adjacent strips before moving on.

Here’s something that may surprise you. In the code below, we rewrite this loop yet again, this time blocking references at two different levels: in 2×2 squares to save cache entries, and by cutting the original loop in two parts to save TLB entries:


      DO I=1,N,2
        DO J=1,N/2,2
          A(J,I)     = A(J,I)     + B(I,J)
          A(J+1,I)   = A(J+1,I)   + B(I,J+1)
          A(J,I+1)   = A(J,I+1)   + B(I+1,J)
          A(J+1,I+1) = A(J+1,I+1) + B(I+1,J+1)
        ENDDO
      ENDDO
      DO I=1,N,2
        DO J=N/2+1,N,2
          A(J,I)     = A(J,I)     + B(I,J)
          A(J+1,I)   = A(J+1,I)   + B(I,J+1)
          A(J,I+1)   = A(J,I+1)   + B(I+1,J)
          A(J+1,I+1) = A(J+1,I+1) + B(I+1,J+1)
        ENDDO
      ENDDO

You might guess that adding more loops would be the wrong thing to do. But if you work with a reasonably large value of N, say 512, you will see a significant increase in performance. When N is 512, each of the arrays A and B holds 512 × 512 = 262,144 elements; at 8 bytes per element, that is 2 MB per array, which is more than the TLBs and caches of most processors can hold.

The two boxes in Figure 4 illustrate how the first few references to A and B look superimposed upon one another in the blocked and unblocked cases. Unblocked references to B zing off through memory, eating through cache and TLB entries. Blocked references are more sparing with the memory system.

Figure 4: Picture of unblocked versus blocked references (arrays A and B are superimposed). In the blocked box, short down-pointing and right-pointing arrows cross in a small square region near the corner of the array; in the unblocked box, long arrows run down and across the whole array. A wide arrow beneath each box marks the direction of strided memory references.

You can take blocking even further for larger problems. This code shows another method that limits the size of the inner loop and visits it repeatedly:


      II = MOD (N,16)
      JJ = MOD (N,4)

      DO I=1,N
        DO J=1,JJ
          A(J,I) = A(J,I) + B(I,J)
        ENDDO
      ENDDO

      DO I=1,II
        DO J=JJ+1,N
          A(J,I) = A(J,I) + B(I,J)
        ENDDO
      ENDDO

      DO I=II+1,N,16
        DO J=JJ+1,N,4
          DO K=I,I+15
            A(J,K)   = A(J,K)   + B(K,J)
            A(J+1,K) = A(J+1,K) + B(K,J+1)
            A(J+2,K) = A(J+2,K) + B(K,J+2)
            A(J+3,K) = A(J+3,K) + B(K,J+3)
          ENDDO
        ENDDO
      ENDDO

Where the inner I loop used to execute N iterations at a time, the new K loop executes only 16 iterations. This divides and conquers a large memory address space by cutting it into little pieces.
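The two extra loops at the top of that example exist only because N may not be an exact multiple of 16 or 4. A common alternative formulation, not from the original text but sketched here under the assumption of a single block-size parameter NB, folds the remainder handling into the main loop nest with MIN() bounds:

      INTEGER I, J, IB, JB, NB
      PARAMETER (NB = 16)

C     Visit A and B one NB-by-NB block at a time.  The MIN() on the
C     inner loop bounds picks up the leftover rows and columns when
C     N is not a multiple of NB, so no cleanup loops are needed.
      DO IB = 1, N, NB
        DO JB = 1, N, NB
          DO I = IB, MIN(IB+NB-1, N)
            DO J = JB, MIN(JB+NB-1, N)
              A(J,I) = A(J,I) + B(I,J)
            ENDDO
          ENDDO
        ENDDO
      ENDDO

Within each block, the J loop is unit stride for A, and the cache lines of B touched for one value of I are reused for the next several values of I, because B(I,J) and B(I+1,J) are adjacent in memory.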

While these blocking techniques begin to have diminishing returns on single-processor systems, on large multiprocessor systems with nonuniform memory access (NUMA), there can be significant benefit in carefully arranging memory accesses to maximize reuse of both cache lines and main memory pages.

Again, the combined unrolling and blocking techniques we just showed you are for loops with mixed stride expressions. They work very well for loop nests like the one we have been looking at. However, if all array references are strided the same way, you will want to try loop unrolling or loop interchange first.
