Concrete Implementations of Prefix Sum Algorithms


An implementation of a parallel prefix sum algorithm, like other parallel algorithms, has to take the parallelisation architecture of the platform into account. More specifically, there are algorithms adapted for platforms working on shared memory as well as algorithms well suited for platforms using distributed memory, which rely on message passing as the only form of inter-process communication.

Shared Memory: Two-Level Algorithm


The following algorithm assumes a shared memory machine model; all processing elements (PEs) have access to the same memory. A version of this algorithm is implemented in the Multi-Core Standard Template Library (MCSTL),[1][2] a parallel implementation of the C++ Standard Template Library which provides adapted versions of various algorithms for parallel computing.


In order to concurrently calculate the prefix sum over n data elements with p processing elements, the data is divided into p + 1 blocks, each containing n/(p + 1) elements (for simplicity we assume that p + 1 divides n).

Note that although the algorithm divides the data into p + 1 blocks, only p processing elements run in parallel at a time.

In a first sweep, each PE calculates a local prefix sum for its block. The last block does not need to be calculated, since these prefix sums are only used as offsets for the prefix sums of the succeeding blocks, and the last block by definition has no successor.

The offsets which are stored in the last position of each block are accumulated in a prefix sum of their own and stored in their succeeding positions. For p being a small number, it is faster to do this sequentially; for a large p, this step could be done in parallel as well.

A second sweep is performed. This time the first block does not have to be processed, since it does not need to account for the offset of a preceding block. However, in this sweep the last block is included instead, and the prefix sums for each block are calculated taking into account the block offsets computed in the previous sweep.

function prefix_sum(elements) {
    n := size(elements)
    p := number of processing elements
    prefix_sum := [0...0] of size n
    
    do parallel i = 0 to p-1 {
        // i := index of current PE
        from j = i * n / (p+1) to (i+1) * n / (p+1) - 1 do {
            // This only stores the prefix sum of the local blocks
            store_prefix_sum_with_offset_in(elements, 0, prefix_sum)
        }
    }
    
    x = 0
    
    for i = 1 to p {
        x +=  prefix_sum[i * n / (p+1) - 1] // Build the prefix sum over the first p blocks
        prefix_sum[i * n / (p+1)] = x       // Save the results to be used as offsets in second sweep
    }
    
    do parallel i = 1 to p {
        // i := index of current PE
        from j = i * n / (p+1) to (i+1) * n / (p+1) - 1 do {
            offset := prefix_sum[i * n / (p+1)]
            // Calculate the prefix sum taking the sum of preceding blocks as offset
            store_prefix_sum_with_offset_in(elements, offset, prefix_sum)
        }
    }
    
    return prefix_sum
}
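
For concreteness, the following C++ sketch implements the same two-level scheme with std::thread instead of the MCSTL. It is only an illustration of the pseudocode above, not the library's implementation; the function name two_level_prefix_sum, the helper lambda scan_block and the separate offsets array are assumptions made for this sketch. With GCC or Clang it has to be linked with -pthread.

#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Two-level prefix sum over n elements with p threads, mirroring the
// pseudocode above: p + 1 blocks, two parallel sweeps and a short
// sequential sweep over the block sums in between.
std::vector<int> two_level_prefix_sum(const std::vector<int>& elements,
                                      std::size_t p) {
    const std::size_t n = elements.size();
    const std::size_t blocks = p + 1;           // p + 1 blocks, p of them active per sweep
    const std::size_t block_size = n / blocks;  // assumes (p + 1) divides n
    std::vector<int> prefix_sum(n, 0);

    // Inclusive prefix sum of one block, shifted by 'offset'.
    auto scan_block = [&](std::size_t block, int offset) {
        int running = offset;
        for (std::size_t j = block * block_size; j < (block + 1) * block_size; ++j) {
            running += elements[j];
            prefix_sum[j] = running;
        }
    };

    // First sweep: blocks 0..p-1 in parallel, the last block is skipped.
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < p; ++i)
        workers.emplace_back(scan_block, i, 0);
    for (auto& t : workers) t.join();
    workers.clear();

    // Sequential sweep over the block sums to obtain the offsets.
    std::vector<int> offsets(blocks, 0);
    int x = 0;
    for (std::size_t i = 1; i < blocks; ++i) {
        x += prefix_sum[i * block_size - 1];  // last entry of block i - 1
        offsets[i] = x;
    }

    // Second sweep: blocks 1..p in parallel, now with the accumulated offsets.
    for (std::size_t i = 1; i < blocks; ++i)
        workers.emplace_back(scan_block, i, offsets[i]);
    for (auto& t : workers) t.join();

    return prefix_sum;
}

int main() {
    std::vector<int> data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
    for (int v : two_level_prefix_sum(data, 3))  // p = 3, i.e. 4 blocks of 3 elements
        std::cout << v << ' ';                   // prints 1 3 6 10 15 21 28 36 45 55 66 78
    std::cout << '\n';
}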

Distributed Memory: Hypercube Algorithm


The Hypercube Prefix Sum Algorithm[3] is well adapted for distributed memory platforms and works with the exchange of messages between the processing elements. It assumes that p = 2^d processing elements (PEs) participate in the algorithm, equal to the number of corners of a d-dimensional hypercube.

Different hypercubes for varying number of nodes

Throughout the algorithm, each PE is seen as a corner in a hypothetical hypercube with knowledge of the total prefix sum σ as well as the prefix sum x of all elements up to itself (according to the ordered indices among the PEs), both with respect to its own hypercube.

  • The algorithm starts by assuming every PE is the single corner of a zero-dimensional hypercube, and therefore σ and x are both equal to the local prefix sum of its own elements.
  • The algorithm goes on by unifying hypercubes which are adjacent along one dimension. During each unification, σ is exchanged and aggregated between the two hypercubes, which keeps the invariant that all PEs at corners of this new hypercube store the total prefix sum of the newly unified hypercube in their variable σ. However, only the hypercube containing the PEs with higher index also adds this received σ to their local variable x, keeping the invariant that x only stores the value of the prefix sum of all elements at PEs with indices smaller than or equal to their own index.

In a d-dimensional hypercube with 2^d PEs at the corners, the algorithm has to be repeated d times for the 2^d zero-dimensional hypercubes to be unified into one d-dimensional hypercube.

Assuming a duplex communication model where the σ of two adjacent PEs in different hypercubes can be exchanged in both directions in one communication step, this means d = log2 p communication startups.

i := Index of own processor element (PE)
m := prefix sum of local elements of this PE
d := number of dimensions of the hyper cube

x = m;     // Invariant: The prefix sum up to this PE in the current sub cube
σ = m;     // Invariant: The prefix sum of all elements in the current sub cube

for (k=0; k<=d-1; k++){
    y = σ @ PE(i xor 2^k)  // Get the total prefix sum of the opposing sub cube along dimension k
    σ = σ + y              // Aggregate the prefix sum of both sub cubes

    if(i & 2^k){
        x = x + y  // Only aggregate the prefix sum from the other sub cube, if this PE is the higher index one.
    }
}
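
To make the data flow concrete, the following C++ sketch simulates the d rounds sequentially: each index of the vectors x and sigma plays the role of one PE, and the exchange of σ along dimension k is emulated by reading the partner's value from a snapshot of the previous round. The function name hypercube_prefix_sum and the array-based simulation are illustrative assumptions; no real message passing is involved.

#include <cstddef>
#include <iostream>
#include <vector>

// Sequential simulation of the hypercube prefix sum: index i of the
// vectors x and sigma corresponds to PE_i; the "message exchange" of
// sigma along dimension k is emulated by reading the partner's value.
std::vector<int> hypercube_prefix_sum(const std::vector<int>& m, unsigned d) {
    const std::size_t p = std::size_t(1) << d;   // p = 2^d PEs
    std::vector<int> x(m), sigma(m);             // invariants as in the pseudocode

    for (unsigned k = 0; k < d; ++k) {
        std::vector<int> sigma_old = sigma;      // snapshot = values "sent" this round
        for (std::size_t i = 0; i < p; ++i) {
            std::size_t partner = i ^ (std::size_t(1) << k);  // PE(i xor 2^k)
            int y = sigma_old[partner];          // total prefix sum of the opposing sub cube
            sigma[i] = sigma_old[i] + y;         // aggregate both sub cubes
            if (i & (std::size_t(1) << k))       // only the higher-indexed sub cube...
                x[i] += y;                       // ...adds y to its local prefix sum
        }
    }
    return x;  // x[i] = prefix sum of the local values of PE_0 .. PE_i
}

int main() {
    // Each PE starts with the prefix sum of its own local elements,
    // here simply one value per PE.
    std::vector<int> local = {3, 1, 4, 1, 5, 9, 2, 6};   // p = 8, d = 3
    for (int v : hypercube_prefix_sum(local, 3))
        std::cout << v << ' ';                           // prints 3 4 8 9 14 23 25 31
    std::cout << '\n';
}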

Large Message Sizes: Pipelined Binary Tree


The Pipelined Binary Tree Algorithm[4] is another algorithm for distributed memory platforms which is specifically well suited for large message sizes.


Like the hypercube algorithm, it assumes a special communication structure. The processing elements (PEs) are hypothetically arranged in a binary tree (e.g. a Fibonacci Tree) with infix numeration according to their index within the PEs. Communication on such a tree always occurs between parent and child nodes.

The infix numeration ensures that for any given PEj, the indices of all nodes reachable by its left subtree are less than j and the indices of all nodes in the right subtree are greater than j. The parent's index is greater than any of the indices in PEj's subtree if PEj is a left child and smaller if PEj is a right child. This allows for the following reasoning:

Information exchange between processing elements during upward (blue) and downward (red) phase in the Pipelined Binary Tree Prefix Sum algorithm.
  • The subtree-local prefix sum of the left subtree has to be aggregated to calculate PEj's own subtree-local prefix sum.
  • The subtree-local prefix sum of the right subtree has to be aggregated to calculate the subtree-local prefix sum of any higher-level PEh that is reached via a path containing a left-child connection (which means h > j).
  • The total prefix sum of PEj is necessary to calculate the total prefix sums in its right subtree (e.g. for the highest-index node in that subtree).
  • PEj needs to include the total prefix sum of the first higher-level node that is reached via an upward path including a right-child connection in order to calculate its own total prefix sum.

Note the distinction between subtree-local and total prefix sums. Points two, three and four might suggest a circular dependency, but this is not the case. Lower-level PEs might require the total prefix sum of higher-level PEs to calculate their total prefix sum, but higher-level PEs only require subtree-local prefix sums to calculate theirs. The root node, as the highest-level node, only requires the local prefix sum of its left subtree to calculate its own total prefix sum. Each PE on the path from PE0 to the root PE only requires the local prefix sum of its left subtree to calculate its own total prefix sum, whereas every node on the path from PEp-1 (the last PE) to PEroot requires the total prefix sum of its parent to calculate its own total prefix sum.

This leads to a two phase algorithm:

Upward phase
Propagate the subtree-local prefix sum of each PEj to its parent.

Downward phase
Propagate to the PEs in the left child subtree of PEj the total prefix sum of all lower-index PEs that are not contained in that subtree (this sum is exclusive, i.e. it includes neither PEj itself nor the PEs in its left subtree). Propagate the inclusive prefix sum to the right child subtree of PEj.

Note that the algorithm is run in parallel at each PE and the PEs will block upon receive until their children/parents provide them with packets.

k := number of packets in a message m of a PE
m @ {left, right, parent, this} := // Messages at the different PEs

x = m @ this

// Upward phase - Calculate subtree local prefix sums
for j=0 to k-1: // Pipelining: For each packet of a message
    if hasLeftChild:
        blocking receive m[j] @ left // This replaces the local m[j] with the received m[j]
        // Aggregate inclusive local prefix sum from lower index PEs
        x[j] = m[j] ⨁ x[j]

    if hasRightChild:
        blocking receive m[j] @ right
        // We do not aggregate m[j] into the local prefix sum, since the right children are higher index PEs
        send x[j] ⨁ m[j] to parent
    else:
        send x[j] to parent

// Downward phase
for j=0 to k-1:
    m[j] @ this = 0

    if hasParent:
        blocking receive m[j] @ parent
        // For a left child m[j] is the parent's exclusive prefix sum, for a right child the inclusive prefix sum
        x[j] = m[j] ⨁ x[j]
    
    send m[j] to left  // The total prefix sum of all PE's smaller than this or any PE in the left subtree
    send x[j] to right // The total prefix sum of all PE's smaller or equal than this PE
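
The following C++ sketch simulates the two phases sequentially on a balanced tree whose nodes are numbered in infix order (the root of an index range is its middle element). Pipelining over k packets and the actual blocking message passing are omitted, and the function names upward and downward are assumptions made only for this illustration of the data flow described above.

#include <iostream>
#include <vector>

// Sequential simulation of the two phases of the binary tree prefix sum.
// The PEs are the indices 0..p-1 of 'm', arranged as a balanced binary
// tree in infix order (root = middle index).

// Upward phase: returns the sum of the subtree [lo, hi] ("sent" to the
// parent) and stores in x[j] the subtree-local inclusive prefix sum of
// the subtree root j (sum of its left subtree plus its own value).
static int upward(const std::vector<int>& m, std::vector<int>& x, int lo, int hi) {
    if (lo > hi) return 0;
    int j = lo + (hi - lo) / 2;                      // infix root of this subtree
    int left_sum  = upward(m, x, lo, j - 1);         // "receive" from left child
    int right_sum = upward(m, x, j + 1, hi);         // "receive" from right child
    x[j] = left_sum + m[j];                          // subtree-local prefix sum of PE_j
    return x[j] + right_sum;                         // subtree total goes to the parent
}

// Downward phase: 'excl' is the total prefix sum of all PEs left of the
// subtree [lo, hi]; left children receive the parent's exclusive sum,
// right children receive the parent's inclusive sum.
static void downward(std::vector<int>& x, int lo, int hi, int excl) {
    if (lo > hi) return;
    int j = lo + (hi - lo) / 2;
    int incl = excl + x[j];                          // total prefix sum of PE_j
    downward(x, lo, j - 1, excl);                    // "send" exclusive sum to the left
    downward(x, j + 1, hi, incl);                    // "send" inclusive sum to the right
    x[j] = incl;
}

int main() {
    std::vector<int> m = {3, 1, 4, 1, 5, 9, 2};      // one local value per PE
    std::vector<int> x(m.size(), 0);
    upward(m, x, 0, (int)m.size() - 1);
    downward(x, 0, (int)m.size() - 1, 0);
    for (int v : x) std::cout << v << ' ';           // prints 3 4 8 9 14 23 25
    std::cout << '\n';
}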

Pipelining

If the message m of length n can be divided into k packets and the operator ⨁ can be applied to each of the corresponding message packets separately, pipelining is possible.[4]

If the algorithm is used without pipelining, only two levels (the sending PEs and the receiving PEs) of the binary tree are at work at any time while all other PEs are waiting. If there are p processing elements and a balanced binary tree is used, the tree has ⌈log2 p⌉ levels; the length of the path from PE0 to PEroot is therefore ⌈log2 p⌉ − 1, which represents the maximum number of non-parallel communication operations during the upward phase; likewise, the communication on the downward path is also limited to ⌈log2 p⌉ − 1 startups. Assuming a communication startup time of T_start and a bytewise transmission time of T_byte, the upward and downward phases together are limited to (2⌈log2 p⌉ − 2)(T_start + n · T_byte) in a non-pipelined scenario.

Upon division into k packets, each of size n/k, and sending them separately, the first packet still needs ⌈log2 p⌉ − 1 communication operations to be propagated to PEroot as part of a local prefix sum, and this will occur again for the last packet if the number of packets k exceeds that path length. However, in between, all the PEs along the path can work in parallel and each third communication operation (receive left, receive right, send to parent) sends a packet to the next level, so that one phase can be completed in approximately ⌈log2 p⌉ − 1 + 3(k − 1) communication operations and both phases together need about 2(⌈log2 p⌉ − 1 + 3(k − 1)) startups of cost T_start + (n/k) · T_byte each, which is favourable for large message sizes n.
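
As a rough sketch of why pipelining pays off, one can minimise the per-phase cost over the packet count k under a simplified cost model: the number of pipeline steps per phase is taken as proportional to L + k with L ≈ log2 p, ignoring the constant factor on k and small additive terms, so the derivation below (in LaTeX) is only an asymptotic estimate, not a bound from the cited paper.

T(k) \approx (L + k)\left(T_\mathrm{start} + \frac{n}{k}\,T_\mathrm{byte}\right)
     = L\,T_\mathrm{start} + k\,T_\mathrm{start} + \frac{L\,n}{k}\,T_\mathrm{byte} + n\,T_\mathrm{byte},
\qquad L \approx \log_2 p .

\frac{\mathrm{d}T}{\mathrm{d}k} = T_\mathrm{start} - \frac{L\,n\,T_\mathrm{byte}}{k^2} = 0
\;\Rightarrow\;
k_\mathrm{opt} \approx \sqrt{\frac{L\,n\,T_\mathrm{byte}}{T_\mathrm{start}}},
\qquad
T(k_\mathrm{opt}) \approx L\,T_\mathrm{start} + n\,T_\mathrm{byte} + 2\sqrt{L\,n\,T_\mathrm{start}\,T_\mathrm{byte}} .

Compared with the non-pipelined bound above, the bandwidth term n · T_byte no longer carries the logarithmic factor, which is exactly the advantage for large message sizes n.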

The algorithm can further be optimised by making use of full-duplex or telephone model communication and overlapping the upward and the downward phase.[4]

References

  1. Singler, Johannes. "MCSTL: The Multi-Core Standard Template Library". Retrieved 2019-03-29.
  2. Singler, Johannes; Sanders, Peter; Putze, Felix (2007). "MCSTL: The Multi-core Standard Template Library". Lecture Notes in Computer Science. 4641: 682–694. doi:10.1007/978-3-540-74466-5_72. ISSN 0302-9743.
  3. Grama, Ananth; Kumar, Vipin; Gupta, Anshul (2003). Introduction to Parallel Computing. Addison-Wesley. pp. 85–86. ISBN 978-0-201-64865-2.
  4. Sanders, Peter; Träff, Jesper Larsson (2006). "Parallel Prefix (Scan) Algorithms for MPI". Lecture Notes in Computer Science. 4192: 49–57. doi:10.1007/11846802_15. ISSN 0302-9743.