PDR.PODM Distributed Memory

Issues

  • No reuse of leaves refined by worker nodes. The picture below shows the issue: two neighbouring leaves (0, 1) were each refined as the main leaf (0 top, 1 bottom), but neither was refined as a neighbour.

PDR PODM Leaves not refined.png

  • The current algorithm uses neighbour traversal to distribute cells to the octree, leaving some cells out in certain cases. This can happen when a cell belongs to an octree leaf based on its circumcenter but has no neighbour in the same leaf (see the first sketch after this list).

PDR PODM Cells not distributed.png

  • During unpacking, the incident cell for each vertex is not set correctly. Specifically, when the original incident cell is not part of the working unit (leaf + level-1 neighbours) and therefore is not local, the vertex's incident cell is set to the infinite cell. This causes PODM to crash randomly for some cases (see the second sketch after this list).

  • Another issue comes from the way global IDs are updated for each cell's neighbours. The code that updates a cell's connectivity using global IDs takes the neighbour's pointer, retrieves its global ID, and writes it into the neighborID field. However, when the neighbour is part of another work unit's leaf and is not local, this pointer is NULL. In that case the neighborID field is wrongly reset to the infinite cell ID, which permanently deletes the connectivity information (see the third sketch after this list).

  • The function that unpacks the required leaves before refinement does not discard duplicate vertices. Duplicates are always present, since each leaf is packed and sent individually and neighbouring leaves therefore include the shared vertices. Because duplicate vertices are not handled, multiple vertex objects are created for what is geometrically the same point. Two cells that share a common vertex can then hold pointers to two different vertex objects, so each cell sees a different state for the same vertex (see the last sketch after this list).
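
The sketches below illustrate the issues above. They are minimal sketches only: all type and function names (Point, Cell, Octree, distributeCells, and so on) are hypothetical stand-ins and do not correspond to the actual PODM/PDR code. The first sketch assumes each cell's circumcenter is available and assigns every cell directly to the octree leaf containing that circumcenter, so a cell with no neighbour in its leaf cannot be left out the way a neighbour-traversal pass leaves it out.

 #include <vector>

 struct Point { double x, y, z; };

 struct Cell {
     Point circumcenter;   // assumed precomputed for every cell
 };

 struct OctreeLeaf {
     std::vector<Cell*> cells;
     Point lo, hi;         // axis-aligned bounding box of the leaf
     bool contains(const Point& p) const {
         return p.x >= lo.x && p.x < hi.x &&
                p.y >= lo.y && p.y < hi.y &&
                p.z >= lo.z && p.z < hi.z;
     }
 };

 struct Octree {
     std::vector<OctreeLeaf> leaves;
     // Locate the leaf whose box contains the point (linear scan for brevity;
     // the real structure would descend the tree).
     OctreeLeaf* locate(const Point& p) {
         for (auto& leaf : leaves)
             if (leaf.contains(p)) return &leaf;
         return nullptr;
     }
 };

 // Assign every cell to the leaf that contains its circumcenter instead of
 // relying on a neighbour traversal, so no cell can be skipped.
 void distributeCells(const std::vector<Cell*>& cells, Octree& tree) {
     for (Cell* c : cells) {
         if (OctreeLeaf* leaf = tree.locate(c->circumcenter))
             leaf->cells.push_back(c);
     }
 }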
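
The second sketch addresses the incident-cell issue: after unpacking a work unit (leaf plus level-1 neighbours), every vertex whose original incident cell was not shipped over is re-pointed at a cell that is local, instead of being left pointing at the infinite cell. The types and the fixIncidentCells name are again hypothetical.

 #include <vector>

 struct Cell;

 struct Vertex {
     Cell* incidentCell = nullptr;   // one cell incident to this vertex
 };

 struct Cell {
     Vertex* vertices[4];            // tetrahedron corners
     bool isLocal = false;           // true if the cell is part of the work unit
 };

 // After unpacking, make sure each vertex references a *local* incident cell.
 // If the original incident cell was not part of the work unit (and would
 // otherwise be replaced by the infinite cell), any local cell that touches
 // the vertex is a valid substitute.
 void fixIncidentCells(const std::vector<Cell*>& localCells) {
     for (Cell* c : localCells) {
         for (Vertex* v : c->vertices) {
             if (v->incidentCell == nullptr || !v->incidentCell->isLocal)
                 v->incidentCell = c;
         }
     }
 }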
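
The third sketch guards the global-ID update against NULL neighbour pointers: when the neighbour lives in another work unit's leaf and is not local, the previously stored neighborID is kept instead of being overwritten with the infinite-cell ID, so the connectivity information survives. The Cell layout and the kInfiniteCellId sentinel are assumptions made for the sake of the example.

 #include <cstdint>

 using GlobalId = std::uint64_t;

 struct Cell {
     static constexpr GlobalId kInfiniteCellId = 0;     // assumed sentinel value
     Cell*    neighbor[4]   = {nullptr, nullptr, nullptr, nullptr};
     GlobalId neighborID[4] = {kInfiniteCellId, kInfiniteCellId,
                               kInfiniteCellId, kInfiniteCellId};
     GlobalId globalID      = kInfiniteCellId;
 };

 // Refresh neighborID from the neighbour pointers, but only where the pointer
 // is valid. A NULL pointer means the neighbour belongs to another work unit's
 // leaf; overwriting its stored ID would destroy the connectivity permanently,
 // so the packed neighborID is left untouched in that case.
 void updateNeighborIds(Cell& c) {
     for (int f = 0; f < 4; ++f) {
         if (c.neighbor[f] != nullptr)
             c.neighborID[f] = c.neighbor[f]->globalID;
     }
 }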
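
The last sketch deduplicates vertices while unpacking: because each leaf is packed and sent individually, shared vertices arrive more than once, and creating a fresh object every time leaves cells pointing at different objects for the same geometric point. Keying the unpacked vertices on a globally unique vertex ID (assumed to exist here) guarantees a single Vertex object per point; PackedVertex and getOrCreateVertex are hypothetical names.

 #include <cstdint>
 #include <unordered_map>

 using GlobalId = std::uint64_t;

 // Packed form of a vertex as received with a leaf (assumed layout).
 struct PackedVertex {
     GlobalId id;          // globally unique vertex ID
     double   x, y, z;
 };

 struct Vertex {
     GlobalId id;
     double   x, y, z;
 };

 // Return the existing Vertex for this global ID, or create it on first sight.
 // All cells that reference the same point then share one Vertex object and
 // therefore see the same state for it.
 Vertex* getOrCreateVertex(const PackedVertex& pv,
                           std::unordered_map<GlobalId, Vertex*>& vertexTable) {
     auto it = vertexTable.find(pv.id);
     if (it != vertexTable.end())
         return it->second;                     // reuse the shared object
     Vertex* v = new Vertex{pv.id, pv.x, pv.y, pv.z};
     vertexTable.emplace(pv.id, v);
     return v;
 }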

Fixes

PDR Fix.png

Work Unit After Refinement.png


Interesting Findings

15 MPI ranks, depth: 3


PDR PODM Histogram Time 15 d3.png PDR PODM Histogram Tasks 15 d3.png PDR PODM Time Break Down 15 d3.png

PDR PODM Parallelism.png PDR PODM Histogram 15.png

15 MPI ranks, depth: 4


PDR PODM Histogram Time 15.png PDR PODM Histogram Tasks 15.png PDR PODM Time Break Down 15.png

40 MPI ranks / 40 cores, depth: 4

Total Time: 824.29

Total Tasks: 11413

PDR PODM Histogram Time 40.png PDR PODM Histrogram Tasks 40.png PDR PODM Time Break Down 40.png

160 MPI ranks / 10 cores, depth: 4

Total Time: 378.32

Total Tasks: 12652

PDR PODM Histogram Time 160 10.png PDR PODM Histrogram Tasks 160 10.png PDR PODM Time Break Down 160 10.png

After parallelizing the int-to-pointer conversion

15 MPI ranks, depth: 3


PDR PODM Histogram Time 15 d3 par int2ptr.png PDR PODM Histogram Tasks 15 d3 par int2ptr.png


Sequential: PDR PODM Time Break Down 15 d3.png
Parallel: PDR PODM Time Break Down 15 d3 par int2ptr.png

15 MPI ranks, depth: 4

Total time: 569.7

Total tasks: 8761

PDR PODM Histogram Time 15 par int2ptr.png PDR PODM Histogram Tasks 15 par int2ptr.png


Sequential: PDR PODM Time Break Down 15.png
Parallel: PDR PODM Time Break Down 15 par int2ptr.png

After parallelizing the int-to-pointer conversion and leaf distribution

15 MPI ranks, depth: 4

Total time: 452.3

Total tasks: 8813

PDR PODM Histogram Time 15 par int2ptr leaf dist bad elements.png PDR PODM Histogram Tasks 15 par int2ptr leaf dist bad elements.png


Sequential: PDR PODM Time Break Down 15.png
Parallel (int2ptr): PDR PODM Time Break Down 15 par int2ptr.png
Parallel (leafDist, BadEl): PDR PODM Time Break Down 15 par int2ptr leaf dist bad elements.png