PDR.AFLR

Introduction

Overview

The goal of this project is the development of a parallel mesh generator using CRTC’s PDR theory, which mathematically guarantees the following mesh generation requirements:

  1. Stability: the quality of the mesh generated in parallel must be comparable to that of a mesh generated sequentially. Quality is defined in terms of the shape of the elements (using a chosen space-dependent metric) and the number of elements (fewer is better for the same shape constraint).
  2. Robustness: the ability of the software to correctly and efficiently process any input data. Operator intervention in a massively parallel computation is not only highly expensive, but most likely infeasible due to the large number of concurrently processed sub-problems.
  3. Code re-use: a modular design of the parallel software that builds upon a previously designed sequential meshing code, such that the sequential code can be replaced and/or updated with minimal effort. Due to the complexity of meshing codes, this is the only practical approach for keeping up with the ever-evolving sequential algorithms.
  4. Scalability: measured by the speedup, i.e., the ratio of the time taken by the best sequential implementation to the time taken by the parallel implementation. The speedup is always limited by the inverse of the sequential fraction of the software (see the bound sketched after this list), and therefore all non-trivial stages of the computation must be parallelized to leverage current architectures with millions of cores.
  5. Reproducibility (weak and strong): TetGen meets none of these; for details see [LINK]. In shared memory reproducibility is not a problem, but in distributed memory it is, which is why results are presented on a limited set of geometries.
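
The limit referenced in item 4 is Amdahl's law. As a sketch of the standard bound (not specific to this project), with f denoting the sequential fraction of the work and P the number of cores:

 S(P) \;=\; \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(P)} \;\le\; \frac{1}{f + (1-f)/P} \;\le\; \frac{1}{f}

For example, a code that is 95% parallel (f = 0.05) can never exceed a speedup of 20 no matter how many cores are available, which is why all non-trivial stages must be parallelized.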

The design and implementation of a sequential industrial-strength code is labor intensive: it takes about 100 man-years. A parallel mesh generation code is even more labor intensive, by an order of magnitude for traditional parallel machines, and likely more for current and emerging architectures due to multiple memory and network hierarchies, fault tolerance, and power-awareness issues. However, because the underlying theory allows code re-use, this initial parallel implementation was completed in less than six months with the same functionality as the sequential TetGen code.

The general idea of Delaunay refinement is to insert additional (Steiner) points inside the circumdisks of poor-quality elements, which destroys those elements, until they are gradually replaced by better-quality elements. It has been proven that this algorithm terminates and produces a mesh with guaranteed bounds on the radius-edge ratio and on the density of elements.

The main concern when parallelizing Delaunay refinement algorithms is the compatibility (i.e., data dependence) between Steiner points concurrently inserted by multiple threads or processes. Two points are Delaunay-independent if they can be safely inserted concurrently. PDR is based on overlapping the mesh with an octree, defining buffer zones around each leaf of the octree, and proving that points inserted outside the buffer zone of a leaf are always Delaunay-independent of any points inserted inside that leaf.
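
As a rough illustration of the sequential refinement step described above, the following minimal 2D sketch shows the radius-edge quality test that triggers insertion of a Steiner point at an element's circumcenter. It is not taken from the PDR or TetGen sources; all names are hypothetical, and the √2 threshold is the classical 2D bound:

 #include <cmath>
 #include <cstdio>
 
 struct Point { double x, y; };
 
 static double dist(Point p, Point q) {
     return std::hypot(p.x - q.x, p.y - q.y);
 }
 
 // Circumcenter of triangle (a, b, c) via the standard determinant formula,
 // computed relative to vertex a.
 static Point circumcenter(Point a, Point b, Point c) {
     double bx = b.x - a.x, by = b.y - a.y;
     double cx = c.x - a.x, cy = c.y - a.y;
     double d  = 2.0 * (bx * cy - by * cx);  // nonzero for non-degenerate triangles
     double ux = (cy * (bx * bx + by * by) - by * (cx * cx + cy * cy)) / d;
     double uy = (bx * (cx * cx + cy * cy) - cx * (bx * bx + by * by)) / d;
     return { a.x + ux, a.y + uy };
 }
 
 // An element is poor quality when circumradius / shortest edge exceeds the
 // threshold; refinement then targets a Steiner point at the circumcenter.
 static bool needsSteinerPoint(Point a, Point b, Point c, double maxRatio) {
     Point cc = circumcenter(a, b, c);
     double r = dist(cc, a);                                    // circumradius
     double e = std::fmin(dist(a, b), std::fmin(dist(b, c), dist(c, a)));
     return r / e > maxRatio;
 }
 
 int main() {
     Point a{0.0, 0.0}, b{1.0, 0.0}, c{0.5, 0.05}; // thin, poor-quality triangle
     const double kMaxRatio = std::sqrt(2.0);      // classical 2D radius-edge bound
     if (needsSteinerPoint(a, b, c, kMaxRatio)) {
         Point cc = circumcenter(a, b, c);
         std::printf("insert Steiner point at (%g, %g)\n", cc.x, cc.y);
     }
     return 0;
 }

In an actual refiner, the circumcenter would then be inserted into the Delaunay triangulation, destroying every element whose circumdisk contains the new point.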

There are two approaches to PDR, progressive and non-progressive, both of which rely on an octree (data) decomposition in order to mathematically guarantee element quality and termination of the PDR algorithm for uniform, isotropic Delaunay-based methods.
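
To make the buffer-zone idea concrete, here is a small hypothetical sketch (shown in 2D with a quadtree-style leaf; the actual method uses an octree in 3D): a Steiner point is treated as safe for concurrent insertion when it lies deeper than the buffer width from its leaf's boundary. The buffer width below is an illustrative multiple of an assumed circumradius upper bound; the real constant is dictated by the PDR theory:

 #include <algorithm>
 #include <cstdio>
 
 struct Box { double xmin, ymin, xmax, ymax; };   // a quadtree/octree leaf
 struct Pt  { double x, y; };
 
 // Distance from a point inside the leaf to the nearest leaf boundary.
 static double distanceToBoundary(Pt p, Box leaf) {
     double dx = std::min(p.x - leaf.xmin, leaf.xmax - p.x);
     double dy = std::min(p.y - leaf.ymin, leaf.ymax - p.y);
     return std::min(dx, dy);
 }
 
 // Hypothetical reading of the buffer-zone criterion: a Steiner point deeper
 // than bufferWidth from the leaf boundary is Delaunay-independent of points
 // inserted concurrently in other leaves, so no synchronization is needed.
 static bool safeToInsertConcurrently(Pt p, Box leaf, double bufferWidth) {
     return distanceToBoundary(p, leaf) > bufferWidth;
 }
 
 int main() {
     Box leaf{0.0, 0.0, 1.0, 1.0};
     double rBar = 0.05;               // assumed upper bound on circumradii
     double bufferWidth = 4.0 * rBar;  // illustrative multiple of rBar
     Pt deep{0.5, 0.5}, nearEdge{0.95, 0.5};
     std::printf("deep:     %s\n",
                 safeToInsertConcurrently(deep, leaf, bufferWidth) ? "independent" : "needs sync");
     std::printf("nearEdge: %s\n",
                 safeToInsertConcurrently(nearEdge, leaf, bufferWidth) ? "independent" : "needs sync");
     return 0;
 }

Points that fail the test fall inside the buffer zone and require coordination with neighboring leaves before they can be inserted.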

PDR is used with TetGen 1.4, which can be found here.

Summary

Reproducibility

Stability

Scalability