- Albany at RPI
Albany serves as a demonstration application for new libraries, advanced solution and discretization capabilities, and new software development methods aimed at challenging applications in computational mechanics.
Two major steps were taken to integrate Albany with SCOREC's MeshAdapt: (1) Integration of SCOREC's GMI/FMDB with Albany: the development to transform geometric model and mesh data loaded in GMI and FMDB into Albany-compatible data structures (based on Exodus) is complete and has been demonstrated in both serial and parallel regression tests. (2) Integration of SCOREC's MeshAdapt: this development is well underway. A template interface to the MeshAdapt capability in Albany is complete and being tested; thus far, two traits classes have been developed to demonstrate isotropic adaptation designed to maintain high-quality finite elements during large-deformation and material-necking events....
Attached Parallel Field (APF) supports the storage and manipulation of fields attached to a mesh.
Anonymous (read only) access: svn co http://redmine.scorec.rpi.edu/anonsvn/mathfield/trunk
The Computational Science and Engineering (CSE) Seminar Series provides an informal platform for exchange and collaboration among computational science researchers. The speakers are primarily graduate students and post-doctoral researchers working in this broad area. Occasionally, guest speakers from outside RPI will be invited. ...
FASTMath Software Strategies
The Inter-Processor Communication Manager (IPComMan) is designed to improve the scalability of data exchange by exploiting the local communication neighborhood of each processor. The basic idea of the package is to keep message passing within subdomains and to eliminate, or reduce, the number of collective calls needed. The communication tool manages the message flow with a subset of MPI functions and takes advantage of non-blocking functions on both the sender's and the receiver's side. IPComMan automatically manages the completion and delivery of posted send and receive requests while overlapping communication with computation. The package provides several useful features: i) automatic message packing, ii) management of sends and receives with non-blocking MPI functions, iii) communication pattern reusability, iv) asynchronous behavior unless otherwise specified, and v) support for dynamically changing neighborhoods during communication steps. ...
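The message-packing idea above can be sketched without MPI: many small messages destined for the same neighbor are coalesced into one per-neighbor buffer, so only one actual send per neighbor would be needed per communication round. The class and method names below are illustrative, not IPComMan's actual API; a real implementation would post one MPI_Isend per packed buffer.

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Illustrative sketch of per-neighbor message packing (not IPComMan's API):
// small messages to the same neighbor are coalesced into one buffer.
class NeighborPacker {
public:
  // Append one small message to the buffer for the given neighbor rank.
  void pack(int neighbor, const void* data, std::size_t bytes) {
    std::vector<char>& buf = buffers_[neighbor];
    const char* p = static_cast<const char*>(data);
    buf.insert(buf.end(), p, p + bytes);
  }
  // In a real implementation this would post one non-blocking send per
  // neighbor; here it just hands back the packed buffers for inspection.
  const std::map<int, std::vector<char>>& flush() const { return buffers_; }
private:
  std::map<int, std::vector<char>> buffers_; // one send buffer per neighbor
};
```

For example, packing two ints for neighbor 1 and one for neighbor 3 yields exactly two buffers on flush, regardless of how many `pack` calls were made.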
M3DC1-Parallel Adaptive Fusion simulations with SCOREC tools including PUMI, APF and MeshAdapt
Provide services to support mesh topology modification.
1. Adaptive mesh modification to match a given anisotropic mesh size field
2. Mesh curving that can take a straight-sided mesh and curve the mesh entities on the boundary to better match the domain geometry...
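As a sketch of what a "mesh size field" in item 1 means, the function below defines an isotropic target edge length that refines near a feature of interest and grades to a coarser size away from it; an adaptive driver evaluates such a field at mesh vertices to decide where to refine or coarsen. The function, names, and constants are illustrative assumptions, not MeshAdapt's actual interface (and a real size field typically comes from error estimation, not an analytic formula).

```cpp
#include <algorithm>
#include <cmath>

// Illustrative isotropic size field (not MeshAdapt's actual interface):
// returns the desired local edge length at point (x, y, z).
// Edges much longer than this value are refined; much shorter ones
// are candidates for coarsening.
double desiredEdgeLength(double x, double y, double z) {
  // Distance from an assumed feature of interest at the origin.
  double r = std::sqrt(x * x + y * y + z * z);
  double hMin = 0.01; // finest edge length, at the feature
  double hMax = 0.5;  // coarsest edge length, far away
  // Grade linearly from hMin to hMax over a unit distance.
  return std::min(hMax, hMin + (hMax - hMin) * r);
}
```

An anisotropic size field generalizes this by returning a metric tensor (desired lengths per direction) rather than a single scalar.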
Parallel unstructured simulations at extreme scale require that the mesh be distributed across a large number of processors with equal workload and minimal inter-part communication. ParMA's goal is to dynamically partition unstructured meshes directly, using the existing mesh adjacency information to account for multiple criteria. Diffusive partition improvement procedures support large meshes (billions of mesh regions) on machines with large core counts (>100,000 cores) and account for...
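The diffusive idea can be illustrated with a toy model (an assumption for illustration, not ParMA's actual algorithm): each part compares its element count only with its neighbors and migrates a fraction of any surplus toward lighter neighbors, so imbalance diffuses away using purely local information and no collective operations. Here the parts form a ring and loads are plain numbers.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy diffusive load balancing on a ring of parts (not ParMA's algorithm):
// each part sends a quarter of its surplus, relative to each lighter
// neighbor, toward that neighbor. Total load is conserved; repeated
// application flattens the imbalance.
std::vector<double> diffuseOnce(const std::vector<double>& load) {
  std::size_t n = load.size();
  std::vector<double> next(load);
  for (std::size_t i = 0; i < n; ++i) {
    std::size_t left = (i + n - 1) % n;
    std::size_t right = (i + 1) % n;
    double toLeft = 0.25 * std::max(0.0, load[i] - load[left]);
    double toRight = 0.25 * std::max(0.0, load[i] - load[right]);
    next[i] -= toLeft + toRight;
    next[left] += toLeft;
    next[right] += toRight;
  }
  return next;
}
```

Starting from loads {8, 0, 0, 0}, one step yields {4, 2, 0, 2}: the surplus spreads to the two ring neighbors while the total stays 8. ParMA's real procedures additionally weigh mesh adjacencies and multiple entity-type criteria when choosing what to migrate.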
The Parallel Control Utility (PCU) is a library that provides for threads the same basic services that MPI provides for processes: message passing between threads, querying a thread's rank and hardware location, and so on.
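A minimal sketch of rank-addressed message passing between threads, in the spirit described above (the class and names are illustrative assumptions, not PCU's actual API): each thread is identified by a rank and owns a mailbox; `send` targets a rank, and `recv` blocks until a message arrives.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Illustrative thread message passing (not PCU's actual API):
// each thread rank owns a mailbox; send() deposits a message into the
// target rank's mailbox, recv() blocks until one is available.
class ThreadComm {
public:
  explicit ThreadComm(int nthreads) : boxes_(nthreads) {}
  void send(int to, int msg) {
    Box& b = boxes_[to];
    std::lock_guard<std::mutex> lk(b.m);
    b.q.push(msg);
    b.cv.notify_one();
  }
  int recv(int self) {
    Box& b = boxes_[self];
    std::unique_lock<std::mutex> lk(b.m);
    b.cv.wait(lk, [&] { return !b.q.empty(); }); // handles spurious wakeups
    int msg = b.q.front();
    b.q.pop();
    return msg;
  }
private:
  struct Box {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> q;
  };
  std::vector<Box> boxes_; // never resized: mutexes are not movable
};
```

PCU's actual design is phased (all threads send, then all receive), which this simple blocking mailbox does not capture; the sketch only shows the rank-and-mailbox addressing model.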
New users should go to the Files tab above and download the latest tarball....
- PUMI: PUMI is a parallel unstructured mesh management infrastructure supporting a full range of operations on unstructured meshes on massively parallel computers, consisting of the following libraries:
- PCU: Communication, threading, and file IO built on MPI ...
- PUMI_MESH (FMDB)
PUMI is a parallel unstructured mesh management infrastructure supporting a full range of operations on unstructured meshes on massively parallel computers, consisting of six libraries: pumi_util for common features, pumi_comm for phased message passing and thread management, pumi_geom for the geometric model interface, pumi_mesh for distributed mesh management, pumi_partition for partition improvement, and pumi_field for field management. PUMI provides a core capability used in all the automated adaptive simulation software developed at RPI's Scientific Computation Research Center, which is currently being used on projects sponsored by the DOE, NSF, Army, NASA, and several companies. ...
Parallel Adaptive Loop Construction for SLAC's ACE3P Analysis Suite