Projects

  • Albany at RPI

    Albany serves as a demonstration application for new libraries, advanced solution and discretization capabilities, and new software development methods, focused on challenging applications in computational mechanics.

    In integrating Albany with SCOREC's MeshAdapt, two major steps were taken:
    1. Integration of SCOREC's GMI/FMDB with Albany: the work to transform geometric model/mesh data loaded in GMI and FMDB into Albany-compatible data structures, which are based on Exodus, is complete and has been demonstrated in both serial and parallel regression tests.
    2. Integration of SCOREC's MeshAdapt: this development is well underway. A template interface to the MeshAdapt capability in Albany is complete and being tested; thus far, two traits classes have been developed to demonstrate isotropic adaptation designed to maintain high-quality finite elements during large-deformation and material-necking events. ...

  • APF

    The Attached Parallel Field (APF) library supports storage and manipulation of fields.

    Anonymous (read only) access: svn co http://redmine.scorec.rpi.edu/anonsvn/mathfield/trunk

  • Biotissue

    Multiscale model of soft tissue.

    Anonymous (read only) access: svn co http://redmine.scorec.rpi.edu/anonsvn/biotissue

    Current working version: /branches/biotissue_amsi

    Please see wiki: https://www.scorec.rpi.edu/wiki/Biotissue_Project

  • BuildUtil

    BuildUtil is a Makefile framework that handles multi-platform builds of co-dependent software modules. It is very SCOREC-specific, though it can be made to work elsewhere.

    Anonymous (read only) access:
    svn co http://redmine.scorec.rpi.edu/anonsvn/buildutil...

  • CSESeminars

    The Computational Science and Engineering (CSE) Seminar Series provides an informal platform for exchange and collaboration among computational science researchers. The speakers are intended to be graduate students and post-doctoral researchers who work in this broad area. Occasionally, guest speakers from outside RPI will be invited. ...

  • FASTMath

    FASTMath Software Strategies

  • IPComMan

    The Inter-Processor Communication Manager (IPComMan) is designed to improve the scalability of data exchange by exploiting each processor's local communication neighborhood. The basic idea of the package is to keep message passing within subdomains and to eliminate, or reduce, the number of collective calls needed. The communication tool manages the message flow with a subset of MPI functions and takes advantage of non-blocking functions on both the sender's and receiver's sides. IPComMan automatically manages the completion and delivery of posted send and receive requests while overlapping communication with computation. The package provides several useful features: i) automatic message packing, ii) management of sends and receives with non-blocking MPI functions, iii) communication pattern reusability, iv) asynchronous behavior unless otherwise specified, and v) support for dynamically changing neighborhoods during communication steps. ...
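
    IPComMan's own API is not reproduced here, but the core idea — packing messages per neighbor and exchanging only within a local neighborhood, with no collective step — can be sketched as a Python toy model (all names below are illustrative, not IPComMan's):

```python
from collections import defaultdict

class NeighborhoodExchange:
    """Toy model of neighborhood-restricted communication: messages are
    packed per neighbor and delivered without any collective step."""

    def __init__(self, neighbors):
        # neighbors[rank] -> set of ranks this rank talks to
        self.neighbors = neighbors
        self.outboxes = defaultdict(list)   # (src, dst) -> packed messages

    def send(self, src, dst, payload):
        # Automatic packing: messages to the same neighbor share one buffer.
        if dst not in self.neighbors[src]:
            raise ValueError(f"rank {dst} is not a neighbor of rank {src}")
        self.outboxes[(src, dst)].append(payload)

    def exchange(self):
        # Deliver each packed buffer; only neighbor pairs communicate, so
        # the cost scales with neighborhood size, not total process count.
        inboxes = defaultdict(list)
        for (src, dst), packed in self.outboxes.items():
            inboxes[dst].append((src, packed))
        self.outboxes.clear()
        return dict(inboxes)

# Four ranks in a ring: each rank only knows its two neighbors.
nbrs = {r: {(r - 1) % 4, (r + 1) % 4} for r in range(4)}
ex = NeighborhoodExchange(nbrs)
ex.send(0, 1, "a")
ex.send(0, 1, "b")   # packed together with "a" into one buffer for rank 1
ex.send(2, 3, "c")
delivered = ex.exchange()
print(delivered[1])  # [(0, ['a', 'b'])]
```

    In the real library the per-neighbor buffers map onto non-blocking MPI sends and receives, which is what allows communication to overlap with computation.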

  • ITAPS

    Interoperable Technologies for Advanced Petascale Simulations

    The new SVN repository is at https://redmine.scorec.rpi.edu/svn/itaps

  • lammps-cuda

    A version of LAMMPS that uses NVIDIA's CUDA toolkit to run FFTs on GPUs.

    Anonymous SVN is available from
    http://redmine.scorec.rpi.edu/anonsvn/lammps-cuda

  • M3D-C1

    M3D-C1: parallel adaptive fusion simulations with SCOREC tools, including PUMI, APF, and MeshAdapt.

  • MeshAdapt

    MeshAdapt provides services to support mesh topology modification:
    1. Adaptive mesh modification to match a given anisotropic mesh size field
    2. Mesh curving that can take a straight-sided mesh and curve the mesh entities on the boundary to better match the domain geometry...
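
    Capability 1 — driving mesh modification from a given size field — can be illustrated with a one-dimensional toy (the function and size field below are hypothetical, not MeshAdapt's API):

```python
def adapt_1d(x0, x1, size_field, max_pts=10000):
    """Generate 1D mesh vertices matching a size field h(x): each new
    vertex is placed one local edge length past the previous one."""
    pts = [x0]
    while pts[-1] < x1 and len(pts) < max_pts:
        pts.append(pts[-1] + size_field(pts[-1]))
    pts[-1] = x1  # snap the final vertex onto the domain boundary
    return pts

# Refine near x = 0, coarsen toward x = 1.
mesh = adapt_1d(0.0, 1.0, lambda x: 0.01 + 0.1 * x)
```

    The real capability operates on 2D/3D (possibly anisotropic) size fields via local topology modifications, but the goal is the same: edge lengths that follow the requested size field.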

  • ParMA

    Parallel unstructured simulations at extreme scale require that the mesh be distributed across a large number of processors with equal workload and minimum inter-part communications. ParMA's goal is to dynamically partition unstructured meshes directly using the existing mesh adjacency information to account for multiple criteria. Diffusive partition improvement procedures support large meshes (billions of mesh regions) on large core count machines (>100,000) and account for...
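
    The diffusive idea can be sketched abstractly: each part trades load only with its neighbors until the imbalance decays. This is a serial toy model of generic diffusive balancing, not ParMA's implementation:

```python
def diffuse(load, neighbors, steps=100, alpha=0.5):
    """Diffusive load balancing sketch: each part shifts a fraction of
    its surplus to lighter neighbors, using only local (neighbor)
    information -- no global collective is needed."""
    load = list(load)
    for _ in range(steps):
        new = list(load)
        for p, nbrs in neighbors.items():
            for q in nbrs:
                if load[p] > load[q]:
                    shift = alpha * (load[p] - load[q]) / (len(nbrs) + 1)
                    new[p] -= shift
                    new[q] += shift
        load = new
    return load

# Four parts in a ring, with part 0 heavily overloaded.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
balanced = diffuse([100, 10, 10, 10], ring)
```

    Because every transfer is between mesh-adjacent parts, the same pattern scales to very large part counts; ParMA additionally weighs multiple entity types (vertices, edges, faces, regions) when deciding what to migrate.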

  • PCU

    The Parallel Control Utility (PCU) is a library designed to handle for threads the same basic tasks that MPI handles for processes: message passing between threads, querying thread rank and hardware location, and so on.
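
    PCU's actual C interface is not shown here; a minimal Python analogue of the idea — per-thread ranks plus message passing between threads — might look like:

```python
import queue
import threading

# One inbox per thread plays the role of a per-rank message channel.
NTHREADS = 4
inbox = [queue.Queue() for _ in range(NTHREADS)]
results = [None] * NTHREADS

def worker(rank):
    # Each thread knows its rank, sends a message to the next rank,
    # then receives one from the previous rank (a ring exchange).
    inbox[(rank + 1) % NTHREADS].put(f"hello from {rank}")
    results[rank] = inbox[rank].get()

threads = [threading.Thread(target=worker, args=(r,)) for r in range(NTHREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])  # hello from 3
```
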

    New users should go to the Files tab above and download the latest tarball....

  • PHASTA

    Computational fluid dynamics with a stabilized finite element method. The project includes not only the solver but also some pre- and post-processing capability.

    Anonymous (read only) checkout:
    svn co http://redmine.scorec.rpi.edu/anonsvn/phasta

  • PUMI
    PUMI is a parallel unstructured mesh management infrastructure that supports a full range of operations on unstructured meshes on massively parallel computers. It consists of the following libraries:
    • PCU: Communication, threading, and file IO built on MPI ...
  • PUMI_GEOM (GMI)

    Geometric Modeling Interface (GMI)

    Anonymous (read only) access:
    svn co http://redmine.scorec.rpi.edu/anonsvn/gmi

  • PUMI_MESH (FMDB)

    PUMI is a parallel unstructured mesh management infrastructure that supports a full range of operations on unstructured meshes on massively parallel computers. It consists of six libraries: pumi_util for common features, pumi_comm for phased message passing and thread management, pumi_geom for the geometric model interface, pumi_mesh for distributed mesh management, pumi_partition for partition improvement, and pumi_field for field management. PUMI provides a core capability used in all the automated adaptive simulation software developed at RPI's Scientific Computation Research Center, which is currently being used on projects sponsored by the DOE, NSF, Army, NASA, and several companies. ...

  • SLAC

    Parallel Adaptive Loop Construction for SLAC's ACE3P Analysis Suite
