What might an ITAPS release installation look like
An ITAPS release involves many different pieces of software. Nonetheless, the pieces divide roughly into two classes: interface software and service software.
Interface software involves the implementation of one or more of the ITAPS interfaces such as iBase, iMesh/iMeshP, iGeom, etc. However, there are multiple different implementations of these interfaces. By default, an ITAPS release will include all of the implementations. In addition, we should make it possible for users to select, at their option, which implementations they want installed.
Service software is built on top of interface software. Naturally, the question arises of which interface implementation a given service is built on. If application developers are willing to employ shared libraries for ITAPS-related software, and if all interface implementations are Application Binary Interface (ABI) compatible, this question can be answered at application run time by setting the LD_LIBRARY_PATH variable (or its platform-specific equivalent) to point to the desired ITAPS software paths. However, if application developers are not willing to use shared libraries (in my experience many are not), or if interface implementations are not 100% ABI compatible (see note 2), then the choice of which implementation a service is built on must be determined at build and installation time of each service. This means that each service must be installed multiple times, once for each interface implementation. In the short term, and in the installation directory structure described here, this latter approach is the one taken.
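The run-time selection option could look something like the following sketch. All of the paths and the application name here are hypothetical, and it only works under the stated assumption that the implementations are ABI-compatible shared builds:

```shell
# Sketch: selecting an interface implementation at run time via the
# dynamic loader. All paths are hypothetical; nothing here is mandated
# by an actual ITAPS release.
ITAPS_ROOT=/usr/local/itaps/1.2/x86_64-gcc

# Pick an implementation (e.g. MOAB-4.0 or FMDB-1.2); default to MOAB.
IMPL=${ITAPS_IMPL:-MOAB-4.0}

# Prepend that implementation's lib dir so the loader resolves the
# iMesh symbols from it (this assumes ABI-compatible shared builds).
export LD_LIBRARY_PATH="$ITAPS_ROOT/$IMPL/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# The unmodified application binary would then be launched as usual,
# e.g.:  ./my_app
```

The same binary could then be re-run against a different implementation simply by exporting a different lib path, with no rebuild of the service or the application.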
    <itaps-install-root>/
        <itaps-version-number>/          (see note 3)
            <arch/compiler-moniker>/     (see note 4)
                mpich-1.2.7p1/           (see note 7)
                cgma-10.2.3/
                hdf5-1.8.5-patch1/
                netCDF-4.0/
                FMDB-1.2/
                    interfaces/          (see note 6)
                        iMesh/
                            lib/  bin/  include/  test/  examples/
                        iMeshP/
                            lib/  bin/  include/  test/  examples/
                    services/
                        Swapping/
                            lib/  bin/  include/  test/  examples/
                        Mesquite/
                            lib/  bin/  include/  test/  examples/
                        eyeMesh/
                            lib/  bin/  include/  test/  examples/
                GRUMMP-0.6.2/
                    interfaces/          (see note 5)
                        lib/  bin/  include/  test/  examples/
                    services/
                        Swapping/
                            lib/  bin/  include/  test/  examples/
                        Mesquite/
                            lib/  bin/  include/  test/  examples/
                        eyeMesh/
                            lib/  bin/  include/  test/  examples/
From this structure, how would a user know or discover that MOAB and FMDB implement iMeshP but GRUMMP does not? Likewise for iRel or iGeom? If we collapse all interface implementations into a single lib/bin/include installation trio, it becomes impossible to see which interfaces are implemented. On the other hand, scattering essential libraries across many different installation points means that applications need to pass that many more -I<path-to-include> directives when compiling and that many more -L<path-to-lib> directives when linking.
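The flag growth is easy to see concretely. Using hypothetical paths that follow the per-interface, per-service layout above, an application built against FMDB's iMesh/iMeshP plus two services already needs four of each directive:

```shell
# Illustration of the flag growth when each interface and service has its
# own install point (paths hypothetical, following the FMDB tree above).
PREFIX=/usr/local/itaps/1.2/x86_64-gcc/FMDB-1.2

CFLAGS="-I$PREFIX/interfaces/iMesh/include \
-I$PREFIX/interfaces/iMeshP/include \
-I$PREFIX/services/Swapping/include \
-I$PREFIX/services/Mesquite/include"

LDFLAGS="-L$PREFIX/interfaces/iMesh/lib \
-L$PREFIX/interfaces/iMeshP/lib \
-L$PREFIX/services/Swapping/lib \
-L$PREFIX/services/Mesquite/lib"

# With a collapsed installation, a single pair would suffice:
#   CFLAGS="-I$PREFIX/include"   LDFLAGS="-L$PREFIX/lib"
echo "$CFLAGS" | wc -w    # four -I directives instead of one
```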
The design of the ITAPS interfaces, as well as the implementations of those interfaces, is intended to support multiple instances of different implementations' interfaces living together in harmony within the same executable, with one caveat: the ensuing interface symbol collisions must be managed by the use of either an interface multiplexor or shared libraries with disparate namespaces. Presently, the ITAPS project does not provide complete solutions for either of these approaches for all of the ITAPS interfaces (see note 1).
Note that iMeshIO is a bit of an odd-ball service here because it actually spans multiple implementations. It must, in order to support conversion between them (e.g. read through MOAB's iMesh and write through FMDB's iMesh). For this reason, it does not make sense to install iMeshIO to any particular implementation's bin directory.
Directory Structure Proposal #3
- Invert the relative depth order of the arch/compiler moniker and the ITAPS version so that 3rd party libs can be placed above any particular ITAPS version.
- Add an additional level to decouple support_libs from implementations.
    <itaps-install-root>/
        <arch/compiler-moniker>/         (see note 4)
            3rd_party/                   (above the ITAPS version so it can be shared by multiple releases)
                mpich-1.2.7p1/           (see note 7)
                hdf5-1.8.5-patch1/
                netCDF-4.0/
            cgma-10.2.3/                 (cgm is an iGeom implementation)
                README                   (like a .settings file; lists which interfaces are implemented)
                lib/
                include/
            FMDB-1.2/
                README                   (like a .settings file; lists which interfaces are implemented)
                lib/
                    libparmetis.a --> ../../../parmetis/lib/libparmetis.a
                    libiZoltan.a
                    libFMDB.a
                    libSCORECUtil.a
                    libipcomman.a
                    libSwapping.a
                    libMesquite.a
                include/
                    FMDB/                (ITAPS headers here)
                    SCORECModel/
                    SCORECUtil/
                bin/                     (built against FMDB)
                    eyeMesh
                    iMesh_unitTest
                    iMeshP_unitTest
                    HelloMesh
                    SimpleIterator
                    Swapping             (tests)
                    Mesquite             (tests)
            MOAB-4.0/
                README                   (like a .settings file; lists which interfaces are implemented)
                lib/
                    libhdf5.a --> ../../../hdf5-1.8.5-patch1/lib/libhdf5.a
                    libnetcdf.a --> ../../../netCDF-4.0/lib/libnetcdf.a
                    libMOAB.a
                    libdagmc.a
                    libSwapping.a
                    libMesquite.a
                    libiZoltan.a
                include/
                    iMesh.h
                    iBase.h
                    iMesh_f.h
                    iBase_f.h
                bin/
                    eyeMesh
                    iMesh_unitTest
                    iMeshP_unitTest
                    HelloMesh
                    SimpleIterator
                    Swapping             (test)
                    Mesquite             (test)
            GRUMMP-0.6.2/
                README                   (like a .settings file; lists which interfaces are implemented)
                ...
            RefImpl/
                README                   (like a .settings file; lists which interfaces are implemented)
                ...
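The per-implementation README answers the discoverability question raised earlier. The sketch below shows how a user (or a configure script) could find every implementation that provides iMeshP; the README format assumed here (one interface name per line) is an assumption of this sketch, not something ITAPS specifies today:

```shell
# Sketch: discovering which installed implementations provide iMeshP by
# scanning each implementation's README. The README format (one interface
# name per line) is hypothetical.
ROOT=$(mktemp -d)    # stand-in for <itaps-install-root>/<arch/compiler-moniker>

# Fabricate a tiny install tree just for the demonstration:
mkdir -p "$ROOT/FMDB-1.2" "$ROOT/GRUMMP-0.6.2" "$ROOT/MOAB-4.0"
printf 'iBase\niMesh\niMeshP\n' > "$ROOT/FMDB-1.2/README"
printf 'iBase\niMesh\n'         > "$ROOT/GRUMMP-0.6.2/README"
printf 'iBase\niMesh\niMeshP\n' > "$ROOT/MOAB-4.0/README"

# Which implementations claim iMeshP?
for readme in "$ROOT"/*/README; do
    grep -qx 'iMeshP' "$readme" && basename "$(dirname "$readme")"
done
# -> FMDB-1.2
#    MOAB-4.0
```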
What about serial and parallel variations? We probably need to keep these separate. For some implementations, like GRUMMP, there is no distinction between serial and parallel; there is only a serial interface to GRUMMP. We could add another directory level for ser and par variants, or we could include the variant as part of the architecture/compiler moniker. If we want to compile MOAB for parallel, does that mean all its prerequisites (hdf5, netcdf, etc.) also need to be compiled for parallel? Probably, and that is likely a good idea anyway.
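The moniker-based option could look like the following. The component names and separator are purely illustrative, not an agreed convention:

```shell
# One possible encoding (purely illustrative): fold the serial/parallel
# variant into the arch/compiler moniker instead of adding another
# directory level. All component names here are hypothetical.
ARCH=x86_64
COMPILER=gcc-4.4
VARIANT=par              # 'ser' or 'par'

MONIKER="$ARCH-$COMPILER-$VARIANT"
echo "$MONIKER"          # -> x86_64-gcc-4.4-par
```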
3 The version number level can be optional. In addition, it can be collapsed into the top-level root directory name. So, instead of a root of '/usr/local/itaps/' and version number '1.2' yielding '/usr/local/itaps/1.2', we would have '/usr/local/itaps-1.2/'.
4 The arch/compiler moniker level can be optional. It is used to distinguish among cpu architectures and/or compilers, and is useful in situations such as cross-mounted filesystems, where the same filesystem path is visible from multiple platforms with different cpu architectures, or on platforms with multiple compilers.
6 In this FMDB-1.2 example, there are separate dirs for each of the ITAPS interfaces that FMDB implements. In reality, however, FMDB does not install that way: it installs all interfaces to the same installation point, though it may handle iMesh/iMeshP a little differently.
5 In this GRUMMP example, there are not separate dirs for each of the ITAPS interfaces. All the interfaces that GRUMMP implements are packaged together in a single installation point. This is more likely the typical case than the separate per-interface directories shown in the FMDB example.
7 Products like MPICH and HDF5 are going to be used across more than one implementation and, for that matter, across more than one ITAPS release. For this reason, they should NOT be installed underneath any particular implementation dir. But to use the same build of these libraries across ITAPS versions, we would need to move them above the arch/compiler level, and that would be problematic. If the arch/compiler level is necessary, it will also be necessary for these libraries. One solution to that is to move the arch/compiler level to the top. Products like Ipcomman and autopack, which are used by just one implementation (FMDB in this case), should be installed underneath their respective implementation's directory.
1 Even if the ITAPS project does provide the essential software-engineering glue for a multiplexor or for shared libs with disparate namespaces, I think we still have a challenge where selection of a given interface implementation is concerned. That is, how does an application using a service that can potentially use any one of the implementations inform that service which interface implementation to use? I am not sure this issue can be straightforwardly resolved, because the application needs to tell something (either the multiplexor or the shared lib loader) which implementation to use; but, in theory, the multiplexor or shared lib loader sits underneath the service and is not directly accessible to the application.
2 Due to existing known, and probably other unknown, variations in interface specification among existing implementations, it is likely that 100% ABI compatibility is not possible. Certainly, there are already known differences in implementation-defined behavior between implementations. At the same time, the hope is that most service software is not, and will not be, relying upon implementation-defined behavior.