1.2 Release Lessons

I view this 1.2 release as a sort of practice run. I think we need to do this exercise at least one more time before the end of May and maybe once again before the end of summer, each time getting more of the wrinkles worked out.

Commitment to Code and Specification Freeze

We need to be committed to freezing things well in advance of the release, and then commit to changing only those things in the release candidate that are actually broken, with minimal impact on everything else. I liked Jason's comment when we decided to switch from void** to void* in the iMesh.h file for the specification. He said he wasn't going to cut another Mesquite release to match it, and I think that is a perfectly good answer. Releasing software is sort of like a train or boat departure. We can't constantly stop things mid-release and go back and fix small things.

We need release tarballs available from all stakeholders well in advance of the release date; at least 1-2 weeks. Those tarballs cannot be 'nightly' or 'current'. They have to be fixed releases. Even if they are a minor release cut to match the ITAPS release, they need to be fixed releases and not something that will change next week or next month.

Timing of releases is also important. Deciding what should be in the release well ahead of time will also help. Fridays may be a bad day of the week to choose for a release.

Not doing anything parallel yet; need to fix ASAP

I don't mean that implementations are not doing a lot here. What I mean is that, as far as releasing goes, we don't have much practice in place for testing, releasing, and (important to me anyway) installing parallel implementations and services with a tool like build_itaps.

There are variations in the parallel specification:
  • Can entities be created without being in a part?
  • How do we have a compliance test without a common way to load a parallel mesh? Tim T. to add a ticket for this.

Tarball preparation

It was my hope that the release_distros dir would be reserved for cases where we had no other choice but to place a tarball into our SVN repo to ensure a given, fixed version of a product (i.e., not a nightly or current that has the potential to change out from underneath us) was available in perpetuity. As long as stakeholders commit to maintaining such tarballs at their own, prescribed URLs, there is no need to pollute the itaps repo with them.

The build_itaps script can get a tarball (or subversion dir-tree) from whatever URL it is available at. However, the build_itaps script won't satisfy all needs, and so an independent inventory of tarballs and URLs for a given release needs to be maintained somewhere as well. Doing this on a live web page on itaps.org is probably only sufficient for the current release; we need to keep such an inventory for each release. A simple text or html table maintained as a file in a released version of the repo would probably be sufficient (e.g. in tags/1.2/<whatever>).
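
As a sketch, such an inventory file could be as simple as one row per product; everything below is a placeholder schema, not real data:

    # ITAPS 1.2 release inventory: one row per product
    # product      version      tarball URL
    <product>      <version>    http://<stakeholder-host>/<product>-<version>.tar.gz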

In addition, the build_itaps script can wget/curl tarballs or svn co repos or parts of repos. So, there is no need, for example, to take source code content already in the ITAPS repo and create a tarball of it and put it into the release_distros dir like we did for RefImpl and Swapping.

Also, we should really refrain from creating different tarballs with the same names. Each time a tarball is cut, the tarball name, as well as the directory it untars to, should change in some way so one can distinguish between them. This is particularly true when several release candidates are prepared during a release cycle. If someone gets hold of any one of those candidates, they'll be awfully confused by the fact that identically named tarballs have different contents and behave differently.
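
For example, a naming scheme along these lines (product name hypothetical) would keep the candidates distinguishable:

    foo-1.2-rc1.tar.gz   untars to   foo-1.2-rc1/
    foo-1.2-rc2.tar.gz   untars to   foo-1.2-rc2/
    foo-1.2.tar.gz       untars to   foo-1.2/      (the final release)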

Wget vs. curl vs. svn checkout

To make things simple, I would have preferred to use wget for everything. Wget can download whole dir trees as well as individual files. But there are problems: Macs don't have wget by default; they have curl. And curl cannot download whole directory trees, only individual files. In a few cases, the software to be downloaded is/was actually some directory in some subversion repo. That is fine; as long as the version we need to point at is going to remain static, it is fine if it is just a revision. And we can use svn co as the download command on Macs as well as other systems.
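
A minimal shell sketch of the kind of download logic this implies (illustrative only, not the actual build_itaps code):

    # prefer wget where it exists; fall back to curl (e.g. on Macs)
    fetch_file() {
        url="$1"
        if command -v wget >/dev/null 2>&1; then
            wget "$url"
        else
            curl -O "$url"    # curl can only fetch individual files
        fi
    }

    # whole directory trees come from subversion, pinned to a fixed
    # revision so the contents cannot change out from underneath us
    fetch_tree() {
        svn co -r "$2" "$1"
    }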

Product names and installed names

Ran into a lot of problems initially with things like tarball file names not matching the untar'd dir name. Was also concerned about case sensitivity. Wherever the build_itaps script has control, it uses all lower-case names.

Odd-ball configure, build, make, make check, and/or install behaviors

FMDB and GRUMMP do some unusual things. FMDB is released as 3 separate tarballs. That would be fine except that those tarballs are actually interdependent. You can't configure one without having the source for another, so you have to have downloaded and untar'd all of them before you can start configuring any one (see the sketch below). One of FMDB's tarballs is just the combination of two other tarballs. FMDB's GMI product is an iGeom implementation, but there is no --enable-igeom flag for it.

GRUMMP will download, configure and build CGM during its configure if you don't provide a configure flag that points it to an installed CGM. GRUMMP hides all the compile and link commands make would normally output and instead just prints 'Compiling foo...' messages. Yes, the output on stdout is cleaner, but if anything goes wrong there is no way to easily see what it was doing: what -I flags were used, what -L flags, -D flags, etc. GRUMMP uses test or tests as the make target for running tests; I am used to the GNU-standard check target for that.
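
The FMDB ordering constraint amounts to something like this (tarball names here are made up; the point is the ordering):

    # download and untar ALL of the interdependent tarballs first
    for t in fmdb.tar.gz gmi.tar.gz util.tar.gz; do
        wget "http://<fmdb-host>/$t" && tar xzf "$t"
    done
    # only now can any one of them be configured and built
    (cd fmdb && ./configure && make && make install)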

IMESH_DEFS=<path-to-iMesh-Defs.inc> does not integrate well into CMake-based applications

Getting VisIt plugins to build the ITAPS way (e.g. IMESH_DEFS=<path-to-iMesh-Defs.inc>) was problematic, and I still have not fixed this. VisIt uses CMake to generate its Makefiles, but the iMesh-Defs.inc file is Makefile-specific, and there is no way to get CMake to add a line like include $(IMESH_DEFS) to the resulting Makefile. So I wind up having to add a bunch of CMake string manipulation to VisIt's CMake files for the ITAPS plugin to parse the contents of iMesh-Defs.inc and do the right thing the CMake way. One improvement here would be to have implementations also produce an iMesh-Defs.cmake file (which would be very easy to do).

It would be useful to produce an iMesh-Defs.cmake file. Mark M. to provide an example of how.
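
Until then, a stopgap is possible entirely in shell. Assuming iMesh-Defs.inc consists of simple 'NAME = value' make assignments (the variable format is an assumption here), something like this could generate a file CMake can include():

    # rewrite 'NAME = value' lines as CMake 'set(NAME "value")';
    # nested $(VAR) make references would need more work than this
    sed -n 's/^ *\([A-Z_][A-Z_]*\) *= *\(.*\)$/set(\1 "\2")/p' \
        iMesh-Defs.inc > iMesh-Defs.cmake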

Shared libs vs. static libs

I don't think most users can deal with shared libs effectively. In addition, I haven't really tested whether various ITAPS interface implementations are application binary interface (ABI) compatible. So, I am currently disinclined to expose shared libs to users. After build_itaps installs a product, it separates shared libs into their own directory like so...

    include  bin  lib  lib.a  lib.so

where lib is a symlink to either the lib.a or lib.so directory. By default, lib symlinks to lib.a.
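
The post-install shuffle is roughly this (a sketch; paths are illustrative):

    cd "$INSTALL_PREFIX"
    mkdir -p lib.a lib.so
    mv lib/*.a  lib.a/  2>/dev/null
    mv lib/*.so lib.so/ 2>/dev/null
    rmdir lib
    ln -s lib.a lib    # default: applications see the static libs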

As an aside, I don't have much experience with shared libs on Macs. I know VisIt has to do a lot of special stuff to get this to work. It's not a simple matter of pointing LD_LIBRARY_PATH at some install point.

It may be much better simply to give the build_itaps script control to build either shared or static or both, with static as the default. A problem is that this then requires all stakeholder products to support that choice, and not all do.

Fortran headers were out of date

iMesh_f.h and iBase_f.h were woefully out of date relative to the C headers. I fixed this on the RC branch. However, these same headers installed by implementations were also very out of date. We need to do something to keep implementations in sync with the repo header files. In the meantime, to patch this situation up, build_itaps actually downloads and copies iMesh_f.h and iBase_f.h from the svn repo to each implementation's install point.
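
In shell terms, the patch-up step looks something like this (the repo URL and install prefix are placeholders):

    # overwrite the implementation's installed Fortran headers with
    # the known-good copies from the ITAPS repo
    for h in iMesh_f.h iBase_f.h; do
        svn export --force "http://<itaps-repo>/tags/1.2/$h" \
            "$IMPL_PREFIX/include/$h"
    done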

Compiling Fortran applications

I had very limited success compiling Fortran applications, like HelloMeshF77, against implementations. The problems relate to linking a Fortran application against a C++ library and getting all the library dependencies (-lstdc++, for example). I would really like it if we could devise a way to make this just work out of the box.
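
For reference, the link line that has to work is roughly the following; the install paths, library name, and the trailing -lstdc++ are the parts that vary by implementation (all names here are illustrative):

    f77 -o HelloMeshF77 HelloMeshF77.F \
        -I"$IMESH_PREFIX/include" \
        -L"$IMESH_PREFIX/lib" -liMesh -lstdc++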

We should integrate this with testing ASAP.

Compiling with different compilers

We should try compiling with different compilers. To my knowledge, we are not. I know Ellen did a round a long while back compiling various implementations with run_iMesh_unitTest but I don't recall the results of that.

We are interested in this and should probably test it.

Which ITAPS interface(s) does an implementation implement?

While this may be documented in numerous ways (web, release notes, whatever), this information is not presently codified in any automatic, machine-usable way. When building services, it is important to know which interfaces a service requires and which implementations implement which interfaces. This was manually codified in build_itaps with XXX_IMPLEMENTS and XXX_REQUIRES variables, which are space-separated lists of ITAPS interface names. Implementations advertise the XXX_IMPLEMENTS variable while services advertise the XXX_REQUIRES variable. When build_itaps goes to build services, it iterates over all services listed and, for each service, finds any (installed) implementation that supports all the interfaces the service requires.
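
In shell terms, the matching test reduces to something like this sketch (the helper and the example values are made up; only the XXX_IMPLEMENTS/XXX_REQUIRES convention comes from build_itaps):

    # e.g. MOAB_IMPLEMENTS="iMesh", LASSO_REQUIRES="iMesh iGeom"
    # usage: satisfies "<required iface list>" "<implemented iface list>"
    satisfies() {
        for iface in $1; do
            case " $2 " in
                *" $iface "*) ;;    # this required interface is provided
                *) return 1 ;;      # one missing => not satisfied
            esac
        done
        return 0
    }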

At the same time, even if build_itaps knows this information from internal variables that were manually added for this purpose, how is any user going to obtain this information and/or write Makefiles that might link to various implementations based on it? In short, we need machine-usable information for each implementation for this purpose; something like an ITAPS-Defs.inc file listing all the ITAPS interfaces a given implementation supports might do the trick. This really may be useful only for something like an uber-build script like build_itaps.

There is no real user's manual available for iMesh, iGeom, etc.

The header files provide user-level manual information, but we're not putting that material online in a way that is easily available to users. I emailed a proposed way of doing that (doxygen is another). We do doxygenate some of the services, I guess, but where is that material available online?

We need to keep a previous releases web page

A web page that maintains information on current and previous releases, including release notes (announcements) as well as an inventory of related software, would be useful. Where would we put this on our web site?

There is a lot of stuff we are not testing but could easily add

We don't test Fortran interface compilation and we should, just as we test everything else. We have some simple Fortran example code we should be compiling against each implementation on a regular basis.

We don't test mix-n-match implementations; RPI's iGeom with ANL's iMesh in Lasso for example.

We don't test ABI compatibility of anything. While Jason and Jim routinely use shared libs for ITAPS-related software, I am not sure if they are using them to routinely load different stakeholders' implementations of the ITAPS interfaces or always the ANL implementations. If the former, great, then maybe things are working. If the latter, then IMHO that experience is not a sufficient test of ABI compatibility. At any rate, we should test it too.

Service vs. Implementation

Lasso is iRel relating iGeom and iMesh things. In some sense, it is a service on top of iGeom and iMesh. build_itaps currently builds services slightly differently from implementations, using IMESH_DEFS=<path-to-iMesh-Defs.inc>. Alas, in order to get Lasso to integrate with the build_itaps script, I fixed its iGeom/iMesh implementations to the ANL implementations. Will have to fix this later.