Release Notes for QUDA v0.2                             16 December 2009
------------------------------------------------------------------------

Overview:

QUDA is a library for performing calculations in lattice QCD on
graphics processing units (GPUs) using NVIDIA's "C for CUDA" API.
This release includes optimized kernels for applying the Wilson Dirac
operator and clover-improved Wilson Dirac operator, kernels for
performing various BLAS-like operations, and full inverters built on
these kernels.  Mixed-precision implementations of both CG and
BiCGstab are provided, with support for double, single, and half
(16-bit fixed-point) precision.

Software compatibility:

The library has been tested under Linux (CentOS 5.3 and Ubuntu 8.04)
using release 2.3 of the CUDA toolkit.  There are known issues with
releases 2.1 and 2.2, but 2.0 should work if one is forced to use an
older version (for compatibility with an old driver, for example).

Under Mac OS X, the library fails to compile due to bugs in CUDA 2.3.
It might work with CUDA 2.2 or 2.0, but this hasn't been tested.

Hardware compatibility:

For a list of supported devices, see

    http://www.nvidia.com/object/cuda_learn_products.html

Before building the library, you should determine the "compute
capability" of your card, either from NVIDIA's documentation or by
running the deviceQuery example in the CUDA SDK, and set GPU_ARCH in
make.inc appropriately.  Setting 'GPU_ARCH = sm_13' will enable double
precision support.

Installation:

In the source directory, copy 'make.inc.example' to 'make.inc', and
edit the first few lines to specify the CUDA install path, the
platform (x86 or x86_64), and the GPU architecture (see "Hardware
compatibility" above).  Then type 'make' to build the library.

As an optional step, 'make tune' will invoke tests/blas_test to
perform autotuning of the various BLAS-like functions needed by the
inverters.  This involves testing many combinations of parameters
(corresponding to different numbers of CUDA threads per block and
blocks per grid for each kernel) and writing the optimal values to
lib/blas_param.h.  The new values will take effect the next time the
library is built.  Ideally, the autotuning should be performed on the
machine where the library is to be used, since the optimal parameters
will depend on the CUDA device and host hardware.  In summary, for an
optimized install, run

    make && make tune && make

By default, the autotuning is performed using CUDA device 0.  To
select a different device number, set DEVICE in make.inc
appropriately.

Using the library:

Include the header file include/quda.h in your application, link
against lib/libquda.a, and study tests/invert_test.c for an example
of the interface.  The various inverter options are enumerated in
include/enum_quda.h.

Known issues:

* When building for the 'sm_13' GPU architecture (which enables
  double precision support), one of the stages in the build process
  requires over 5 GB of memory.  If too little memory is available,
  the compilation will either take a very long time (given enough
  swap space) or fail completely.  In addition, the CUDA C compiler
  requires over 1 GB of disk space in /tmp for the creation of
  temporary files.

* For compatibility with CUDA, on 32-bit platforms the library is
  compiled with the GCC option -malign-double.  This differs from the
  GCC default and may affect the alignment of various structures,
  notably those of type QudaGaugeParam and QudaInvertParam, defined
  in invert_quda.h.  Therefore, any code to be linked against QUDA
  should also be compiled with this option, as illustrated below.
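As an illustration of this last point, on a 32-bit platform an object
file destined to be linked against QUDA might be compiled as follows
(the include path and file name here are hypothetical):

    gcc -malign-double -I/path/to/quda/include -c my_app.c

The final link step will also need lib/libquda.a and the CUDA runtime
library.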
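For concreteness, the first few lines of an edited make.inc (see the
"Installation" section above) might look like the sketch below.  Only
GPU_ARCH and DEVICE are named in these notes; the other variable
names shown here are illustrative, so consult make.inc.example for
the exact ones expected by the build:

    CUDA_INSTALL_PATH = /usr/local/cuda  # path to the CUDA toolkit
    CPU_ARCH = x86_64                    # x86 or x86_64
    GPU_ARCH = sm_13                     # sm_13 enables double precision
    DEVICE = 0                           # device used by 'make tune'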
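Likewise, the following is a minimal sketch of the calling sequence
described under "Using the library" above.  It assumes the interface
functions initQuda(), loadGaugeQuda(), invertQuda(), and endQuda()
declared in include/quda.h; all parameter and field setup is elided,
and tests/invert_test.c remains the authoritative example:

    /* Sketch only: field layout and parameter setup are omitted. */
    #include <quda.h>

    int main(void)
    {
        QudaGaugeParam gauge_param;    /* describes the host gauge field   */
        QudaInvertParam inv_param;     /* solver, precision, tolerance     */
        void *gauge[4];                /* host gauge field, one pointer
                                          per spacetime direction          */
        void *spinor_in, *spinor_out;  /* host source and solution fields  */

        /* ... allocate the host fields and fill in both param structs ... */

        initQuda(0);                                    /* use CUDA device 0      */
        loadGaugeQuda((void *)gauge, &gauge_param);     /* copy gauge field to GPU */
        invertQuda(spinor_out, spinor_in, &inv_param);  /* solve the Dirac system  */
        endQuda();                                      /* free GPU resources      */
        return 0;
    }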
Contact information:

For help or to report a bug, please contact Mike Clark
(mikec@seas.harvard.edu) or Ron Babich (rbabich@bu.edu).

If you find this code useful in your work, please cite:

M. A. Clark, R. Babich, K. Barros, R. Brower, and C. Rebbi, "Solving
Lattice QCD systems of equations using mixed precision solvers on
GPUs" (2009), arXiv:0911.3191 [hep-lat].

Please also drop us a note so that we may inform you of updates and
bug-fixes.  The most recent public release will always be available
online at http://lattice.bu.edu/quda/