Molecular dynamics has been described as a computational microscope: a versatile, high-resolution method that can help to guide experiment or to explore detailed mechanisms of molecular motion. Applications are wide-ranging and include the fracture or indentation of materials, structural rearrangements in crystals, and the transport of ions and small molecules through membranes. The evolution of computer hardware is rapidly changing the subject, with new algorithms needed for graphics processing units and hybrid computing architectures. The challenge is to extend the time and spatial scales accessible in simulation while maintaining or improving accuracy in the essential characteristics of the systems under study. The workshop will explore a range of new methods for accelerating molecular dynamics and for expanding its range of application.
The tutorial runs from 1pm on Monday (April 30) to noon on Wednesday (May 2). It is organized by Berk Hess (KTH, Stockholm) and David Hardy (University of Illinois), with additional lecturers giving talks on special topics. The tutorial will address a variety of topics related to developing molecular dynamics software in the HPC environment, including GPU computing and other current themes. It is intended for PhD students and other researchers working either in computational molecular science (interpreted broadly) or in high-performance computing. Attendance is strictly limited due to space constraints.
The arrival of highly parallel, heterogeneous hardware in the computing mainstream challenges conventional software models. This workshop brings together a series of talks on projects that seek to address these issues, focusing in particular on the synthesis between pattern- and domain-oriented programming abstractions and the ability to autotune code for performance portability.
This workshop, presented and led by Mike Giles (Oxford), will cover a variety of topics related to the use of GPUs for advanced scientific computing applications, including programming methods and example case studies. UK academics, including PhD students and postdoctoral researchers, are encouraged to attend.
The aims of the meeting are to showcase state-of-the-art scientific computing applications, to identify some of the challenges posed by next-generation high-performance machines, and to indicate promising approaches to tackling them.
General-purpose graphics-processing units (GPGPUs) have a growing presence in high-performance computing, as demonstrated by the latest Top 500 listing (November 2010), which includes three GPGPU-powered supercomputers in the top four. For computational science, GPGPUs offer the potential for a step-change in capability for a range of applications — facilitating faster, larger, and more complex simulations. However, to achieve this potential, one needs to invest effort to understand and apply GPU-specific programming models: this is both non-trivial and quite distinct from CPU-based programming. NAIS is providing a training course, delivering overview-level training for numerical analysts and applications scientists who wish to include GPGPU support in their algorithm/application code. The course, which runs over three half-days, will consider CUDA programming (at the time of writing, the most popular model for GPGPU programming); performance and optimisation techniques; and parallel programming for multi-GPGPU computations. The course will also introduce some alternative programming models, such as directive-based approaches, OpenCL, and MATLAB for GPUs. The course includes a mix of taught sessions and hands-on practicals. To get the most from the course, one should be familiar with Unix/Linux-based systems (e.g. use of batch systems and the command-line interface), and be reasonably competent in either C or FORTRAN90.
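To give a flavour of how the CUDA model differs from CPU programming — explicit host/device memory transfers and a grid of lightweight threads each handling one element — a minimal vector-addition sketch (illustrative only, not part of the course material) might look as follows:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host arrays
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device arrays and explicit host-to-device copies
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);

    // Copy the result back and inspect one element
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // 1.0 + 2.0 = 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Note the two features with no CPU analogue: the `<<<blocks, threads>>>` launch configuration and the separate host/device address spaces — both recurring themes in GPU performance tuning.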
GPU computing has made Teraflop supercomputing available to anyone with a computer. Algorithm, application and library developers need to be aware of the potential of GPU computing and how it now extends into conventional multi-core x86 computing. NVIDIA introduced CUDA for GPU computing in February 2007. The rate of adoption has been remarkable, as have been the improvements in application performance (10-times to 1000-times) across a variety of problem domains. NVIDIA estimates that over a third of a billion CUDA-enabled GPUs have been sold worldwide, and CUDA is now taught at 454 institutions worldwide. This talk will discuss how simple it is to express problems in CUDA, particularly with the Thrust API. Results for a generic machine-learning data-mining problem on a single GPU show an 85-times speedup over a modern quad-core Xeon processor (341-times single-core performance) for PCA/NLPCA problems using Nelder-Mead. The parallel mapping developed by Farber at Los Alamos is generally applicable to a range of optimization problems (SVM, MDS, EM, ICS, ...) and optimization methods (Powell, Levenberg-Marquardt, Conjugate Gradient, ...). Scaling results will demonstrate that this same mapping and CUDA implementation exhibit near-linear scaling to 500 GPUs, while a CPU version scales to over 60,000 processing cores and delivers over a third of a petaflop. Speedups from CUDA in a number of other problem domains, plus links to downloadable source code, will be provided. Finally, recent developments position CUDA as a general development language, alongside Java, FORTRAN, and C++, for all application development, including applications intended only for x86 architecture deployments.
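As an illustration of the conciseness the talk attributes to the Thrust API, a SAXPY operation (y = a*x + y) over a million elements can be expressed as a single algorithm call; this is a generic sketch, not Farber's actual code:

```cuda
#include <thrust/device_vector.h>
#include <thrust/transform.h>

// Functor applied element-wise on the device: returns a*x + y.
struct saxpy {
    float a;
    saxpy(float a) : a(a) {}
    __host__ __device__ float operator()(float x, float y) const {
        return a * x + y;
    }
};

int main() {
    // device_vector allocates on the GPU and fills it for us
    thrust::device_vector<float> x(1 << 20, 1.0f);
    thrust::device_vector<float> y(1 << 20, 2.0f);

    // One call launches the fused multiply-add across the whole GPU
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), saxpy(2.0f));
    return 0;
}
```

Compared with a hand-written kernel, there is no explicit launch configuration or memory transfer in sight; Thrust chooses these details itself, which is precisely the ease of expression the talk highlights.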
The 24th edition of the UK's Biennial Numerical Analysis conference. NAIS is providing a substantial part of the funding for this event.
The Distributed and Unified Numerics Environment (DUNE) is a modular toolbox for solving partial differential equations using grid-based methods, with special emphasis on parallel computing using distributed grids. This one-week course will give an introduction to the DUNE core modules, including the DUNE grid interface library and the DUNE-FEM module. The school includes lectures providing the required background information, but consists to a large part of hands-on practical sessions.
The focus of this 3-day meeting is on fast algorithms, numerical techniques, integral equations, iterative solvers, parallelization, optimization methods, recursive algorithms and high performance computing. Applications to be discussed include scattering and RCS, antennas and radiation, radar, metamaterials, optics, biomedical applications, wireless systems and propagation.