The age of Big Data is here: data sets of enormous size are becoming ubiquitous. This workshop aims to bring together researchers working on novel optimization algorithms and codes capable of working in the Big Data setting.
The aim of this conference is to bring together all of the stakeholders involved in solving the software challenges of exascale computing – from application developers, through numerical library experts, programming model developers and integrators, to tool designers.
NAIS Lecturer Peter Richtarik will give a talk at the Strathclyde NA Seminar series.
Title: ‘How to climb a billion dimensional hill using a coin and a compass and count the steps before departure’: Parallel coordinate descent methods for big data optimization
3.15pm, Rm EM182, Heriot-Watt. ‘On the Origins of Domain Decomposition Methods’, Martin Gander, University of Geneva. Domain decomposition methods have been developed in various contexts, and with very different goals in mind. I will start my presentation with the historical inventions of the Schwarz method, the Schur methods and Waveform Relaxation. I will show for a simple model problem how all these domain decomposition methods function, give precise results for the model problem, and also explain the most general convergence results currently available for these methods. I will conclude with the parareal algorithm as a new variant for parallelization of evolution problems in the time direction.
3.30pm, JCMB 5215, King's Buildings. Chris L Farmer, University of Oxford. ‘A variational smoothing filter for sequential inverse problems’. Uncertainty quantification can begin by specifying the initial state of a system as a probability measure. Part of the state (the 'parameters') might not evolve, and might not be directly observable. Many inverse problems are generalisations of uncertainty quantification in which one modifies the probability measure to be consistent with measurements, a forward model and the initial measure. The main problem in the field is to devise a method for computing the posterior probability measure of the states, including the parameters and the variables, from a sequence of noise-corrupted observations. Bayesian statistics provides a framework for this, but leads to very challenging computational problems, particularly when the dimension of the state space is very large, as with problems arising from the discretisation of partial differential equations. In this talk we show how to motivate and implement a 'Variational Smoothing Filter'. The full abstract is available on the Event Website.
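The Bayesian update step the abstract describes can be illustrated in its simplest form: a scalar Gaussian state observed directly with Gaussian noise, assimilated one observation at a time. This is a minimal sketch for intuition only, not code from the talk; the function name and values are illustrative.

```python
def gaussian_update(mu, var, y, noise_var):
    """Conjugate Bayesian update for a scalar state with a direct noisy
    observation y = x + e, e ~ N(0, noise_var).
    Returns the posterior mean and variance."""
    k = var / (var + noise_var)          # gain: how much to trust the data
    return mu + k * (y - mu), (1 - k) * var

# Prior N(0, 1); assimilate two noisy observations of the true state.
mu, var = 0.0, 1.0
for y in [1.2, 0.8]:
    mu, var = gaussian_update(mu, var, y, noise_var=0.5)
# Each observation pulls the mean toward the data and shrinks the variance.
```

In realistic settings the state is high-dimensional (a discretised PDE field plus parameters), so this closed-form update is replaced by variational or ensemble approximations, which is where the computational challenge lies.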
12 Noon, Monday 17th September, Room LT908, Livingstone Tower, University of Strathclyde. Tammy Kolda, Sandia National Laboratories, ‘Capturing Community Behaviour in Very Large Networks’. Tammy Kolda is a distinguished member of technical staff in the Informatics and Systems Assessments department at Sandia National Laboratories in Livermore, California. Her research interests include multilinear algebra and tensor decompositions, graph models and algorithms, data mining, optimization, nonlinear solvers, parallel computing and the design of scientific software. She serves as Section Editor for the Software and High-Performance Computing section of the SIAM Journal on Scientific Computing (SISC).
In today's digital world, with ever-increasing amounts of readily available data comes the need to solve optimization problems of unprecedented sizes. Machine learning, compressed sensing, natural language processing, truss topology design and computational genetics are some of many prominent application domains where it is easy to formulate optimization problems with tens of thousands or millions of variables. Many modern optimization algorithms, while exhibiting great efficiency in modest dimensions, are not designed to scale to instances of this size and are hence often, unfortunately, not applicable. On the other hand, simple methods, some proposed decades ago, are experiencing a comeback — albeit in modern forms. This workshop aims to bring together researchers working on novel optimization algorithms capable of working in the large-scale setting.
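Randomized coordinate descent is one of the "simple methods experiencing a comeback" mentioned above: each iteration updates a single randomly chosen coordinate, so the per-step cost stays tiny even in huge dimensions. The following is a minimal sketch under simplifying assumptions (a separable quadratic objective with known per-coordinate Lipschitz constants); it is not any particular speaker's algorithm.

```python
import random

def coord_descent(grad_coord, lipschitz, x, n_iters=1000, seed=0):
    """Randomized coordinate descent: at each step, pick one coordinate i
    uniformly at random and take a gradient step of size 1/L_i along it."""
    rng = random.Random(seed)
    n = len(x)
    for _ in range(n_iters):
        i = rng.randrange(n)
        x[i] -= grad_coord(x, i) / lipschitz[i]
    return x

# Example: minimize f(x) = 0.5 * sum(a_j * x_j**2), whose minimizer is x = 0.
a = [1.0, 4.0, 9.0]
grad = lambda x, i: a[i] * x[i]      # partial derivative df/dx_i
x_star = coord_descent(grad, lipschitz=a, x=[5.0, -3.0, 2.0])
```

Because only one partial derivative is evaluated per iteration, many such steps can also be taken in parallel on disjoint coordinates, which is the idea behind the parallel variants studied for big-data optimization.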