The calculation of partial derivatives is a fundamental need in scientific computing. Automatic differentiation (AD) can be applied straightforwardly to obtain all necessary partial derivatives (usually first and, possibly, second derivatives) regardless of a code's complexity. However, the space and time efficiency of AD can be dramatically improved, sometimes transforming a problem from intractable to highly feasible, if inherent problem structure is used to apply AD in a judicious manner. Automatic Differentiation in MATLAB using ADMAT with Applications discusses the efficient use of AD to solve real problems, especially multidimensional zero-finding and optimization, in the MATLAB environment. This book is concerned with the determination of the first and second derivatives in the context of solving scientific computing problems, with an emphasis on optimization and solutions to nonlinear systems. The authors focus on the application rather than the implementation of AD, solve real nonlinear problems with high performance by exploiting the problem structure in the application of AD, and provide many easy-to-understand applications, examples, and MATLAB templates.
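The core AD idea the book builds on can be sketched with forward-mode differentiation via dual numbers. This is a minimal illustration of the general technique only, not of ADMAT's actual interface; all names below are hypothetical.

```python
# Minimal forward-mode AD via dual numbers. This sketches the general AD
# idea, not ADMAT's API; the Dual class and sin() wrapper are illustrative.
import math

class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x * sin(x)] at x = 2 is sin(2) + 2*cos(2)
x = Dual(2.0, 1.0)   # seed the input's derivative with 1
y = x * sin(x)
print(y.val, y.dot)
```

Each arithmetic operation carries its derivative along exactly, which is why AD yields derivatives to machine precision regardless of a code's complexity; exploiting sparsity or other structure, as the book emphasizes, then governs how efficiently many such directional derivatives can be combined.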
Mathematics of Computing -- Numerical Analysis.
This IMA Volume in Mathematics and its Applications, LARGE-SCALE OPTIMIZATION WITH APPLICATIONS, PART II: OPTIMAL DESIGN AND CONTROL, is one of three volumes based on the proceedings of the 1995 IMA three-week Summer Program on "Large-Scale Optimization with Applications to Inverse Problems, Optimal Control and Design, and Molecular and Structural Optimization." The other two related proceedings appeared as Volume 92: Large-Scale Optimization with Applications, Part I: Optimization in Inverse Problems and Design and Volume 94: Large-Scale Optimization with Applications, Part III: Molecular Structure and Optimization. We would like to thank Lorenz T. Biegler, Thomas F. Coleman, Andrew R. Conn, and Fadil N. Santosa for their excellent work as organizers of the meetings and for editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF), the Department of Energy (DOE), and the Alfred P. Sloan Foundation, whose support made the workshops possible.
Efficient triangular solvers for use on message-passing multiprocessors are required in several contexts, under the assumption that the matrix is distributed by columns (or rows) in a wrap fashion. In this paper we describe a new efficient parallel triangular solver for this problem. This new algorithm is based on the previous method of Li and Coleman [1986] but is considerably more efficient when $\frac{n}{p}$ is relatively modest, where $p$ is the number of processors and $n$ is the problem dimension. A useful theoretical analysis is provided as well as extensive numerical results obtained on an Intel iPSC with $p \leq 128$.
Finally, we describe an analogous row-oriented algorithm.
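The column-oriented organization that motivates a column-wrapped distribution can be sketched serially. This is a hypothetical illustration of the underlying column-sweep substitution only; the actual pipelined message-passing algorithm of the paper is not reproduced here.

```python
# Serial sketch of the column-sweep (saxpy-based) lower triangular solve
# that underlies column-wrapped parallel solvers. The wrap mapping itself
# (column j assigned to processor j mod p) is noted in comments only.
import numpy as np

def column_sweep_lower_solve(L, b):
    """Solve L x = b for lower triangular L, column by column:
    once x[j] is fixed, column j of L updates the trailing right-hand
    side. Under a column-wrap distribution, column j resides on
    processor j mod p, so successive steps rotate around the ring."""
    n = len(b)
    x = b.astype(float).copy()
    for j in range(n):
        x[j] /= L[j, j]
        # saxpy update of the remaining right-hand side with column j
        x[j + 1:] -= x[j] * L[j + 1:, j]
    return x

rng = np.random.default_rng(0)
n = 5
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
b = rng.standard_normal(n)
x = column_sweep_lower_solve(L, b)
print(np.allclose(L @ x, b))
```

Because each step consumes exactly one column, work naturally cycles through the processors in a wrapped layout; the efficiency question the paper addresses is how to overlap the communication of computed components with these saxpy updates.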
We show how a direct active set method for solving definite and indefinite quadratic programs with simple bounds can be efficiently implemented for large sparse problems. All of the necessary factorizations can be carried out in a static data structure that is set up before the numeric computation begins. The space required for these factorizations is no larger than that required for a single sparse Cholesky factorization of a matrix with the same sparsity structure as the Hessian of the quadratic. We propose several improvements to this basic algorithm: a new way to find a search direction in the indefinite case that allows us to free more than one variable at a time, and a new heuristic method for finding a starting point. These ideas are motivated by the two-norm trust region problem. We also show how projection techniques can be used to add several constraints to the active set at each iteration. Our experimental results show that an algorithm with these improvements runs much faster than the basic algorithm for positive definite problems and finds local minima with lower function values for indefinite problems.
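The bound-constrained quadratic programming setting can be illustrated with a much simpler relative of the paper's active-set method: projected gradient descent. This sketch shows only the problem class (minimize a quadratic over a box) and the projection idea, not the sparse factorization machinery or the active-set strategy the paper develops; the function and its parameters are hypothetical.

```python
# Projected gradient descent for min 0.5 x'Hx + c'x s.t. lo <= x <= hi.
# A deliberately simple stand-in for the paper's active-set method,
# included only to illustrate the bound-constrained QP setting.
import numpy as np

def projected_gradient_qp(H, c, lo, hi, iters=500):
    """Minimize the quadratic over the box by stepping along the
    negative gradient and projecting (clipping) back onto the bounds."""
    step = 1.0 / np.linalg.norm(H, 2)      # safe step for convex H
    x = np.clip(np.zeros(len(c)), lo, hi)
    for _ in range(iters):
        g = H @ x + c                      # gradient of the quadratic
        x = np.clip(x - step * g, lo, hi)  # project onto the box
    return x

H = np.array([[2.0, 0.0], [0.0, 4.0]])     # positive definite example
c = np.array([-4.0, -8.0])                 # unconstrained minimum at (2, 2)
lo, hi = np.zeros(2), np.ones(2)           # the box forces both bounds active
x = projected_gradient_qp(H, c, lo, hi)
print(np.round(x, 4))
```

In this example the unconstrained minimizer (2, 2) lies outside the box, so both upper bounds end up active at the solution (1, 1); an active-set method like the paper's identifies such a set of binding bounds explicitly and solves a reduced problem on the free variables, which is what makes the sparse static factorization structure pay off.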