Scientific Computing and Numerical Analysis

By Anne Greenbaum 

Scientific computing and numerical analysis lie at the core of applied mathematics. From structural engineering to climate modeling to computational neuroscience and finance, all areas of applied math rely heavily on computation. Typically, the physical world is modeled through differential equations, which are then replaced by linear or nonlinear algebraic equations that one can attempt to solve on a computer. Numerical analysis addresses questions about the existence and uniqueness of solutions, the accuracy of computed solutions to the algebraic equations, how well those solutions approximate solutions to the differential equations, and how rapidly these approximate solutions can be obtained. Questions about how accurately the differential equations model the physical system are also important, and sometimes one simply is not sure of the governing differential equations. In such cases, Monte Carlo simulations are often used, with probabilities of various events estimated from experiments; then statistical issues come to the fore. In the past, numerical analysts have not dealt much with these types of questions, but this is changing as randomness and stochasticity play a larger role in numerical algorithms. With the availability of massive amounts of data, even the modeling methods are changing, with machines learning how to predict outcomes from a set of training data.
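
The pipeline from differential equation to algebraic system can be made concrete in a few lines of code. Below is a minimal MATLAB sketch (MATLAB chosen since it is mentioned later in the article) of the two-point boundary value problem u''(x) = f(x) with u(0) = u(1) = 0, discretized by centered finite differences into a tridiagonal linear system; the grid size and the test problem are illustrative choices, not taken from the article.

```matlab
% Discretize u''(x) = f(x), u(0) = u(1) = 0, on a uniform grid,
% turning the differential equation into a linear system A*u = f.
n = 99;  h = 1/(n+1);  x = (1:n)'*h;        % interior grid points
e = ones(n,1);
A = spdiags([e -2*e e], -1:1, n, n) / h^2;  % second-difference matrix
f = -pi^2 * sin(pi*x);                      % exact solution is sin(pi*x)
u = A \ f;                                  % solve the algebraic system
max(abs(u - sin(pi*x)))                     % discretization error
```

Halving h should reduce the reported error by roughly a factor of four, reflecting the O(h²) accuracy of the centered difference formula.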

My field is numerical linear algebra, and, in the end, that is what almost all of these different questions come down to. In data science, for example, principal component analysis is often used to reduce an extremely large matrix, such as the Netflix matrix of movie preferences, to a smaller one that retains the most important information. In linear algebra, this is known as the singular value decomposition, or SVD. One exercise that I enjoy doing in my numerical linear algebra class is to take a complicated image (imagedemo in MATLAB generates a grayscale rendering of an Albrecht Dürer print) and see how many of the singular values and vectors are needed to recover a recognizable image. One generally finds that even complicated images can be represented fairly well using far less data, as illustrated below:

Figure 1: Grayscale image of an Albrecht Dürer print, represented by a 648 × 509 matrix, and the same image approximated using 10, 50, or 100 singular vectors (principal components).
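
The experiment behind Figure 1 is easy to reproduce. The sketch below assumes the durer demo dataset that ships with MATLAB (the same Dürer print used by imagedemo); the choice k = 50 is just one of the truncation levels shown in the figure.

```matlab
% Low-rank image approximation via the truncated SVD.
load durer             % demo data: X holds the image, map its colormap
[U, S, V] = svd(X);    % X = U*S*V', singular values on the diagonal of S

k = 50;                % number of singular values/vectors to keep
Xk = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % best rank-k approximation

imagesc(Xk), colormap(gray), axis image off
title(sprintf('Rank-%d approximation', k))
```

Storing the rank-k factors takes k(m + n + 1) numbers instead of mn; for this 648 × 509 image with k = 50, that is under 20% of the original data.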

Last year, I received the Boeing Endowed Professorship in Applied Mathematics, for a three-year term. While I do not work directly with anyone from the Boeing company, much of the work that I do is of relevance there. Too often, engineers and mathematicians rely on matrix eigenvalues to analyze the stability of solutions to differential equations. They forget that while eigenvalues determine asymptotic stability (what happens in the limit as t → ∞), they do not necessarily describe the early-time behavior of such systems. Various quantities have been proposed to describe transient behavior, but initially the growth or decay of the solution is governed by the largest eigenvalue of the symmetric part of the matrix, (A + Aᵗ)/2, which is the real part of the rightmost point of the numerical range. The figure below shows the eigenvalues of a matrix related to the Boeing 767 aircraft after being “stabilized” through a controller designed to push all eigenvalues of the matrix into the left half-plane. [See J. Burke, A. Lewis, and M. Overton, IFAC Proceedings, vol. 36, issue 11, pp. 175–181, 2003, for some other approaches.] Unfortunately, the numerical range extends far into the right half-plane, with the largest eigenvalue of (A + Aᵗ)/2 being on the order of 10⁶! Fortunately, this was not the design that was ultimately chosen.

Figure 2: “Stabilized” matrix related to the Boeing 767 aircraft. All eigenvalues are in the left half-plane, but the numerical range extends far into the right half-plane.
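
The gap between eigenvalue stability and transient behavior is easy to see numerically. The following MATLAB sketch uses a small 2 × 2 toy matrix (not the actual Boeing 767 model): its eigenvalues lie in the left half-plane, yet the largest eigenvalue of its symmetric part is large and positive, and ‖eᵗᴬ‖ grows substantially before it eventually decays.

```matlab
% A non-normal matrix: eigenvalues say "stable", the symmetric part says
% "watch the transient".
A = [-1 100; 0 -2];       % eigenvalues -1 and -2: asymptotically stable

eig(A)                    % both eigenvalues in the left half-plane
max(eig((A + A')/2))      % largest eigenvalue of the symmetric part:
                          % large and positive, so solutions grow at first

t = linspace(0, 1, 101);  % track the norm of the solution operator
nrm = arrayfun(@(s) norm(expm(s*A)), t);
plot(t, nrm), xlabel('t'), ylabel('||e^{tA}||')
```

For this matrix the norm climbs from 1 to roughly 25 before the asymptotic decay predicted by the eigenvalues finally sets in.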

When I was an undergraduate, I majored in Math, which was fairly good preparation for what I do today. An even better preparation might have been Math combined with computer science and some applications. For this reason, I am very enthusiastic about our Applied and Computational Mathematical Sciences (ACMS) major here at the University of Washington. I am the current director of this program, which is run jointly by the departments of Math, Applied Math, Computer Science, and Statistics. All students in the program take core courses in the four departments and can then branch off into several different pathways. This multidisciplinary program will soon be augmented by a new undergraduate degree program in Applied Mathematics, as described in the Autumn 2018 newsletter. Together, these programs will provide more opportunities for the growing number of students interested in the applied mathematical sciences.
