TMA265 / MMA600 Numerical linear algebra
Course PM
This page contains the program of the course: lectures, exercise sessions and computer labs. Other information, such as learning outcomes, teachers, literature and examination, is in a separate course PM.
Registering for courses for PhD students: to register, please send an e-mail to larisa@chalmers.se
PhD students taking a Chalmers Master’s course: apply to the course by contacting the course examiner. Additionally, PhD students at Chalmers should register for the course
using the link:
PhD students at GU should register via antagning.se
Registration of PhD students to the exam:
- via Doktorandportalen (for Chalmers PhD students)
- by sending an e-mail to utb-adm.mv@chalmers.se (for GU PhD students)
- the course will be registered when the result is reported in Ladok.
Announcement: starting from 1 November 2024 I will give the PhD/Master's course
Numerical methods and machine learning algorithms for solution of inverse problems, 7.5 Hp
Here is the link to the course: Numerical methods and machine learning algorithms for solution of inverse problems
Program
The schedule of the course is in TimeEdit.
The course will be given in a hybrid format: the theoretical lectures will be given live or in Zoom, and the computer sessions can be performed in the computer labs
with supervision in Zoom: students can ask questions in Zoom or come to my office during the computer lab sessions.
We will follow the schedule in the table below; see all dates and Zoom links for lectures and computer labs in TimeEdit.
Zoom link for all lectures and comp.labs:
Meeting ID: 677 1750 7274
Passcode: 644985
List of formulas for examination
Proposed groups for computer labs: go to People --> Groups in CANVAS and choose your group.
If you have any difficulties creating a group, write me an e-mail.
Lectures and comp.ex. in 2024
Day | Time | Place | Remarks |
---|---|---|---|
MONDAYS: 02.09, 09.09, 16.09, 23.09, 30.09, 07.10, 14.10, 21.10 | 13:15-15:00 | Pascal/Zoom | Lecture. The first two lectures (02.09 and 06.09) will be only in Zoom |
WEDNESDAYS: 04.09, 11.09, 18.09, 25.09, 02.10, 09.10, 16.10, 23.10 | 13:15-15:00 | MVF24, MVF25, Zoom supervision | Computer exercises |
THURSDAYS: 05.09, 12.09, 19.09, 26.09, 03.10, 10.10, 17.10, 24.10 | 10:00-11:45 | Pascal/Zoom | Lecture |
FRIDAYS: 06.09, 13.09, 20.09, 27.09, 04.10, 11.10, 18.10, 25.10 | 13:15-15:00 | MVF24, MVF25, Zoom supervision | Computer exercises |
29.10.2024 | 14:00-18:00 | exam | Examination, see schema |
09.01.2025 | 14:00-18:00 | exam | Examination, see schema |
Grades at Examination (written examination + bonus points)
Grades Chalmers | Points | Grades GU | Points |
---|---|---|---|
- | < 15 | U | < 15 |
3 | 15-20 | G | 15-27 |
4 | 21-27 | VG | > 27 |
5 | > 27 | | |
NEWS
Modified computer labs in Matlab and C++/PETSc are available:
Link to all computer labs with instructions
Separate links to computer labs:
Computer Lab. 1: Solution of least squares problem
Computer Lab. 2: Least squares and machine learning algorithms for classification problem
Computer Lab. 4: Principal component analysis for image recognition
Computer Lab. 5: Solution of coefficient inverse problem for Helmholtz equation
Deadlines for computer exercises and homeworks (submit reports and programs via CANVAS):
Homework 1: 12 September
Homework 2: 19 September
Comp.ex. 1: 2 October
Homeworks 3 and 4: 10 October
Comp.ex. 2: 18 October
Comp.ex. 3, 4: 25 October
Comp.ex. 5: 30 October
- Lecture 1
Introduction and organization of the course. Introduction to linear algebra and numerical linear algebra. If this looks unfamiliar, you should consult your previous linear algebra literature and refresh your knowledge. In the course we will concentrate on the three building blocks of Numerical Linear Algebra (NLA): (1) linear systems, (2) overdetermined systems by least squares, and (3) eigenproblems. We consider examples of applications of linear systems: image compression using the SVD and image deblurring. We introduce some basic concepts and notation: transpose, lower and upper triangular matrices, singular matrices, symmetric and positive definite matrices, conjugate transpose, row echelon form, rank, cofactor.
Lecture 1 (part 1, organization)
Lecture 1 ( part 2, introduction to NLA)
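As an illustration of the image-compression example from Lecture 1, here is a small Python/NumPy sketch of low-rank approximation by the SVD (the course labs use Matlab, but the algorithm is the same): keeping only the k largest singular triplets gives the best rank-k approximation.

```python
import numpy as np

def svd_compress(A, k):
    """Best rank-k approximation of A (Eckart-Young) via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

# A rank-2 "image": its rank-2 approximation reproduces it exactly,
# while storing only k*(m + n + 1) numbers instead of m*n.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
A2 = svd_compress(A, 2)
err = np.linalg.norm(A - A2)
```

For a real image, A is the matrix of pixel intensities and k is chosen to balance storage against visual quality.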
- Lecture 2
- IEEE system and floating-point numbers.
- We will discuss perturbation theory in polynomial evaluation. See section 8.2.1 in the course book.
- Lecture 2
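To make the perturbation discussion of Lecture 2 concrete, here is a Python sketch of Horner's rule together with a standard running roundoff bound (a generic textbook-style bound, not necessarily the exact constant used in section 8.2.1):

```python
import numpy as np

def horner(coeffs, x):
    """Evaluate p(x) = c[0]*x^n + ... + c[n] by Horner's rule,
    returning the value and a simple running roundoff-error bound."""
    p = coeffs[0]
    bound = abs(p)
    for c in coeffs[1:]:
        p = p * x + c
        bound = abs(x) * bound + abs(p)
    eps = np.finfo(float).eps      # unit roundoff of IEEE double precision
    return p, 2 * len(coeffs) * eps * bound

# (x - 2)^3 = x^3 - 6x^2 + 12x - 8 evaluated at x = 3 gives exactly 1
val, err_bound = horner([1.0, -6.0, 12.0, -8.0], 3.0)
```

Near a multiple root the bound becomes much larger than the computed value, which is exactly the ill-conditioning phenomenon discussed in the lecture.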
- Lecture 3
We will discuss why we need norms: (1) to measure errors, (2) to measure stability, (3) for convergence criteria of iterative methods. The Sherman-Morrison formula. See chapter 6 in the course book. Systems of linear equations: Gaussian elimination and LU factorization; Gaussian elimination (and LU factorization) by outer products (elementary transformations); partial and total pivoting. See chapter 8 in the course book.
- Lecture 3
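The outer-product form of Gaussian elimination from Lecture 3 can be sketched in a few lines of Python (the labs use Matlab/C++, but the algorithm is identical): each step chooses a pivot row and subtracts an outer product from the trailing submatrix.

```python
import numpy as np

def lu_partial_pivoting(A):
    """LU factorization with partial pivoting: returns P, L, U with P A = L U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))      # partial pivoting: largest entry
        U[[k, p]] = U[[p, k]]
        P[[k, p]] = P[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        L[k+1:, k] = U[k+1:, k] / U[k, k]        # multipliers
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])  # outer-product update
    return P, L, U

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
P, L, U = lu_partial_pivoting(A)
```

With the factors in hand, Ax = b is solved by one permutation and two triangular solves.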
- Lecture 4
We will discuss the need for pivoting and the uniqueness of the factorization of A. Different versions of the Gaussian elimination algorithm. Error bounds for the solution of Ax=b. Roundoff error analysis in LU factorization. See sections 8.1.2, 8.2.2, 8.2.3, 8.2.4 in the course book. Estimating condition numbers: Hager's algorithm.
- Lecture 4
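A minimal Python sketch of Hager's 1-norm estimator from Lecture 4 (the production version behind LAPACK and Matlab's condest has extra safeguards): it maximizes ||A^{-1}x||_1 over the unit 1-ball using only solves with A and A^T.

```python
import numpy as np

def hager_inv_norm1(A, maxiter=5):
    """Hager's estimator for ||A^{-1}||_1; multiplied by ||A||_1 it
    gives a cheap estimate of the 1-norm condition number."""
    n = A.shape[0]
    x = np.ones(n) / n
    est = 0.0
    for _ in range(maxiter):
        y = np.linalg.solve(A, x)            # y = A^{-1} x
        est = np.abs(y).sum()
        xi = np.where(y >= 0, 1.0, -1.0)     # subgradient of ||y||_1
        z = np.linalg.solve(A.T, xi)         # z = A^{-T} xi
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:            # local maximum reached
            break
        x = np.zeros(n)
        x[j] = 1.0                           # move to the best unit vector
    return est

A = np.diag([1.0, 2.0, 100.0])
cond1_est = hager_inv_norm1(A) * np.linalg.norm(A, 1)
```

For this diagonal test matrix the estimate is exact: ||A^{-1}||_1 = 1 and the 1-norm condition number is 100.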
- Lecture 5
Componentwise error estimates. Improving the accuracy of a solution of Ax=b: Newton's method and the equilibration technique. Convergence of Newton's method. Real symmetric positive definite matrices: the Cholesky algorithm. See sections 8.2.5, 8.3, 8.4.1 of the course book.
- Band matrices; example: application of the Cholesky decomposition to the solution of an ODE. See sections 8.4.2, 8.4.3 in the course book.
- Example: how to solve Poisson's equation on a unit square using LU and Cholesky factorization. See section 8.4.4 in the course book.
- Lecture 5
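The Cholesky algorithm of Lecture 5 can be sketched column by column in Python (illustrative only; the labs use Matlab), applied here to a 1D second-difference matrix of the kind that arises in the ODE example:

```python
import numpy as np

def cholesky_lower(A):
    """Cholesky factorization A = L L^T for symmetric positive definite A."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        if d <= 0:
            raise ValueError("matrix is not positive definite")
        L[j, j] = np.sqrt(d)
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

# 1D Poisson (second-difference) matrix: tridiagonal and SPD
A = 2*np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
L = cholesky_lower(A)
```

For band matrices like this one, L inherits the bandwidth, which is why Cholesky is so effective for discretized differential equations.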
- Lecture 6
Linear least squares problems: introduction and applications. The normal equations. Example: polynomial fitting to a curve. See the introduction to chapter 9 and sections 9.1, 9.3 in the course book.
QR and singular value decomposition (SVD). QR and SVD for linear least squares problems. Example of an application of linear systems: image compression using the SVD in Matlab. See sections 9.4, 9.6 in the course book.
- Lecture 6
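The polynomial-fitting example of Lecture 6 in a Python sketch, solved both by the normal equations and by QR (the labs use Matlab; the data here are synthetic points lying exactly on a quadratic):

```python
import numpy as np

# Fit a quadratic to data lying exactly on y = 1 + 2x + 3x^2
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 1 + 2*x + 3*x**2
V = np.vander(x, 3, increasing=True)        # columns: 1, x, x^2

# Normal equations (V^T V) c = V^T y: simple, but squares the condition number
c_normal = np.linalg.solve(V.T @ V, V.T @ y)

# QR route: V = Q R, then solve R c = Q^T y (numerically preferred)
Q, R = np.linalg.qr(V)
c_qr = np.linalg.solve(R, Q.T @ y)
```

Both routes recover the coefficients (1, 2, 3) here; for ill-conditioned Vandermonde matrices the QR (or SVD) route is the stable choice.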
- Lecture 7
- Least squares and classification algorithms.
- See section 9.2 in the course book and the paper Numerical analysis of least squares and perceptron learning for classification problems, https://arxiv.org/abs/2004.01138 for classification.
- Machine learning algorithms for classification.
- Lecture 7
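A minimal Python sketch of the least-squares approach to classification from Lecture 7, on a hypothetical two-class toy dataset (the lab works with the iris and MNIST data instead): encode the labels as +-1, solve a least squares problem, and classify by the sign of the fitted linear function.

```python
import numpy as np

# Hypothetical toy data: class +1 around (2, 2), class -1 around (-2, -2)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)),
               rng.normal(-2.0, 0.5, (20, 2))])
t = np.hstack([np.ones(20), -np.ones(20)])   # target labels +-1

# Least squares classifier: fit [X 1] w ~ t, classify by the sign of the fit
Xb = np.hstack([X, np.ones((40, 1))])        # append a bias column
w, *_ = np.linalg.lstsq(Xb, t, rcond=None)
pred = np.sign(Xb @ w)
accuracy = np.mean(pred == t)
```

For well-separated classes like these the linear decision boundary classifies essentially all training points correctly.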
- Lecture 8
- Solution of nonlinear least squares problem.
- See section 9.2 in the course book.
- SVD and Principal Component Analysis (PCA) to find patterns and object orientation.
- Example of PCA. PCA for recognition of handwritten digits.
- Lecture 8
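The PCA recipe of Lecture 8 in a Python sketch (the digit-recognition lab does the same in Matlab on MNIST images): center the data, take the SVD, and read the principal directions off the right singular vectors.

```python
import numpy as np

# Synthetic points on the line through direction (0.6, 0.8):
# PCA should recover exactly that direction.
t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
X = np.outer(t, [3.0, 4.0])                  # 5 samples in R^2

Xc = X - X.mean(axis=0)                      # 1. center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # 2. SVD of centered data
principal_direction = Vt[0]                  # 3. first right singular vector
scores = Xc @ Vt.T                           # coordinates in the PCA basis
```

The singular values s measure how much variance each principal direction captures; here the second one is zero because the data are exactly one-dimensional.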
- Lecture 9
- Householder transformations and Givens rotations. QR factorization by Householder transformations. Examples of performing a Householder transformation for QR decomposition and tridiagonalization of a matrix.
- See section 9.5.1 of the course book.
- Givens rotations. QR factorization by Givens rotations. See section 9.5.2 of the course book.
- Moore-Penrose pseudoinverse. Rank-Deficient Least Squares Problems. See sections 9.6.1, 9.6.2 of the course book.
- Lecture 9
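Householder QR from Lecture 9 as a Python sketch (illustrative; the course material develops this in Matlab notation): each reflection H = I - 2vv^T zeroes one column below the diagonal, and Q is accumulated by applying the reflections to the identity.

```python
import numpy as np

def householder_qr(A):
    """QR factorization by Householder reflections: A = Q R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k].copy()
        s = 1.0 if x[0] >= 0 else -1.0
        x[0] += s * np.linalg.norm(x)        # v = x + sign(x1)||x|| e1
        nv = np.linalg.norm(x)
        if nv == 0.0:
            continue                          # column already zeroed
        v = x / nv
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # apply H = I - 2vv^T
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)    # accumulate Q
    return Q, R

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
Q, R = householder_qr(A)
```

The sign choice sign(x1) in v avoids cancellation, which is the standard stabilization discussed in the lecture.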
- Lecture 10
- Introduction to spectral theory: eigenvalues, right and left eigenvectors. Similar matrices. Canonical forms: the Jordan form, the Schur form and the real Schur form. Gerschgorin's theorem. Perturbation theory: the Bauer-Fike theorem. See chapter 5 of the course book. Algorithms for non-symmetric eigenproblems: the power method and inverse iteration. See sections 10.1, 10.2 in chapter 10 of the course book.
- Lecture 10
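The power method of Lecture 10 in a Python sketch (inverse iteration is the same loop applied to (A - sigma I)^{-1}): repeated multiplication by A aligns the iterate with the dominant eigenvector, and the Rayleigh quotient gives the eigenvalue.

```python
import numpy as np

def power_method(A, iters=100):
    """Power method: converges to the dominant eigenpair
    when |lambda_1| > |lambda_2| and the start is not orthogonal to v_1."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)               # normalize to avoid overflow
    return x @ A @ x, x                      # Rayleigh quotient, eigenvector

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
```

The error decays like |lambda_2/lambda_1|^k, so convergence is slow when the two largest eigenvalues are close; shifts and inverse iteration fix this.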
- Lecture 11
Algorithms for non-symmetric eigenproblems (continued): inverse iteration with shift, orthogonal iteration, QR iteration and QR iteration with shifts. Hessenberg matrices and preservation of the Hessenberg form. Hessenberg reduction. Tridiagonal and bidiagonal reduction. See sections 10.2, 10.3, 10.4, 10.5, 10.6, 10.7 in the course book.
- Lecture 11
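Unshifted QR iteration from Lecture 11, sketched in Python (practical codes first reduce to Hessenberg form and use shifts, but the basic loop is just this): factor A_k = Q_k R_k and form A_{k+1} = R_k Q_k, a similarity transformation that preserves the eigenvalues.

```python
import numpy as np

def qr_iteration(A, iters=60):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k and,
    for symmetric A with distinct |eigenvalues|, converges to diagonal form."""
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Ak = qr_iteration(A)
eigs = np.sort(np.diag(Ak))
```

The off-diagonal entries decay like |lambda_2/lambda_1|^k, which is exactly why shifts (next lectures) are needed for fast convergence.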
- Lecture 12
Regular matrix pencils and the Weierstrass canonical form. Singular matrix pencils and the Kronecker canonical form. Application of the Jordan and Weierstrass forms to differential equations. Symmetric eigenproblems. Perturbation theory: Weyl's theorem and a corollary giving a similar result for singular values. Application of Weyl's theorem: error bounds for eigenvalues computed by a stable method. The Courant-Fischer theorem. Inertia and Sylvester's inertia theorem. Definite pencils. Theorem that the Rayleigh quotient is a good approximation to an eigenvalue. Algorithms for the symmetric eigenproblem: tridiagonal QR iteration and Rayleigh quotient iteration. In the course book see section 5.3 for matrix pencils and sections 11.1, 11.2 for algorithms for the symmetric eigenproblem.
- Lecture 12
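Rayleigh quotient iteration from Lecture 12 as a Python sketch: inverse iteration where the shift is updated to the current Rayleigh quotient, giving cubic convergence for symmetric matrices.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=8):
    """Rayleigh quotient iteration for symmetric A: inverse iteration
    with the Rayleigh quotient as shift; converges cubically."""
    x = x0 / np.linalg.norm(x0)
    mu = x @ A @ x
    n = A.shape[0]
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - mu * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                            # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        mu = x @ A @ x
    return mu, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
mu, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
residual = np.linalg.norm(A @ x - mu * x)
```

Note that which eigenpair the iteration finds depends on the starting vector, unlike the power method, which always targets the dominant one.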
- Lecture 13
Theorem that the Rayleigh quotient is a good approximation to an eigenvalue. Algorithms for the symmetric eigenproblem: tridiagonal QR iteration, Rayleigh quotient iteration and the divide-and-conquer algorithm. QR iteration with Wilkinson's shift. Divide-and-conquer, bisection and inverse iteration, and different versions of Jacobi's method. See sections 11.2, 11.3, 11.4, 11.5 in the course book.
- Lecture 13
- Lecture 14
Algorithms for symmetric matrices (continued): different versions of Jacobi's method. Algorithms for the SVD: QR iteration and its variations for the bidiagonal SVD. The basic iterative methods (Jacobi, Gauss-Seidel and successive overrelaxation (SOR)) for the solution of linear systems. Jacobi, Gauss-Seidel and SOR for the solution of Poisson's equation in two dimensions. Study of the convergence of Jacobi, Gauss-Seidel and SOR. In the course book see sections 11.5, 11.6, 11.7, 11.8 for algorithms for symmetric matrices and chapter 12 for iterative methods for the solution of linear systems.
- Lecture 14
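The Jacobi iteration of Lecture 14 as a Python sketch on a 1D Poisson-type system (the lecture treats the 2D Poisson case; Gauss-Seidel and SOR differ only in how the splitting is chosen):

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k) from the
    splitting A = D + R with D the diagonal of A."""
    d = np.diag(A)
    R = A - np.diag(d)                       # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

# 1D second-difference matrix: Jacobi converges here (slowly, with
# spectral radius cos(pi/(n+1)) of the iteration matrix)
A = 2*np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
b = np.ones(4)
x = jacobi(A, b)
```

Gauss-Seidel uses the freshly updated components within each sweep, and SOR adds a relaxation parameter omega, which is what the convergence study in the lecture compares.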
- Lecture 15
Introduction to the Krylov subspace methods. Conjugate gradient algorithm. Preconditioning for Linear Systems. Preconditioned conjugate gradient algorithm. Common preconditioners.
In the course book see chapter 12 for iterative methods for the solution of linear systems.
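The conjugate gradient algorithm of Lecture 15, sketched in Python without a preconditioner (the preconditioned version replaces r by M^{-1}r with an easily invertible M, e.g. a Jacobi or incomplete-Cholesky preconditioner):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Plain conjugate gradient for symmetric positive definite A."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                            # residual
    p = r.copy()                             # search direction
    rs = r @ r
    for _ in range(maxiter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)                # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p            # A-conjugate direction update
        rs = rs_new
    return x

A = 2*np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)   # SPD model problem
b = np.ones(5)
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n steps (and in as many steps as there are distinct eigenvalues), which is why clustering the spectrum by preconditioning speeds it up so dramatically.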
Homeworks
- To pass this course you should complete 2 compulsory home assignments before the final exam. Choose any 2 of the 4 assignments presented here:
- Homeworks
- Homework should be done individually (not in groups).
- Upload pdf files with your homework under "Assignments" in CANVAS. Alternatively, send a pdf file with your assignment to my e-mail larisa@chalmers.se, or leave it (handwritten assignments are also accepted) in the red box beside my office before the deadline (see the course page "Program" for the deadline for each home assignment).
Recommended exercises in the course book
Chapters | Exercises |
---|---|
8 | 8.7, 8.10, 8.16 |
9 | 9.5, 9.7, 9.8, 9.9, 9.11, 9.12, 9.14, 9.15 |
11 | 11.1 - 11.13 |
Computer labs
Computer labs with instructions
- To pass this course you should complete any 2 of the computer assignments
- Paper with real values of parameters for comp.ex.1 (see Table II, page 345 )
- File iris.csv for comp.ex. 2 and comp.ex. 3
- Matlab program testiris2.m which classifies data from iris.csv into 2 classes
- Test dataset mnist_test_10.csv with handwritten digits
- Train dataset mnist_train.csv with handwritten digits
- Matlab program loadmnist_matlab.m for reading dataset mnist_test_10.csv
- Matlab function import_mnist.m used by the program loadmnist_matlab.m
- Matlab program for comp.ex. 4 (PCA for image recognition)
- PETSc code for the solution of the Helmholtz equation (see Programs for download)
- Every computer exercise should be presented in the form of a written report together with programs written in Matlab or C++/PETSc. You can download the latex template for the report here:
- latex-template (pdf-file)
- latex-template (tex-file)
- You can work in groups of 2 persons.
- Send the final report for each computer assignment, with a description of your work together with the Matlab or C++/PETSc programs for testing, to my e-mail before the deadline.
- The report should contain: a description of the techniques used, tables and figures confirming your investigations, and an analysis of the obtained results. The summarized results should be described in a final section ``Conclusions''.
- Matlab and C++/PETSc programs for the examples in the course book are available for free download: go to the link of the course book and click on ``GitHub Page with MATLAB® Source Codes'' at the bottom of that page.
Reference literature:
- Learning MATLAB, Tobin A. Driscoll. Provides a brief introduction to Matlab for those who already know computer programming. Available as an e-book from the Chalmers library.
- Physical Modeling in MATLAB 3/E, Allen B. Downey
The book is free to download from the web. The book gives an introduction for those who have not programmed before. It covers basic MATLAB programming with a focus on modeling and simulation of physical systems.