1. Maths for Signals and Systems: Linear Algebra in Engineering. Lectures 13-15, Tuesday 8th and Friday 11th November 2016. Dr Tania Stathaki, Reader (Associate Professor) in Signal Processing, Imperial College London.
2. Positive definite matrices. A symmetric or Hermitian matrix is positive definite if and only if (iff) all its eigenvalues are positive (the eigenvalues of such a matrix are real in any case). The pivots are then positive, and so is the determinant. However, a positive determinant does not by itself guarantee positive definiteness. Example: Consider a symmetric matrix $A$. Its eigenvalues are obtained from $\det(A-\lambda I)=0$. If the eigenvalues turn out to be positive, then, since the matrix is symmetric, the matrix is positive definite.
3. Positive definite matrices cont. We are talking about symmetric matrices. We have various equivalent tests for positive definiteness of a symmetric matrix $A$: (1) all eigenvalues are positive, $\lambda_i > 0$; (2) all pivots are positive, $d_i > 0$; (3) all determinants of the leading ("north west") sub-matrices are positive; (4) $x^TAx > 0$ for every non-zero vector $x$. The expression $x^TAx$ is called the quadratic form.
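To make the tests concrete, here is a minimal numpy sketch; the symmetric matrix used is a hypothetical stand-in, not one of the lecture's examples. Cholesky factorization stands in for the pivot test, since it succeeds exactly when all pivots are positive.

```python
import numpy as np

# Hypothetical symmetric matrix (stand-in, not from the slides).
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Test 1: all eigenvalues positive (eigvalsh is for symmetric/Hermitian input).
eigenvalues = np.linalg.eigvalsh(A)
print("eigenvalues positive:", np.all(eigenvalues > 0))

# Test 3: all leading ("north west") principal minors positive.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print("leading minors positive:", all(m > 0 for m in minors))

# Test 2 (proxy): Cholesky succeeds iff the matrix is positive definite,
# which is equivalent to all pivots being positive.
try:
    np.linalg.cholesky(A)
    print("Cholesky succeeded: positive definite")
except np.linalg.LinAlgError:
    print("Cholesky failed: not positive definite")
```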
4. Positive semi-definite matrices. Example: Consider a $2\times 2$ symmetric matrix containing a free parameter $b$. Which sufficiently large values of $b$ make the matrix positive definite? The determinant test answers this: the matrix is positive definite for every $b$ above the value that makes the determinant zero. At that threshold value the matrix is positive semi-definite: one of its eigenvalues is zero, the matrix is singular and therefore has only one pivot, and its quadratic form is $\geq 0$, vanishing for some non-zero vector. In that case the matrix marginally fails the test.
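The specific matrix did not survive extraction; as an illustrative stand-in, the sketch below uses the hypothetical one-parameter family $\begin{pmatrix}2&6\\6&b\end{pmatrix}$, whose determinant $2b-36$ makes $b=18$ the marginal, semi-definite case.

```python
import numpy as np

# Hypothetical family [[2, 6], [6, b]]: det = 2b - 36, so b > 18 gives a
# positive definite matrix and b = 18 the marginal, semi-definite case.
for b in (17.0, 18.0, 19.0):
    A = np.array([[2.0, 6.0], [6.0, b]])
    eigenvalues = np.linalg.eigvalsh(A)
    print(f"b = {b}: eigenvalues = {np.round(eigenvalues, 6)}")

# At b = 18 the eigenvalues are 0 and 20: positive semi-definite and singular,
# and the quadratic form 2(x + 3y)^2 vanishes along the line x = -3y.
```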
5. Graph of the quadratic form. In mathematics, a quadratic form is a homogeneous polynomial of degree two in a number of variables. For example, the condition for positive definiteness of a $2\times 2$ symmetric matrix, $f(x,y) = ax^2 + 2bxy + cy^2 > 0$, is a quadratic form in the variables $x$ and $y$. For the positive definite case the graph of $f$ is a bowl with a minimum at the origin. Obviously, the first derivatives must be zero at the minimum, but this condition is not enough: the matrix of second derivatives, $\begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix}$, must be positive definite. "Positive" for a number turns into "positive definite" for a matrix. [Figure: a saddle surface (not positive definite) next to a bowl-shaped surface with a minimum (positive definite).]
6. Example 1. Example: $f(x,y) = ax^2 + 2bxy + cy^2$ with the matrix $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ positive definite. A horizontal intersection of the graph (a level set $f(x,y) = \text{const}$) is an ellipse. For the positive definite case the graph is a bowl: the first derivatives are zero at the minimum and the matrix of second derivatives is positive definite. [Figure: the bowl-shaped graph of the quadratic form of Example 1, with its minimum.]
7. Example 1 cont. Note that completing the square is effectively elimination: for $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$, $x^TAx = a\left(x + \frac{b}{a}y\right)^2 + \left(c - \frac{b^2}{a}\right)y^2$. The pivots and the multipliers appear in the quadratic form when we complete the square: the pivots $a$ and $c - b^2/a$ are the coefficients of the squared terms, and the multiplier $b/a$ appears inside the first square. Since the pivots multiply the squared functions, positive pivots imply a sum of squares and hence positive definiteness.
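A short numpy sketch of this connection, with hypothetical values of $a$, $b$, $c$: elimination without row exchanges produces exactly the coefficients $a$ and $c - b^2/a$ that completing the square produces.

```python
import numpy as np

def pivots(A):
    """Pivots of A via Gaussian elimination without row exchanges
    (no exchanges are needed when A is positive definite)."""
    U = A.astype(float)
    n = U.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]   # subtract multiplier * pivot row
    return np.diag(U)

# Hypothetical 2x2 example: x^T A x = a x^2 + 2 b x y + c y^2.
a, b, c = 2.0, 6.0, 20.0
A = np.array([[a, b], [b, c]])
# Completing the square gives a (x + (b/a) y)^2 + (c - b^2/a) y^2.
print("pivots from elimination:", pivots(A))       # [a, c - b^2/a]
print("square coefficients:   ", a, c - b**2 / a)  # the same numbers
```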
8. Example 2. Example: Consider the matrix $A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}$. The leading ("north west") determinants are 2, 3, 4. The pivots (the ratios of successive leading determinants) are 2, 3/2, 4/3. The quadratic form is $x^TAx = 2x_1^2 + 2x_2^2 + 2x_3^2 - 2x_1x_2 - 2x_2x_3$. This can be written as a sum of squares: $2\left(x_1 - \tfrac{1}{2}x_2\right)^2 + \tfrac{3}{2}\left(x_2 - \tfrac{2}{3}x_3\right)^2 + \tfrac{4}{3}x_3^2$. The eigenvalues of $A$ are $2-\sqrt{2}$, $2$, $2+\sqrt{2}$. The quadratic form is positive whenever $x \neq 0$, so the matrix is positive definite.
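The slide's numbers can be checked directly; this sketch assumes the tridiagonal matrix reconstructed above.

```python
import numpy as np

# The 3x3 tridiagonal matrix of Example 2.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

minors = [np.linalg.det(A[:k, :k]) for k in (1, 2, 3)]
print("leading determinants:", np.round(minors, 6))              # 2, 3, 4
print("pivots (ratios of successive minors):",
      [minors[0], minors[1] / minors[0], minors[2] / minors[1]]) # 2, 3/2, 4/3
print("eigenvalues:", np.round(np.linalg.eigvalsh(A), 6))        # 2-sqrt(2), 2, 2+sqrt(2)
```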
9. Positive definite matrices cont. If a matrix is positive definite, its inverse is also positive definite. This comes from the fact that the eigenvalues of the inverse of a matrix are the inverses of the eigenvalues of the original matrix. If matrices $A$ and $B$ are positive definite, then their sum is positive definite. This comes from the fact that $x^T(A+B)x = x^TAx + x^TBx > 0$ for every $x \neq 0$. The same comment holds for positive semi-definiteness. Consider a matrix $A$ of size $m\times n$ (rectangular, not square). In that case we are interested in the matrix $A^TA$, which is square. Is $A^TA$ positive definite?
10. The case of $A^TA$ and $AA^T$. Is $A^TA$ positive definite? We have $x^TA^TAx = (Ax)^T(Ax) = \|Ax\|^2 \geq 0$. In order for $x^TA^TAx > 0$ for every $x \neq 0$, the null space of $A$ must contain only the zero vector. In the case of $A$ being a rectangular matrix of size $m\times n$ with $m \geq n$, the rank of $A$ must therefore be $n$. In the case of $A$ being rectangular with $m < n$, the null space of $A$ cannot be trivial and therefore $A^TA$ is not positive definite. Following the above analysis, it is straightforward to show that $AA^T$ is positive definite if $m \leq n$ and the rank of $A$ is $m$.
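A small numpy check of both claims, with a hypothetical $3\times 2$ matrix of rank 2: $A^TA$ is positive definite, while $AA^T$ (which is $3\times 3$ but only rank 2) is merely positive semi-definite.

```python
import numpy as np

# Hypothetical tall matrix of full column rank.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                    # 3x2, rank 2

print("rank of A:", np.linalg.matrix_rank(A))
# A^T A is 2x2 and positive definite: all eigenvalues > 0.
print("eigenvalues of A^T A:", np.round(np.linalg.eigvalsh(A.T @ A), 6))
# A A^T is 3x3 but has rank 2: one eigenvalue is zero, so only semi-definite.
print("eigenvalues of A A^T:", np.round(np.linalg.eigvalsh(A @ A.T), 6))
```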
11. Similar matrices. Consider two square matrices $A$ and $B$. Suppose that for some invertible matrix $M$ the relationship $B = M^{-1}AM$ holds. In that case we say that $A$ and $B$ are similar matrices. Example: Consider a matrix $A$ which has a full set of eigenvectors. In that case $A = X\Lambda X^{-1}$, so $\Lambda = X^{-1}AX$. Based on the above, $A$ is similar to $\Lambda$. Similar matrices have the same eigenvalues. Matrices with identical eigenvalues are not necessarily similar: there are different families of matrices with the same eigenvalues. Consider the matrix $A$ with eigenvalue $\lambda$ and corresponding eigenvector $x$, and the matrix $B = M^{-1}AM$. We have $B(M^{-1}x) = M^{-1}AMM^{-1}x = M^{-1}Ax = \lambda(M^{-1}x)$. Therefore, $\lambda$ is also an eigenvalue of $B$, with corresponding eigenvector $M^{-1}x$.
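A quick numerical illustration (random hypothetical matrices, so only a sketch): similar matrices share their eigenvalues.

```python
import numpy as np

# Similar matrices B = M^{-1} A M share the eigenvalues of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = rng.standard_normal((3, 3))            # generically invertible
B = np.linalg.inv(M) @ A @ M

print(np.round(np.sort(np.linalg.eigvals(A)), 6))
print(np.round(np.sort(np.linalg.eigvals(B)), 6))  # same multiset

# If x is an eigenvector of A, then M^{-1} x is an eigenvector of B.
```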
12. Matrices with identical eigenvalues, some repeated. Consider the families of matrices with repeated eigenvalues. Example: Let us take the $2\times 2$ matrices with a repeated eigenvalue $\lambda, \lambda$. The two matrices $\lambda I = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$ and $J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ both have eigenvalues $\lambda, \lambda$, but they belong to different families. The matrix $\lambda I$ has no "relatives": the only matrix similar to it is itself, since $M^{-1}(\lambda I)M = \lambda I$. The big family includes $J$ and any matrix of the form $M^{-1}JM$, with $M$ invertible. These matrices are not diagonalizable, since they have only one independent eigenvector.
13. Singular Value Decomposition (SVD). The so-called Singular Value Decomposition (SVD) is one of the main highlights of Linear Algebra. Consider a matrix $A$ of dimension $m\times n$ and rank $r$. I would like to diagonalize $A$. What I know so far is the eigenvector decomposition $A = X\Lambda X^{-1}$. This diagonalization has the following weaknesses: $A$ has to be square, and there are not always enough eigenvectors. For example, the matrix $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ of the previous slide has only the eigenvector $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Goal: I am looking for a type of decomposition which can be applied to any matrix.
14. Singular Value Decomposition (SVD) cont. I am looking for a matrix factorization of the form $A = U\Sigma V^T$, where $A$ is any real matrix of dimension $m\times n$ and furthermore: $U$ is a unitary matrix of dimension $m\times m$ with columns $u_1,\dots,u_m$; $\Sigma$ is an $m\times n$ rectangular matrix with non-negative real entries only along the main diagonal, where the main diagonal is defined by the elements $\Sigma_{ii}$, $i = 1,\dots,\min(m,n)$; $V$ is a unitary matrix of dimension $n\times n$ with columns $v_1,\dots,v_n$, and is, in general, different to $U$. The above type of decomposition is called Singular Value Decomposition. The non-zero elements $\sigma_i$ of $\Sigma$ are the so-called singular values of matrix $A$. They are chosen to be positive. When $A$ is a square invertible matrix, then $r = m = n$. When $A$ is a symmetric (positive semi-definite) matrix, the eigenvectors of $A$ are orthonormal, so $V = U$. Therefore, for such symmetric matrices the SVD is effectively an eigenvector decomposition, $A = Q\Lambda Q^T$. For complex matrices, the transpose must be replaced with the conjugate transpose.
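As a sketch of how this looks numerically, numpy's np.linalg.svd returns $U$, the singular values, and $V^T$; the rectangular $\Sigma$ must be rebuilt by hand. The matrix used is a hypothetical example, not one from the slides.

```python
import numpy as np

# Hypothetical rectangular matrix.
A = np.array([[3.0, 2.0,  2.0],
              [2.0, 3.0, -2.0]])             # 2x3

U, s, Vt = np.linalg.svd(A)                  # full SVD: U is 2x2, Vt is 3x3
Sigma = np.zeros(A.shape)                    # rebuild the rectangular Sigma
Sigma[:len(s), :len(s)] = np.diag(s)

print("singular values:", np.round(s, 6))
print("A == U Sigma V^T:", np.allclose(A, U @ Sigma @ Vt))
print("U, V unitary:", np.allclose(U @ U.T, np.eye(2)),
      np.allclose(Vt @ Vt.T, np.eye(3)))
```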
15. Singular Value Decomposition (SVD) cont. From $A = U\Sigma V^T$, the following relationship holds: $A^TA = V\Sigma^TU^TU\Sigma V^T$. Do not forget that $U$ and $V$ are assumed to be unitary matrices, and therefore $U^TU = I$ and $V^TV = I$. So the matrix $A^TA$ is decomposed as $A^TA = V(\Sigma^T\Sigma)V^T$. In this expression $\Sigma^T\Sigma$ is a matrix of dimension $n\times n$ (square matrix). From the form of the original $\Sigma$, you can easily deduce that $\Sigma^T\Sigma$ is a diagonal matrix. It has $r$ non-zero elements along the diagonal: these are the squares $\sigma_i^2$ of the singular values of $A$, which are located along the main diagonal of the rectangular matrix $\Sigma$. $\Sigma^T\Sigma = \Sigma^2$ if the original matrix $A$ is a square matrix. Please note the difference between the "diagonal" (square matrices) and the "main diagonal" (rectangular matrices). Therefore, the above expression is the eigenvector decomposition of $A^TA$: the eigenvalues of $A^TA$ are the $\sigma_i^2$ and its eigenvectors are the columns of $V$.
16. Singular Value Decomposition (SVD) cont. Similarly, the eigenvector decomposition of $AA^T$ is: $AA^T = U\Sigma V^TV\Sigma^TU^T = U(\Sigma\Sigma^T)U^T$. In this expression $\Sigma\Sigma^T$ is a matrix of dimension $m\times m$. Similarly to $\Sigma^T\Sigma$, it is a square matrix with non-zero elements along the diagonal. Based on the properties stated in previous slides, the number and values of the non-zero elements of matrices $\Sigma^T\Sigma$ and $\Sigma\Sigma^T$ are identical. Note that these two matrices have different dimensions if $m \neq n$. In that case one of them (the bigger one) has at least one zero element on its diagonal, since they both have rank $r$, which is $\leq \min(m,n)$. From this and the previous slides, we deduce that we can determine all the factors of the SVD from the eigenvector decompositions of the matrices $A^TA$ and $AA^T$.
17. Useful properties. Let $A$ be an $m\times n$ matrix and let $B$ be an $n\times m$ matrix with $m \leq n$. Then the eigenvalues of $BA$ are the eigenvalues of $AB$, with the extra $n-m$ eigenvalues being 0. Therefore, the non-zero eigenvalues of $AB$ and $BA$ are identical. Therefore: let $A$ be an $m\times n$ matrix with $m \leq n$. Then the eigenvalues of $A^TA$ are the eigenvalues of $AA^T$, with the extra $n-m$ eigenvalues being 0. Similar comments hold for $m > n$. Matrices $A$, $A^TA$ and $AA^T$ have the same rank. Let $A$ be an $m\times n$ matrix with rank $r$. The matrix $A$ has $r$ singular values $\sigma_1,\dots,\sigma_r$. Both $A^TA$ and $AA^T$ have $r$ non-zero eigenvalues, which are the squares of the singular values of $A$. Furthermore: $A^TA$ is of dimension $n\times n$; it has $r$ eigenvectors associated with its non-zero eigenvalues and $n-r$ eigenvectors associated with its zero eigenvalues. $AA^T$ is of dimension $m\times m$; it has $r$ eigenvectors associated with its non-zero eigenvalues and $m-r$ eigenvectors associated with its zero eigenvalues.
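A numerical spot-check of these properties with hypothetical random matrices:

```python
import numpy as np

# Non-zero eigenvalues of AB and BA coincide; BA just carries extra zeros.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))             # m x n with m < n
B = rng.standard_normal((4, 2))             # n x m

print(np.round(np.sort(np.linalg.eigvals(A @ B)), 6))  # 2 eigenvalues
print(np.round(np.sort(np.linalg.eigvals(B @ A)), 6))  # same 2, plus 2 zeros

# A, A^T A and A A^T all have the same rank.
print("ranks:", np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(A.T @ A), np.linalg.matrix_rank(A @ A.T))
```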
18. Singular Value Decomposition (SVD) cont. I can write $A^TA = V(\Sigma^T\Sigma)V^T$ and $AA^T = U(\Sigma\Sigma^T)U^T$. Matrices $U$ and $V$ have already been defined previously. Note that in the above matrices, I put first in the columns the eigenvectors of $A^TA$ and $AA^T$ which correspond to non-zero eigenvalues. To take the above even further, I order the eigenvectors according to the magnitude of the associated eigenvalue: the eigenvector that corresponds to the maximum eigenvalue is placed in the first column, and so on. This ordering is very helpful in various real-life applications.
19. Singular Value Decomposition (SVD) cont. As already shown, from $A = U\Sigma V^T$ we obtain $AV = U\Sigma$, or, column by column, $Av_i = \sigma_i u_i$. Therefore, we can break the SVD into a set of relationships of the form $Av_i = \sigma_i u_i$. Note that $\sigma_i$ is a scalar and $u_i$ and $v_i$ are vectors. For $i = 1,\dots,r$ the relationship tells us that: The vectors $v_i$ are in the row space of $A$. This is because from $A^Tu_i = \sigma_i v_i$ we have $v_i = \frac{1}{\sigma_i}A^Tu_i$, $i = 1,\dots,r$. Furthermore, since the $v_i$'s associated with $\sigma_i \neq 0$ are orthonormal and there are $r$ of them, they form a basis of the row space. The vectors $u_i$ are in the column space of $A$. This observation comes directly from $u_i = \frac{1}{\sigma_i}Av_i$, i.e., the $u_i$'s are linear combinations of the columns of $A$. Furthermore, the $u_i$'s associated with $\sigma_i \neq 0$ are orthonormal. Thus, they form a basis of the column space.
20. Singular Value Decomposition (SVD) cont. Based on the facts that $v_1,\dots,v_r$ form an orthonormal basis of the row space of $A$ and $u_1,\dots,u_r$ form an orthonormal basis of the column space of $A$, we conclude that: with the SVD, an orthonormal basis of the row space, given by the columns of $V$, is mapped by matrix $A$ to an orthonormal basis of the column space, given by the columns of $U$. This comes from $Av_i = \sigma_i u_i$. The additional $v_i$'s, $i = r+1,\dots,n$, which correspond to the zero eigenvalues of $A^TA$, are taken from the null space of $A$ (and, similarly, the additional $u_i$'s are taken from the null space of $A^T$).
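The relationships $Av_i = \sigma_i u_i$ and the null-space origin of the trailing $v_i$'s can be verified directly; the rank-1 matrix below is a hypothetical example.

```python
import numpy as np

# Hypothetical rank-1 matrix: sigma_2 = 0, so v_2 spans the null space.
A = np.array([[4.0, 3.0],
              [8.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
V = Vt.T
for i in range(len(s)):
    print(f"A v_{i+1} == sigma_{i+1} u_{i+1}:",
          np.allclose(A @ V[:, i], s[i] * U[:, i]))

print("A v_2 (a null-space vector):", np.round(A @ V[:, 1], 10))  # ~ zero
```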
21. Examples of different matrices. We managed to find an orthonormal basis ($v_1,\dots,v_n$) of the row space (completed from the null space of $A$) and an orthonormal basis ($u_1,\dots,u_m$) of the column space that diagonalize the matrix $A$ to $\Sigma$: $U^TAV = \Sigma$. In general, the basis $V$ is different to the basis $U$. The SVD is written as $A = U\Sigma V^T$. The form of matrix $\Sigma$ depends on the dimensions: it is of dimension $m\times n$, and its elements are chosen as $\Sigma_{ii} = \sigma_i = \sqrt{\lambda_i}$, where $\lambda_i$ are the non-zero eigenvalues of $A^TA$ or $AA^T$, and the $\sigma_i$ are the non-zero singular values of $A$.
22. Examples of different matrices cont. [Slide shows two worked examples of the shape of $\Sigma$ for matrices of different dimensions.]
23. Truncated or Reduced Singular Value Decomposition. In the expression $A = U\Sigma V^T$ for the SVD we can reformulate the dimensions of all matrices involved by ignoring the eigenvectors which correspond to zero eigenvalues. In that case we have $A = U_r\Sigma_r V_r^T = \sum_{i=1}^{r}\sigma_i u_i v_i^T$, where: the dimension of $U_r$ is $m\times r$; the dimension of $\Sigma_r$ is $r\times r$; the dimension of $V_r$ is $n\times r$; the dimension of each term $\sigma_i u_i v_i^T$ is $m\times n$. The above formulation is called Truncated or Reduced Singular Value Decomposition. As seen, the truncated SVD splits $A$ into a sum of $r$ matrices, each of rank 1. In the case of a square, invertible matrix ($r = m = n$), the two decompositions are identical.
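A minimal sketch of the rank-1 splitting, reusing a hypothetical matrix:

```python
import numpy as np

# Truncated SVD: A equals the sum of r rank-1 pieces sigma_i * u_i v_i^T.
A = np.array([[3.0, 2.0,  2.0],
              [2.0, 3.0, -2.0]])

U, s, Vt = np.linalg.svd(A)
r = np.sum(s > 1e-12)                       # numerical rank
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))

print("rank:", r)
print("sum of rank-1 terms reconstructs A:", np.allclose(A, A_sum))
```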
24. Singular Value Decomposition. Example 1. Example: Consider a $2\times 2$ invertible matrix $A$ and form $A^TA$. The eigenvalues of $A^TA$ are $\sigma_1^2$ and $\sigma_2^2$, and its orthonormal eigenvectors are $v_1$ and $v_2$. Similarly, the eigenvector decomposition of $AA^T$ gives the orthonormal eigenvectors $u_1$ and $u_2$. CAREFUL: the $u_i$'s must be chosen to satisfy the relationship $Av_i = \sigma_i u_i$, $i = 1, 2$; computing the eigenvectors of $AA^T$ independently can produce inconsistent signs. Therefore, the SVD of $A$ is $A = U\Sigma V^T$.
25. Singular Value Decomposition. Example 2. Example: Consider a rank-1 (singular) $2\times 2$ matrix $A$ and form $A^TA$. The eigenvalues of $A^TA$ are $\sigma_1^2$ and $0$, with eigenvectors $v_1$ and $v_2$, where $v_2$ is chosen to be perpendicular to $v_1$. Similarly, $u_1$ is chosen to satisfy the relationship $Av_1 = \sigma_1 u_1$, and $u_2$ is chosen to be perpendicular to $u_1$. Note that the presence of $v_2$ and $u_2$ does not affect the calculations, since their elements are multiplied by zeros. Therefore, the SVD of $A$ is $A = U\Sigma V^T$.
26. Singular Value Decomposition. Example 2 cont. The SVD of $A$ is $A = U\Sigma V^T$ with $\sigma_2 = 0$. The truncated SVD keeps only the non-zero singular value: $A = \sigma_1 u_1 v_1^T$, a single rank-1 term.
27. Singular Value Decomposition. Example 3. Example: Consider a rectangular matrix $A$ with $A^TA$ of dimension $3\times 3$; we see that the rank of $A$ is 2. The eigenvalues of $A^TA$ are $\sigma_1^2$, $\sigma_2^2$ and (obviously) $0$. The eigenvectors of $A^TA$ are $v_1$, $v_2$ and $v_3$. Similarly, $u_1$ is chosen to satisfy the relationship $Av_1 = \sigma_1 u_1$, and $u_2$ is chosen to satisfy the relationship $Av_2 = \sigma_2 u_2$. Note that the presence of $v_3$ does not affect the calculations, since its elements are multiplied by zeros.
28. Singular Value Decomposition. Example 3 cont. Therefore, the SVD of $A$ is $A = U\Sigma V^T$, and the truncated SVD for this example is $A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T$.
29. Pseudoinverse. Suppose that $A$ is a matrix of dimension $m\times n$ and rank $r$. The SVD of matrix $A$ is given by $A = U\Sigma V^T$. I define a matrix $A^+$ of dimension $n\times m$ as follows: $A^+ = V\Sigma^+U^T$, where $\Sigma^+$ is the $n\times m$ matrix with $\Sigma^+_{ii} = 1/\sigma_i$ for $i = 1,\dots,r$ and zeros everywhere else. The matrix $A^+$ is called the pseudoinverse of matrix $A$, or the Moore-Penrose inverse. The matrix $A^+A = V(\Sigma^+\Sigma)V^T$ is of dimension $n\times n$ (square) and has rank $r$: $\Sigma^+\Sigma$ is diagonal with $r$ ones followed by zeros. The matrix $AA^+ = U(\Sigma\Sigma^+)U^T$ is of dimension $m\times m$ and has rank $r$; it is defined analogously.
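A sketch that builds $A^+ = V\Sigma^+U^T$ from the SVD and compares it with numpy's built-in pseudoinverse (the matrix is a hypothetical rank-1 example):

```python
import numpy as np

# Hypothetical rank-1 matrix, so the ordinary inverse does not exist.
A = np.array([[4.0, 3.0],
              [8.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
r = np.sum(s > 1e-12)                       # numerical rank

Sigma_plus = np.zeros(A.shape[::-1])        # n x m
Sigma_plus[:r, :r] = np.diag(1.0 / s[:r])   # invert only the non-zero sigmas
A_plus = Vt.T @ Sigma_plus @ U.T

print("matches np.linalg.pinv:", np.allclose(A_plus, np.linalg.pinv(A)))
print("rank of A^+ A and A A^+:",
      np.linalg.matrix_rank(A_plus @ A), np.linalg.matrix_rank(A @ A_plus))
```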