Linear Algebra Question List
1. Matrices. Basic operations on matrices.
A matrix is a rectangular table of numbers with m rows and n columns.
Two matrices A and B of the same size m×n are called equal if they coincide element by element, i.e. a_ij = b_ij for all i = 1, …, m and j = 1, …, n.
Basic operations on matrices.
· Multiplication of a matrix by a number. The product of the matrix A and the number λ is the matrix B = λA whose elements are calculated as b_ij = λ·a_ij.
· The sum of two matrices A and B of the same size m×n is the matrix C = A + B whose elements are c_ij = a_ij + b_ij.
· The difference of two matrices of the same size is determined through the previous operations, i.e. A-B=A+(-B)
· The product of matrix A and matrix B is defined when the number of columns of the first matrix equals the number of rows of the second. The product is then the matrix C = AB, each element of which equals the sum of the products of the elements of the i-th row of A and the j-th column of B: c_ij = Σ_k a_ik·b_kj.
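The three operations above can be sketched in Python on plain nested lists (a minimal illustration, not a library implementation; the function names are our own):

```python
def mat_add(A, B):
    """Element-wise sum of two matrices of the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(lam, A):
    """Multiply every element of A by the number lam."""
    return [[lam * a for a in row] for row in A]

def mat_mul(A, B):
    """Product C = AB: c_ij is the sum of products of row i of A and column j of B."""
    n = len(B)  # the number of columns of A must equal the number of rows of B
    assert all(len(row) == n for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))    # [[6, 8], [10, 12]]
print(mat_scale(2, A))  # [[2, 4], [6, 8]]
print(mat_mul(A, B))    # [[19, 22], [43, 50]]
```

Note that mat_mul(A, B) and mat_mul(B, A) generally differ: matrix multiplication is not commutative.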
2. Square matrix. Diagonal and identity matrices. Examples
Classification of matrices.
1) A matrix consisting of a single row (column) is called a row vector (column vector);
2) A square matrix of order n is a matrix in which the number of rows equals the number of columns.
3) Matrix elements a_ij with i = j are called diagonal elements; they form the main diagonal.
4) If all off-diagonal elements of a matrix are zero, it is called diagonal.
5) The identity matrix is a diagonal matrix whose diagonal elements are all equal to 1.
6) A square matrix is called symmetric if the elements located symmetrically with respect to the main diagonal are equal: a_ij = a_ji.
7) A triangular matrix is a square matrix in which all elements on one side of the main diagonal are equal to 0.
3. Transposed matrix. Example.
Matrix transposition is the transition from a matrix A to the matrix A^T in which the rows and columns are interchanged while preserving their order.
The matrix A^T, with elements (A^T)_ij = a_ji, is called the transpose of A.
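As a quick illustrative sketch, transposition on nested lists amounts to swapping row and column indices:

```python
def transpose(A):
    """Return A^T: the element (i, j) of the result is a_ji."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]
```

A 2×3 matrix becomes 3×2, as expected.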
4. Elementary row transformations. Equivalent matrices.
· swapping any two rows of the matrix;
· multiplying any row of the matrix by a constant k, k ≠ 0;
· adding to any row of the matrix another row multiplied by a number.
Elementary column transformations are defined similarly.
Elementary transformations are reversible .
If a matrix B is obtained from a matrix A by elementary row transformations, then such matrices are called equivalent, denoted A ~ B.
5. Expression of the determinant directly in terms of its elements (for square matrices of size 2*2 and 3*3)
To each square matrix A one can assign a number, calculated according to certain rules, called the determinant of the square matrix (denoted det A, |A|, or Δ).
The determinant of a second-order matrix (second-order determinant) is the number calculated by the formula
|A| = a_11·a_22 − a_12·a_21.
The determinant of a third-order matrix (third-order determinant) is the number calculated by the formula
|A| = a_11·a_22·a_33 + a_12·a_23·a_31 + a_13·a_21·a_32 − a_13·a_22·a_31 − a_12·a_21·a_33 − a_11·a_23·a_32.
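Both formulas translate directly into code (a minimal sketch; the example matrices are our own):

```python
def det2(A):
    """|A| = a11*a22 - a12*a21 for a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def det3(A):
    """Third-order determinant by the six-term formula above."""
    return (A[0][0]*A[1][1]*A[2][2] + A[0][1]*A[1][2]*A[2][0]
            + A[0][2]*A[1][0]*A[2][1] - A[0][2]*A[1][1]*A[2][0]
            - A[0][1]*A[1][0]*A[2][2] - A[0][0]*A[1][2]*A[2][1])

print(det2([[1, 2], [3, 4]]))                    # -2
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```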
6. Minor of a square matrix corresponding to an element of the matrix.
Let A be a square matrix of order n. The minor M_ij of the element a_ij is the determinant of order (n − 1) obtained from A by deleting the i-th row and the j-th column.
7. Algebraic Complement of a Square Matrix Corresponding to a Matrix Element
ALGEBRAIC COMPLEMENT (cofactor) is a concept of matrix algebra; for the element a_ij of a square matrix A it is formed by multiplying the minor of a_ij by (−1)^(i+j) and is denoted A_ij:
A_ij = (−1)^(i+j)·M_ij, where M_ij is the minor of the element a_ij of the matrix A = [a_ij], i.e. the determinant of the matrix obtained from A by deleting the row and the column at whose intersection the element a_ij stands. The notion of the algebraic complement is used, in particular, in the operation of matrix inversion.
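A small sketch of computing a minor and a cofactor (0-based indices; the helper name and the example matrix are our own):

```python
def submatrix(A, i, j):
    """Matrix obtained from A by deleting row i and column j (0-based)."""
    return [[A[r][c] for c in range(len(A[0])) if c != j]
            for r in range(len(A)) if r != i]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
M = submatrix(A, 0, 1)                          # delete the 1st row and 2nd column
print(M)                                        # [[4, 6], [7, 10]]
minor_01 = M[0][0]*M[1][1] - M[0][1]*M[1][0]    # M_01 = 40 - 42 = -2
cofactor_01 = (-1) ** (0 + 1) * minor_01        # A_01 = (-1)^(0+1) * M_01 = 2
print(minor_01, cofactor_01)                    # -2 2
```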
8. Calculation of the determinant by expanding in any row (column) of the matrix
Choose the element a_ij of this matrix and delete the i-th row and the j-th column. The result is a matrix of order (n − 1), whose determinant is called the minor of the element and is denoted M_ij.
The algebraic complement A_ij of the element a_ij is defined by the formula A_ij = (−1)^(i+j)·M_ij.
Theorem on the expansion of the determinant along the elements of a row. The determinant of the matrix A equals the sum of the products of the elements of any row and their algebraic complements: |A| = a_i1·A_i1 + a_i2·A_i2 + … + a_in·A_in.
Theorem on the expansion of the determinant along the elements of a column. The determinant of the matrix A equals the sum of the products of the elements of any column and their algebraic complements: |A| = a_1j·A_1j + a_2j·A_2j + … + a_nj·A_nj.
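The row expansion gives a recursive way to compute determinants of any order; here is a minimal sketch expanding along the first row (exponential in n, for illustration only, not an efficient method):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        sub = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

The result agrees with the third-order formula from question 5.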
9. Properties of determinants
· When adding a linear combination of other rows (columns) to any row (column), the determinant will not change.
· If two rows (columns) of a matrix are the same, then its determinant is equal to zero.
· If two (or several) rows (columns) of a matrix are linearly dependent, then its determinant is equal to zero.
· If you rearrange two rows (columns) of a matrix, then its determinant is multiplied by (-1).
· The common factor of the elements of any row of the determinant can be taken out of the sign of the determinant.
· If at least one row (column) of the matrix is zero, then the determinant is equal to zero.
· The sum of the products of all elements of any row (column) and their algebraic complements is equal to the determinant.
· The sum of the products of all elements of any row (column) and the algebraic complements of the corresponding elements of a parallel row (column) is equal to zero.
· The determinant of the product of square matrices of the same order is equal to the product of their determinants (see also the Binet-Cauchy formula).
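Two of these properties can be checked numerically for 2×2 matrices (a sketch with our own example matrices):

```python
def det2(M):
    """Second-order determinant."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def mul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
# det(AB) = det(A) * det(B)
print(det2(mul2(A, B)), det2(A) * det2(B))   # 10 10
# swapping two rows multiplies the determinant by -1
print(det2([[3, 4], [1, 2]]), -det2(A))      # 2 2
```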
10. Nonsingular matrices. Example.
Definition. A nonsingular matrix is a square matrix of the n-th order whose determinant is different from zero; otherwise the matrix is called singular (degenerate). For example, the matrix with rows (1, 2) and (3, 4) is nonsingular, since its determinant is 1·4 − 2·3 = −2 ≠ 0.
11. Inverse matrix.
A matrix A^(-1) is called the inverse of the square matrix A if multiplying it by A, both on the right and on the left, gives the identity matrix:
A^(-1)·A = A·A^(-1) = E
If the determinant of a matrix is different from 0, then such a matrix is called nonsingular; otherwise (if det A = 0) it is called degenerate, or singular.
The adjoint (adjugate) matrix A* of a square matrix A is the matrix whose elements are the algebraic complements of the corresponding elements of the transposed matrix A^T.
Theorem on the existence of the inverse matrix. The inverse matrix A^(-1) exists and is unique if and only if the original matrix is nonsingular.
12. Formula for calculating the inverse matrix .
Algorithm for finding the inverse matrix
1. Find the determinant of the original matrix. If the determinant is 0, then the matrix A is singular and the inverse matrix does not exist.
2. If the determinant is not equal to zero, find the transposed matrix A^T.
3. Compose the adjoint matrix A*: find the algebraic complements of the elements of A^T and form A* from them.
4. Calculate the inverse matrix: A^(-1) = (1/det A)·A*, where det A is the determinant of the original matrix.
5. Check the correctness of the calculations using the definition: A^(-1)·A = A·A^(-1) = E.
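The algorithm above can be sketched with exact arithmetic via fractions (an illustrative implementation, assuming small matrices; the cofactor-based approach is impractical for large n):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1)**j * A[0][j] * det([r[:j] + r[j+1:] for r in A[1:]])
               for j in range(n))

def inverse(A):
    """A^(-1) = (1/det A) * A*, where A* is the adjoint (adjugate) matrix."""
    n = len(A)
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular, no inverse exists")
    # adjoint: element (i, j) is the cofactor of the element at position (j, i)
    adj = [[(-1)**(i + j) * det([r[:i] + r[i+1:] for k, r in enumerate(A) if k != j])
            for j in range(n)] for i in range(n)]
    return [[Fraction(adj[i][j], d) for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]
print(inverse(A))  # equals [[3, -1], [-5, 2]] (here det A = 1)
```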
13. Basic properties of the inverse matrix.
· (A^(-1))^(-1) = A
· (AB)^(-1) = B^(-1)·A^(-1)
· (A^T)^(-1) = (A^(-1))^T
· det(A^(-1)) = 1/det A
14. Matrix rank
In a matrix of size m×n, by deleting rows and columns one can select square submatrices of order k, where k ≤ min(m, n).
The determinants of such submatrices are called minors of order k of the matrix A.
The rank of a matrix A is the highest order of the nonzero minors of this matrix. Denoted rank A or r(A).
From the definition follows:
The rank of a matrix of size m×n does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m, n).
r(A) = 0 only when all elements of the matrix are 0, i.e. A = 0.
For a square matrix of the n-th order, r(A) = n if and only if the matrix A is nonsingular.
Determining the rank of a matrix by enumeration of all minors is quite difficult. To make it easier, elementary transformations are used that preserve the rank of the matrix:
· Dropping a null row (column).
· Multiplication of all elements of a row (column) of a matrix by a number that is not equal to zero.
· Changing the order of rows (columns).
· Adding to the elements of one row (column) the corresponding elements of another row (column), multiplied by any number.
· Matrix rank will not change under elementary transformations of the matrix.
· The rank of a row echelon matrix is equal to the number r of its nonzero rows, since it contains a nonzero minor of order r not equal to 0.
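This suggests the practical method: reduce the matrix to row echelon form by elementary transformations and count the nonzero rows. A sketch with exact arithmetic (our own implementation, assuming a nonempty matrix):

```python
from fractions import Fraction

def rank(A):
    """Rank via reduction to row echelon form with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    rk = 0
    for col in range(cols):
        # find a pivot row at or below the current position
        pivot = next((r for r in range(rk, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rk], M[pivot] = M[pivot], M[rk]
        # eliminate the column entries below the pivot
        for r in range(rk + 1, rows):
            factor = M[r][col] / M[rk][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[rk])]
        rk += 1
    return rk

print(rank([[1, 2, 3], [2, 4, 6], [1, 1, 1]]))  # 2 (second row = 2 * first row)
```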
15. Kronecker–Capelli theorem
A system of linear equations is consistent if and only if the rank of the matrix of the system is equal to the rank of the extended matrix of this system.
16. Solution of systems of linear equations by the Gauss method
The Gauss method is the method of successive elimination of variables: using elementary transformations, the system of equations is reduced to an equivalent system of stepped (or triangular) form.
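A minimal sketch of the method for a square nonsingular system, using exact fractions (our own implementation; it omits the consistency checks needed for the general rectangular case):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b by forward elimination and back substitution.
    Assumes A is square and nonsingular."""
    n = len(A)
    # augmented matrix [A | b]
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for i in range(n):
        # choose a nonzero pivot and move it to row i
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    # back substitution on the triangular system
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(gauss_solve([[2, 1], [1, 3]], [5, 10]))  # x = 1, y = 3
```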
17. Cramer’s formula
Cramer's method (for a square n×n system). Let Δ be the determinant of the matrix A, and let Δ_j be the determinant obtained from A by replacing its j-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution given by Cramer's formulas: x_j = Δ_j / Δ (j = 1, …, n).
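The formula translates directly (a sketch with our own recursive determinant; practical only for small n):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1)**j * A[0][j] * det([r[:j] + r[j+1:] for r in A[1:]])
               for j in range(n))

def cramer(A, b):
    """x_j = Delta_j / Delta, where Delta_j is A with column j replaced by b."""
    d = det(A)
    if d == 0:
        raise ValueError("Delta = 0: Cramer's rule does not apply")
    n = len(A)
    xs = []
    for j in range(n):
        Aj = [row[:j] + [bi] + row[j+1:] for row, bi in zip(A, b)]
        xs.append(Fraction(det(Aj), d))
    return xs

print(cramer([[2, 1], [1, 3]], [5, 10]))  # solution x_1 = 1, x_2 = 3
```

For this system Δ = 5, Δ_1 = 5, Δ_2 = 15, matching the Gauss-method answer.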
18. Fundamental system of solutions (FSS) of a homogeneous system of linear equations.
A fundamental system of solutions is a set of n − r linearly independent solutions of the system (equivalently, a basis of the kernel of the corresponding operator), where r is the rank of the system matrix.
FSS theorem. Let r = r(A) be the rank of the matrix of the system. Then:
1) If r < n, an FSS exists: y_1, y_2, …, y_k, consisting of k = n − r vectors.
2) The general solution of the system has the form X = c_1·y_1 + c_2·y_2 + … + c_(n−r)·y_(n−r).
3) If n = r, the system has only the zero solution and an FSS does not exist.
19. Vectors on the plane and in space
20. Vector space and its simplest properties.
21. Dimension and basis of a vector space.
A vector space R^n is called n-dimensional if n linearly independent vectors can be found in it, but it contains no more than n linearly independent vectors.
The dimension of a space is the maximum number of linearly independent vectors contained in it; it is denoted dim R^n.
A space of finite dimension is called finite-dimensional. A space in which one can find arbitrarily many linearly independent vectors is called infinite-dimensional.
A set of n linearly independent vectors of an n-dimensional vector space R^n is called its basis.
Theorem 1. Every vector x of a linear n-dimensional space R^n can be represented, and moreover in a unique way, as a linear combination of the basis vectors.
Theorem 2. If e_1, …, e_n are linearly independent vectors of the space R^n and every vector of the space can be linearly expressed through them, then these vectors form a basis in R^n.
Transition to a new basis
Let there be two bases in the space R^n: the old basis e_1, e_2, …, e_n and the new basis e*_1, e*_2, …, e*_n.
Each vector of the new basis can be linearly expressed through the vectors of the old basis:
e*_j = a_1j·e_1 + a_2j·e_2 + … + a_nj·e_n, j = 1, …, n.
The new basis vectors are thus obtained from the old ones using the matrix A = (a_ij), and the coefficients of their expansions in the old basis form the columns of this matrix. The matrix A is called the transition matrix from the basis (e) to the basis (e*).
22. Dot product. Euclidean space.
Def.: The scalar (dot) product of two vectors x and y is the number (x, y) = Σ x_i·y_i.
Properties: 1) (x, y) = (y, x); 2) (x, y + z) = (x, y) + (x, z); 3) (λx, y) = λ(x, y) for any real number λ;
4) (x, x) > 0 if x ≠ 0.
Def.: A linear (vector) space in which a scalar product of vectors satisfying properties 1-4 is defined is called a Euclidean space.
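A quick sketch of the dot product and the length it induces (the example vectors are our own):

```python
import math

def dot(x, y):
    """(x, y) = sum of x_i * y_i."""
    return sum(a * b for a, b in zip(x, y))

x, y = [1, 2, 2], [2, 0, 1]
print(dot(x, y))              # 4
print(math.sqrt(dot(x, x)))   # the norm |x| = sqrt((x, x)) = 3.0
print(dot(x, y) == dot(y, x)) # True: symmetry, property 1
```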
23. The concept of linear space
Linear operations on vectors have the following properties:
1) x + y = y + x (commutative law of addition)
2) (x + y) + z = x + (y + z) (associative law of addition)
3) a(bx) = (ab)x, where a, b are real numbers
4) (a + b)x = ax + bx
5) a(x + y) = ax + ay
6) there exists a zero vector 0 = (0, 0, …, 0) such that x + 0 = x
7) for any x there exists an opposite vector −x such that x + (−x) = 0
Def.: A set of vectors with real components, on which operations of vector addition and multiplication of a vector by a number satisfying properties 1-7 are defined, is called a vector (linear) space.
24. Linear operator. Actions on Linear Operators
Linear operator: if a law is given by which each vector x of the space R^n is associated with a unique vector y of the space R^m, then one says that an operator Ã(x) is given, acting from the space R^n to R^m; the operator is linear if Ã(x + y) = Ã(x) + Ã(y) and Ã(λx) = λÃ(x).
Actions on linear operators :
1) (Ã + B̃)(x) = Ã(x) + B̃(x)
2) (λÃ)(x) = λ·Ã(x)
3) (Ã·B̃)(x) = Ã(B̃(x))
25. Eigenvectors and Eigenvalues of a Linear Operator
Def: A vector X ≠ 0 is called an eigenvector of the matrix A if there exists a number λ such that the equality AX = λX (3) holds.
Def: The number λ in equality (3) is called the eigenvalue of the matrix A corresponding to the vector X. The equation |A − λE| = 0 is the characteristic equation of the matrix A.
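For a 2×2 matrix the characteristic equation is a quadratic, λ² − tr(A)·λ + det(A) = 0, which can be solved directly (a sketch assuming real eigenvalues; the example matrix is our own):

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix from |A - lambda*E| = 0,
    i.e. lambda^2 - tr(A)*lambda + det(A) = 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = tr*tr - 4*det
    s = math.sqrt(disc)  # assumes disc >= 0 (real eigenvalues)
    return sorted([(tr - s) / 2, (tr + s) / 2])

A = [[2, 1], [1, 2]]
print(eig2(A))  # [1.0, 3.0]
# check AX = lambda*X for the eigenvector X = (1, 1) with lambda = 3
X = [1, 1]
print([A[0][0]*X[0] + A[0][1]*X[1], A[1][0]*X[0] + A[1][1]*X[1]])  # [3, 3]
```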
26. Matrix notation for linear operators.
The relationship between a vector x and its image y = Ã(x) can be expressed in matrix form as Y = A·X, where A is the matrix of the linear operator, X = (x_1, x_2, …, x_n) is the coordinate column of the vector x, and Y = (y_1, y_2, …, y_m) is the coordinate column of the vector y.
27. Dependence between matrices of the same operator in different bases (theorem)
Suppose A and A* are the matrices of the linear operator Ã in the bases e_1, e_2, …, e_n and e*_1, e*_2, …, e*_n respectively. Then they are related by A* = C^(-1)·A·C, where C is the transition matrix from the old basis to the new one.
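The relation A* = C^(-1)·A·C can be checked on a small example (the matrices are our own; C^(-1) is computed by hand for this particular C):

```python
def mat_mul(A, B):
    """Product of two matrices as nested lists."""
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A     = [[2, 0], [0, 3]]   # operator matrix in the old basis
C     = [[1, 1], [0, 1]]   # transition matrix (columns: new basis in the old one)
C_inv = [[1, -1], [0, 1]]  # inverse of C, computed by hand

A_star = mat_mul(mat_mul(C_inv, A), C)
print(A_star)  # [[2, -1], [0, 3]]
# similar matrices share the characteristic polynomial: same trace and determinant
print(A_star[0][0] + A_star[1][1], A[0][0] + A[1][1])  # 5 5
```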