Linear dependence of the rows and columns of a matrix: properties. Linear dependence and independence of matrix rows

Let an m×n matrix be given and denote its columns by A1, A2, …, An. A linear combination of the columns of the matrix is a column matrix α1A1 + α2A2 + … + αnAn, where α1, α2, …, αn are some real or complex numbers called the coefficients of the linear combination. If all the coefficients in a linear combination are taken equal to zero, the linear combination equals the zero column matrix.

The columns of a matrix are called linearly independent if their linear combination equals zero only when all the coefficients of the linear combination are zero. The columns of a matrix are called linearly dependent if there is a set of coefficients, at least one of which is nonzero, such that the linear combination of the columns with these coefficients equals the zero column.

Linear dependence and linear independence of matrix rows are defined similarly. In what follows, all theorems are stated for the columns of a matrix.

Theorem 5

If a matrix contains a zero column, then the columns of the matrix are linearly dependent.

Proof. Consider the linear combination in which the coefficient of the zero column equals one and the coefficients of all nonzero columns equal zero. This combination equals the zero column, while among its coefficients there is a nonzero one. Therefore, the columns of the matrix are linearly dependent.

Theorem 6

If some of the columns of a matrix are linearly dependent, then all the columns of the matrix are linearly dependent.

Proof. For definiteness, assume that the first k columns A1, A2, …, Ak of the matrix are linearly dependent. Then, by the definition of linear dependence, there is a set of coefficients α1, α2, …, αk, at least one of which is nonzero, such that

α1A1 + α2A2 + … + αkAk = 0.

Let us form a linear combination of all the columns of the matrix, taking the remaining columns with zero coefficients:

α1A1 + … + αkAk + 0·Ak+1 + … + 0·An.

This combination equals the zero column, and not all of its coefficients are zero. Therefore, all the columns of the matrix are linearly dependent.

Corollary. Any subset of a linearly independent set of matrix columns is linearly independent. (This statement is easily proved by contradiction.)

Theorem 7

In order for the columns of a matrix to be linearly dependent, it is necessary and sufficient that at least one column of the matrix be a linear combination of the others.

Proof.

Necessity. Let the columns of the matrix be linearly dependent, that is, there is a set of coefficients α1, α2, …, αn, at least one of which is nonzero, such that

α1A1 + α2A2 + … + αnAn = 0.

Assume for definiteness that α1 ≠ 0. Then A1 = -(α2/α1)A2 - … - (αn/α1)An, that is, the first column is a linear combination of the rest.

Sufficiency. Let at least one column of the matrix be a linear combination of the others, for example A1 = β2A2 + … + βnAn, where β2, …, βn are some numbers.

Then (-1)A1 + β2A2 + … + βnAn = 0, that is, a linear combination of the columns equals zero, and among its coefficients at least one (the coefficient -1 of A1) is nonzero.
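As an illustration only (not part of the original notes), the criterion of Theorem 7 can be checked numerically in Python with NumPy for a small made-up matrix: the columns are linearly dependent exactly when dropping some column does not lower the rank, and that column can then be recovered as a combination of the remaining ones.

    import numpy as np

    # A made-up 3x3 matrix whose third column is the sum of the first two,
    # so its columns are linearly dependent.
    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 1.0],
                  [2.0, 0.0, 2.0]])

    r_full = np.linalg.matrix_rank(A)
    print("rank(A) =", r_full, "< number of columns:", r_full < A.shape[1])

    # A column that is a linear combination of the others is one whose removal
    # does not change the rank; its coefficients are found by least squares.
    for j in range(A.shape[1]):
        rest = np.delete(A, j, axis=1)
        if np.linalg.matrix_rank(rest) == r_full:
            coeffs, *_ = np.linalg.lstsq(rest, A[:, j], rcond=None)
            print("column", j, "equals the remaining columns times", coeffs)
            break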

Let the rank of the matrix be r. Any nonzero minor of order r is called a basis minor. The rows and columns at whose intersection a basis minor stands are called basis rows and basis columns.

Consider an arbitrary, not necessarily square, matrix A of size m×n.

Matrix rank.

The concept of matrix rank is closely connected with the concept of linear dependence (independence) of the rows (columns) of a matrix. We consider this concept for rows; for columns it is analogous.

Let us denote the rows of matrix A:

e1 = (a11, a12, …, a1n); e2 = (a21, a22, …, a2n); …; em = (am1, am2, …, amn).

Two rows are equal, ek = es, if akj = asj for j = 1, 2, …, n.

Arithmetic operations on the rows of a matrix (addition, multiplication by a number) are introduced elementwise: λek = (λak1, λak2, …, λakn);

ek + es = (ak1 + as1, ak2 + as2, …, akn + asn).

A row e is called a linear combination of the rows e1, e2, …, ek if it equals the sum of the products of these rows by arbitrary real numbers:

e = λ1e1 + λ2e2 + … + λkek.

The rows e1, e2, …, em are called linearly dependent if there exist real numbers λ1, λ2, …, λm, not all equal to zero, such that the linear combination of these rows equals the zero row:

λ1e1 + λ2e2 + … + λmem = 0, where 0 = (0, 0, …, 0). (1)

If a linear combination equals zero if and only if all the coefficients λi are zero (λ1 = λ2 = … = λm = 0), then the rows e1, e2, …, em are called linearly independent.

Theorem 1. In order for the rows e1, e2, …, em to be linearly dependent, it is necessary and sufficient that one of these rows be a linear combination of the remaining rows.

Proof. Necessity. Let the rows e1, e2, …, em be linearly dependent and, for definiteness, let λm ≠ 0 in (1). Then

em = -(λ1/λm)e1 - (λ2/λm)e2 - … - (λm-1/λm)em-1.

Thus the row em is a linear combination of the remaining rows. Q.E.D.

Sufficiency. Let one of the rows, for example em, be a linear combination of the remaining rows. Then there exist numbers λ1, λ2, …, λm-1 such that em = λ1e1 + λ2e2 + … + λm-1em-1, which can be rewritten as

λ1e1 + λ2e2 + … + λm-1em-1 + (-1)em = 0,

where at least one of the coefficients, namely -1, is nonzero. That is, the rows are linearly dependent. Q.E.D.

Definition. A minor of order k of an m×n matrix A is a determinant of order k whose elements lie at the intersection of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

Example. For a matrix of order 3, the first-order minors are its individual elements, and the second-order minors are the determinants formed by any two rows and any two columns. A matrix of order 3 has 9 first-order minors, 9 second-order minors and 1 third-order minor (the determinant of the matrix itself).

Definition. The rank of a matrix A is the highest order of the nonzero minors of this matrix. Notation: rg A or r(A).
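To make the definition concrete, here is an illustrative Python/NumPy sketch (the matrix and the helper name rank_via_minors are made up for this example) that computes the rank directly from the definition, as the highest order k for which some k-th order minor is nonzero, and compares it with NumPy's built-in rank.

    import numpy as np
    from itertools import combinations

    def rank_via_minors(A, tol=1e-10):
        # Highest order k such that some k x k minor (the determinant of a
        # submatrix on k rows and k columns) is nonzero; 0 for the zero matrix.
        m, n = A.shape
        for k in range(min(m, n), 0, -1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                        return k
        return 0

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [1.0, 0.0, 1.0]])
    print(rank_via_minors(A), np.linalg.matrix_rank(A))   # both print 2

Checking every minor is exponentially expensive, which is why the bordering-minors and elementary-transformation methods described later are used in practice.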

Matrix rank properties.

1) The rank of an m×n matrix A does not exceed the smaller of its dimensions, i.e.

r(A) ≤ min(m, n).

2) r(A) = 0 if and only if all elements of the matrix are zero, i.e. A = 0.

3) For a square matrix A of order n, r(A) = n if and only if A is nonsingular.



(The rank of a diagonal matrix is equal to the number of its nonzero diagonal elements.)

4) If the rank of a matrix equals r, then the matrix has at least one nonzero minor of order r, and all minors of higher orders are equal to zero.

The following relations hold for matrix ranks (a numerical spot-check is sketched after the list):

1) r(A + B) ≤ r(A) + r(B);

2) r(AB) ≤ min(r(A), r(B));

3) r(A + B) ≥ |r(A) - r(B)|;

4) r(AᵀA) = r(A);

5) r(AB) = r(A) if B is a square nonsingular matrix;

6) r(AB) ≥ r(A) + r(B) - n, where n is the number of columns of matrix A (equal to the number of rows of matrix B).
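These relations are easy to spot-check numerically. The Python/NumPy sketch below does so for randomly generated matrices of illustrative sizes; it is only a sanity check of the inequalities on one example, not a proof.

    import numpy as np

    rng = np.random.default_rng(0)
    rk = np.linalg.matrix_rank

    A = rng.integers(-3, 4, size=(4, 5)).astype(float)
    B = rng.integers(-3, 4, size=(4, 5)).astype(float)
    C = rng.integers(-3, 4, size=(5, 3)).astype(float)

    assert rk(A + B) <= rk(A) + rk(B)          # 1)
    assert rk(A @ C) <= min(rk(A), rk(C))      # 2)
    assert rk(A + B) >= abs(rk(A) - rk(B))     # 3)
    assert rk(A.T @ A) == rk(A)                # 4)

    # 5) multiplication by a nonsingular square matrix preserves the rank
    Q = rng.integers(-3, 4, size=(5, 5)).astype(float)
    while abs(np.linalg.det(Q)) < 1e-8:
        Q = rng.integers(-3, 4, size=(5, 5)).astype(float)
    assert rk(A @ Q) == rk(A)

    # 6) Sylvester's inequality; n is the number of columns of A (= rows of C)
    assert rk(A @ C) >= rk(A) + rk(C) - A.shape[1]

    print("all rank relations hold for this example")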

Definition. A nonzero minor of order r(A) is called a basis minor. (A matrix A may have several basis minors.) The rows and columns at whose intersection a basis minor stands are called basis rows and basis columns, respectively.

Theorem 2 (basis minor theorem). The basis rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basis rows (columns).

Proof (for rows). If the basis rows were linearly dependent, then by Theorem 1 one of them would be a linear combination of the other basis rows; subtracting that linear combination from this row would not change the value of the basis minor but would produce a zero row, which contradicts the fact that the basis minor is nonzero. Thus the basis rows are linearly independent.

Let us prove that any row of the matrix A is a linear combination of the basis rows. Since permuting rows (columns) changes a determinant at most by sign, and therefore does not affect whether it is zero, we may assume without loss of generality that the basis minor Mr is located in the upper left corner of the matrix A, i.e. in its first r rows and first r columns. Let 1 ≤ j ≤ n and 1 ≤ i ≤ m, and consider the determinant of order r + 1 obtained by bordering Mr with the elements of row i and column j; let us show that it equals zero.

If j ≤ r or i ≤ r, then this determinant equals zero, because it has two identical columns or two identical rows.

If j > r and i > r, then this determinant is a minor of order r + 1 of the matrix A. Since the rank of the matrix equals r, every minor of higher order equals 0.

Expanding it along the elements of the last (added) column, we get

a1jA1j + a2jA2j + … + arjArj + aijAij = 0,

where the last cofactor Aij coincides with the basis minor Mr, and therefore Aij = Mr ≠ 0.

Dividing the last equality by Aij, we can express the element aij as a linear combination: aij = c1a1j + c2a2j + … + crarj, where ck = -Akj/Aij; these coefficients do not depend on j, because the cofactors Akj are formed only from the first r columns.

Fixing the value of i (i > r), we find that for every j (j = 1, 2, …, n) the elements of the row ei are expressed linearly through the elements of the rows e1, e2, …, er, i.e. the i-th row is a linear combination of the basis rows: ei = c1e1 + c2e2 + … + crer. Q.E.D.

Theorem 3 (a necessary and sufficient condition for a determinant to equal zero). In order for an n-th order determinant D to equal zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Proof (p. 40). Necessity. If the n-th order determinant D equals zero, then the basis minor of its matrix has order r < n. By the basis minor theorem, at least one row is then a linear combination of the others. Hence, by Theorem 1, the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 one row Ai is a linear combination of the remaining rows. Subtracting this linear combination from the row Ai does not change the value of D but produces a zero row. Therefore, by the properties of determinants, D = 0. Q.E.D.

Theorem 4. Under elementary transformations the rank of a matrix does not change.

Proof. As was shown when studying the properties of determinants, under elementary transformations of square matrices their determinants either remain unchanged, or are multiplied by a nonzero number, or change sign. Hence the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change. Q.E.D.

If r(A)=r(B), then A and B are equivalent: A~B.

Theorem 5. Using elementary transformations, any matrix can be reduced to echelon (step) form. A matrix with k rows is said to be in echelon form if its entries a11, a22, …, arr are nonzero, all entries below them are zero, and the rows after the r-th are zero rows, where r ≤ k.

The condition r≤k can always be achieved by transposing.

Theorem 6. The rank of an echelon matrix is equal to the number of its nonzero rows.

That is, the rank of an echelon matrix equals r, because it has a nonzero minor of order r (the minor standing in its first r rows and first r columns is triangular with nonzero diagonal a11·a22·…·arr), while every minor of higher order contains a zero row and therefore vanishes.
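A quick numerical illustration with a hypothetical echelon matrix: the rank reported by NumPy coincides with the number of nonzero rows.

    import numpy as np

    # A made-up 4x5 echelon matrix: a11, a22, a33 are nonzero, everything
    # below them is zero, and the last row is a zero row.
    A = np.array([[2.0, 1.0, 0.0, 3.0, 1.0],
                  [0.0, 5.0, 2.0, 1.0, 0.0],
                  [0.0, 0.0, 7.0, 4.0, 2.0],
                  [0.0, 0.0, 0.0, 0.0, 0.0]])

    nonzero_rows = int(np.count_nonzero(np.any(A != 0, axis=1)))
    print(nonzero_rows, np.linalg.matrix_rank(A))   # both equal 3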

Note that the rows and columns of an m×n matrix can be regarded as arithmetic vectors of dimensions n and m, respectively. Thus an m×n matrix can be interpreted as a set of m n-dimensional row vectors or of n m-dimensional column vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

4.8.1. Definition. A row e is called a linear combination of the rows e1, e2, …, ek with coefficients λ1, λ2, …, λk if

e = λ1e1 + λ2e2 + … + λkek,

that is, if this equality holds for every element of the row.

4.8.2. Definition. The rows e1, e2, …, em are called linearly dependent if there is a nontrivial linear combination of them equal to the zero row, i.e. if there exist numbers λ1, λ2, …, λm, not all equal to zero, such that

λ1e1 + λ2e2 + … + λmem = 0.

4.8.3. Definition. The rows e1, e2, …, em are called linearly independent if only their trivial linear combination equals the zero row, i.e. if

λ1e1 + λ2e2 + … + λmem = 0 implies λ1 = λ2 = … = λm = 0.

4.8.4. Theorem. (Criterion for linear dependence of matrix rows)

In order for the rows to be linearly dependent, it is necessary and sufficient that at least one of them is a linear combination of the others.

Proof:

Necessity. Let the rows e1, e2, …, em be linearly dependent; then there exists a nontrivial linear combination of them equal to the zero row:

λ1e1 + λ2e2 + … + λmem = 0.

Without loss of generality, assume that the first coefficient λ1 of the linear combination is nonzero (otherwise the rows can be renumbered). Dividing this relation by λ1, we get

e1 = -(λ2/λ1)e2 - … - (λm/λ1)em,

that is, the first row is a linear combination of the others.

Sufficiency. Let one of the rows, for example e1, be a linear combination of the others:

e1 = λ2e2 + … + λmem.

Then (-1)e1 + λ2e2 + … + λmem = 0, that is, there exists a nontrivial linear combination of the rows e1, e2, …, em equal to the zero row, which means the rows are linearly dependent. This is what needed to be proven.

Comment.

Similar definitions and statements can be formulated for the columns of the matrix.
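As a computational counterpart to the definitions and criterion above (4.8.2 and 4.8.4), and purely as an illustrative sketch not found in the original notes, one can search for nonzero coefficients λ1, …, λm with λ1e1 + … + λmem = 0: stacking the rows ei into a matrix A, such coefficients are exactly the null vectors of Aᵀ, which can be read off from the singular value decomposition.

    import numpy as np

    # Rows e1, e2, e3 of a made-up matrix; e3 = e1 + 2*e2, so the rows
    # are linearly dependent.
    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 2.0, 4.0]])

    # Coefficients lam with lam @ A = 0 are null vectors of A^T.
    U, s, Vt = np.linalg.svd(A.T)
    lam = Vt[-1]          # right singular vector for the smallest singular value
    if s[-1] < 1e-10:
        print("rows are linearly dependent, lambda =", lam)
        print("lambda @ A =", lam @ A)   # numerically the zero row
    else:
        print("rows are linearly independent")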

§4.9. Matrix rank.

4.9.1. Definition. A minor of order k of an m×n matrix is the determinant of order k whose elements are located at the intersection of some k of its rows and some k of its columns.

4.9.2. Definition. A nonzero minor of order r of an m×n matrix is called a basis minor if all minors of the matrix of order r + 1 are equal to zero.

Comment. A matrix can have several basis minors; obviously, they are all of the same order. It may also happen that a minor of order r = min(m, n) is nonzero while minors of order r + 1 simply do not exist; in this case that minor is also a basis minor.

4.9.3. Definition. The rows (columns) that form a basis minor are called basis rows (columns).

4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted by rg A or r(A).

Comment.

Note that, since the rows and columns of a determinant enter symmetrically, the rank of a matrix does not change when it is transposed.

4.9.5. Theorem. (Invariance of matrix rank under elementary transformations)

The rank of a matrix does not change during its elementary transformations.

No proof.

4.9.6. Theorem. (About the basic minor).

The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

Proof:

We give the proof for rows; the proof for columns is analogous.

Let the rank of an m×n matrix A be r and let M be a basis minor. Without loss of generality, we assume that the basis minor is located in the upper left corner of the matrix (otherwise the matrix can be brought to this form by elementary transformations).

Let us first prove the linear independence of the basis rows. We argue by contradiction. Suppose the basis rows are linearly dependent. Then, by Theorem 4.8.4, one of them can be represented as a linear combination of the remaining basis rows. Subtracting this linear combination from that row produces a zero row, which means the minor M equals zero; this contradicts the definition of a basis minor. The contradiction proves the linear independence of the basis rows.

Let us now prove that every row of the matrix can be represented as a linear combination of the basis rows. If the number of the row in question lies between 1 and r, then it can obviously be represented as the linear combination with coefficient 1 for that row and zero coefficients for the remaining basis rows. We now show that if the row number k lies between r + 1 and m, the row can also be represented as a linear combination of the basis rows. Consider the minor M′ of the matrix obtained from the basis minor M by adjoining row k and an arbitrary column j:

Let us show that this minor M′ equals zero for any row number k from r + 1 to m and any column number j from 1 to n.

Indeed, if the column number j lies between 1 and r, then M′ is a determinant with two identical columns and is obviously zero. If the column number j lies between r + 1 and n (and the row number k between r + 1 and m), then M′ is a minor of the original matrix of order higher than that of the basis minor, and hence it equals zero by the definition of a basis minor. Thus the minor M′ is zero for any row number k from r + 1 to m and any column number j from 1 to n. Expanding it along the last column, we get

a1jA1 + a2jA2 + … + arjAr + akjAr+1 = 0.

Here A1, A2, …, Ar+1 are the corresponding cofactors. Note that Ar+1 ≠ 0, because it coincides with the basis minor M. Therefore the elements of row k can be represented as a linear combination of the corresponding elements of the basis rows with coefficients ci = -Ai/Ar+1 that do not depend on the column number j:

akj = c1a1j + c2a2j + … + crarj, j = 1, 2, …, n.

Thus, we have proven that an arbitrary row of a matrix can be represented as a linear combination of its basis rows. The theorem has been proven.
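The statement of the basis minor theorem can also be observed numerically. The sketch below (with a made-up matrix) greedily picks r linearly independent rows to play the role of basis rows and then expresses every remaining row through them by solving a small least-squares system.

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0, 2.0],
                  [1.0, 3.0, 1.0, 3.0],    # row 2 = row 0 + row 1
                  [2.0, 5.0, 1.0, 4.0]])   # row 3 = 2*row 0 + row 1

    r = np.linalg.matrix_rank(A)            # here r = 2

    # Collect r linearly independent rows; they play the role of basis rows.
    basis = []
    for i in range(A.shape[0]):
        if np.linalg.matrix_rank(A[basis + [i]]) > len(basis):
            basis.append(i)
    print("basis rows:", basis)             # [0, 1]

    # Express every remaining row as a linear combination of the basis rows.
    B = A[basis]                            # r x n matrix of basis rows
    for i in range(A.shape[0]):
        if i not in basis:
            c, *_ = np.linalg.lstsq(B.T, A[i], rcond=None)
            print("row", i, "=", c, "@ basis rows")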

Lecture 13

4.9.7. Theorem. (On the rank of a non-singular square matrix)

In order for a square matrix to be nonsingular, it is necessary and sufficient that the rank of the matrix be equal to the size of this matrix.

Proof:

Necessity. Let the square matrix A of size n be nonsingular; then det A ≠ 0, and therefore the determinant of the matrix is a basis minor, i.e. r(A) = n.

Sufficiency. Let r(A) = n; then the order of the basis minor equals the size of the matrix, so the basis minor is the determinant of the matrix A itself, i.e. det A ≠ 0 by the definition of a basis minor.

Corollary.

In order for a square matrix to be non-singular, it is necessary and sufficient that its rows be linearly independent.

Proof:

Necessity. Since the square matrix is nonsingular, its rank equals the size of the matrix, that is, the determinant of the matrix is a basis minor. Therefore, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

Sufficiency. Since all rows of the matrix are linearly independent, its rank is not less than the size of the matrix, so r(A) = n; therefore, by the previous Theorem 4.9.7, the matrix is nonsingular.
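A short numerical check of Theorem 4.9.7 and its corollary, with two illustrative matrices: for a square matrix, a nonzero determinant, full rank and linear independence of the rows occur (or fail) together.

    import numpy as np

    def summary(A):
        n = A.shape[0]
        r = np.linalg.matrix_rank(A)
        print("det =", round(float(np.linalg.det(A)), 6),
              "| rank =", r, "of", n,
              "| rows independent:", r == n)

    summary(np.array([[2.0, 1.0], [1.0, 1.0]]))   # nonsingular: det = 1, rank 2
    summary(np.array([[2.0, 4.0], [1.0, 2.0]]))   # singular: det = 0, rank 1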

4.9.8. The method of bordering minors for finding the rank of a matrix.

Note that part of this method has already been implicitly described in the proof of the basis minor theorem.

4.9.8.1. Definition. A minor M′ is called a bordering minor of a minor M if it is obtained from M by adjoining one new row and one new column of the original matrix.

4.9.8.2. The procedure for finding the rank of a matrix using the bordering minors method.

    We find any nonzero minor of the matrix and take it as the current minor.

    We calculate all minors bordering it.

    If they are all equal to zero, then the current minor is a basis minor, and the rank of the matrix equals the order of the current minor.

    If among the bordering minors there is at least one nonzero minor, then it becomes the current minor and the procedure is repeated (a code sketch of this procedure is given below).
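A possible implementation of this procedure in Python/NumPy is sketched below; the function name and the test matrix are made up for illustration. The current minor is represented by its lists of row and column indices, and at each step the code tries to border it with one extra row and one extra column.

    import numpy as np

    def rank_by_bordering_minors(A, tol=1e-10):
        m, n = A.shape
        # Step 1: a nonzero first-order minor, i.e. any nonzero element.
        idx = np.argwhere(np.abs(A) > tol)
        if idx.size == 0:
            return 0                      # the zero matrix has rank 0
        rows, cols = [idx[0][0]], [idx[0][1]]
        # Steps 2-4: border the current minor while a nonzero bordering minor exists.
        while True:
            found = False
            for i in range(m):
                if i in rows:
                    continue
                for j in range(n):
                    if j in cols:
                        continue
                    sub = A[np.ix_(rows + [i], cols + [j])]
                    if abs(np.linalg.det(sub)) > tol:
                        rows.append(i)
                        cols.append(j)
                        found = True
                        break
                if found:
                    break
            if not found:
                # All bordering minors vanish: the current minor is a basis minor.
                return len(rows)

    A = np.array([[1.0, 2.0, 3.0, 4.0],
                  [2.0, 4.0, 6.0, 8.0],
                  [0.0, 1.0, 1.0, 2.0]])
    print(rank_by_bordering_minors(A), np.linalg.matrix_rank(A))   # both print 2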

Using the method of bordering minors, we find the rank of the matrix

.

It is easy to point out a nonzero second-order minor M2 to start with.

We then calculate the minors (third-order determinants) bordering it. Since all the bordering minors of the third order are equal to zero, the minor M2 is a basis minor, that is, r(A) = 2.

Comment. From the example considered, it is clear that the method is quite labor-intensive. Therefore, in practice, the method of elementary transformations is much more often used, which will be discussed below.

4.9.9. Finding the rank of a matrix using the method of elementary transformations.

Based on Theorem 4.9.5, the rank of a matrix does not change under elementary transformations (that is, the ranks of equivalent matrices are equal). Therefore the rank of a matrix equals the rank of the echelon matrix obtained from it by elementary transformations, and the rank of an echelon matrix is obviously equal to the number of its nonzero rows.

Let's determine the rank of the matrix

by the method of elementary transformations.

Let us reduce the matrix to echelon form:

The number of nonzero rows of the resulting echelon matrix is three; therefore r(A) = 3.
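The same computation can be scripted. The Python/NumPy sketch below (the matrix of the example above is not reproduced in this text, so an illustrative one is used instead) performs Gaussian elimination with partial pivoting, which uses only elementary row transformations, and counts the nonzero rows of the resulting echelon form.

    import numpy as np

    def row_echelon_rank(A, tol=1e-10):
        A = A.astype(float)
        m, n = A.shape
        row = 0
        for col in range(n):
            if row == m:
                break
            # Choose a pivot: the largest entry of this column at or below 'row'.
            pivot = row + int(np.argmax(np.abs(A[row:, col])))
            if abs(A[pivot, col]) < tol:
                continue                         # no pivot in this column
            A[[row, pivot]] = A[[pivot, row]]    # swap rows (elementary transformation)
            # Subtract multiples of the pivot row from the rows below it.
            A[row + 1:] -= np.outer(A[row + 1:, col] / A[row, col], A[row])
            row += 1
        # The rank is the number of nonzero rows of the echelon form.
        return int(np.sum(np.any(np.abs(A) > tol, axis=1)))

    A = np.array([[1, 2, 1, 3],
                  [2, 4, 0, 2],
                  [3, 6, 1, 5],
                  [1, 2, 2, 4]])
    print(row_echelon_rank(A), np.linalg.matrix_rank(A))   # both print 3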

4.9.10. Rank of a system of linear space vectors.

Consider a system of vectors a1, a2, …, ak of some linear space V. If the system is linearly dependent, a linearly independent subsystem can be singled out in it.

4.9.10.1. Definition. The rank of a system of vectors a1, a2, …, ak of a linear space is the maximum number of linearly independent vectors of this system. The rank of a system of vectors is denoted by r(a1, a2, …, ak).

Comment. If a system of vectors is linearly independent, then its rank is equal to the number of vectors in the system.

Let us formulate a theorem showing the connection between the concepts of the rank of a system of vectors in a linear space and the rank of a matrix.

4.9.10.2. Theorem. (On the rank of a system of vectors in linear space)

The rank of a system of vectors in a linear space is equal to the rank of a matrix whose columns or rows are the coordinates of vectors in some basis of the linear space.

No proof.

Corollary.

In order for a system of vectors in a linear space to be linearly independent, it is necessary and sufficient that the rank of the matrix, the columns or rows of which are the coordinates of the vectors in a certain basis, is equal to the number of vectors in the system.

The proof is obvious.
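In coordinates this criterion is straightforward to apply on a computer. The Python/NumPy sketch below (with made-up vectors) stacks the coordinate columns of the vectors into a matrix and compares its rank with the number of vectors.

    import numpy as np

    def is_independent(*vectors):
        # The columns of M are the coordinates of the vectors in some basis.
        M = np.column_stack(vectors)
        return np.linalg.matrix_rank(M) == len(vectors)

    a1 = np.array([1.0, 0.0, 2.0])
    a2 = np.array([0.0, 1.0, 1.0])
    a3 = np.array([1.0, 1.0, 3.0])    # a3 = a1 + a2

    print(is_independent(a1, a2))      # True: rank 2 = 2 vectors
    print(is_independent(a1, a2, a3))  # False: rank 2 < 3 vectors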

4.9.10.3. Theorem (On the dimension of a linear hull).

The dimension of the linear hull (span) of vectors a1, a2, …, ak of a linear space is equal to the rank of this system of vectors: dim L(a1, a2, …, ak) = r(a1, a2, …, ak).

No proof.

Let k rows and k columns (k ≤ min(m, n)) be chosen arbitrarily in a matrix A of size m×n. The matrix elements located at the intersection of the selected rows and columns form a square matrix of order k, whose determinant is called a minor Mk of order k, or a k-th order minor, of the matrix A.

The rank of a matrix A is the maximum order r of the nonzero minors of A, and any nonzero minor of order r is a basis minor. Notation: rang A = r. If rang A = rang B and the matrices A and B have the same size, then A and B are called equivalent. Notation: A ~ B.

The main methods for calculating the rank of a matrix are the method of bordering minors and the method of elementary transformations.

Method of bordering minors

The essence of the bordering minors method is as follows. Suppose a nonzero minor of order k has already been found in the matrix. Then only those minors of order k + 1 are considered that contain (i.e. border) this nonzero k-th order minor. If all of them are equal to zero, then the rank of the matrix equals k; otherwise, among the bordering minors of order k + 1 there is a nonzero one, and the whole procedure is repeated.

Linear independence of rows (columns) of a matrix

The concept of matrix rank is closely related to the concept of linear independence of its rows (columns).

The rows e1, e2, …, ek of a matrix are called linearly dependent if there exist numbers λ1, λ2, …, λk, not all equal to zero, such that

λ1e1 + λ2e2 + … + λkek = 0.

The rows of the matrix A are called linearly independent if this equality is possible only when all the numbers λ1 = λ2 = … = λk = 0.

The linear dependence and independence of the columns of matrix A are determined in a similar way.

If some row (al) of the matrix A (where (al) = (al1, al2, …, aln)) can be represented as a sum of the other rows multiplied by certain numbers, then (al) is said to be a linear combination of those rows.

The concept of a linear combination of columns is defined in a similar way. The following theorem about the basis minor is valid.

The basis rows and basis columns are linearly independent. Any row (or column) of the matrix A is a linear combination of the basis rows (columns), i.e. of the rows (columns) passing through the basis minor. Thus the rank of the matrix A, rang A, is equal to the maximum number of linearly independent rows (columns) of A.

That is, the rank of a matrix is the order of the largest square submatrix of the given matrix whose determinant is nonzero. If the original matrix is not square, or if it is square but its determinant equals zero, then the rows and columns of the lower-order square submatrices are chosen arbitrarily.

In addition to determinants, the rank of a matrix can be found from the number of linearly independent rows or columns: the maximum number of linearly independent rows of a matrix always equals the maximum number of linearly independent columns, and this common number is the rank. In particular, it cannot exceed the smaller of the two matrix dimensions: for example, a 3×5 matrix whose three rows are linearly independent has rank three.

Examples of finding the rank of a matrix

Using the method of bordering minors, find the rank of the matrix

Solution. A nonzero second-order minor M2 is easy to find, and a third-order minor M3 bordering it is also nonzero. However, both fourth-order minors bordering M3 are equal to zero. Therefore the rank of the matrix A is 3, and a basis minor is, for example, the minor M3 indicated above.

The method of elementary transformations is based on the fact that elementary transformations of a matrix do not change its rank. Using these transformations, the matrix can be brought to a form in which all its elements except a11, a22, …, arr (r ≤ min(m, n)) are equal to zero, which obviously means that rang A = r. Note that if an n-th order matrix has upper triangular form, that is, all its elements below the main diagonal are zero, then its determinant equals the product of the elements on the main diagonal. This property can be used when computing the rank by elementary transformations: reduce the matrix to triangular (echelon) form; the rank then equals the number of nonzero elements on its main diagonal.

Using the method of elementary transformations, find the rank of the matrix

Solution. Let us denote the i-th row of the matrix A by αi. At the first stage we perform the elementary transformations

At the second stage, we perform the transformations

As a result we get

A system of vectors of the same dimension is called linearly dependent if the zero vector can be obtained as a suitable linear combination of these vectors in which not all coefficients are zero (the combination with all coefficients equal to zero is trivial and does not count). Otherwise the vectors are called linearly independent. For example, the following three vectors:

are linearly dependent, as is easy to check. In the case of linear dependence, at least one vector can be expressed as a linear combination of the other vectors; in our example this can be done in more than one way, as is easy to verify by direct calculation. This leads to the following definition: a vector is linearly independent of other vectors if it cannot be represented as a linear combination of them.

Let us consider a system of vectors without specifying in advance whether it is linearly dependent or linearly independent. For every system of column vectors one can determine the maximum possible number of linearly independent vectors in it; this number, denoted by r, is the rank of the vector system. Since every matrix can be viewed as a system of column vectors, the rank of a matrix is defined as the maximum number of linearly independent column vectors it contains. Row vectors can also be used to determine the rank; both approaches give the same result for the same matrix, and the rank cannot exceed the smaller of the numbers of rows and columns. The rank of a square matrix of order n ranges from 0 to n: if all vectors are zero, the rank of such a matrix is zero; if all vectors are linearly independent of one another, the rank of the matrix equals n. If we form a matrix from the three vectors above, the rank of this matrix is 2: since one of the vectors can be obtained from the other two by a linear combination, the rank is less than 3.

But one can check that any two of these vectors are linearly independent; hence the rank equals 2.

A square matrix is called singular if its column vectors or row vectors are linearly dependent. The determinant of such a matrix equals zero and its inverse does not exist, as noted above; these statements are equivalent to one another. Accordingly, a square matrix is called nonsingular (non-degenerate) if its column vectors and row vectors are linearly independent; the determinant of such a matrix is nonzero and its inverse exists (compare with p. 43).
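These equivalent characterizations are easy to observe numerically. In the sketch below (illustrative matrix), a matrix with linearly dependent columns has rank below its order and a determinant that is zero up to rounding error, so no inverse exists.

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 9.0],
                  [7.0, 8.0, 15.0]])   # third column = first column + second column

    n = A.shape[0]
    r = np.linalg.matrix_rank(A)
    print("rank =", r, "of", n)                  # 2 of 3: the columns are dependent
    print("det  =", np.linalg.det(A))            # zero up to rounding error
    if r < n:
        print("A is singular: no inverse exists")
    else:
        print("inverse:", np.linalg.inv(A))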

The rank of a matrix has a fairly obvious geometric interpretation. If the rank of the matrix equals n, the vectors are said to span an n-dimensional space. If the rank is r < n, then the vectors lie in an r-dimensional subspace that contains all of them. Thus the rank of the matrix corresponds to the minimum dimension of the space that contains all the vectors; an r-dimensional subspace of an n-dimensional space is called an r-dimensional hyperplane. The rank of the matrix equals the smallest dimension of the hyperplane in which all the vectors still lie.

Orthogonality. Two vectors a and b are said to be mutually orthogonal if their scalar product equals zero. If for a matrix A the equality AᵀA = D holds, where D is a diagonal matrix, then the column vectors of A are pairwise mutually orthogonal. If these column vectors are in addition normalized, that is, scaled to length 1, then AᵀA = I and one speaks of orthonormal vectors. If B is a square matrix and BᵀB = I holds, then the matrix B is called orthogonal. In this case it follows from formula (1.22) that an orthogonal matrix is always nonsingular. Hence the orthogonality of a matrix implies the linear independence of its row vectors and column vectors. The converse statement is false: the linear independence of a system of vectors does not imply the pairwise orthogonality of these vectors.
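A small Python/NumPy sketch of these notions (all vectors and matrices are made up for illustration): the scalar-product test for orthogonality, the check BᵀB = I for an orthogonal matrix, and a pair of linearly independent vectors that are not orthogonal.

    import numpy as np

    a = np.array([1.0, 2.0, 2.0])
    b = np.array([2.0, 1.0, -2.0])
    print("a . b =", a @ b)                  # 0.0: a and b are orthogonal

    # A rotation matrix is orthogonal: B^T B = I, hence B is nonsingular.
    t = 0.3
    B = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    print(np.allclose(B.T @ B, np.eye(2)))   # True
    print("det B =", np.linalg.det(B))       # 1.0 up to rounding (nonzero)

    # Linearly independent but not orthogonal: independence does not imply orthogonality.
    u = np.array([1.0, 0.0])
    v = np.array([1.0, 1.0])
    print(np.linalg.matrix_rank(np.column_stack([u, v])) == 2, u @ v)   # True 1.0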