Matrix rank. Elementary transformations of matrix rows

Determining the rank of a matrix

Consider a matrix \(A\) of type \((m,n)\); for definiteness, let \(m \leq n\). Take all \(m\) rows and choose \(m\) columns of the matrix \(A\); at the intersection of these rows and columns we get a square matrix of order \(m\), whose determinant is called a minor of order \(m\) of the matrix \(A\). If this minor is different from 0, it is called a basis minor, and we say that the rank of the matrix \(A\) equals \(m\). If this determinant equals 0, we choose another \(m\) columns; at their intersection stand elements forming another minor of order \(m\). If that minor is 0, we continue the procedure. If among all possible minors of order \(m\) there are no non-zero ones, we select \(m-1\) rows and \(m-1\) columns of the matrix \(A\); at their intersection stands a square matrix of order \(m-1\), whose determinant is called a minor of order \(m-1\) of the original matrix. Continuing the procedure, we search for a non-zero minor, going through all possible minors and lowering their order.

Definition.

A non-zero minor of the given matrix of the highest order is called a basis minor of the original matrix, and its order is called the rank of the matrix \(A\); the rows and columns at whose intersection the basis minor stands are called basis rows and columns. The rank of a matrix is denoted by \(rang(A)\).

From this definition follow simple properties of the rank of a matrix: it is an integer, and the rank of a non-zero matrix satisfies the inequalities \(1 \leq rang(A) \leq \min(m,n)\).

How can the rank of a matrix change if a row is deleted? If a row is added?

Answer:

1) Deleting a row: the rank can decrease by 1 or remain unchanged.

2) Adding a row: the rank can increase by 1 or remain unchanged.
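Both answers are easy to check numerically. Below is a minimal sketch using NumPy's matrix_rank; the matrices are illustrative examples chosen here, not taken from the text.

import numpy as np

rank = np.linalg.matrix_rank

I2 = np.array([[1, 0],
               [0, 1]])
print(rank(I2))                        # 2
print(rank(I2[1:]))                    # 1: deleting a row lowered the rank by 1

A = np.array([[1, 0],
              [2, 0]])                 # rank 1
print(rank(np.vstack([A, [0, 1]])))    # 2: an added row raised the rank by 1
print(rank(np.vstack([A, [3, 0]])))    # 1: a dependent added row changes nothing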

Linear dependence and linear independence of matrix columns

Let \(A\) be a matrix of type \((m,n)\). Consider the columns of the matrix \(A\): these are columns of \(m\) numbers each. Let us denote them \(A_1,A_2,\ldots,A_n\). Let \(c_1,c_2,\ldots,c_n\) be some numbers.

Definition.

The column \[ D=c_1A_1+c_2A_2+\ldots+c_nA_n = \sum_{m=1}^{n} c_mA_m \] is called a linear combination of the columns \(A_1,A_2,\ldots,A_n\); the numbers \(c_1,c_2,\ldots,c_n\) are called the coefficients of this linear combination.

Definition.

Let \(p\) columns \(A_1, A_2, \ldots, A_p\) be given. If there exist numbers \(c_1,c_2,\ldots,c_p\) such that

1. not all of these numbers are equal to zero,

2. the linear combination \(c_1A_1+c_2A_2+\ldots+c_pA_p =\sum_{m=1}^{p} c_mA_m\) is equal to the zero column (i.e. the column all of whose elements are zeros),

then we say that the columns \(A_1, A_2, \ldots, A_p\) are linearly dependent. If for a given set of columns no such numbers \(c_1,c_2,\ldots,c_p\) exist, the columns are called linearly independent.

Example. Consider two columns

\[ A_1=\left(\begin{array}{c} 1 \\ 0 \end{array} \right), \quad A_2=\left(\begin{array}{c} 0 \\ 1 \end{array} \right), \] then for any numbers \(c_1,c_2\) we have: \[ c_1A_1+c_2A_2=c_1\left(\begin{array}{c} 1 \\ 0 \end{array} \right)+ c_2\left(\begin{array}{c} 0 \\ 1 \end{array} \right)=\left(\begin{array}{c} c_1 \\ c_2 \end{array} \right). \]

This linear combination is equal to the zero column if and only if both numbers \(c_1,c_2\) are equal to zero. Thus, these columns are linearly independent.
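This check is easy to automate: columns are linearly independent exactly when the rank of the matrix they form equals the number of columns. A minimal sketch (the helper columns_independent is ours, introduced only for illustration):

import numpy as np

def columns_independent(*cols):
    # Columns are linearly independent iff the rank of the matrix
    # they form equals the number of columns.
    M = np.column_stack(cols)
    return np.linalg.matrix_rank(M) == M.shape[1]

A1 = np.array([1, 0])
A2 = np.array([0, 1])
print(columns_independent(A1, A2))           # True
print(columns_independent(A1, A2, A1 + A2))  # False: the third column is A1 + A2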

Statement. Columns are linearly dependent if and only if one of them is a linear combination of the others.

Indeed, let the columns \(A_1,A_2,\ldots,A_m\) be linearly dependent, i.e. for some constants \(\lambda_1, \lambda_2,\ldots,\lambda_m\), not all of which are equal to 0, the following holds: \[ \sum_{k=1}^{m}\lambda_kA_k=0 \] (on the right side is the zero column). Let, for example, \(\lambda_1 \neq 0\). Then \[ A_1=\sum_{k=2}^{m}c_kA_k, \quad c_k=-\lambda_k/\lambda_1, \quad \quad (15) \] i.e. the first column is a linear combination of the others. Conversely, if one of the columns, say \(A_1\), is a linear combination of the others as in (15), then \(A_1-\sum_{k=2}^{m}c_kA_k=0\) is a linear combination equal to the zero column in which the coefficient of \(A_1\) is 1, so not all coefficients vanish; hence the columns are linearly dependent.

The basis minor theorem

Theorem.

For any non-zero matrix \(A\) the following is true:

1. The basis columns are linearly independent.

2. Any matrix column is a linear combination of its basis columns.

(The same is true for rows.)

Let, for definiteness, \((m,n)\) be the type of the matrix \(A\), let \(rang(A)=r \leq n\), and let the basis minor be located in the first \(r\) rows and columns of the matrix \(A\). Let \(k\) be any number between 1 and \(m\), and let \(s\) be any number between 1 and \(n\). Consider a minor of the following form: \[ D=\left| \begin{array}{ccccc} a_{11} & a_{12} & \ldots & a_{1r} & a_{1s} \\ a_{21} & a_{22} & \ldots & a_{2r} & a_{2s} \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{r1} & a_{r2} & \ldots & a_{rr} & a_{rs} \\ a_{k1} & a_{k2} & \ldots & a_{kr} & a_{ks} \\ \end{array} \right|, \] i.e. we have bordered the basis minor with the \(s\)-th column and the \(k\)-th row. By the definition of the rank of a matrix, this determinant is equal to zero: if \(s\leq r\) or \(k \leq r\), then this minor has two identical columns or two identical rows; if \(s>r\) and \(k>r\), then, by the definition of rank, every minor of order greater than \(r\) vanishes. Expanding this determinant along the last row, we get: \[ a_{k1}A_{k1}+a_{k2}A_{k2}+\ldots+a_{kr}A_{kr}+a_{ks}A_{ks}=0. \quad \quad (16) \]

Here the numbers \(A_{kp}\) are the algebraic complements of the elements of the bottom row of \(D\). Their values do not depend on \(k\), because they are formed from elements of the first \(r\) rows. Moreover, \(A_{ks}\) equals the basis minor and is therefore different from 0. Let us denote \(A_{k1}=c_1, A_{k2}=c_2, \ldots, A_{ks}=c_s \neq 0\). Rewriting (16) in the new notation: \[ c_1a_{k1}+c_2a_{k2}+\ldots+c_ra_{kr}+c_sa_{ks}=0, \] or, dividing by \(c_s\), \[ a_{ks}=\lambda_1a_{k1}+\lambda_2a_{k2}+\ldots+\lambda_ra_{kr}, \quad \lambda_p=-c_p/c_s. \] This equality is valid for every value of \(k\), so \[ a_{1s}=\lambda_1a_{11}+\lambda_2a_{12}+\ldots+\lambda_ra_{1r}, \] \[ a_{2s}=\lambda_1a_{21}+\lambda_2a_{22}+\ldots+\lambda_ra_{2r}, \] \[ \ldots \] \[ a_{ms}=\lambda_1a_{m1}+\lambda_2a_{m2}+\ldots+\lambda_ra_{mr}. \] Thus the \(s\)-th column is a linear combination of the first \(r\) columns. The theorem is proven.

Comment.

From the basis minor theorem it follows that the rank of a matrix is equal to the maximal number of its linearly independent columns (which equals the maximal number of its linearly independent rows).
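This can be observed numerically: a matrix and its transpose always have the same rank. A quick illustration with NumPy (the matrix is an arbitrary example):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])              # its two rows are linearly independent
print(np.linalg.matrix_rank(A))        # 2
print(np.linalg.matrix_rank(A.T))      # 2: as many independent columns as rows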

Corollary 1.

If a determinant is equal to zero, then it has a column that is a linear combination of the other columns.

Corollary 2.

If the rank of a matrix is less than the number of columns, then the columns of the matrix are linearly dependent.

Calculating the rank of a matrix and finding the basis minor

Some matrix transformations do not change its rank; such transformations are called elementary. The corresponding facts can easily be verified using the properties of determinants and the definition of the rank of a matrix.

1. Rearranging columns.

2. Multiplying the elements of any column by a non-zero factor.

3. Adding to a column any other column multiplied by an arbitrary number.

4. Crossing out the zero column.

The same is true for rows.

Using these transformations, the matrix can be brought to the so-called "trapezoidal" form: a matrix with only zeros under the main diagonal. For a "trapezoidal" matrix, the rank is the number of non-zero elements on the main diagonal, and the basis minor is the minor whose diagonal coincides with the set of non-zero elements on the main diagonal of the transformed matrix.

Example. Consider the matrix

\[ A=\left(\begin{array}{cccc} 2 & 1 & 11 & 2 \\ 1 & 0 & 4 & -1 \\ 11 & 4 & 56 & 5 \\ 2 & -1 & 5 & -6 \end{array} \right). \]

Here we sequentially perform the following steps: 1) move the second row to the top; 2) subtract the first row, taken with a suitable factor, from each of the remaining rows; 3) subtract the second row, multiplied by 4, from the third row, and add the second row to the fourth; 4) cross out the zero rows, the third and the fourth. The final matrix has the desired shape: non-zero numbers on the main diagonal and zeros below it. At this point the procedure stops, and the number of non-zero elements on the main diagonal equals the rank of the matrix. The basis minor sits in the first two rows and the first two columns; at their intersection stands a matrix of order 2 with a non-zero determinant. Going back along the chain of transformations, one can trace where each row (or column) of the final matrix came from, i.e. determine the basis rows and columns of the original matrix. In this case, the first two rows and the first two columns form the basis minor.
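The result of this example is easy to verify numerically; a short check with NumPy (matrix_rank computes the rank from singular values, which is reliable for this small integer matrix):

import numpy as np

A = np.array([[ 2,  1, 11,  2],
              [ 1,  0,  4, -1],
              [11,  4, 56,  5],
              [ 2, -1,  5, -6]])

print(np.linalg.matrix_rank(A))   # 2, as obtained above
print(np.linalg.det(A[:2, :2]))   # -1.0: the basis minor in the first two rows
                                  # and columns is indeed non-zero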

"If you want to learn to swim, then boldly enter the water; and if you want to learn to solve problems, then solve them."
D. Polya (1887-1985)

(Mathematician. He made a great contribution to the popularization of mathematics and wrote several books on how to solve problems and how to teach problem solving.)

Consider a matrix A.

Let us select in it k rows and k columns (k ≤ min(m, n)). From the elements located at the intersection of the selected rows and columns, we compose a determinant of order k. All such determinants are called minors of this matrix.

Let's consider all possible minors of the matrix A, different from zero.

The rank of the matrix A is the largest order of a non-zero minor of this matrix.

If all elements of a matrix are equal to zero, then the rank of this matrix is taken to be zero.

A minor whose order determines the rank of the matrix is called basic.

A matrix can have several basis minors.

The rank of the matrix A is denoted by r(A). If r(A) = r(B), then the matrices A and B are called equivalent; one writes A ∼ B.

Matrix rank properties:

  1. When a matrix is transposed, its rank does not change.
  2. If a zero row (column) is deleted from the matrix, the rank of the matrix does not change.
  3. The rank of a matrix does not change under elementary matrix transformations.

By elementary transformations we mean:

  • Rearranging matrix rows;
  • Multiplying a row by a number other than zero;
  • Adding to the elements of one row the corresponding elements of another row, multiplied by an arbitrary number.

When calculating the rank of a matrix, one can use elementary transformations, the method of reducing the matrix to echelon form, and the method of bordering minors.

The method of reducing a matrix to echelon form consists in the following: using elementary transformations, the given matrix is reduced to an echelon matrix.

A matrix is called echelon (stepped) if, in each of its rows, the first non-zero element stands further to the right than in the previous row (i.e. steps are formed; the height of each step must be equal to one).

Examples of echelon matrices:

Examples of non-echelon matrices:

EXAMPLE: Find the rank of the matrix:

SOLUTION:

Let us reduce this matrix to echelon form using elementary transformations.

1. Swap the first and third rows.

2. We obtain zeros below the one in the first column.

By adding to the second row the first row multiplied by (-3), to the third row the first row multiplied by (-5), and to the fourth row the first row multiplied by (-3), we get

To make it clearer where zeros still have to be obtained, let us draw the steps in the matrix. (The matrix will be echelon if there are zeros everywhere below the steps.)

3. By adding to the third row the second row multiplied by (-1), and to the fourth row the second row multiplied by (-1), we get zeros below the steps in the second column.

If we draw the steps again, we see that the matrix is echelon.

Its rank is r = 3 (the number of rows of the echelon matrix in each of which at least one element is non-zero). Therefore, the rank of this matrix is r = 3.

The solution can be written like this:

(Roman numerals indicate row numbers)

Answer: r=3.
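The echelon method translates directly into a program. Below is a minimal sketch of rank computation by reduction to echelon form with partial pivoting; rank_by_echelon is our own illustrative helper, and the test matrix is arbitrary:

import numpy as np

def rank_by_echelon(M, tol=1e-12):
    # Reduce a copy of M to row echelon form and count the non-zero rows.
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    r = 0                                        # row where the next step begins
    for c in range(cols):
        pivot = r + np.argmax(np.abs(A[r:, c]))  # largest pivot in column c
        if abs(A[pivot, c]) < tol:
            continue                             # no step in this column
        A[[r, pivot]] = A[[pivot, r]]            # swap rows (elementary transformation)
        A[r + 1:] -= np.outer(A[r + 1:, c] / A[r, c], A[r])  # zero out below the step
        r += 1
        if r == rows:
            break
    return r

M = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
print(rank_by_echelon(M))   # 2: the second row is twice the first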

A minor of order k+1 that contains a minor of order k is said to border that minor.

The method of bordering minors is based on the fact that the rank of a given matrix is equal to the order of a non-zero minor of this matrix all of whose bordering minors are equal to zero.

Let some matrix A of size m×n be given. Let us select in this matrix k arbitrary rows and k arbitrary columns. Then the determinant of order k, composed of the matrix elements located at the intersection of the selected rows and columns, is called a minor of order k of the matrix A.

Definition 1.13. The rank of a matrix is the largest order of a non-zero minor of this matrix.

To calculate the rank of a matrix, one first considers all of its minors of the lowest order and, if at least one of them is different from zero, proceeds to the minors of the next higher order. This approach to determining the rank of a matrix is called the bordering method (or the method of bordering minors).
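For small matrices, the minor-based definition can be implemented directly. The sketch below simplifies the bordering method: instead of tracking which minors border which, it scans all minors of each order until they all vanish, which yields the same largest non-zero order. The helper rank_by_minors is ours, exponentially slow and meant only as an illustration:

import numpy as np
from itertools import combinations

def rank_by_minors(M, tol=1e-12):
    # Largest order k for which some k-th order minor is non-zero.
    A = np.array(M, dtype=float)
    m, n = A.shape
    rank = 0
    for k in range(1, min(m, n) + 1):
        found = any(
            abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol
            for rows in combinations(range(m), k)
            for cols in combinations(range(n), k)
        )
        if not found:
            break            # all k-th order minors vanish, so do all higher ones
        rank = k
    return rank

print(rank_by_minors([[1, 2, 2, 0],
                      [0, 2, 2, 3],
                      [0, 0, 0, 0]]))   # 2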

Problem 1.4. Using the method of bordering minors, determine the rank of the matrix.

Consider first a non-zero minor of the first order; then pass to some of its second-order bordering minors and find a non-zero one among them. Finally, analyze the third-order bordering minors: all of them turn out to be equal to zero.

So the highest order of a non-zero minor is 2; hence the rank of the matrix is 2.

When solving Problem 1.4, one can notice that a number of the second-order bordering minors are non-zero. In this connection, the following concept applies.

Definition 1.14. A basis minor of a matrix is ​​any non-zero minor whose order is equal to the rank of the matrix.

Theorem 1.2 (basis minor theorem). The basis rows (basis columns) are linearly independent.

Note that the rows (columns) of a matrix are linearly dependent if and only if at least one of them can be represented as a linear combination of the others.

Theorem 1.3. The number of linearly independent matrix rows is equal to the number of linearly independent matrix columns and is equal to the rank of the matrix.

Theorem 1.4 (necessary and sufficient condition for a determinant to be zero). In order for a determinant of n-th order to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Calculating the rank of a matrix directly from its definition is too cumbersome, especially for matrices of high order. Therefore, in practice, the rank of a matrix is calculated by applying Theorems 1.2-1.4 and using the concepts of matrix equivalence and elementary transformations.

Definition 1.15. Two matrices A and B are called equivalent if their ranks are equal, i.e. r(A) = r(B).

If the matrices A and B are equivalent, one writes A ∼ B.

Theorem 1.5. The rank of the matrix does not change due to elementary transformations.

We will call elementary matrix transformations any of the following operations on a matrix:

Replacing rows with columns and columns with the corresponding rows;

Rearranging matrix rows;

Crossing out a row all of whose elements are zero;

Multiplying a row by a number other than zero;

Adding to the elements of one row the corresponding elements of another row multiplied by the same number.

Corollary of Theorem 1.5. If a matrix B is obtained from a matrix A using a finite number of elementary transformations, then the matrices A and B are equivalent.

When calculating the rank of a matrix, it should be reduced to a trapezoidal form using a finite number of elementary transformations.

Definition 1.16. We will call trapezoidal a form of representation of a matrix in which, in the bordering minor of the highest order different from zero, all elements below the diagonal ones vanish.

As a rule, matrices are reduced to trapezoidal form using the Gaussian algorithm. The idea of the Gaussian algorithm is that, by adding the first row multiplied by suitable factors to the rows below, one makes all elements of the first column located below the element a_11 turn to zero. Then, in the same way, one makes all elements of the second column located below the diagonal element a_22 turn to zero, and so on.

Problem 1.5. Determine the rank of a matrix by reducing it to trapezoidal form.

To make it easier to apply the Gaussian algorithm, one can swap the first and third rows and then eliminate the elements below the diagonal column by column.

From the trapezoidal form the rank is obvious. However, to bring the result to a more elegant form, one can further continue transforming the columns.


Let A be a matrix of size m\times n and let k be a natural number not exceeding m and n: k\leqslant\min\{m;n\}. A minor of k-th order of the matrix A is the determinant of a matrix of order k formed by the elements at the intersection of arbitrarily chosen k rows and k columns of the matrix A. When denoting minors, we will indicate the numbers of the selected rows as superscripts, and the numbers of the selected columns as subscripts, arranging both in ascending order.


Example 3.4. Write minors of different orders of the matrix


A=\begin{pmatrix}1&2&1&0\\ 0&2&2&3\\ 1&4&3&3\end{pmatrix}\!.


Solution. The matrix A has dimensions 3\times4. It has: 12 minors of the 1st order, for example, M_{2}^{3}=\det(a_{32})=4; 18 minors of the 2nd order, for example, M_{23}^{12}=\begin{vmatrix}2&1\\2&2\end{vmatrix}=2; 4 minors of the 3rd order, for example,


M_{134}^{123}= \begin{vmatrix}1&1&0\\0&2&3\\ 1&3&3 \end{vmatrix}=0.

In a matrix A of dimensions m\times n, a minor of r-th order is called basic if it is non-zero and all minors of (r+1)-th order are equal to zero or do not exist at all.


The rank of a matrix is the order of its basis minor. A zero matrix has no basis minor; therefore, the rank of the zero matrix is, by definition, equal to zero. The rank of the matrix A is denoted by \operatorname{rg}A.


Example 3.5. Find all basis minors and the rank of the matrix


A=\begin{pmatrix}1&2&2&0\\0&2&2&3\\0&0&0&0\end{pmatrix}\!.


Solution. All third-order minors of this matrix are equal to zero, since these determinants have a zero third row. Therefore, only a second-order minor located in the first two rows of the matrix can be basic. Going through the 6 possible minors, we select the non-zero ones:


M_{12}^{12}= M_{13}^{12}= \begin{vmatrix}1&2\\0&2\end{vmatrix}\!,\quad M_{24}^{12}= M_{34}^{12}= \begin{vmatrix}2&0\\2&3\end{vmatrix}\!,\quad M_{14}^{12}= \begin{vmatrix}1&0\\0&3\end{vmatrix}\!.


Each of these five minors is basic. Therefore, the rank of the matrix is 2.
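The enumeration from Example 3.5 can be reproduced in a few lines of NumPy; note that column indices are 0-based in the code:

import numpy as np
from itertools import combinations

A = np.array([[1, 2, 2, 0],
              [0, 2, 2, 3],
              [0, 0, 0, 0]])

# all second-order minors taken from the first two rows
for cols in combinations(range(4), 2):
    minor = np.linalg.det(A[np.ix_((0, 1), cols)])
    print(cols, round(minor, 10))
# only the column pair (1, 2) gives a zero minor; the other five minors are basic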

Notes 3.2


1. If all minors of k-th order of a matrix are equal to zero, then all minors of higher order are also equal to zero. Indeed, expanding a minor of (k+1)-th order along any row, we obtain the sum of the products of the elements of this row by minors of k-th order (taken with signs), and these are all equal to zero.


2. The rank of a matrix is equal to the highest order of a non-zero minor of this matrix.


3. If a square matrix is non-singular, then its rank is equal to its order. If a square matrix is singular, then its rank is less than its order.


4. The notations \operatorname{Rg}A,~ \operatorname{rang}A,~ \operatorname{rank}A are also used for the rank.


5. The rank of a block matrix is defined as the rank of an ordinary (numeric) matrix, i.e. regardless of its block structure. In this case, the rank of a block matrix is not less than the ranks of its blocks: \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}A and \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}B, since all minors of the matrix A (or B) are also minors of the block matrix (A\mid B).
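The block-matrix inequality from item 5 is easy to illustrate numerically (the blocks are arbitrary examples):

import numpy as np

rg = np.linalg.matrix_rank
A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 0],
              [0, 1]])
AB = np.hstack([A, B])         # the block matrix (A | B)
print(rg(AB), rg(A), rg(B))    # 2 1 1: rg(A|B) >= rg A and rg(A|B) >= rg B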

Theorems on the basis minor and the rank of the matrix

Let us consider the main theorems expressing the properties of linear dependence and linear independence of columns (rows) of a matrix.


Theorem 3.1 on the basis minor. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.


Indeed, without loss of generality, we assume that in a matrix A of size m\times n the basis minor is located in the first r rows and first r columns. Consider the determinant


D=\begin{vmatrix} a_{11}&\cdots&a_{1r}&a_{1k}\\ \vdots&\ddots&\vdots&\vdots\\ a_{r1}&\cdots&a_{rr}&a_{rk}\\ a_{s1}&\cdots&a_{sr}&a_{sk} \end{vmatrix},


which is obtained by bordering the basis minor of the matrix A with the corresponding elements of the s-th row and k-th column. Note that for any 1\leqslant s\leqslant m and 1\leqslant k\leqslant n this determinant is equal to zero. If s\leqslant r or k\leqslant r, then the determinant D contains two identical rows or two identical columns. If s>r and k>r, then the determinant D is equal to zero, since it is a minor of (r+1)-th order. Expanding the determinant along the last row, we get


a_{s1}\cdot D_{r+1\,1}+\ldots+ a_{sr}\cdot D_{r+1\,r}+a_{sk}\cdot D_{r+1\,r+1}=0,


where D_{r+1\,j} are the algebraic complements of the elements of the last row. Note that D_{r+1\,r+1}\ne0, since it is the basis minor. Therefore


a_{sk}=\lambda_1\cdot a_{s1}+\ldots+\lambda_r\cdot a_{sr}, where \lambda_j=-\frac{D_{r+1\,j}}{D_{r+1\,r+1}},~j=1,2,\ldots,r.


Writing the last equality for s=1,2,\ldots,m, we get

\begin{pmatrix}a_{1k}\\\vdots\\a_{mk}\end{pmatrix}= \lambda_1\cdot\! \begin{pmatrix}a_{11}\\\vdots\\a_{m1}\end{pmatrix}+\ldots+ \lambda_r\cdot\! \begin{pmatrix}a_{1r}\\\vdots\\a_{mr}\end{pmatrix}\!,


i.e. the k-th column (for any 1\leqslant k\leqslant n) is a linear combination of the columns of the basis minor, which is what we needed to prove.


The basis minor theorem serves to prove the following important theorems.

Condition for the determinant to be zero

Theorem 3.2 (necessary and sufficient condition for the determinant to be zero). In order for a determinant to be equal to zero, it is necessary and sufficient that one of its columns (one of its rows) be a linear combination of the remaining columns (rows).


Indeed, necessity follows from the basis minor theorem. If the determinant of a square matrix of order n is equal to zero, then its rank is less than n, i.e. at least one column is not included in the basis minor. Then this column, by Theorem 3.1, is a linear combination of the columns in which the basis minor is located. Adding, if necessary, other columns with zero coefficients to this combination, we obtain that the selected column is a linear combination of all the remaining columns of the matrix. Sufficiency follows from the properties of the determinant. If, for example, the last column A_n of the determinant \det(A_1~A_2~\cdots~A_n) is linearly expressed through the rest,


A_n=\lambda_1\cdot A_1+\lambda_2\cdot A_2+\ldots+\lambda_{n-1}\cdot A_{n-1},


then adding to A_n the column A_1 multiplied by (-\lambda_1), then the column A_2 multiplied by (-\lambda_2), and so on, up to the column A_{n-1} multiplied by (-\lambda_{n-1}), we get the determinant \det(A_1~\cdots~A_{n-1}~o) with a zero column, which is equal to zero (property 2 of the determinant).

Invariance of matrix rank under elementary transformations

Theorem 3.3 (on the invariance of rank under elementary transformations). During elementary transformations of the columns (rows) of a matrix, its rank does not change.


Indeed, let \operatorname{rg}A=r. Suppose that as a result of one elementary transformation of the columns of the matrix A we obtained the matrix A'. If a type I transformation was performed (permutation of two columns), then any minor of (r+1)-th order of the matrix A' is either equal to the corresponding minor of (r+1)-th order of the matrix A, or differs from it in sign (property 3 of the determinant). If a type II transformation was performed (multiplication of a column by a number \lambda\ne0), then any minor of (r+1)-th order of the matrix A' is either equal to the corresponding minor of (r+1)-th order of the matrix A, or differs from it by the factor \lambda\ne0 (property 6 of the determinant). If a type III transformation was performed (adding to one column another column multiplied by a number \lambda), then any minor of (r+1)-th order of the matrix A' is either equal to the corresponding minor of (r+1)-th order of the matrix A (property 9 of the determinant), or is equal to the sum of two minors of (r+1)-th order of the matrix A (property 8 of the determinant). Therefore, under an elementary transformation of any type, all minors of (r+1)-th order of the matrix A' are equal to zero, since all minors of (r+1)-th order of the matrix A are equal to zero. Thus it is proven that under elementary transformations of columns the rank of a matrix cannot increase. Since transformations inverse to elementary ones are themselves elementary, the rank of a matrix cannot decrease under elementary transformations of columns either; i.e., it does not change. It is proved similarly that the rank of a matrix does not change under elementary transformations of rows.


Corollary 1. If one row (column) of a matrix is a linear combination of its other rows (columns), then this row (column) can be deleted from the matrix without changing its rank.


Indeed, such a row can be made zero using elementary transformations, and a zero row cannot be part of the basis minor.


Corollary 2. If the matrix is reduced to the simplest form (1.7), then


\operatorname{rg}A=\operatorname{rg}\Lambda=r\,.


Indeed, the matrix of the simplest form (1.7) has a basis minor of r-th order.


Corollary 3. Any non-singular square matrix is elementary; in other words, any non-singular square matrix is equivalent to the identity matrix of the same order.


Indeed, if A is a non-singular square matrix of n-th order, then \operatorname{rg}A=n (see paragraph 3 of Notes 3.2). Therefore, bringing the matrix A to the simplest form (1.7) by elementary transformations, we obtain the identity matrix \Lambda=E_n, since \operatorname{rg}A=\operatorname{rg}\Lambda=n (see Corollary 2). Therefore, the matrix A is equivalent to the identity matrix E_n and can be obtained from it as a result of a finite number of elementary transformations. This means that the matrix A is elementary.

Theorem 3.4 (on the rank of a matrix). The rank of a matrix is equal to the maximum number of linearly independent rows of this matrix.


In fact, let \operatorname{rg}A=r. Then the matrix A has r linearly independent rows: the rows in which the basis minor is located. If they were linearly dependent, then by Theorem 3.2 this minor would be equal to zero, and the rank of the matrix A would not equal r. Let us show that r is the maximum number of linearly independent rows, i.e. that any p rows with p>r are linearly dependent. Indeed, form the matrix B from these p rows. Since the matrix B is part of the matrix A, \operatorname{rg}B\leqslant \operatorname{rg}A=r<p.

This means that at least one row of matrix B is not included in the basis minor of this matrix. Then, by the basis minor theorem, it is equal to a linear combination of the rows in which the basis minor is located. Therefore, the rows of matrix B are linearly dependent. Thus, the matrix A has at most r linearly independent rows.


Corollary 1. The maximum number of linearly independent rows in a matrix is equal to the maximum number of linearly independent columns:


\operatorname{rg}A=\operatorname{rg}A^T.


This statement follows from Theorem 3.4 if we apply it to the rows of a transposed matrix and take into account that the minors do not change during transposition (property 1 of the determinant).


Corollary 2. Under elementary transformations of the rows of a matrix, the linear dependence (or linear independence) of any system of columns of this matrix is preserved.


In fact, choose any k columns of the given matrix A and compose from them the matrix B. Let the matrix A' be obtained as a result of elementary transformations of the rows of the matrix A, and let the matrix B' be obtained as a result of the same transformations of the rows of the matrix B. By Theorem 3.3, \operatorname{rg}B'=\operatorname{rg}B. Therefore, if the columns of the matrix B were linearly independent, i.e. k=\operatorname{rg}B (see Corollary 1), then the columns of the matrix B' are also linearly independent, since k=\operatorname{rg}B'. If the columns of the matrix B were linearly dependent (k>\operatorname{rg}B), then the columns of the matrix B' are also linearly dependent (k>\operatorname{rg}B'). Consequently, for any columns of the matrix A, linear dependence or linear independence is preserved under elementary row transformations.


Notes 3.3


1. By Corollary 1 of Theorem 3.4, the property of columns indicated in Corollary 2 is also true for any system of matrix rows if elementary transformations are performed only on its columns.


2. Corollary 3 of Theorem 3.3 can be refined as follows: any non-singular square matrix, using elementary transformations of only its rows (or only its columns), can be reduced to an identity matrix of the same order.


In fact, using only elementary row transformations, any matrix A can be reduced to the simplified form \Lambda (Fig. 1.5) (see Theorem 1.1). Since the matrix A is non-singular (\det(A)\ne0), its columns are linearly independent. This means that the columns of the matrix \Lambda are also linearly independent (Corollary 2 of Theorem 3.4). Therefore, the simplified form \Lambda of a non-singular matrix A coincides with its simplest form (Fig. 1.6) and is the identity matrix \Lambda=E (see Corollary 3 of Theorem 3.3). Thus, by transforming only the rows of a non-singular matrix, it can be reduced to the identity matrix. Similar reasoning is valid for elementary transformations of the columns of a non-singular matrix.

Rank of product and sum of matrices

Theorem 3.5 (on the rank of a product of matrices). The rank of a product of matrices does not exceed the ranks of the factors:


\operatorname{rg}(A\cdot B)\leqslant \min\{\operatorname{rg}A,\operatorname{rg}B\}.


Indeed, let the matrices A and B have sizes m\times p and p\times n. Append to the matrix A the matrix C=AB\colon\,(A\mid C). Clearly, \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C), since C is part of the matrix (A\mid C) (see paragraph 5 of Notes 3.2). Note that each column C_j, by the operation of matrix multiplication, is a linear combination of the columns A_1,A_2,\ldots,A_p of the matrix A=(A_1~\cdots~A_p):


C_{j}=A_1\cdot b_{1j}+A_2\cdot b_{2j}+\ldots+A_{p}\cdot b_{pj},\quad j=1,2,\ldots,n.


Such a column can be deleted from the matrix (A\mid C) without changing its rank (Corollary 1 of Theorem 3.3). Crossing out all the columns of the matrix C, we get \operatorname{rg}(A\mid C)=\operatorname{rg}A. Hence \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C)=\operatorname{rg}A. Similarly, one can prove that \operatorname{rg}C\leqslant\operatorname{rg}B holds as well, which establishes the theorem.


Corollary. If A is a non-singular square matrix, then \operatorname{rg}(AB)= \operatorname{rg}B and \operatorname{rg}(CA)=\operatorname{rg}C, i.e. the rank of a matrix does not change when it is multiplied on the left or on the right by a non-singular square matrix.


Theorem 3.6 (on the rank of a sum of matrices). The rank of a sum of matrices does not exceed the sum of the ranks of the summands:


\operatorname{rg}(A+B)\leqslant \operatorname{rg}A+\operatorname{rg}B.


Indeed, form the matrix (A+B\mid A\mid B). Note that each column of the matrix A+B is a linear combination of the columns of the matrices A and B. Therefore \operatorname{rg}(A+B\mid A\mid B)= \operatorname{rg}(A\mid B). Considering that the number of linearly independent columns of the matrix (A\mid B) does not exceed \operatorname{rg}A+\operatorname{rg}B, and that \operatorname{rg}(A+B)\leqslant \operatorname{rg}(A+B\mid A\mid B) (see paragraph 5 of Notes 3.2), we obtain the inequality being proved.
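Both theorems can be tested numerically; a sketch on random integer matrices (the seed is arbitrary, and the inequalities hold for any matrices of compatible sizes):

import numpy as np

rg = np.linalg.matrix_rank
rng = np.random.default_rng(0)

A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 6))
print(rg(A @ B) <= min(rg(A), rg(B)))   # True (Theorem 3.5)

C = rng.integers(-3, 4, size=(4, 5))
print(rg(A + C) <= rg(A) + rg(C))       # True (Theorem 3.6)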

The following matrix transformations are called elementary:

1) permutation of any two rows (or columns),

2) multiplying a row (or column) by a non-zero number,

3) adding to one row (or column) another row (or column), multiplied by a certain number.

Two matrices are called equivalent if one of them is obtained from the other using a finite set of elementary transformations.

Equivalent matrices are not, generally speaking, equal, but their ranks are equal. If the matrices A and B are equivalent, this is written as follows: A ~ B.

A canonical matrix is a matrix in which, at the beginning of the main diagonal, there stand several ones in a row (their number may be zero), and all other elements are equal to zero.

Using elementary transformations of rows and columns, any matrix can be reduced to canonical form. The rank of a canonical matrix is equal to the number of ones on its main diagonal.

Example 2. Find the rank of the matrix A and bring it to canonical form.

Solution. From the second row, subtract the first, and rearrange these rows. Now from the second and third rows subtract the first, multiplied by 2 and 5 respectively; then subtract the first row from the third. We get a matrix B, which is equivalent to the matrix A, since it is obtained from A using a finite set of elementary transformations. Obviously, the rank of the matrix B is 2, and therefore r(A) = 2. The matrix B can easily be reduced to canonical form. Subtracting the first column, multiplied by suitable numbers, from all subsequent columns, we turn to zero all the elements of the first row except the first, while the elements of the remaining rows do not change.

Then, subtracting the second column, multiplied by suitable numbers, from all subsequent columns, we turn to zero all elements of the second row except the second, and obtain the canonical matrix.

Kronecker-Capelli theorem

This theorem is the compatibility criterion for a system of linear algebraic equations:

In order for a linear system to be consistent, it is necessary and sufficient that the rank of the extended matrix of this system be equal to the rank of its main matrix.
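The criterion translates directly into code: compare the rank of the main matrix with the rank of the extended matrix. A minimal sketch (is_consistent is our illustrative helper):

import numpy as np

def is_consistent(A, b):
    # Kronecker-Capelli: A x = b is consistent iff rank(A) = rank(A | b).
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

A = [[1, 1],
     [2, 2]]
print(is_consistent(A, [1, 2]))   # True:  the second equation is twice the first
print(is_consistent(A, [1, 3]))   # False: the two equations contradict each other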

Proof (system compatibility conditions)

Necessity. Let the system be consistent. Then there exist numbers x1, x2, ..., xn such that the column of free terms satisfies b = x1A1 + x2A2 + ... + xnAn, i.e. b is a linear combination of the columns of the matrix A. Since the rank of a matrix does not change when a row (column) that is a linear combination of its other rows (columns) is deleted from or added to it, it follows that r(A) = r(A|b).

Sufficiency. Let r(A) = r(A|b). Take some basis minor in the matrix A. Since r(A) = r(A|b), it will also be a basis minor of the matrix (A|b). Then, according to the basis minor theorem, the last column of the matrix (A|b) is a linear combination of the basis columns, that is, of the columns of the matrix A. Therefore, the column of free terms of the system is a linear combination of the columns of the matrix A, and the system is consistent.

Consequences

Proposition 15.2. The number of principal variables of the system is equal to the rank of the system.

A consistent system will be determined (its solution is unique) if the rank of the system is equal to the number of all its variables.

Homogeneous system of equations

Proposition 15.3. A homogeneous system of equations is always consistent.

Proof. For this system, the set of numbers x1 = x2 = ... = xn = 0 is a solution.

If a homogeneous system has a non-zero solution, then it has infinitely many solutions: indeed, multiplying a non-zero solution by different numbers, we obtain different solutions.

Definition 15.5. We will say that a set of solutions of the system forms a fundamental system of solutions if the solution columns form a linearly independent system and any solution of the system is a linear combination of these columns.
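A fundamental system of solutions can be computed numerically as a basis of the null space of the coefficient matrix. A minimal sketch using SciPy's null_space, which returns one particular (orthonormal) choice of such a basis:

import numpy as np
from scipy.linalg import null_space

# homogeneous system A x = 0 with one equation and three unknowns:
# rank(A) = 1, so a fundamental system consists of 3 - 1 = 2 solutions
A = np.array([[1.0, 2.0, 3.0]])
F = null_space(A)              # columns of F form a basis of the solution set
print(F.shape[1])              # 2
print(np.allclose(A @ F, 0))   # True: every column of F solves A x = 0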