Matrix rank. Elementary transformations of matrix rows

Let a matrix \(A\) of size \(m \times n\) be given.

Let us select in this matrix \(k\) arbitrary rows and \(k\) arbitrary columns, \(k \le \min(m, n)\). The determinant of order \(k\) composed of the elements of \(A\) located at the intersections of the selected rows and columns is called a minor of order \(k\) of the matrix \(A\).

Definition 1.13. The rank of a matrix is the largest order of a non-zero minor of the matrix.

To calculate the rank of a matrix, one first considers all its minors of the lowest order and, if at least one of them is different from zero, proceeds to the minors of the next higher order. This approach to determining the rank of a matrix is called the bordering method (or the method of bordering minors).

Problem 1.4. Using the method of bordering minors, determine the rank of the matrix \(A\).

Consider first a first-order minor, for example a non-zero element of the matrix. Then pass to some second-order minor bordering it; it turns out to be non-zero as well.

Finally, let us analyze the third-order bordering minors: all of them are equal to zero.

So the highest order of a non-zero minor is 2, hence \(r(A) = 2\).

When solving Problem 1.4, one can notice that several of the second-order bordering minors are non-zero. In this connection the following concept is used.

Definition 1.14. A basis minor of a matrix is any non-zero minor whose order is equal to the rank of the matrix.

Theorem 1.2 (Basis minor theorem). The basis rows (basis columns) are linearly independent.

Note that the rows (columns) of a matrix are linearly dependent if and only if at least one of them can be represented as a linear combination of the others.

Theorem 1.3. The number of linearly independent matrix rows is equal to the number of linearly independent matrix columns and is equal to the rank of the matrix.

Theorem 1.4 (Necessary and sufficient condition for a determinant to equal zero). For a determinant of order \(n\) to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Calculating the rank of a matrix directly from the definition is too cumbersome, especially for matrices of high order. For this reason, in practice the rank of a matrix is calculated using Theorems 1.2–1.4 together with the concepts of matrix equivalence and elementary transformations.

Definition 1.15. Two matrices \(A\) and \(B\) are called equivalent if their ranks are equal, i.e. \(r(A) = r(B)\).

If the matrices \(A\) and \(B\) are equivalent, one writes \(A \sim B\).

Theorem 1.5. The rank of a matrix does not change under elementary transformations.

We will call elementary matrix transformations any of the following operations on a matrix:

replacing rows with columns and columns with the corresponding rows;

rearranging the rows of the matrix;

deleting a row all of whose elements are zero;

multiplying a row by a non-zero number;

adding to the elements of one row the corresponding elements of another row multiplied by some number.

Corollary of Theorem 1.5. If a matrix \(B\) is obtained from a matrix \(A\) by a finite number of elementary transformations, then the matrices \(A\) and \(B\) are equivalent.

When calculating the rank of a matrix, one reduces it to trapezoidal form by a finite number of elementary transformations.

Definition 1.16. We call a matrix trapezoidal if, in its highest-order non-zero bordering minor, all elements lying below the diagonal elements vanish. For example, if all elements \(a_{ij}\) with \(i > j\) are equal to zero, the matrix is in trapezoidal form.

As a rule, matrices are reduced to trapezoidal form using the Gaussian algorithm. The idea of the Gaussian algorithm is as follows: by adding the first row, multiplied by suitable factors, to the rows below it, one makes all elements of the first column located below the element \(a_{11}\) turn to zero. Then, by adding the second row with suitable factors to the rows below it, one makes all elements of the second column located below the element \(a_{22}\) turn to zero, and so on.
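To make the idea concrete, here is a minimal Python sketch of such a reduction (our own illustration, not part of the original text: the function name and the floating-point tolerance are assumptions, and the test matrix is borrowed from an example later in this article):

```python
def trapezoidal_rank(a, tol=1e-12):
    """Reduce a copy of `a` to trapezoidal (row echelon) form by Gaussian
    elimination and return the number of non-zero rows, i.e. the rank."""
    m = [row[:] for row in a]               # work on a copy
    rows, cols = len(m), len(m[0])
    rank = 0
    for col in range(cols):
        # find a row at or below position `rank` with a non-zero entry here
        pivot = next((r for r in range(rank, rows) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue                         # nothing to eliminate in this column
        m[rank], m[pivot] = m[pivot], m[rank]          # swap rows if needed
        for r in range(rank + 1, rows):
            factor = m[r][col] / m[rank][col]
            # add the pivot row times (-factor): zeros appear below the pivot
            m[r] = [x - factor * y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

print(trapezoidal_rank([[2, 1, 11, 2],
                        [1, 0, 4, -1],
                        [11, 4, 56, 5],
                        [2, -1, 5, -6]]))    # prints 2
```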

Problem 1.5. Determine the rank of a matrix by reducing it to trapezoidal form.

To make the Gaussian algorithm easier to apply, first swap the first and third rows.







After the elimination the matrix has trapezoidal form, and its rank can be read off directly. However, to bring the result to a more elegant form, one can continue by transforming the columns.









“If you want to learn to swim, then boldly enter the water; and if you want to learn to solve problems, then solve them.”

D. Pólya (1887–1985)

(A mathematician who made a great contribution to the popularization of mathematics; he wrote several books about how to solve problems and how to teach problem solving.)

Consider a matrix \(A\) of size \(m \times n\).

Let us select in it \(k\) rows and \(k\) columns (\(k \le \min(m, n)\)). From the elements located at the intersections of the selected rows and columns we compose a determinant of order \(k\). All such determinants are called minors of the matrix.

Let's consider all possible minors of the matrix A, different from zero.

The rank of the matrix A is the largest order of a non-zero minor of this matrix.

If all elements of a matrix are equal to zero, then the rank of the matrix is taken to be zero.

A minor whose order determines the rank of the matrix is called basic.

A matrix can have several basis minors.

The rank of the matrix A is denoted by r(A). If r(A) = r(B), then the matrices A and B are called equivalent; one writes A ∼ B.

Matrix rank properties:

  1. When a matrix is transposed, its rank does not change.
  2. If a zero row (column) is deleted from the matrix, the rank of the matrix does not change.
  3. The rank of a matrix does not change under elementary matrix transformations.

By elementary transformations we mean:

  • rearranging the rows of the matrix;
  • multiplying a row by a non-zero number;
  • adding to the elements of one row the corresponding elements of another row, multiplied by an arbitrary number.

When calculating the rank of a matrix, elementary transformations, the method of reducing the matrix to a stepwise form, and the method of bordering minors can be used.

The method of reducing a matrix to echelon form: the idea is that, with the help of elementary transformations, the given matrix is reduced to an echelon (step) matrix.

A matrix is called echelon (stepped) if in each of its rows the first non-zero element stands strictly to the right of the first non-zero element of the previous row (so that “steps” are formed; the height of each step is one row).

Examples of step matrices:

Examples of non-echelon matrices:
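The example matrices themselves are not reproduced here; a small Python predicate (our own illustration, with an assumed function name and numerical tolerance) expresses the definition exactly:

```python
def is_echelon(m, tol=1e-12):
    """Check the definition: in every row the first non-zero entry must lie
    strictly to the right of the leading entry of the previous row
    (zero rows may only appear at the bottom)."""
    prev = -1                      # column of the previous row's leading entry
    for row in m:
        lead = next((j for j, x in enumerate(row) if abs(x) > tol), None)
        if lead is None:
            prev = len(row)        # a zero row: no later row may have a leading entry
        else:
            if lead <= prev:
                return False
            prev = lead
    return True

print(is_echelon([[1, 2, 0], [0, 3, 1], [0, 0, 5]]))  # True
print(is_echelon([[1, 2, 0], [2, 3, 1], [0, 0, 5]]))  # False
```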

EXAMPLE: Find the rank of the matrix:

SOLUTION:

Let us reduce this matrix to echelon form using elementary transformations.

1. Swap the first and third rows.

2. We obtain zeros below the leading one in the first column.

By adding the first row multiplied by (−3) to the second row, the first row multiplied by (−5) to the third row, and the first row multiplied by (−3) to the fourth row, we get

To make it clearer where zeros are still needed, let us draw the steps in the matrix. (The matrix is in echelon form if there are zeros everywhere under the steps.)

3. By adding the second row multiplied by (−1) to the third row, and the second row multiplied by (−1) to the fourth row, we obtain zeros under the steps in the second column.

If we draw the steps again, we see that the matrix is in echelon form.

Its rank is r = 3 (the number of rows of the echelon matrix in each of which at least one element is different from zero). Therefore, the rank of the given matrix is r = 3.

The solution can be written like this:

(Roman numerals indicate row numbers)

Answer: r=3.

A minor of order k+1 containing within it a minor of order k is said to border the minor of order k.

The method of bordering minors is based on the fact that the rank of a matrix is equal to the order of a non-zero minor of the matrix all of whose bordering minors are equal to zero.

Determining the rank of a matrix

Consider a matrix \(A\) of type \((m,n)\). Let, for definiteness, \(m \leq n\). Take all \(m\) rows and choose \(m\) of the \(n\) columns of the matrix \(A\); at the intersection of these rows and columns we get a square matrix of order \(m\), whose determinant is called a minor of order \(m\) of the matrix \(A\). If this minor is different from 0, it is called a basis minor, and one says that the rank of the matrix \(A\) is equal to \(m\). If this determinant is equal to 0, then another set of \(m\) columns is chosen; at their intersection there are elements that form another minor of order \(m\). If this minor is 0, we continue the procedure. If among all possible minors of order \(m\) there are no non-zero ones, we select \(m-1\) rows and \(m-1\) columns of the matrix \(A\); at their intersection a square matrix of order \(m-1\) appears, whose determinant is called a minor of order \(m-1\) of the original matrix. Continuing the procedure, we look for a non-zero minor, going through all possible minors and lowering their order.

Definition.

The non-zero minor of the given matrix of the highest order is called a basis minor of the original matrix; its order is called the rank of the matrix \(A\); the rows and columns at whose intersection the basis minor stands are called basis rows and columns. The rank of a matrix is denoted by \(rang(A)\).

From this definition follow simple properties of the rank of a matrix: it is an integer, and the rank of a non-zero matrix satisfies the inequalities \(1 \leq rang(A) \leq \min(m,n)\).

How will the rank of the matrix change if a row is deleted? If a row is added?

1) The rank may decrease by 1 (or remain unchanged).

2) The rank may increase by 1 (or remain unchanged).

Linear dependence and linear independence of matrix columns

Let \(A\) be a matrix of type \((m,n)\). Consider the columns of the matrix \(A\) - these are columns of \(m\) numbers each. Let's denote them \(A_1,A_2,...,A_n\). Let \(c_1,c_2,...,c_n\) be some numbers.

Definition.

The column \[ D=c_1A_1+c_2A_2+...+c_nA_n = \sum_{k=1}^n c_kA_k \] is called a linear combination of the columns \(A_1,A_2,...,A_n\); the numbers \(c_1,c_2,...,c_n\) are called the coefficients of this linear combination.
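As a small aside (our own illustration, not part of the original text): a linear combination of the columns of \(A\) with coefficients \(c_1,\ldots,c_n\) is exactly the matrix-vector product \(Ac\), which is how one computes it in practice:

```python
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [2, 3]])
c = np.array([2.0, -1.0])

# the linear combination D = c_1*A_1 + c_2*A_2 of the columns of A
# is exactly the matrix-vector product A @ c
D = A @ c
print(D)   # [ 2. -1.  1.]
```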

Definition.

Let \(p\) columns \(A_1, A_2, ..., A_p\) be given. If there exist numbers \(c_1,c_2,...,c_p\) such that

1. not all of these numbers are equal to zero,

2. the linear combination \(c_1A_1+c_2A_2+...+c_pA_p =\sum_{k=1}^p c_kA_k\) is equal to the zero column (i.e. a column all of whose elements are zeros),

then we say that the columns \(A_1, A_2, ..., A_p\) are linearly dependent. If for a given set of columns such numbers \(c_1,c_2,...,c_p\) do not exist, the columns are called linearly independent.

Example. Consider the two columns

\[ A_1=\left(\begin{array}{c} 1 \\ 0 \end{array} \right), \quad A_2=\left(\begin{array}{c} 0 \\ 1 \end{array} \right); \] then for any numbers \(c_1,c_2\) we have: \[ c_1A_1+c_2A_2=c_1\left(\begin{array}{c} 1 \\ 0 \end{array} \right)+ c_2\left(\begin{array}{c} 0 \\ 1 \end{array} \right)=\left(\begin{array}{c} c_1 \\ c_2 \end{array} \right). \]

This linear combination is equal to the zero column if and only if both numbers \(c_1,c_2\) are equal to zero. Thus, these columns are linearly independent.

Statement. In order for the columns to be linearly dependent, it is necessary and sufficient that one of them is a linear combination of the others.

Let the columns \(A_1,A_2,...,A_m\) be linearly dependent, i.e. for some constants \(\lambda _1, \lambda _2,...,\lambda _m\), not all of which are equal to 0, the following holds: \[ \sum_{k=1}^m\lambda _kA_k=0 \] (on the right side is the zero column). Let, for example, \(\lambda _1 \neq 0\). Then \[ A_1=\sum_{k=2}^m c_kA_k, \quad c_k=-\lambda _k/\lambda _1, \quad \quad (15) \] i.e. the first column is a linear combination of the others.

The basis minor theorem

Theorem.

For any non-zero matrix \(A\) the following is true:

1. The basis columns are linearly independent.

2. Any matrix column is a linear combination of its basis columns.

(The same is true for rows.)

Let, for definiteness, \((m,n)\) be the type of the matrix \(A\), \(rang(A)=r \leq n\), and let the basis minor be located in the first \(r\) rows and columns of the matrix \(A\). Let \(s\) be any number between 1 and \(n\), and \(k\) any number between 1 and \(m\). Consider the minor of the following form: \[ D=\left| \begin{array}{ccccc} a_{11} & a_{12} & \ldots & a_{1r} & a_{1s} \\ a_{21} & a_{22} & \ldots & a_{2r} & a_{2s} \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{r1} & a_{r2} & \ldots & a_{rr} & a_{rs} \\ a_{k1} & a_{k2} & \ldots & a_{kr} & a_{ks} \end{array} \right| , \] i.e. we have adjoined the \(s\)-th column and the \(k\)-th row to the basis minor. By the definition of the rank of the matrix, this determinant is equal to zero (if we chose \(s\leq r\) or \(k \leq r\), then this minor has 2 identical columns or 2 identical rows; if \(s>r\) and \(k>r\), then, by the definition of rank, a minor of order greater than \(r\) vanishes). Expanding this determinant along the last row, we get: \[ a_{k1}A_{k1}+a_{k2}A_{k2}+\ldots+a_{kr}A_{kr}+a_{ks}A_{ks}=0. \quad \quad (16) \]

Here the numbers \(A_{kp}\) are the algebraic complements of the elements of the bottom row of \(D\). Their values do not depend on \(k\), because they are formed from elements of the first \(r\) rows. In this case \(A_{ks}\) is the basis minor, different from 0. Let us denote \(A_{k1}=c_1, A_{k2}=c_2, ..., A_{ks}=c_s \neq 0\). Let us rewrite (16) in the new notation: \[ c_1a_{k1}+c_2a_{k2}+...+c_ra_{kr}+c_sa_{ks}=0, \] or, dividing by \(c_s\), \[ a_{ks}=\lambda_1a_{k1}+\lambda_2a_{k2}+...+\lambda_ra_{kr}, \quad \lambda _p=-c_p/c_s. \] This equality is valid for any value of \(k\), so \[ a_{1s}=\lambda_1a_{11}+\lambda_2a_{12}+...+\lambda_ra_{1r}, \] \[ a_{2s}=\lambda_1a_{21}+\lambda_2a_{22}+...+\lambda_ra_{2r}, \] \[ \ldots \] \[ a_{ms}=\lambda_1a_{m1}+\lambda_2a_{m2}+...+\lambda_ra_{mr}. \] So the \(s\)-th column is a linear combination of the first \(r\) columns. The theorem is proven.

Comment.

From the basis minor theorem it follows that the rank of a matrix is ​​equal to the number of its linearly independent columns (which is equal to the number of linearly independent rows).

Corollary 1.

If the determinant is zero, then it has a column that is a linear combination of the other columns.

Corollary 2.

If the rank of a matrix is ​​less than the number of columns, then the columns of the matrix are linearly dependent.

Calculating the rank of a matrix and finding the basis minor

Some matrix transformations do not change its rank. Such transformations may be called elementary. The corresponding facts are easily verified using the properties of determinants and the definition of the rank of a matrix.

1. Rearrangement of columns.

2. Multiplying the elements of any column by a non-zero factor.

3. Adding any other column to a column, multiplied by an arbitrary number.

4. Crossing out the zero column.

The same is true for rows.

Using these transformations, the matrix can be transformed into the so-called "trapezoidal" form - a matrix with only zeros under the main diagonal. For a "trapezoidal" matrix, the rank is the number of non-zero elements on the main diagonal, and the basis minor is the minor whose diagonal coincides with the set of non-zero elements on the main diagonal of the transformed matrix.

Example. Consider the matrix

\[ A=\left(\begin{array}{cccc} 2 & 1 & 11 & 2 \\ 1 & 0 & 4 & -1 \\ 11 & 4 & 56 & 5 \\ 2 & -1 & 5 & -6 \end{array} \right). \]

Here we perform the following steps in sequence: 1) move the second row to the top; 2) subtract the first row, taken with a suitable factor, from each of the remaining rows; 3) subtract the second row, multiplied by 4, from the third row and add the second row to the fourth; 4) cross out the zero rows, the third and the fourth. The final matrix has acquired the desired shape: there are non-zero numbers on the main diagonal and zeros under the main diagonal. After this the procedure stops, and the number of non-zero elements on the main diagonal is equal to the rank of the matrix. The basis minor occupies the first two rows and the first two columns: at their intersection there is a matrix of order 2 with a non-zero determinant. At the same time, going back along the chain of transformations, one can trace where each row (or column) of the final matrix came from, i.e. determine the basis rows and columns of the original matrix. In this case, the first two rows and the first two columns form the basis minor.


The rank of a matrix is an important numerical characteristic. The most typical problem requiring the rank of a matrix is checking the consistency of a system of linear algebraic equations. In this article we give the concept of matrix rank and consider methods for finding it. To better understand the material, we analyze in detail the solutions of several examples.


Determination of the rank of a matrix and necessary additional concepts.

Before stating the definition of the rank of a matrix, you should have a good understanding of the concept of a minor, and finding the minors of a matrix requires the ability to calculate determinants. So, if necessary, we recommend recalling the methods for finding the determinant of a matrix and the properties of the determinant.

Let us take a matrix A of order p by n. Let k be some natural number not exceeding the smallest of the numbers p and n, that is, k ≤ min(p, n).

Definition.

A minor of order k of the matrix A is the determinant of a square matrix of order k composed of elements of the matrix A located in k pre-selected rows and k pre-selected columns, the arrangement of the elements of A being preserved.

In other words, if in the matrix A we delete (p–k) rows and (n–k) columns, and from the remaining elements compose a matrix preserving the arrangement of the elements of the matrix A, then the determinant of the resulting matrix is a minor of order k of the matrix A.
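A direct Python transcription of this definition might look as follows (a sketch: the helper name and the use of numpy for the determinant are our own choices; the sample matrix is the one from Example 3.4 further below):

```python
import numpy as np

def minor(a, rows, cols):
    """Order-k minor of `a`: the determinant of the submatrix formed by the
    given k rows and k columns (0-based indices); the arrangement of the
    elements is preserved."""
    return np.linalg.det(a[np.ix_(rows, cols)])

A = np.array([[1, 2, 1, 0],
              [0, 2, 2, 3],
              [1, 4, 3, 3]])
print(minor(A, [0, 1], [1, 2]))   # a second-order minor, equals 2.0
```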

Let's look at the definition of a matrix minor using an example.

Consider the matrix .

Let's write down several first-order minors of this matrix. For example, if we choose the third row and second column of matrix A, then our choice corresponds to a first-order minor . In other words, to obtain this minor, we crossed out the first and second rows, as well as the first, third and fourth columns from the matrix A, and made up a determinant from the remaining element. If we choose the first row and third column of matrix A, then we get a minor .

Let us illustrate the procedure for obtaining these first-order minors.

Thus, the first-order minors of a matrix are the matrix elements themselves.

Let's show several second-order minors. Select two rows and two columns. For example, take the first and second rows and the third and fourth columns. With this choice we have a second-order minor . This minor could also be created by deleting the third row, first and second columns from matrix A.

Another second-order minor of the matrix A is .

Let us illustrate the construction of these second-order minors.

Similarly, third-order minors of the matrix A can be found. Since there are only three rows in matrix A, we select them all. If we select the first three columns of these rows, we get a third-order minor

It can also be constructed by crossing out the last column of the matrix A.

Another third order minor is

obtained by deleting the third column of matrix A.

Here is a picture showing the construction of these third-order minors.

For the given matrix A there are no minors of order higher than the third, since min(3, 4) = 3.

How many minors of order k does a matrix A of order p by n have?

The number of minors of order k can be calculated as \(C_p^k \cdot C_n^k\), where \(C_p^k\) and \(C_n^k\) are the numbers of combinations of p things taken k at a time and of n things taken k at a time, respectively.

How to construct all minors of order k of matrix A of order p by n?

We will need the sets of row numbers and column numbers of the matrix. We write down all combinations of p elements taken k at a time (they correspond to the selected rows of the matrix A when constructing a minor of order k). To each combination of row numbers we successively adjoin every combination of n column numbers taken k at a time. These sets of combinations of row numbers and column numbers of the matrix A allow us to compose all minors of order k.
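This recipe translates directly into a few lines of Python (our own sketch; itertools.combinations generates exactly the combinations described above):

```python
import numpy as np
from itertools import combinations

def all_minors(a, k):
    """All order-k minors: one determinant per pair (row combination,
    column combination), as described above."""
    p, n = a.shape
    return [np.linalg.det(a[np.ix_(r, c)])
            for r in combinations(range(p), k)
            for c in combinations(range(n), k)]

A = np.array([[1, 2, 1],
              [0, 2, 2],
              [1, 4, 3]])
print(len(all_minors(A, 2)))   # C(3,2) * C(3,2) = 9
```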

Let's look at it with an example.

Example.

Find all second order minors of the matrix.

Solution.

Since the order of the original matrix is 3 by 3, the total number of second-order minors is \(C_3^2 \cdot C_3^2 = 9\).

Let us write down all combinations of 3 row numbers of the matrix A taken 2 at a time: 1, 2; 1, 3 and 2, 3. All combinations of 3 column numbers taken 2 at a time are likewise 1, 2; 1, 3 and 2, 3.

Let's take the first and second rows of matrix A. By selecting the first and second columns, the first and third columns, the second and third columns for these rows, we obtain the minors, respectively

For the first and third rows, with a similar choice of columns, we have

It remains to add the first and second, first and third, second and third columns to the second and third rows:

So, all nine second-order minors of matrix A have been found.

Now we can proceed to determining the rank of the matrix.

Definition.

Matrix rank is the highest order of the non-zero minor of the matrix.

The rank of the matrix A is denoted as Rank(A). One can also encounter the designations Rg(A) or Rang(A).

From the definitions of matrix rank and matrix minor we can conclude that the rank of a zero matrix is equal to zero and the rank of a non-zero matrix is not less than one.

Finding the rank of a matrix by definition.

So, the first method for finding the rank of a matrix is the method of enumerating minors. This method is based directly on the definition of the rank of a matrix.

Suppose we need to find the rank of a matrix A of order p by n.

Let us briefly describe an algorithm for solving this problem by enumerating minors.

If there is at least one element of the matrix that is different from zero, then the rank of the matrix is ​​at least equal to one (since there is a first-order minor that is not equal to zero).

Next we look at the second order minors. If all second-order minors are equal to zero, then the rank of the matrix is ​​equal to one. If there is at least one non-zero minor of the second order, then we proceed to enumerate the minors of the third order, and the rank of the matrix is ​​at least equal to two.

Similarly, if all third-order minors are zero, then the rank of the matrix is ​​two. If there is at least one third-order minor other than zero, then the rank of the matrix is ​​at least three, and we move on to enumerating fourth-order minors.

Note that the rank of the matrix cannot exceed the smallest of the numbers p and n.
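Put together as code, the algorithm just described might look like this (a sketch under the same assumptions as the snippets above; exact arithmetic is replaced by a floating-point tolerance):

```python
import numpy as np
from itertools import combinations

def rank_by_minors(a, tol=1e-9):
    """Rank by definition: the largest k such that some order-k minor is non-zero."""
    p, n = a.shape
    rank = 0
    for k in range(1, min(p, n) + 1):
        if not any(abs(np.linalg.det(a[np.ix_(r, c)])) > tol
                   for r in combinations(range(p), k)
                   for c in combinations(range(n), k)):
            break          # all order-k minors vanish, so the rank is k-1
        rank = k
    return rank

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [0, 1, 1]])
print(rank_by_minors(A))   # 2 (the second row is twice the first)
```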

Example.

Find the rank of the matrix .

Solution.

Since the matrix is ​​non-zero, its rank is not less than one.

A second-order minor is different from zero; therefore, the rank of the matrix A is at least two. We move on to enumerating the third-order minors and consider all of them.




All third-order minors are equal to zero. Therefore, the rank of the matrix is two.

Answer:

Rank(A) = 2 .

Finding the rank of a matrix using the method of bordering minors.

There are other methods for finding the rank of a matrix that allow you to obtain the result with less computational work.

One such method is the method of bordering minors.

Let us first deal with the concept of a bordering minor.

It is said that a minor \(M'\) of order (k+1) of the matrix A borders a minor \(M\) of order k of the matrix A if the matrix corresponding to the minor \(M'\) “contains” the matrix corresponding to the minor \(M\).

In other words, the matrix corresponding to the bordered minor \(M\) is obtained from the matrix corresponding to the bordering minor \(M'\) by deleting the elements of one row and one column.

For example, consider the matrix and take a second order minor. Let's write down all the bordering minors:

The method of bordering minors is justified by the following theorem (we present its formulation without proof).

Theorem.

If all minors bordering the kth order minor of a matrix A of order p by n are equal to zero, then all minors of order (k+1) of the matrix A are equal to zero.

Thus, to find the rank of a matrix it is not necessary to go through all the minors; it is sufficient to examine bordering minors. The number of minors bordering a minor of order k of a matrix A of order p by n is \((p-k)\cdot(n-k)\). Note that there are no more minors bordering a minor of order k of the matrix A than there are minors of order (k+1) of the matrix A. Therefore, in most cases using the method of bordering minors is more profitable than simply enumerating all the minors.

Let's move on to finding the rank of the matrix using the method of bordering minors. Let's briefly describe algorithm this method.

If the matrix A is non-zero, then as a first-order minor we take any element of the matrix A different from zero. Consider its bordering minors. If they are all equal to zero, then the rank of the matrix is equal to one. If there is at least one non-zero bordering minor (its order is two), then we proceed to consider its bordering minors. If they are all zero, then Rank(A) = 2. If at least one bordering minor is non-zero (its order is three), then we consider its bordering minors, and so on. As a result, Rank(A) = k if all bordering minors of order (k+1) of the matrix A are equal to zero, or Rank(A) = min(p, n) if there is a non-zero minor bordering a minor of order (min(p, n) – 1).
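The following Python sketch illustrates this algorithm (our own illustration: it keeps the current minor as a pair of index lists and stops as soon as every bordering minor vanishes; the names and tolerance are assumptions):

```python
import numpy as np

def rank_by_bordering(a, tol=1e-9):
    """Bordering-minors method: grow a non-zero minor one row and one column
    at a time; stop when no bordering minor is non-zero."""
    p, n = a.shape
    nonzero = [(i, j) for i in range(p) for j in range(n) if abs(a[i, j]) > tol]
    if not nonzero:
        return 0                         # the zero matrix has rank 0
    rows, cols = [nonzero[0][0]], [nonzero[0][1]]
    while len(rows) < min(p, n):
        found = False
        for i in set(range(p)) - set(rows):       # candidate extra row
            for j in set(range(n)) - set(cols):   # candidate extra column
                r, c = sorted(rows + [i]), sorted(cols + [j])
                if abs(np.linalg.det(a[np.ix_(r, c)])) > tol:
                    rows, cols, found = r, c, True
                    break
            if found:
                break
        if not found:                    # all bordering minors are zero
            break
    return len(rows)

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [0, 1, 1]])
print(rank_by_bordering(A))   # 2
```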

Let's look at the method of bordering minors to find the rank of a matrix using an example.

Example.

Find the rank of the matrix by the method of bordering minors.

Solution.

Since the element \(a_{11}\) of the matrix A is non-zero, we take it as a first-order minor. Let us start searching for a non-zero bordering minor:

A bordering minor of the second order, different from zero, has been found. Let us examine its bordering minors:

All minors bordering the second-order minor are equal to zero, therefore, the rank of matrix A is equal to two.

Answer:

Rank(A) = 2 .

Example.

Find the rank of the matrix using bordering minors.

Solution.

As a non-zero minor of the first order we take the element \(a_{11} = 1\) of the matrix A. A bordering minor of the second order is non-zero. This minor is in turn bordered by a third-order minor, which is also non-zero. Since this third-order minor is non-zero and there is not a single minor bordering it, the rank of the matrix A is equal to three.

Answer:

Rank(A) = 3 .

Finding the rank using elementary matrix transformations (Gauss method).

Let's consider another way to find the rank of a matrix.

The following matrix transformations are called elementary:

  • rearranging rows (or columns) of a matrix;
  • multiplying all elements of any row (column) of a matrix by an arbitrary number k, different from zero;
  • adding to the elements of a row (column) the corresponding elements of another row (column) of the matrix, multiplied by an arbitrary number k.

Matrix B is called equivalent to matrix A, if B is obtained from A using a finite number of elementary transformations. The equivalence of matrices is denoted by the symbol “~”, that is, written A ~ B.

Finding the rank of a matrix using elementary matrix transformations is based on the statement: if matrix B is obtained from matrix A using a finite number of elementary transformations, then Rank(A) = Rank(B) .

The validity of this statement follows from the properties of the determinant of the matrix:

  • When rearranging the rows (or columns) of a matrix, its determinant changes sign. If it is equal to zero, then when the rows (columns) are rearranged, it remains equal to zero.
  • When multiplying all elements of any row (column) of a matrix by an arbitrary number k other than zero, the determinant of the resulting matrix is ​​equal to the determinant of the original matrix multiplied by k. If the determinant of the original matrix is ​​equal to zero, then after multiplying all the elements of any row or column by the number k, the determinant of the resulting matrix will also be equal to zero.
  • Adding to the elements of a certain row (column) of a matrix the corresponding elements of another row (column) of the matrix, multiplied by a certain number k, does not change its determinant.

The essence of the method of elementary transformations consists in reducing the matrix whose rank we need to find to a trapezoidal one (in a particular case, to an upper triangular one) using elementary transformations.

Why do this? The rank of matrices of this type is very easy to find: it is equal to the number of rows containing at least one non-zero element. And since the rank of a matrix does not change under elementary transformations, the resulting value is the rank of the original matrix.

We give illustrations of matrices, one of which should be obtained after transformations. Their appearance depends on the order of the matrix.


These illustrations are templates to which we will transform the matrix A.

Let us describe the algorithm of the method.

Suppose we need to find the rank of a non-zero matrix A of order p by n (p can be equal to n).

So, let \(a_{11} \neq 0\) (the case \(a_{11} = 0\) is handled by the remark below). Multiply all elements of the first row of the matrix A by \(1/a_{11}\). We obtain an equivalent matrix; denote it \(A^{(1)}\):

To the elements of the second row of the resulting matrix \(A^{(1)}\) we add the corresponding elements of the first row multiplied by \(-a_{21}\). To the elements of the third row we add the corresponding elements of the first row multiplied by \(-a_{31}\). And so on, up to the p-th row. We obtain an equivalent matrix; denote it \(A^{(2)}\):

If all elements of the resulting matrix located in rows from the second to the p-th are equal to zero, then the rank of this matrix is ​​equal to one, and, consequently, the rank of the original matrix is ​​equal to one.

If among the rows from the second to the p-th there is at least one non-zero element, then we continue the transformations, acting in exactly the same way but now only with the part of the matrix \(A^{(2)}\) marked in the figure.

If the current pivot element equals zero, then we rearrange the rows and (or) columns of the matrix \(A^{(2)}\) so that the “new” pivot element becomes non-zero.
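In practice, once the method is understood, the rank is usually computed by a library routine; for example, numpy's matrix_rank (which relies on a singular value decomposition rather than Gaussian elimination) can serve as a cross-check of the hand computation (the matrix below is the one from the trapezoidal-form example above):

```python
import numpy as np

A = np.array([[2, 1, 11, 2],
              [1, 0, 4, -1],
              [11, 4, 56, 5],
              [2, -1, 5, -6]])
print(np.linalg.matrix_rank(A))   # 2, matching the hand elimination
```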


Let A be a matrix of size m\times n and k a natural number not exceeding m and n: k\leqslant\min\{m;n\}. A minor of order k of the matrix A is the determinant of a matrix of order k formed by the elements at the intersection of arbitrarily chosen k rows and k columns of the matrix A. When denoting minors, we indicate the numbers of the selected rows as superscripts and the numbers of the selected columns as subscripts, arranging both in ascending order.


Example 3.4. Write minors of different orders of the matrix


A=\begin{pmatrix}1&2&1&0\\ 0&2&2&3\\ 1&4&3&3\end{pmatrix}\!.


Solution. The matrix A has size 3\times4. It has: 12 minors of the 1st order, for example, the minor M_{2}^{3}=\det(a_{32})=4; 18 minors of the 2nd order, for example, M_{23}^{12}=\begin{vmatrix}2&1\\2&2\end{vmatrix}=2; 4 minors of the 3rd order, for example,


M_{134}^{123}= \begin{vmatrix}1&1&0\\0&2&3\\ 1&3&3 \end{vmatrix}=0.

In a matrix A of size m\times n, a minor of order r is called basic if it is non-zero and all minors of order (r+1) are equal to zero or do not exist at all.


The rank of a matrix is the order of its basis minor. A zero matrix has no basis minor; therefore, the rank of a zero matrix is, by definition, equal to zero. The rank of the matrix A is denoted by \operatorname{rg}A.


Example 3.5. Find all basis minors and matrix rank


A=\begin{pmatrix}1&2&2&0\\0&2&2&3\\0&0&0&0\end{pmatrix}\!.


Solution. All third-order minors of this matrix are equal to zero, since these determinants have a zero third row. Therefore, only a second-order minor located in the first two rows of the matrix can be basic. Going through the 6 possible minors, we select those that are non-zero:


M_{12}^{12}= M_{13}^{12}= \begin{vmatrix}1&2\\0&2\end{vmatrix}\!,\quad M_{24}^{12}= M_{34}^{12}= \begin{vmatrix}2&0\\2&3\end{vmatrix}\!,\quad M_{14}^{12}= \begin{vmatrix}1&0\\0&3\end{vmatrix}\!.


Each of these five minors is basic. Therefore, the rank of the matrix is 2.
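The enumeration in Example 3.5 is easy to verify programmatically (our own sketch; the rows are fixed to the first two, as argued above):

```python
import numpy as np
from itertools import combinations

A = np.array([[1, 2, 2, 0],
              [0, 2, 2, 3],
              [0, 0, 0, 0]])
# only minors in the first two rows can be basic; go through all 6 column pairs
for c in combinations(range(4), 2):
    print(c, np.linalg.det(A[np.ix_([0, 1], c)]))
# five of the six second-order minors are non-zero, so rg A = 2
```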

Notes 3.2


1. If in a matrix all minors of order k are equal to zero, then all minors of higher order are also equal to zero. Indeed, expanding a minor of order (k+1) along any row, we obtain the sum of the products of the elements of this row by minors of order k, and they are equal to zero.


2. The rank of a matrix is ​​equal to the highest order of the nonzero minor of this matrix.


3. If a square matrix is ​​non-singular, then its rank is equal to its order. If a square matrix is ​​singular, then its rank is less than its order.


4. The notations \operatorname{Rg}A,~ \operatorname{rang}A,~ \operatorname{rank}A are also used for the rank.


5. The rank of a block matrix is defined as the rank of an ordinary (numerical) matrix, i.e. regardless of its block structure. In this case the rank of a block matrix is not less than the ranks of its blocks: \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}A and \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}B, since all minors of the matrix A (or B) are also minors of the block matrix (A\mid B).

Theorems on the basis minor and the rank of the matrix

Let us consider the main theorems expressing the properties of linear dependence and linear independence of columns (rows) of a matrix.


Theorem 3.1 on the basis minor. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.


Indeed, without loss of generality, we assume that in a matrix A of size m\times n the basis minor is located in the first r rows and first r columns. Consider the determinant


D=\begin{vmatrix} a_{11}&\cdots&a_{1r}&a_{1k}\\ \vdots&\ddots&\vdots&\vdots\\ a_{r1}&\cdots&a_{rr}&a_{rk}\\ a_{s1}&\cdots&a_{sr}&a_{sk} \end{vmatrix},


which is obtained by adjoining to the basis minor the corresponding elements of the s-th row and k-th column of the matrix A. Note that for any 1\leqslant s\leqslant m and 1\leqslant k\leqslant n this determinant is equal to zero. If s\leqslant r or k\leqslant r, then the determinant D contains two identical rows or two identical columns. If s>r and k>r, then the determinant D is equal to zero, since it is a minor of order (r+1). Expanding the determinant along the last row, we get


a_{s1}\cdot D_{r+1\,1}+\ldots+ a_{sr}\cdot D_{r+1\,r}+a_{sk}\cdot D_{r+1\,r+1}=0,


where D_{r+1\,j} are the algebraic complements of the elements of the last row. Note that D_{r+1\,r+1}\ne0, since this is the basis minor. Therefore


a_{sk}=\lambda_1\cdot a_{s1}+\ldots+\lambda_r\cdot a_{sr}, \quad\text{where}\quad \lambda_j=-\frac{D_{r+1\,j}}{D_{r+1\,r+1}},~j=1,2,\ldots,r.


Writing the last equality for s=1,2,\ldots,m, we get

\begin{pmatrix}a_{1k}\\\vdots\\a_{mk}\end{pmatrix}= \lambda_1\cdot\! \begin{pmatrix}a_{11}\\\vdots\\a_{m1}\end{pmatrix}+\ldots+ \lambda_r\cdot\! \begin{pmatrix}a_{1r}\\\vdots\\a_{mr}\end{pmatrix}\!,


i.e. the k-th column (for any 1\leqslant k\leqslant n) is a linear combination of the columns of the basis minor, which is what we needed to prove.


The basis minor theorem serves to prove the following important theorems.

Condition for the determinant to be zero

Theorem 3.2 (necessary and sufficient condition for the determinant to be zero). In order for a determinant to be equal to zero, it is necessary and sufficient that one of its columns (one of its rows) be a linear combination of the remaining columns (rows).


Indeed, necessity follows from the basis minor theorem. If the determinant of a square matrix of order n is equal to zero, then its rank is less than n, i.e. at least one column is not included in the basis minor. Then this column, by Theorem 3.1, is a linear combination of the columns in which the basis minor is located. By adding, if necessary, other columns with zero coefficients to this combination, we obtain that the selected column is a linear combination of the remaining columns of the matrix. Sufficiency follows from the properties of the determinant. If, for example, the last column A_n of the determinant \det(A_1~A_2~\cdots~A_n) is linearly expressed through the rest:


A_n=\lambda_1\cdot A_1+\lambda_2\cdot A_2+\ldots+\lambda_{n-1}\cdot A_{n-1},


then, adding to A_n the column A_1 multiplied by (-\lambda_1), then the column A_2 multiplied by (-\lambda_2), and so on up to the column A_{n-1} multiplied by (-\lambda_{n-1}), we obtain the determinant \det(A_1~\cdots~A_{n-1}~o) with a zero column, which is equal to zero (property 2 of the determinant).

Invariance of matrix rank under elementary transformations

Theorem 3.3 (on the invariance of rank under elementary transformations). During elementary transformations of the columns (rows) of a matrix, its rank does not change.


Indeed, let \operatorname{rg}A=r. Suppose that as a result of one elementary transformation of the columns of the matrix A we obtained the matrix A'. If a transformation of type I was performed (permutation of two columns), then any minor of order (r+1) of the matrix A' is either equal to the corresponding minor of order (r+1) of the matrix A or differs from it in sign (property 3 of the determinant). If a transformation of type II was performed (multiplication of a column by a number \lambda\ne0), then any minor of order (r+1) of the matrix A' is either equal to the corresponding minor of order (r+1) of the matrix A or differs from it by the factor \lambda\ne0 (property 6 of the determinant). If a transformation of type III was performed (adding to one column another column multiplied by a number \Lambda), then any minor of order (r+1) of the matrix A' is either equal to the corresponding minor of order (r+1) of the matrix A (property 9 of the determinant) or equal to the sum of two minors of order (r+1) of the matrix A (property 8 of the determinant). Therefore, under an elementary transformation of any type all minors of order (r+1) of the matrix A' are equal to zero, since all minors of order (r+1) of the matrix A are equal to zero. Thus it is proved that under elementary transformations of the columns the rank of a matrix cannot increase. Since transformations inverse to elementary ones are elementary, the rank of a matrix cannot decrease under elementary transformations of the columns either, i.e. it does not change. It is proved similarly that the rank of a matrix does not change under elementary transformations of the rows.


Corollary 1. If one row (column) of a matrix is ​​a linear combination of its other rows (columns), then this row (column) can be deleted from the matrix without changing its rank.


Indeed, such a row can be made zero using elementary transformations, and a zero row cannot be part of the basis minor.


Corollary 2. If the matrix is ​​reduced to the simplest form (1.7), then


\operatorname{rg}A=\operatorname{rg}\Lambda=r\,.


Indeed, the matrix of the simplest form (1.7) has a basis minor of the rth order.


Corollary 3. Any non-singular square matrix is ​​elementary, in other words, any non-singular square matrix is ​​equivalent to an identity matrix of the same order.


Indeed, if A is a non-singular square matrix of order n, then \operatorname{rg}A=n (see paragraph 3 of Remarks 3.2). Therefore, bringing the matrix A to the simplest form (1.7) by elementary transformations, we obtain the identity matrix \Lambda=E_n, since \operatorname{rg}A=\operatorname{rg}\Lambda=n (see Corollary 2). Consequently, the matrix A is equivalent to the identity matrix E_n and can be obtained from it as a result of a finite number of elementary transformations. This means that the matrix A is elementary.

Theorem 3.4 (about the rank of the matrix). The rank of a matrix is ​​equal to the maximum number of linearly independent rows of this matrix.


In fact, let \operatorname{rg}A=r. Then the matrix A has r linearly independent rows: the rows in which the basis minor is located. If they were linearly dependent, then this minor would be equal to zero by Theorem 3.2, and the rank of the matrix A would not be equal to r. Let us show that r is the maximum number of linearly independent rows, i.e. any p rows are linearly dependent for p>r. Indeed, form the matrix B from these p rows. Since the matrix B is part of the matrix A, \operatorname{rg}B\leqslant \operatorname{rg}A=r<p.

This means that at least one row of matrix B is not included in the basis minor of this matrix. Then, by the basis minor theorem, it is equal to a linear combination of the rows in which the basis minor is located. Therefore, the rows of matrix B are linearly dependent. Thus, the matrix A has at most r linearly independent rows.


Corollary 1. The maximum number of linearly independent rows in a matrix is ​​equal to the maximum number of linearly independent columns:


\operatorname{rg}A=\operatorname{rg}A^T.


This statement follows from Theorem 3.4 if we apply it to the rows of a transposed matrix and take into account that the minors do not change during transposition (property 1 of the determinant).


Corollary 2. During elementary transformations of the rows of a matrix, the linear dependence (or linear independence) of any system of columns of this matrix is ​​preserved.


In fact, let us choose any k columns of the matrix A and compose the matrix B from them. Suppose that as a result of elementary transformations of the rows of the matrix A the matrix A' was obtained, and as a result of the same transformations of the rows of the matrix B the matrix B' was obtained. By Theorem 3.3, \operatorname{rg}B'=\operatorname{rg}B. Therefore, if the columns of the matrix B were linearly independent, i.e. k=\operatorname{rg}B (see Corollary 1), then the columns of the matrix B' are also linearly independent, since k=\operatorname{rg}B'. If the columns of the matrix B were linearly dependent (k>\operatorname{rg}B), then the columns of the matrix B' are also linearly dependent (k>\operatorname{rg}B'). Consequently, for any columns of the matrix A, linear dependence or linear independence is preserved under elementary row transformations.


Notes 3.3


1. By Corollary 1 of Theorem 3.4, the property of columns indicated in Corollary 2 is also true for any system of matrix rows if elementary transformations are performed only on its columns.


2. Corollary 3 of Theorem 3.3 can be refined as follows: any non-singular square matrix, using elementary transformations of only its rows (or only its columns), can be reduced to an identity matrix of the same order.


In fact, using only elementary row transformations, any matrix A can be reduced to the simplified form \Lambda (Fig. 1.5) (see Theorem 1.1). Since the matrix A is non-singular (\det(A)\ne0), its columns are linearly independent. This means that the columns of the matrix \Lambda are also linearly independent (Corollary 2 of Theorem 3.4). Therefore, the simplified form \Lambda of a non-singular matrix A coincides with its simplest form (Fig. 1.6) and is the identity matrix \Lambda=E (see Corollary 3 of Theorem 3.3). Thus, by transforming only the rows of a non-singular matrix, it can be reduced to the identity matrix. Similar reasoning is valid for elementary transformations of the columns of a non-singular matrix.

Rank of product and sum of matrices

Theorem 3.5 (on the rank of the product of matrices). The rank of the product of matrices does not exceed the rank of factors:


\operatorname{rg}(A\cdot B)\leqslant \min\{\operatorname{rg}A,\,\operatorname{rg}B\}.


Indeed, let the matrices A and B have sizes m\times p and p\times n. Append to the matrix A the matrix C=AB, forming the block matrix (A\mid C). Of course, \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C), since C is part of the matrix (A\mid C) (see paragraph 5 of Remarks 3.2). Note that each column C_j, by the rule of matrix multiplication, is a linear combination of the columns A_1,A_2,\ldots,A_p of the matrix A=(A_1~\cdots~A_p):


C_{j}=A_1\cdot b_{1j}+A_2\cdot b_{2j}+\ldots+A_{p}\cdot b_{pj},\quad j=1,2,\ldots,n.


Such a column can be deleted from the matrix (A\mid C) without changing its rank (Corollary 1 of Theorem 3.3). Crossing out all the columns of the matrix C, we get \operatorname{rg}(A\mid C)=\operatorname{rg}A. Hence \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C)=\operatorname{rg}A. Similarly one can prove that \operatorname{rg}C\leqslant\operatorname{rg}B, and conclude that the theorem is valid.


Corollary. If A is a non-singular square matrix, then \operatorname{rg}(AB)=\operatorname{rg}B and \operatorname{rg}(CA)=\operatorname{rg}C, i.e. the rank of a matrix does not change when it is multiplied on the left or right by a non-singular square matrix.


Theorem 3.6 (on the rank of a sum of matrices). The rank of a sum of matrices does not exceed the sum of the ranks of the summands:


\operatorname{rg}(A+B)\leqslant \operatorname{rg}A+\operatorname{rg}B.


Indeed, form the matrix (A+B\mid A\mid B). Note that each column of the matrix A+B is a linear combination of the columns of the matrices A and B. Therefore \operatorname{rg}(A+B\mid A\mid B)= \operatorname{rg}(A\mid B). Considering that the number of linearly independent columns in the matrix (A\mid B) does not exceed \operatorname{rg}A+\operatorname{rg}B, and \operatorname{rg}(A+B)\leqslant \operatorname{rg}(A+B\mid A\mid B) (see paragraph 5 of Remarks 3.2), we obtain the required inequality.
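A quick numerical sanity check of Theorems 3.5 and 3.6 (our own illustration with random integer matrices; the seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 3))
C = rng.integers(-3, 4, size=(4, 5))

rg = np.linalg.matrix_rank
assert rg(A @ B) <= min(rg(A), rg(B))   # Theorem 3.5
assert rg(A + C) <= rg(A) + rg(C)       # Theorem 3.6
print(rg(A), rg(B), rg(A @ B), rg(A + C))
```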