
Kronecker-Capelli theorem (consistency criterion for a system of linear algebraic equations):

A system of linear equations is consistent if and only if the rank of the augmented matrix of the system equals the rank of its coefficient (main) matrix.

Proof (of the consistency criterion)

Necessity

Let the system \(Ax = b\) be consistent. Then there exist numbers \(x_1, x_2, \dots, x_n\) such that \(b = x_1 A_1 + x_2 A_2 + \dots + x_n A_n\), where \(A_1, \dots, A_n\) are the columns of the matrix \(A\); that is, the column of free terms is a linear combination of the columns of \(A\). Since the rank of a matrix does not change if a row (column) that is a linear combination of its other rows (columns) is deleted from or appended to it, it follows that the rank of the coefficient matrix equals the rank of the augmented matrix.

Sufficiency

Let the rank of the coefficient matrix equal the rank of the augmented matrix. Take some basis minor of the coefficient matrix \(A\). Since the ranks are equal, it is also a basis minor of the augmented matrix. Then, by the basis minor theorem, the last column of the augmented matrix (the column of free terms) is a linear combination of the basis columns, which are columns of \(A\). Therefore, the column of free terms of the system is a linear combination of the columns of the matrix \(A\), and the system is consistent.

Corollaries

    The number of basic (leading) variables of the system equals the rank of the system.

    A consistent system is determined (its solution is unique) if the rank of the system equals the number of its variables.
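To make the criterion concrete, here is a minimal Python sketch (the system is a made-up example, and numpy's matrix_rank is used in place of hand computation): it compares the rank of the coefficient matrix with the rank of the augmented matrix.

```python
import numpy as np

# Hypothetical system (made up for this illustration):
#   x + 2y + 3z = 6
#  2x + 4y + 6z = 12   <- proportional to the first equation
#   x -  y +  z = 1
A = np.array([[1.0,  2.0, 3.0],
              [2.0,  4.0, 6.0],
              [1.0, -1.0, 1.0]])
b = np.array([6.0, 12.0, 1.0])

augmented = np.column_stack([A, b])          # the augmented matrix (A | b)

rank_A   = np.linalg.matrix_rank(A)          # rank of the coefficient matrix
rank_aug = np.linalg.matrix_rank(augmented)  # rank of the augmented matrix

if rank_A == rank_aug:
    if rank_A == A.shape[1]:
        print("consistent, unique solution")
    else:
        print("consistent, infinitely many solutions")   # printed here
else:
    print("inconsistent")
```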

Homogeneous system of equations

Proposition 15.2. A homogeneous system of equations

is always consistent.

Proof. The set of numbers \(x_1 = 0,\ x_2 = 0,\ \dots,\ x_n = 0\) is a solution of this system.

In this section we will use the matrix notation of the system: \(Ax = 0\).

Proposition 15.3. The sum of two solutions of a homogeneous system of linear equations is again a solution of this system. A solution multiplied by a number is also a solution.

Proof. Let \(x^{(1)}\) and \(x^{(2)}\) be solutions of the system, so that \(Ax^{(1)} = 0\) and \(Ax^{(2)} = 0\). Let \(y = x^{(1)} + x^{(2)}\). Then

\[ Ay = A(x^{(1)} + x^{(2)}) = Ax^{(1)} + Ax^{(2)} = 0 + 0 = 0. \]

Since \(Ay = 0\), the column \(y\) is a solution.

Now let \(\alpha\) be an arbitrary number and \(y = \alpha x^{(1)}\). Then

\[ Ay = A(\alpha x^{(1)}) = \alpha\, Ax^{(1)} = \alpha \cdot 0 = 0, \]

so again \(y\) is a solution.

Corollary 15.1. If a homogeneous system of linear equations has a nonzero solution, then it has infinitely many distinct solutions.

Indeed, multiplying a nonzero solution by different numbers, we obtain different solutions.

Definition 15.5. We say that solutions of the system form a fundamental system of solutions if these columns form a linearly independent system and every solution of the system is a linear combination of these columns.
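Numerically, a fundamental system of solutions is a basis of the null space of the coefficient matrix. A minimal sketch under that reading (the matrix is a made-up example and the helper name is mine): the rows of the SVD factor \(V^T\) that correspond to zero singular values give such a basis.

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Columns of the result form a basis of the null space of A,
    i.e. a fundamental system of solutions of A x = 0."""
    A = np.asarray(A, dtype=float)
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    # Rows of vt beyond the rank correspond to (numerically) zero
    # singular values; transposed, they solve A x = 0.
    return vt[rank:].T

# Made-up homogeneous system: rank 2, so 3 - 2 = 1 free parameter.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
N = null_space_basis(A)
print(N.shape)                # (3, 1): one fundamental solution
print(np.allclose(A @ N, 0))  # True: the basis column solves A x = 0
```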


Matrix rank

Determining the rank of a matrix

Consider a rectangular matrix A with m rows and n columns. If we arbitrarily select k rows and k columns of this matrix, the elements standing at the intersections of the selected rows and columns form a square matrix of order k. The determinant of this matrix is called a minor of order k of the matrix A. Obviously, A has minors of every order from 1 up to the smaller of the numbers m and n. Among all nonzero minors of the matrix A there is at least one whose order is greatest. The greatest of the orders of the nonzero minors of a given matrix is called the rank of the matrix. If the rank of the matrix A is r, this means that A has a nonzero minor of order r, while every minor of order greater than r equals zero. The rank of the matrix A is denoted by r(A). Obviously, the relation 0 ≤ r(A) ≤ min(m, n) holds.

Calculating the rank of a matrix using minors

The rank of a matrix is found either by the method of bordering minors or by the method of elementary transformations. When calculating the rank by the first method, one moves from minors of lower order to minors of higher order. If a nonzero minor D of order k of the matrix A has already been found, then only the minors of order (k+1) bordering D need to be calculated, i.e. those containing D as a submatrix. If they are all equal to zero, then the rank of the matrix equals k.

Example 1. Find the rank of the matrix using the method of bordering minors

.

Solution. We start with minors of order 1, i.e. with the elements of the matrix A. Let us choose, for example, the minor (element) M1 = 1 located in the first row and first column. Bordering it with the help of the second row and the third column, we obtain a nonzero minor M2. We now turn to the minors of order 3 that border M2. There are only two of them (one can adjoin either the second or the fourth column). Calculating them, we find that both equal zero. Thus, all bordering minors of the third order turned out to be zero, and the rank of the matrix A is two.

Calculating the rank of a matrix using elementary transformations

The following matrix transformations are called elementary:

1) permutation of any two rows (or columns),

2) multiplying a row (or column) by a non-zero number,

3) adding to one row (or column) another row (or column), multiplied by a certain number.

Two matrices are called equivalent if one of them can be obtained from the other by a finite sequence of elementary transformations.

Equivalent matrices are, generally speaking, not equal, but their ranks are equal. If matrices A and B are equivalent, this is written A ~ B.

A canonical matrix is a matrix in which the main diagonal begins with several consecutive ones (their number may be zero) and all other elements are equal to zero, for example,

.

Using elementary transformations of rows and columns, any matrix can be reduced to canonical form. The rank of a canonical matrix is equal to the number of ones on its main diagonal.
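A rough Python sketch of this reduction (the function and the sample matrix are my own illustration, not the matrix of Example 2 below): row and column operations produce the canonical form, and the rank is the number of ones on the diagonal.

```python
import numpy as np

def canonical_form(A, tol=1e-12):
    """Reduce A to canonical form (ones at the start of the main
    diagonal, zeros elsewhere) by elementary row AND column operations."""
    M = np.asarray(A, dtype=float).copy()
    m, n = M.shape
    r = 0                                    # number of pivots found so far
    for _ in range(min(m, n)):
        # find a nonzero element in the untouched lower-right block
        rows, cols = np.nonzero(np.abs(M[r:, r:]) > tol)
        if rows.size == 0:
            break
        i, j = rows[0] + r, cols[0] + r
        M[[r, i]] = M[[i, r]]                # swap rows
        M[:, [r, j]] = M[:, [j, r]]          # swap columns
        M[r] /= M[r, r]                      # make the pivot equal to 1
        for k in range(m):                   # clear the pivot column (row ops)
            if k != r:
                M[k] -= M[k, r] * M[r]
        for k in range(n):                   # clear the pivot row (column ops)
            if k != r:
                M[:, k] -= M[r, k] * M[:, r]
        r += 1
    return M, r                              # canonical matrix and the rank

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])               # made-up example
C, rank = canonical_form(A)
print(rank)   # 2
print(C)      # ones in positions (0,0) and (1,1), zeros elsewhere
```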

Example 2. Find the rank of the matrix

A=

and bring it to canonical form.

Solution. Subtract the first row from the second and interchange these rows:

.

Now subtract from the second and third rows the first row multiplied by 2 and 5, respectively:

;

then subtract the first row from the third; we obtain the matrix

B = ,

which is equivalent to the matrix A, since it is obtained from A by a finite sequence of elementary transformations. Obviously, the rank of B is 2, and therefore r(A)=2. The matrix B is easily reduced to canonical form. Subtracting the first column, multiplied by suitable numbers, from all subsequent columns, we make all elements of the first row except the first equal to zero; the elements of the remaining rows do not change. Then, subtracting the second column, multiplied by suitable numbers, from all subsequent columns, we make all elements of the second row except the second equal to zero, and obtain the canonical matrix:

.

Determining the rank of a matrix

Consider a matrix \(A\) of type \((m,n)\). For definiteness, let \(m \leq n\). Take the \(m\) rows and choose \(m\) columns of the matrix \(A\); at the intersection of these rows and columns we get a square matrix of order \(m\), whose determinant is called a minor of order \(m\) of the matrix \(A\). If this minor is different from 0, it is called a basis minor, and we say that the rank of the matrix \(A\) equals \(m\). If this determinant equals 0, we choose another \(m\) columns; at their intersection stand elements forming another minor of order \(m\). If that minor is 0, we continue the procedure. If all possible minors of order \(m\) are zero, we select \(m-1\) rows and \(m-1\) columns of \(A\); at their intersection stands a square matrix of order \(m-1\), whose determinant is called a minor of order \(m-1\) of the original matrix. Continuing the procedure, we search for a nonzero minor, going through all possible minors and lowering their order.

Definition.

A nonzero minor of the given matrix of the highest possible order is called a basis minor of the original matrix; its order is called the rank of the matrix \(A\); the rows and columns at whose intersection a basis minor stands are called basis rows and basis columns. The rank of a matrix is denoted by \(rang(A)\).

Simple properties of the rank follow from this definition: it is an integer, and the rank of a nonzero matrix satisfies the inequalities \(1 \leq rang(A) \leq \min(m,n)\).

How can the rank of a matrix change if a row is deleted? If a row is added?

Answer:

1) Rank may decrease by 1.

2) Rank can increase by 1.

Linear dependence and linear independence of matrix columns

Let \(A\) be a matrix of type \((m,n)\). Consider the columns of the matrix \(A\) - these are columns of \(m\) numbers each. Let's denote them \(A_1,A_2,...,A_n\). Let \(c_1,c_2,...,c_n\) be some numbers.

Definition.

The column \[ D=c_1A_1+c_2A_2+...+c_nA_n = \sum_{k=1}^n c_kA_k \] is called a linear combination of the columns \(A_1,A_2,...,A_n\); the numbers \(c_1,c_2,...,c_n\) are called the coefficients of this linear combination.

Definition.

Let \(p\) columns \(A_1, A_2, ..., A_p\) be given. If there are numbers \(c_1,c_2,...,c_p\) such that

1. not all these numbers are equal to zero,

2. the linear combination \(c_1A_1+c_2A_2+...+c_pA_p =\sum_{k=1}^p c_kA_k\) is equal to the zero column (i.e. a column all of whose elements are zeros), then we say that the columns \(A_1, A_2, ..., A_p\) are linearly dependent. If no such numbers \(c_1,c_2,...,c_p\) exist for this set of columns, the columns are called linearly independent.

Example. Consider the two columns

\[ A_1=\left(\begin{array}{c} 1 \\ 0 \end{array} \right), \quad A_2=\left(\begin{array}{c} 0 \\ 1 \end{array} \right), \] then for any numbers \(c_1,c_2\) we have: \[ c_1A_1+c_2A_2=c_1\left(\begin{array}{c} 1 \\ 0 \end{array} \right)+ c_2\left(\begin{array}{c} 0 \\ 1 \end{array} \right)=\left(\begin{array}{c} c_1 \\ c_2 \end{array} \right). \]

This linear combination is equal to the zero column if and only if both numbers \(c_1,c_2\) are equal to zero. Thus, these columns are linearly independent.

Statement. In order for the columns to be linearly dependent, it is necessary and sufficient that one of them is a linear combination of the others.

Proof. Let the columns \(A_1,A_2,...,A_m\) be linearly dependent, i.e. for some constants \(\lambda_1, \lambda_2,...,\lambda_m\), not all of which equal 0, we have \[ \sum_{k=1}^m\lambda_k A_k=0 \] (the right-hand side is the zero column). Let, for example, \(\lambda_1 \neq 0\). Then \[ A_1=\sum_{k=2}^m c_kA_k, \quad c_k=-\lambda_k/\lambda_1, \quad \quad (15) \] i.e. the first column is a linear combination of the others.
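A quick numerical way to test such dependence (a made-up example; it uses the fact, stated in Corollary 2 below, that columns are linearly dependent exactly when the rank of the matrix is smaller than the number of columns):

```python
import numpy as np

def columns_linearly_dependent(A):
    """Columns are linearly dependent iff rank < number of columns."""
    A = np.asarray(A, dtype=float)
    return np.linalg.matrix_rank(A) < A.shape[1]

# Third column equals the first plus the second -> dependent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])
print(columns_linearly_dependent(A))   # True

# The two identity columns from the example above -> independent.
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(columns_linearly_dependent(B))   # False
```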

The basis minor theorem

Theorem.

For any non-zero matrix \(A\) the following is true:

1. The basis columns are linearly independent.

2. Any matrix column is a linear combination of its basis columns.

(The same is true for rows.)

Proof. Let, for definiteness, \((m,n)\) be the type of the matrix \(A\), let \(rang(A)=r \leq n\), and let the basis minor be located in the first \(r\) rows and columns of \(A\). Let \(k\) be any number between 1 and \(m\), and \(s\) any number between 1 and \(n\). Consider a minor of the following form: \[ D=\left| \begin{array}{ccccc} a_{11} & a_{12} & \ldots & a_{1r} & a_{1s} \\ a_{21} & a_{22} & \ldots & a_{2r} & a_{2s} \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{r1} & a_{r2} & \ldots & a_{rr} & a_{rs} \\ a_{k1} & a_{k2} & \ldots & a_{kr} & a_{ks} \end{array} \right| , \] i.e. we have adjoined the \(s\)-th column and the \(k\)-th row to the basis minor. This determinant is equal to zero: if \(s\leq r\) or \(k \leq r\), then the minor contains two identical columns or two identical rows; if \(s>r\) and \(k>r\), then by the definition of rank every minor of order greater than \(r\) vanishes. Expanding this determinant along its last row, we get: \[ a_{k1}A_{k1}+a_{k2}A_{k2}+\ldots+a_{kr}A_{kr}+a_{ks}A_{ks}=0. \quad \quad (16) \]

Here the numbers \(A_{kp}\) are the algebraic complements (cofactors) of the elements of the bottom row of \(D\). Their values do not depend on \(k\), because they are formed from elements of the first \(r\) rows. Moreover, \(A_{ks}\) equals the basis minor and is therefore different from 0. Let us denote \(A_{k1}=c_1, A_{k2}=c_2, \ldots, A_{ks}=c_s \neq 0\). Rewriting (16) in the new notation, we get \[ c_1a_{k1}+c_2a_{k2}+\ldots+c_ra_{kr}+c_sa_{ks}=0, \] or, dividing by \(c_s\), \[ a_{ks}=\lambda_1a_{k1}+\lambda_2a_{k2}+\ldots+\lambda_ra_{kr}, \quad \lambda_p=-c_p/c_s. \] This equality holds for every value of \(k\), so \[ a_{1s}=\lambda_1a_{11}+\lambda_2a_{12}+\ldots+\lambda_ra_{1r}, \] \[ a_{2s}=\lambda_1a_{21}+\lambda_2a_{22}+\ldots+\lambda_ra_{2r}, \] and so on, up to \[ a_{ms}=\lambda_1a_{m1}+\lambda_2a_{m2}+\ldots+\lambda_ra_{mr}. \] Thus, the \(s\)-th column is a linear combination of the first \(r\) columns. The theorem is proven.

Comment.

From the basis minor theorem it follows that the rank of a matrix equals the maximal number of its linearly independent columns (which equals the maximal number of its linearly independent rows).

Corollary 1.

If a determinant equals zero, then one of its columns is a linear combination of the other columns.

Corollary 2.

If the rank of a matrix is ​​less than the number of columns, then the columns of the matrix are linearly dependent.

Calculating the rank of a matrix and finding the basis minor

Some matrix transformations do not change the rank. Such transformations are called elementary. The corresponding facts are easily verified using the properties of determinants and the definition of the rank of a matrix.

1. Rearranging columns.

2. Multiplying the elements of any column by a non-zero factor.

3. Adding to a column any other column multiplied by an arbitrary number.

4. Crossing out the zero column.

The same is true for rows.

Using these transformations, the matrix can be transformed into the so-called "trapezoidal" form - a matrix with only zeros under the main diagonal. For a "trapezoidal" matrix, the rank is the number of non-zero elements on the main diagonal, and the basis minor is the minor whose diagonal coincides with the set of non-zero elements on the main diagonal of the transformed matrix.

Example. Consider the matrix

\[ A=\left(\begin{array}{cccc} 2 & 1 & 11 & 2 \\ 1 & 0 & 4 & -1 \\ 11 & 4 & 56 & 5 \\ 2 & -1 & 5 & -6 \end{array} \right). \] We will transform it using the above transformations. \[ A=\left(\begin{array}{cccc} 2 & 1 & 11 & 2 \\ 1 & 0 & 4 & -1 \\ 11 & 4 & 56 & 5 \\ 2 & -1 & 5 & -6 \end{array} \right) \mapsto \left(\begin{array}{cccc} 1 & 0 & 4 & -1 \\ 2 & 1 & 11 & 2 \\ 11 & 4 & 56 & 5 \\ 2 & -1 & 5 & -6 \end{array} \right) \mapsto \left(\begin{array}{cccc} 1 & 0 & 4 & -1 \\ 0 & 1 & 3 & 4 \\ 0 & 4 & 12 & 16 \\ 0 & -1 & -3 & -4 \end{array} \right) \mapsto \] \[ \left(\begin{array}{cccc} 1 & 0 & 4 & -1 \\ 0 & 1 & 3 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)\mapsto \left(\begin{array}{cccc} 1 & 0 & 4 & -1 \\ 0 & 1 & 3 & 4 \end{array}\right). \]

Here we perform the following steps in order: 1) move the second row to the top, 2) subtract the first row, with a suitable factor, from the remaining rows, 3) subtract the second row, multiplied by 4, from the third and add the second row to the fourth, 4) cross out the zero rows, i.e. the third and fourth. The final matrix has the desired shape: nonzero numbers on the main diagonal and zeros below it. At this point the procedure stops, and the number of nonzero elements on the main diagonal equals the rank of the matrix, which is 2. The basis minor here is formed by the first two rows and the first two columns; at their intersection stands a matrix of order 2 with a nonzero determinant. Moreover, by retracing the chain of transformations in reverse, one can track where each row (or column) of the final matrix came from, i.e. determine the basis rows and columns of the original matrix. In this case, the first two rows and the first two columns form the basis minor.
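For readers who want a machine cross-check of this example, here is a small numpy verification sketch (not part of the original solution):

```python
import numpy as np

A = np.array([[ 2,  1, 11,  2],
              [ 1,  0,  4, -1],
              [11,  4, 56,  5],
              [ 2, -1,  5, -6]], dtype=float)

print(np.linalg.matrix_rank(A))   # 2, as obtained by hand

# The basis minor sits in the first two rows and first two columns:
basis_minor = np.linalg.det(A[np.ix_([0, 1], [0, 1])])
print(basis_minor)                # -1.0, i.e. nonzero
```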

We will also consider an important practical application of this topic: investigating a system of linear equations for consistency.

What is the rank of a matrix?

The humorous epigraph of the article contains a great deal of truth. We usually associate the word "rank" with some kind of hierarchy, most often with a career ladder: the more knowledge, experience, abilities, and connections a person has, the higher his position and range of opportunities. In youth slang, rank means the overall degree of "coolness".

And their mathematical brethren live by the same principles. Let's take a few random zero matrices out for a walk:

Let's think about it: if a matrix consists entirely of zeros, what rank can we even talk about? Everyone is familiar with the informal expression "a complete zero". In the society of matrices everything is exactly the same:

The rank of a zero matrix of any size equals zero.

Note: the zero matrix is denoted by the Greek letter "theta".

To better understand the rank of a matrix, from here on I will draw on material from analytic geometry. Consider the zero vector of our three-dimensional space: it does not set any specific direction and is useless for building an affine basis. From an algebraic point of view, the coordinates of this vector are written in a "one by three" matrix, so it is logical (in the indicated geometric sense) to take the rank of this matrix to be zero.

Now let's look at a few non-zero column vectors and row vectors:


Each instance has at least one non-zero element, and that's something!

The rank of any nonzero row vector (or column vector) equals one.

And generally speaking: if a matrix of arbitrary size has at least one nonzero element, then its rank is at least one.

Algebraic row vectors and column vectors are abstract to a certain extent, so let's turn again to the geometric association. A nonzero vector sets a very definite direction in space and is suitable for constructing a basis, so the rank of the corresponding matrix is taken to be equal to one.

Theoretical information: in linear algebra, a vector is an element of a vector space (defined by 8 axioms), which, in particular, may be an ordered row (or column) of real numbers with the operations of addition and multiplication by a real number defined for them. More detailed information about vectors can be found in the article Linear transformations.

Here the rows are linearly dependent (expressed through each other). From a geometric point of view, the second row contains the coordinates of a collinear vector, which does not advance the construction of a three-dimensional basis at all and is in this sense superfluous. Thus, the rank of this matrix is also equal to one.

Let's rewrite the coordinates of the vectors as columns (transpose the matrix):

What has changed in terms of rank? Nothing. The columns are proportional, which means the rank equals one. By the way, note that all three rows are also proportional. They can be identified with the coordinates of three collinear vectors in the plane, of which only one is useful for constructing a "flat" basis. And this is entirely consistent with our geometric sense of rank.

An important statement follows from the above example:

The row rank of a matrix equals its column rank. I already mentioned this briefly in the lesson on efficient methods for calculating the determinant.

Note: for a square matrix, linear dependence of the rows implies linear dependence of the columns (and vice versa). But in order to save time, and out of habit, I will almost always talk about linear dependence of rows.

Let's continue training our beloved pet. Let's add the coordinates of another collinear vector to the matrix as a third row:

Did it help us construct a three-dimensional basis? Of course not. All three vectors walk back and forth along the same path, and the rank of the matrix equals one. You can take as many collinear vectors as you like, say 100, put their coordinates into a "one hundred by three" matrix, and the rank of such a skyscraper will still be one.

Let's get acquainted with a matrix whose rows are linearly independent. A pair of non-collinear vectors is fit to take part in constructing a three-dimensional basis, and the rank of this matrix is two.

What is the rank of this matrix? The rows do not seem to be proportional... so, in theory, the rank should be three. However, the rank of this matrix is also two: I added the first two rows and wrote the result as the third row, i.e. the third row is linearly expressed through the first two. Geometrically, the rows of the matrix correspond to the coordinates of three coplanar vectors, and among these three there is a pair of non-collinear comrades.

As you can see, the linear dependence in this matrix is not obvious, and today we will learn how to bring it out into the open.

I think many people can guess what the rank of a matrix is!

Consider a matrix whose rows are linearly independent. The vectors form an affine basis, and the rank of this matrix is three.

As you know, any fourth, fifth, or tenth vector of three-dimensional space can be expressed linearly in terms of the basis vectors. Therefore, if you add any number of rows to this matrix, its rank will still be equal to three.

Similar reasoning can be carried out for matrices of larger size (though, of course, without the geometric interpretation).

Definition: the rank of a matrix is the maximum number of linearly independent rows. Or: the rank of a matrix is the maximum number of linearly independent columns. Yes, these two numbers are always the same.

An important practical guideline also follows from the above: the rank of a matrix does not exceed its smaller dimension. For example, suppose a matrix has four rows and five columns. The smaller dimension is four, and therefore the rank of such a matrix certainly will not exceed 4.

Notation: in world theory and practice there is no generally accepted standard for denoting the rank of a matrix; the most common variants are r(A), rang(A) and Rank(A): as they say, an Englishman writes one thing, a German another. Therefore, based on the famous joke about American and Russian hell, let's denote the rank of a matrix by a native word. And if the matrix is "unnamed", of which there are many, we can simply write the rank symbol on its own.

How to find the rank of a matrix using minors?

If my grandmother had a fifth column in her matrix, then she would have to calculate another minor of the 4th order (“blue”, “raspberry” + 5th column).

Conclusion: the maximum order of a nonzero minor equals three, which means the rank of the matrix is three.

Perhaps not everyone has fully grasped this phrase: the 4th-order minor equals zero, but among the 3rd-order minors there was a nonzero one; therefore the maximum order of a nonzero minor equals three.

The question arises: why not calculate the determinant right away? Well, firstly, in most problems the matrix is not square, and secondly, even if you obtain a nonzero value, the work will most likely be rejected, since the task usually assumes the standard "bottom-up" solution. And in the example considered, a zero 4th-order determinant only allows us to state that the rank of the matrix is less than four.

I must admit, I made up the problem analyzed above myself in order to better explain the method of bordering minors. In real practice everything is simpler:

Example 2

Find the rank of the matrix using the method of bordering minors

The solution and answer are at the end of the lesson.

When does the algorithm work fastest? Let's return to the same four-by-four matrix. Obviously, the solution will be shortest in the case of "good" corner minors:

And, if , then , otherwise – .

This reasoning is not at all hypothetical: there are many examples where the whole matter is settled by corner minors alone.

However, in some cases another method is more effective and preferable:

How to find the rank of a matrix using the Gaussian method?

This section is intended for readers who are already familiar with the Gaussian method and have more or less gotten the hang of it.

From a technical point of view, the method is not novel:

1) using elementary transformations, we reduce the matrix to row echelon (stepwise) form;

2) the rank of the matrix equals the number of nonzero rows of the echelon form.

It is absolutely clear that the Gaussian method does not change the rank of the matrix, and the essence here is extremely simple: during the elementary transformations the algorithm identifies and removes all the superfluous proportional (linearly dependent) rows, leaving as a "dry residue" the maximum number of linearly independent rows.
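A minimal sketch of this idea in Python (my own helper, with a made-up matrix of three proportional rows in the spirit of the example that follows): reduce the matrix to row echelon form with partial pivoting and count the nonzero rows.

```python
import numpy as np

def rank_by_gauss(A, tol=1e-10):
    """Rank via Gaussian elimination: bring A to row echelon form
    and count the rows that are not (numerically) zero."""
    M = np.asarray(A, dtype=float).copy()
    m, n = M.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        pivot = row + np.argmax(np.abs(M[row:, col]))   # partial pivoting
        if abs(M[pivot, col]) < tol:
            continue                                    # no pivot in this column
        M[[row, pivot]] = M[[pivot, row]]               # swap rows
        M[row + 1:] -= np.outer(M[row + 1:, col] / M[row, col], M[row])
        row += 1                                        # one more nonzero row
    return row

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # proportional to the first row
              [3.0, 6.0, 9.0]])   # also proportional
print(rank_by_gauss(A))           # 1
print(np.linalg.matrix_rank(A))   # 1 (cross-check)
```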

Let's transform the old familiar matrix with the coordinates of three collinear vectors:

(1) The first row, multiplied by –2, was added to the second row. The first row was added to the third row.

(2) The zero rows were removed.

Thus, one row remains, hence the rank equals one. Needless to say, this is much faster than calculating nine zero second-order minors and only then drawing a conclusion.

I remind you that the matrix itself cannot be changed by anything: the transformations are performed only for the purpose of determining the rank! By the way, let's dwell once more on the question "why not?": the source matrix carries information that is fundamentally different from the information carried by the reduced matrix and its single row. In some mathematical models (no exaggeration) a difference of one number can be a matter of life and death. ...I remember primary and secondary school mathematics teachers who mercilessly cut grades by 1-2 points for the slightest inaccuracy or deviation from the algorithm. And it was terribly disappointing when, instead of a seemingly guaranteed "excellent", you got a "good" or even worse. Understanding came much later: how else could satellites, nuclear warheads and power plants be entrusted to a person? But don't worry, I don't work in these areas =)

Let's move on to more meaningful tasks, where, among other things, we will get acquainted with important computational techniques of the Gaussian method:

Example 3

Find the rank of a matrix using elementary transformations

Solution: a "four by five" matrix is given, which means that its rank is certainly no more than 4.

There is no 1 or –1 in the first column, so additional actions are needed to obtain at least one unit there. Throughout the existence of the site I have repeatedly been asked: "Is it possible to rearrange columns during elementary transformations?" Here we rearranged the first and second columns, and everything is fine! In most tasks where the Gaussian method is used, the columns can indeed be rearranged, BUT IT IS BETTER NOT TO. And the point is not even the possible confusion with variables; the point is that in the classical course of higher mathematics this action is traditionally not considered, so such a curtsy will be looked at VERY crookedly (you may even be forced to redo everything).

The second point concerns the numbers. As you work through the solution, it is helpful to use the following rule of thumb: elementary transformations should, if possible, make the matrix entries smaller. After all, it is much easier to work with one, two, three than, for example, with 23, 45 and 97. And the first action is aimed not only at obtaining a one in the first column, but also at eliminating the numbers 7 and 11.

First the complete solution, then the comments:

(1) The first row, multiplied by –2, was added to the second row. The first row, multiplied by –3, was added to the third row. And on top of that, the first row, multiplied by –1, was added to the fourth row.

(2) The last three rows are proportional. The 3rd and 4th rows were removed, and the second row was moved into first place.

(3) The first row, multiplied by –3, was added to the second row.

The matrix reduced to echelon form has two rows, therefore the rank of the matrix equals two.

Answer:

Now it's your turn to torture the four-by-four matrix:

Example 4

Find the rank of a matrix using the Gaussian method

I remind you that the Gaussian method does not impose a unique rigid sequence of steps, and your solution will most likely differ from mine. A brief sample solution is given at the end of the lesson.

Which method should I use to find the rank of a matrix?

In practice, it is often not stated at all which method should be used to find the rank. In such a situation the given matrix should be analyzed: for some matrices it is more rational to work through minors, while for others it is much more profitable to apply elementary transformations:

Example 5

Find the rank of a matrix

Solution: the first method is somehow immediately ruled out =)

A little earlier I advised not to touch the columns of the matrix, but when there is a zero column, or proportional/identical columns, it is still worth amputating them:

(1) The fifth column is zero; we remove it from the matrix. Thus, the rank of the matrix is no more than four. The first row was multiplied by –1. This is another signature trick of the Gaussian method, which turns the next step into a pleasant walk:

(2) The first row was added to all rows starting from the second.

(3) The first row was multiplied by –1, the third row was divided by 2, and the fourth row was divided by 3. The second row, multiplied by –1, was added to the fifth row.

(4) The third row, multiplied by –2, was added to the fifth row.

(5) The last two rows are proportional; the fifth was deleted.

The result is 4 rows, so the rank of the matrix equals four.

Answer:

Standard five-story building for independent study:

Example 6

Find the rank of a matrix

A quick solution and the answer are at the end of the lesson.

It should be noted that the phrase “matrix rank” is not so often encountered in practice, and in most problems you can do without it altogether. But there is one task where the concept in question is the main character, and we will conclude the article with this practical application:

How to study a system of linear equations for consistency?

Often, in addition to solving a system of linear equations, the problem statement requires first examining it for consistency, that is, proving that a solution exists at all. A key role in such a check is played by the Kronecker-Capelli theorem, which I will formulate in the form we need:

If the rank of the system matrix equals the rank of the augmented matrix of the system, then the system is consistent; and if this number also coincides with the number of unknowns, then the solution is unique.

Thus, to study a system for consistency one needs to check the equality of two ranks: the rank of the system matrix (recall the terminology from the lesson on the Gaussian method) and the rank of the augmented system matrix (i.e. the matrix of coefficients of the variables plus the column of free terms).


The rank of a matrix is an important numerical characteristic. The most typical problem that requires finding the rank of a matrix is checking the consistency of a system of linear algebraic equations. In this article we will give the concept of matrix rank and consider methods for finding it. To better understand the material, we will analyze in detail the solutions of several examples.


Definition of the rank of a matrix and necessary additional concepts.

Before stating the definition of the rank of a matrix, you should have a good understanding of the concept of a minor, and finding the minors of a matrix requires the ability to calculate a determinant. So, if necessary, we recommend recalling the theory of the determinant of a matrix, the methods for finding it, and the properties of the determinant.

Let's take a matrix A of order p by n. Let k be a natural number not exceeding the smallest of the numbers p and n, that is, k ≤ min(p, n).

Definition.

A minor of order k of the matrix A is the determinant of a square matrix of order k composed of the elements of the matrix A that stand in k pre-selected rows and k pre-selected columns, the arrangement of the elements of A being preserved.

In other words, if in the matrix A we delete (p–k) rows and (n–k) columns, and from the remaining elements we compose a matrix, preserving the arrangement of the elements of the matrix A, then the determinant of the resulting matrix is a minor of order k of the matrix A.
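A small Python illustration of this construction (the matrix and the helper are made up for the sketch): np.ix_ picks out the chosen rows and columns while preserving the arrangement of the elements, and the determinant of that submatrix is the minor.

```python
import numpy as np

A = np.array([[1, 2, 0, 4],
              [3, 1, 5, 2],
              [0, 2, 2, 1]], dtype=float)   # a made-up 3 x 4 matrix

def minor(A, rows, cols):
    """Minor of order k: determinant of the submatrix standing in the
    chosen k rows and k columns (the arrangement of elements is kept)."""
    assert len(rows) == len(cols), "a minor comes from a square submatrix"
    return np.linalg.det(A[np.ix_(rows, cols)])

# Second-order minor built on rows 1, 2 and columns 0, 2 (0-based indices):
print(minor(A, [1, 2], [0, 2]))   # det([[3, 5], [0, 2]]) = 6
```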

Let's look at the definition of a matrix minor using an example.

Consider the matrix .

Let's write down several first-order minors of this matrix. For example, if we choose the third row and second column of matrix A, then our choice corresponds to a first-order minor . In other words, to obtain this minor, we crossed out the first and second rows, as well as the first, third and fourth columns from the matrix A, and made up a determinant from the remaining element. If we choose the first row and third column of matrix A, then we get a minor .

Let us illustrate the procedure for obtaining the considered first-order minors.

Thus, the first-order minors of a matrix are the matrix elements themselves.

Let's show several second-order minors. Select two rows and two columns. For example, take the first and second rows and the third and fourth columns. With this choice we have a second-order minor . This minor could also be created by deleting the third row, first and second columns from matrix A.

Another second-order minor of the matrix A is .

Let us illustrate the construction of these second-order minors.

Similarly, third-order minors of the matrix A can be found. Since there are only three rows in matrix A, we select them all. If we select the first three columns of these rows, we get a third-order minor

It can also be constructed by crossing out the last column of the matrix A.

Another third order minor is

obtained by deleting the third column of matrix A.

Here is a picture showing the construction of these third-order minors.

For the given matrix A there are no minors of order higher than the third, since min(3, 4) = 3.

How many minors of order k does a matrix A of order p by n have?

The number of minors of order k can be calculated as \(C_p^k \cdot C_n^k\), where \(C_p^k\) and \(C_n^k\) are the numbers of combinations of p things taken k at a time and of n things taken k at a time, respectively.

How to construct all minors of order k of matrix A of order p by n?

We will need the set of row numbers and the set of column numbers of the matrix. We write down all combinations of k out of p row numbers (they correspond to the selected rows of A when constructing a minor of order k). With each such combination of row numbers we successively pair all combinations of k out of n column numbers. These pairs of combinations of row numbers and column numbers of the matrix A allow us to compose all minors of order k.

Let's look at it with an example.

Example.

Find all second order minors of the matrix.

Solution.

Since the order of the original matrix is 3 by 3, the total number of second-order minors is \(C_3^2 \cdot C_3^2 = 9\).

Let's write down all combinations of 2 out of 3 row numbers of the matrix A: 1, 2; 1, 3; and 2, 3. All combinations of 2 out of 3 column numbers are the same: 1, 2; 1, 3; and 2, 3.

Let's take the first and second rows of matrix A. By selecting the first and second columns, the first and third columns, the second and third columns for these rows, we obtain the minors, respectively

For the first and third rows, with a similar choice of columns, we have

It remains to add the first and second, first and third, second and third columns to the second and third rows:

So, all nine second-order minors of matrix A have been found.
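This enumeration is easy to automate. A minimal sketch (the 3 by 3 matrix here is a made-up stand-in, not the matrix from the example):

```python
import numpy as np
from itertools import combinations

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)   # placeholder 3 x 3 matrix

def all_minors(A, k):
    """All minors of order k: one determinant per pair
    (combination of k rows, combination of k columns)."""
    p, n = A.shape
    minors = {}
    for rows in combinations(range(p), k):
        for cols in combinations(range(n), k):
            minors[(rows, cols)] = np.linalg.det(A[np.ix_(rows, cols)])
    return minors

second_order = all_minors(A, 2)
print(len(second_order))   # 9 = C(3,2) * C(3,2)
```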

Now we can proceed to determining the rank of the matrix.

Definition.

The rank of a matrix is the highest order of a nonzero minor of the matrix.

The rank of the matrix A is denoted Rank(A). You can also find the notations Rg(A) or Rang(A).

From the definitions of matrix rank and matrix minor, we can conclude that the rank of a zero matrix is equal to zero, and the rank of a nonzero matrix is not less than one.

Finding the rank of a matrix by definition.

So, the first method for finding the rank of a matrix is the method of enumerating minors. This method is based directly on the definition of the rank of a matrix.

Suppose we need to find the rank of a matrix A of order p by n.

Let's briefly describe the algorithm for solving this problem by enumerating minors.

If there is at least one element of the matrix that is different from zero, then the rank of the matrix is at least equal to one (since there is a first-order minor that is not equal to zero).

Next we look at the second-order minors. If all second-order minors are equal to zero, then the rank of the matrix is equal to one. If there is at least one non-zero minor of the second order, then we proceed to enumerate the minors of the third order, and the rank of the matrix is at least equal to two.

Similarly, if all third-order minors are zero, then the rank of the matrix is two. If there is at least one third-order minor other than zero, then the rank of the matrix is at least three, and we move on to enumerating fourth-order minors.

Note that the rank of the matrix cannot exceed the smallest of the numbers p and n.
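A brute-force sketch of this algorithm in Python (my own helper; practical only for small matrices): it checks orders k = 1, 2, ... and reports the largest order at which a nonzero minor exists.

```python
import numpy as np
from itertools import combinations

def rank_by_minors(A, tol=1e-10):
    """Rank by definition: the largest k such that some k-th order
    minor is nonzero.  Exponential cost, for illustration only."""
    A = np.asarray(A, dtype=float)
    p, n = A.shape
    rank = 0
    for k in range(1, min(p, n) + 1):
        found_nonzero = False
        for rows in combinations(range(p), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    found_nonzero = True
                    break
            if found_nonzero:
                break
        if not found_nonzero:
            break              # every k-th order minor vanishes
        rank = k
    return rank

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]], dtype=float)    # made-up example
print(rank_by_minors(A))          # 2
print(np.linalg.matrix_rank(A))   # 2 (cross-check)
```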

Example.

Find the rank of the matrix .

Solution.

Since the matrix is non-zero, its rank is not less than one.

A second-order minor is different from zero; therefore, the rank of the matrix A is at least two. Let's move on to enumerating the third-order minors.




All third-order minors are equal to zero. Therefore, the rank of the matrix is two.

Answer:

Rank(A) = 2.

Finding the rank of a matrix using the method of bordering minors.

There are other methods for finding the rank of a matrix that allow you to obtain the result with less computational work.

One such method is the method of bordering minors.

Let's deal with the concept of a bordering minor.

A minor M′ of order (k+1) of the matrix A is said to border a minor M of order k of the matrix A if the matrix corresponding to the minor M′ "contains" the matrix corresponding to the minor M.

In other words, the matrix corresponding to the bordered minor M is obtained from the matrix corresponding to the bordering minor M′ by deleting the elements of one row and one column.

For example, consider the matrix and take a second order minor. Let's write down all the bordering minors:

The method of bordering minors is justified by the following theorem (we present its formulation without proof).

Theorem.

If a k-th order minor of a matrix A of order p by n is nonzero and all the minors bordering it are equal to zero, then all minors of order (k+1) of the matrix A are equal to zero.

Thus, to find the rank of a matrix it is not necessary to go through all the minors; it is enough to go through the bordering ones. The number of minors bordering a minor of order k of a matrix A of order p by n is (p–k)(n–k). Note that this is no more than the number of all minors of order (k+1) of the matrix A. Therefore, in most cases, using the method of bordering minors is more profitable than simply enumerating all the minors.

Let's move on to finding the rank of a matrix using the method of bordering minors. Let's briefly describe the algorithm of this method.

If the matrix A is nonzero, then as a first-order minor we take any nonzero element of A. We then look at its bordering minors. If they are all equal to zero, then the rank of the matrix equals one. If there is at least one nonzero bordering minor (its order is two), then we proceed to consider its bordering minors. If they are all zero, then Rank(A) = 2. If at least one bordering minor is nonzero (its order is three), then we consider its bordering minors, and so on. As a result, Rank(A) = k if all bordering minors of order (k + 1) of the matrix A are equal to zero, or Rank(A) = min(p, n) if there is a nonzero minor bordering a minor of order (min(p, n) – 1).
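A minimal Python sketch of the bordering-minors algorithm (my own helper, with a made-up example matrix): it keeps the row and column indices of the current nonzero minor and tries to extend it by one extra row and one extra column at a time.

```python
import numpy as np

def rank_by_bordering(A, tol=1e-10):
    """Rank via bordering minors: grow a nonzero minor by one row and
    one column at a time until no nonzero bordering minor exists."""
    A = np.asarray(A, dtype=float)
    p, n = A.shape
    nz = np.argwhere(np.abs(A) > tol)      # any nonzero first-order minor
    if nz.size == 0:
        return 0                           # zero matrix
    rows, cols = [nz[0][0]], [nz[0][1]]
    while len(rows) < min(p, n):
        extended = False
        for i in range(p):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                bordering = A[np.ix_(rows + [i], cols + [j])]
                if abs(np.linalg.det(bordering)) > tol:
                    rows.append(i)
                    cols.append(j)
                    extended = True
                    break
            if extended:
                break
        if not extended:                   # all bordering minors vanish
            break
    return len(rows)

A = np.array([[1, 2, 1],
              [2, 4, 2],
              [0, 1, 3]], dtype=float)    # made-up example
print(rank_by_bordering(A))       # 2
print(np.linalg.matrix_rank(A))   # 2 (cross-check)
```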

Let's look at the method of bordering minors to find the rank of a matrix using an example.

Example.

Find the rank of the matrix by the method of bordering minors.

Solution.

Since the element a11 of the matrix A is nonzero, we take it as a first-order minor. Let's start searching for a bordering minor different from zero:

A nonzero bordering minor of the second order has been found. Let's consider its bordering minors:

All minors bordering the second-order minor are equal to zero, therefore, the rank of matrix A is equal to two.

Answer:

Rank(A) = 2.

Example.

Find the rank of the matrix using bordering minors.

Solution.

As a nonzero minor of the first order, we take the element a11 = 1 of the matrix A. A bordering minor of the second order is not equal to zero. This minor is in turn bordered by a third-order minor. Since that minor is not equal to zero and there are no minors bordering it, the rank of the matrix A is equal to three.

Answer:

Rank(A) = 3 .

Finding the rank using elementary matrix transformations (Gauss method).

Let's consider another way to find the rank of a matrix.

The following matrix transformations are called elementary:

  • rearranging rows (or columns) of a matrix;
  • multiplying all elements of any row (column) of a matrix by an arbitrary number k, different from zero;
  • adding to the elements of a row (column) the corresponding elements of another row (column) of the matrix, multiplied by an arbitrary number k.

Matrix B is called equivalent to matrix A, if B is obtained from A using a finite number of elementary transformations. The equivalence of matrices is denoted by the symbol “~”, that is, written A ~ B.

Finding the rank of a matrix using elementary matrix transformations is based on the statement: if matrix B is obtained from matrix A using a finite number of elementary transformations, then Rank(A) = Rank(B).

The validity of this statement follows from the properties of the determinant of the matrix:

  • When rearranging the rows (or columns) of a matrix, its determinant changes sign. If it is equal to zero, then when the rows (columns) are rearranged, it remains equal to zero.
  • When multiplying all elements of any row (column) of a matrix by an arbitrary number k other than zero, the determinant of the resulting matrix is ​​equal to the determinant of the original matrix multiplied by k. If the determinant of the original matrix is ​​equal to zero, then after multiplying all the elements of any row or column by the number k, the determinant of the resulting matrix will also be equal to zero.
  • Adding to the elements of a certain row (column) of a matrix the corresponding elements of another row (column) of the matrix, multiplied by a certain number k, does not change its determinant.

The essence of the method of elementary transformations is to reduce the matrix whose rank we need to find to a trapezoidal form (in a particular case, to an upper triangular form) using elementary transformations.

Why is this done? The rank of matrices of this type is very easy to find: it equals the number of rows containing at least one nonzero element. And since the rank of a matrix does not change under elementary transformations, the resulting value is the rank of the original matrix.
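A short numerical sanity check of both facts (made-up matrices, plain numpy): each kind of elementary row operation leaves matrix_rank unchanged, and for a matrix already in trapezoidal form the rank is simply the number of nonzero rows.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [2.0, 5.0, 10.0]])           # made-up matrix of rank 2
r = np.linalg.matrix_rank(A)

B = A[[1, 0, 2]]                 # 1) swap two rows
C = A.copy(); C[1] *= 5.0        # 2) multiply a row by a nonzero number
D = A.copy(); D[2] += 3 * D[0]   # 3) add a multiple of one row to another
print(all(np.linalg.matrix_rank(M) == r for M in (B, C, D)))   # True

# Trapezoidal matrix: the rank equals the number of nonzero rows.
T = np.array([[2.0, 1.0, 7.0, 4.0],
              [0.0, 3.0, 1.0, 5.0],
              [0.0, 0.0, 0.0, 0.0]])
nonzero_rows = int(np.sum(np.any(T != 0, axis=1)))
print(nonzero_rows, np.linalg.matrix_rank(T))   # 2 2
```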

We give illustrations of matrices, one of which should be obtained after transformations. Their appearance depends on the order of the matrix.


These illustrations are templates into which we will transform the matrix A.

Let's describe the algorithm of the method.

Suppose we need to find the rank of a nonzero matrix A of order p by n (p may equal n).

So, suppose a11 ≠ 0 (otherwise we first rearrange rows or columns so that the leading element is nonzero). Multiply all elements of the first row of the matrix A by 1/a11. We obtain an equivalent matrix, which we denote A(1):

To the elements of the second row of the resulting matrix A(1) we add the corresponding elements of the first row multiplied by –a21. To the elements of the third row we add the corresponding elements of the first row multiplied by –a31, and so on up to the p-th row. We obtain an equivalent matrix, which we denote A(2):

If all elements of the resulting matrix located in rows from the second to the p-th are equal to zero, then the rank of this matrix is equal to one, and, consequently, the rank of the original matrix is equal to one.

If among the rows from the second to the p-th there is at least one nonzero element, then we continue the transformations, acting in exactly the same way but only on the part of the matrix A(2) marked in the figure.

If the next leading element equals zero, then we rearrange the rows and (or) columns of the matrix A(2) so that this element becomes nonzero.