Linear independence of matrices. Linear dependence and linear independence of rows and columns of a matrix

The concepts of linear dependence and linear independence are defined in the same way for rows and for columns. Therefore the properties connected with these concepts that are formulated for columns are, of course, also valid for rows.

1. If a column system includes a zero column, then it is linearly dependent.

2. If a column system has two equal columns, then it is linearly dependent.

3. If a column system has two proportional columns, then it is linearly dependent.

4. A system of columns is linearly dependent if and only if at least one of the columns is a linear combination of the others.

5. Any columns taken from a linearly independent system form a linearly independent subsystem.

6. A column system containing a linearly dependent subsystem is linearly dependent.

7. If a system of columns $A_1,\ldots,A_k$ is linearly independent and becomes linearly dependent after a column $A$ is appended to it, then the column $A$ can be expanded in terms of the columns $A_1,\ldots,A_k$, and moreover in a unique way, i.e. the expansion coefficients are determined uniquely.

Let us prove, for example, the last property. Since the system of columns $A_1,\ldots,A_k,A$ is linearly dependent, there exist numbers $\lambda_1,\ldots,\lambda_k,\lambda$, not all equal to zero, such that

$\lambda_1A_1+\ldots+\lambda_kA_k+\lambda A=o.$

In this equality $\lambda\ne0$. In fact, if $\lambda=0$, then

$\lambda_1A_1+\ldots+\lambda_kA_k=o.$

This means that a nontrivial linear combination of the columns $A_1,\ldots,A_k$ is equal to the zero column, which contradicts the linear independence of the system. Therefore $\lambda\ne0$, and then $A=\alpha_1A_1+\ldots+\alpha_kA_k$ with $\alpha_i=-\lambda_i/\lambda$, i.e. the column $A$ is a linear combination of the columns $A_1,\ldots,A_k$. It remains to show the uniqueness of such a representation. Assume the opposite: let there be two expansions $A=\alpha_1A_1+\ldots+\alpha_kA_k$ and $A=\beta_1A_1+\ldots+\beta_kA_k$, and let not all coefficients of the expansions be respectively equal to each other (for example, $\alpha_1\ne\beta_1$). Then, subtracting one equality from the other, we get

$(\alpha_1-\beta_1)A_1+\ldots+(\alpha_k-\beta_k)A_k=o,$

i.e. a linear combination of the columns $A_1,\ldots,A_k$ is equal to the zero column. Since not all of its coefficients are equal to zero (at least $\alpha_1-\beta_1\ne0$), this combination is nontrivial, which contradicts the linear independence of the columns. The resulting contradiction proves the uniqueness of the expansion.

Example 3.2. Prove that two non-zero columns $A_1$ and $A_2$ are linearly dependent if and only if they are proportional, i.e. $A_2=\lambda A_1$ for some number $\lambda$.

Solution. In fact, if the columns are linearly dependent, then there exist numbers $\alpha_1,\alpha_2$, not both equal to zero, such that $\alpha_1A_1+\alpha_2A_2=o$, and in this equality $\alpha_2\ne0$. Indeed, assuming that $\alpha_2=0$, we would have $\alpha_1A_1=o$ with $\alpha_1\ne0$ and obtain a contradiction, since the column $A_1$ is non-zero. Hence $\alpha_2\ne0$. Therefore there is a number $\lambda=-\alpha_1/\alpha_2$ such that $A_2=\lambda A_1$. Necessity is proved.

Conversely, if $A_2=\lambda A_1$, then $\lambda A_1-A_2=o$. We have obtained a non-trivial linear combination of the columns (the coefficient of $A_2$ equals $-1\ne0$) that is equal to the zero column. This means that the columns are linearly dependent.
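The criterion of Example 3.2 is easy to verify numerically. The sketch below (with hypothetical columns; numpy's `matrix_rank` is used as the dependence test) checks that a pair of non-zero columns is linearly dependent exactly when the stacked two-column matrix has rank 1.

```python
import numpy as np

A1 = np.array([[1.0], [2.0], [-3.0]])    # a hypothetical non-zero column
A2 = 2.5 * A1                            # proportional to A1
A3 = np.array([[1.0], [0.0], [0.0]])     # not proportional to A1

print(np.linalg.matrix_rank(np.hstack([A1, A2])))  # 1 -> the pair is linearly dependent
print(np.linalg.matrix_rank(np.hstack([A1, A3])))  # 2 -> the pair is linearly independent
```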

Example 3.3. Consider all possible systems formed from five given columns, among which there is one zero column and one pair of proportional columns.

Examine each system for linear dependence.
Solution. Let us first consider the five systems containing one column each. According to item 1 of Remarks 3.1, the systems consisting of a single non-zero column are linearly independent, while the system consisting of the zero column is linearly dependent.

Let's consider systems containing two columns:

– each of the four systems containing the zero column is linearly dependent (property 1);

– the system of the two proportional columns is linearly dependent (property 3);

– each of the remaining five systems of two non-zero, non-proportional columns is linearly independent (see the statement of Example 3.2).

Consider systems containing three columns:

– each of the six systems containing the zero column is linearly dependent (property 1);

– the systems containing the pair of proportional columns are linearly dependent, since they contain a linearly dependent subsystem (property 6);

– the remaining systems are also linearly dependent, since in each of them the last column is linearly expressed through the others (property 4).

Finally, systems of four or five columns are linearly dependent (by property 6).
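Example 3.3 can be reproduced mechanically: a subsystem is linearly dependent exactly when the rank of the matrix formed from its columns is smaller than the number of columns. Since the five original columns are not shown above, the sketch below uses five hypothetical columns with the same structure (one zero column and one proportional pair).

```python
import itertools
import numpy as np

cols = {
    "A1": np.array([1.0, 0.0, 0.0]),
    "A2": np.array([0.0, 1.0, 0.0]),
    "A3": np.array([2.0, 0.0, 0.0]),   # proportional to A1
    "A4": np.array([0.0, 0.0, 0.0]),   # the zero column
    "A5": np.array([1.0, 1.0, 1.0]),
}

for k in range(1, len(cols) + 1):
    for names in itertools.combinations(cols, k):
        M = np.column_stack([cols[n] for n in names])
        dependent = np.linalg.matrix_rank(M) < len(names)
        print(names, "linearly dependent" if dependent else "linearly independent")
```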

Matrix rank

In this section, we will consider another important numerical characteristic of a matrix, related to the extent to which its rows (columns) depend on each other.

Definition 14.10 Let a matrix $A$ of size $m\times n$ be given, and let $k$ be a number not exceeding the smaller of the numbers $m$ and $n$: $k\le\min(m,n)$. Choose arbitrarily $k$ rows and $k$ columns of the matrix (the row numbers may differ from the column numbers). The determinant of the matrix composed of the elements standing at the intersections of the selected rows and columns is called a minor of order $k$ of the matrix $A$.

Example 14.9 Let a numerical matrix $A$ with three rows be given.

A first-order minor is any single element of the matrix; for example, the element 2 is a minor of the first order.

Second-order minors:

1. taking rows 1, 2 and columns 1, 2, we get a second-order minor;

2. taking rows 1, 3 and columns 2, 4, we get a second-order minor;

3. taking rows 2, 3 and columns 1, 4, we get a second-order minor.

Third-order minors:

here the three rows can be selected in only one way;

1. taking columns 1, 3, 4, we get a third-order minor;

2. taking columns 1, 2, 3, we get a third-order minor.

A sketch of such an enumeration is given below.
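Definition 14.10 translates directly into a small enumeration procedure: choose $k$ rows and $k$ columns and take the determinant of the resulting submatrix. The sketch below does this for a hypothetical $3\times4$ matrix standing in for the matrix of Example 14.9, whose entries are not reproduced above.

```python
import itertools
import numpy as np

# hypothetical 3x4 matrix standing in for the matrix of Example 14.9
A = np.array([[2.0,  1.0, 0.0,  3.0],
              [1.0, -1.0, 2.0,  0.0],
              [0.0,  4.0, 1.0, -2.0]])

def minors(A, k):
    """Yield (row indices, column indices, determinant) for every order-k minor."""
    m, n = A.shape
    for rows in itertools.combinations(range(m), k):
        for cols in itertools.combinations(range(n), k):
            yield rows, cols, np.linalg.det(A[np.ix_(rows, cols)])

for rows, cols, value in minors(A, 2):
    print(f"rows {rows}, cols {cols}: second-order minor = {value:.2f}")
```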

Proposition 14.23 If all minors of order $k$ of a matrix are equal to zero, then all minors of order $k+1$, if they exist, are also equal to zero.

Proof. Take an arbitrary minor of order $k+1$. It is the determinant of a matrix of order $k+1$. Expand it along the first row. Then in each term of the expansion one of the factors is a minor of order $k$ of the original matrix. By hypothesis, all minors of order $k$ are equal to zero. Therefore the minor of order $k+1$ is also equal to zero.

Definition 14.11 The rank of a matrix is the largest order of its non-zero minors. The rank of the zero matrix is considered to be equal to zero.

There is no single standard notation for the rank of a matrix. Following the textbook, we will denote it by $\operatorname{rank}A$.

Example 14.10 The matrix of Example 14.9 has rank 3, because it has a non-zero third-order minor, while fourth-order minors do not exist.

The rank of a matrix is equal to 1 when it has a non-zero first-order minor (a non-zero element) and all of its second-order minors are equal to zero.

The rank of a non-singular square matrix of order $n$ is equal to $n$, since its determinant is a minor of order $n$ and is non-zero for a non-singular matrix.
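A quick numerical illustration of Definition 14.11: numpy's `matrix_rank` estimates the largest order of a non-zero minor (internally via the SVD). The matrices below are hypothetical stand-ins for the cases just discussed.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # second row = 2 * first row
B = np.array([[1.0, 0.0],
              [0.0, 3.0]])      # non-singular 2x2 matrix
Z = np.zeros((3, 3))            # zero matrix

print(np.linalg.matrix_rank(A))  # 1: a non-zero element, all second-order minors vanish
print(np.linalg.matrix_rank(B))  # 2: the rank of a non-singular matrix equals its order
print(np.linalg.matrix_rank(Z))  # 0: the rank of the zero matrix is zero
```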

Proposition 14.24 When a matrix is transposed, its rank does not change, that is, $\operatorname{rank}A^T=\operatorname{rank}A$.

Proof. A transposed minor of the original matrix is a minor of the transposed matrix $A^T$, and, conversely, any minor of $A^T$ is a transposed minor of the original matrix. Under transposition the determinant (the minor) does not change (Proposition 14.6). Therefore, if all minors of some order in the original matrix are equal to zero, then all minors of the same order in $A^T$ are also equal to zero. And if a minor of order $r$ of the original matrix is different from zero, then its transpose is a minor of the same order of $A^T$ that is different from zero. Hence $\operatorname{rank}A^T=\operatorname{rank}A$.

Definition 14.12 Let the rank of a matrix be equal to $r$. Then any non-zero minor of order $r$ is called a basis minor.

Example 14.11 Consider a square matrix of order 3 whose third row is equal to the sum of the first two. Its determinant is zero. The second-order minor located in the first two rows and the first two columns is non-zero. Consequently, the rank of the matrix is two, and the minor considered is a basis minor.

A basis minor is also the minor located, say, in the first and third rows and the first and third columns. The minor in the second and third rows and the first and third columns is a basis minor as well.

The minor in the first and second rows and the second and third columns is equal to zero and therefore is not a basis minor. The reader can check independently which of the other second-order minors are basis minors and which are not.

Since the columns (rows) of a matrix can be added, multiplied by numbers and combined into linear combinations, it is possible to introduce the definitions of linear dependence and linear independence for a system of columns (rows) of a matrix. These definitions are similar to Definitions 10.14, 10.15 for vectors.

Definition 14.13 A system of columns (rows) is called linearly dependent if there exists a set of coefficients, at least one of which is different from zero, such that the linear combination of the columns (rows) with these coefficients is equal to zero.

Definition 14.14 A system of columns (rows) is linearly independent if the equality to zero of a linear combination of these columns (rows) implies that all coefficients of this linear combination are equal to zero.

The following proposition, similar to Proposition 10.6, is also true.

Proposition 14.25 A system of columns (rows) is linearly dependent if and only if one of the columns (one of the rows) is a linear combination of the other columns (rows) of this system.

Let us formulate a theorem called the basis minor theorem.

Theorem 14.2 Any column of a matrix is a linear combination of the columns passing through the basis minor.

The proof can be found in standard linear algebra textbooks.

Proposition 14.26 The rank of a matrix is equal to the maximum number of its columns forming a linearly independent system.

Proof. Let the rank of the matrix be $r$. Take the $r$ columns passing through a basis minor. Suppose these columns form a linearly dependent system. Then one of the columns is a linear combination of the others, and therefore in the basis minor one column is a linear combination of the other columns. By Propositions 14.15 and 14.18 this basis minor would have to be equal to zero, which contradicts the definition of a basis minor. Hence the assumption that the columns passing through the basis minor are linearly dependent is false, and the maximum number of columns forming a linearly independent system is greater than or equal to $r$.

Now suppose that some $p$ columns with $p>r$ form a linearly independent system. Form a matrix $B$ from them. All minors of $B$ are minors of the original matrix $A$, so the basis minor of $B$ has order no greater than $r$. By the basis minor theorem, a column of $B$ that does not pass through the basis minor of $B$ is a linear combination of the columns passing through that minor, that is, the columns of $B$ form a linearly dependent system. This contradicts the choice of the columns that form $B$. Therefore the maximum number of columns forming a linearly independent system cannot be greater than $r$, and so it is equal to $r$, as stated.
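Proposition 14.26 also suggests a simple way to exhibit a maximal linearly independent system of columns: add the columns one by one and keep a column only if it increases the rank. A sketch with a hypothetical matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 4.0, 0.0]])   # third column = first + second, last column is zero

independent = []                        # indices of the collected columns
for j in range(A.shape[1]):
    candidate = independent + [j]
    if np.linalg.matrix_rank(A[:, candidate]) == len(candidate):
        independent = candidate         # this column increased the rank, keep it

print(np.linalg.matrix_rank(A))         # 2
print(independent)                      # [0, 1]: a maximal linearly independent system
```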

Proposition 14.27 The rank of a matrix is equal to the maximum number of its rows forming a linearly independent system.

Proof. According to Proposition 14.24, the rank of a matrix does not change under transposition, and the rows of the matrix become the columns of the transposed matrix. By Proposition 14.26, the maximum number of columns of the transposed matrix (the former rows of the original one) forming a linearly independent system is equal to its rank, that is, to the rank of the original matrix.

Proposition 14.28 If the determinant of a matrix is equal to zero, then one of its columns (one of its rows) is a linear combination of the remaining columns (rows).

Proof. Let the order of the matrix be equal to $n$. The determinant is the only minor of order $n$ of this square matrix. Since it is equal to zero, the rank of the matrix is less than $n$. Consequently, the system of its $n$ columns (rows) is linearly dependent, that is, one of the columns (one of the rows) is a linear combination of the others.

The results of Propositions 14.15, 14.18 and 14.28 give the following theorem.

Theorem 14.3 The determinant of a matrix is equal to zero if and only if one of its columns (one of its rows) is a linear combination of the remaining columns (rows).

Finding the rank of a matrix by computing all of its minors requires too much computational work. (The reader may check that a fourth-order square matrix has 36 second-order minors.) Therefore a different algorithm is used to find the rank. To describe it, some additional facts will be required.

Definition 14.15 The following operations on a matrix are called elementary transformations:

1) rearrangement of rows or columns;
2) multiplying a row or column by a number other than zero;
3) adding to one of the rows another row multiplied by a number or adding to one of the columns another column multiplied by a number.

Proposition 14.29 During elementary transformations, the rank of the matrix does not change.

Proof. Let the rank of the matrix $A$ be equal to $r$, and let $A'$ be the matrix obtained from $A$ by performing one elementary transformation.

Consider first a permutation of rows. Let $M$ be a minor of the matrix $A$; then the matrix $A'$ has a minor $M'$ that either coincides with $M$ or differs from it by a rearrangement of rows. Conversely, to any minor of $A'$ there corresponds a minor of $A$ that either coincides with it or differs from it in the order of the rows. Therefore, from the fact that all minors of some order in the matrix $A$ are equal to zero it follows that all minors of this order in $A'$ are also equal to zero. And since $A$ has a non-zero minor of order $r$, the matrix $A'$ also has a non-zero minor of order $r$, that is, $\operatorname{rank}A'=\operatorname{rank}A$.

Consider now multiplication of a row by a non-zero number $\lambda$. To a minor $M$ of the matrix $A$ there corresponds a minor $M'$ of the matrix $A'$ that either coincides with $M$ or differs from it in only one row, which is obtained from the corresponding row of $M$ by multiplication by $\lambda\ne0$; in the latter case $M'=\lambda M$. In all cases $M$ and $M'$ are either simultaneously equal to zero or simultaneously different from zero. Hence $\operatorname{rank}A'=\operatorname{rank}A$.
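Proposition 14.29 is easy to check numerically. The sketch below applies the three kinds of elementary row transformations listed in Definition 14.15 to a hypothetical matrix and verifies that the rank is unchanged.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(A))          # 2 (second row is twice the first)

B = A[[1, 0, 2], :]                      # 1) swap two rows
C = A.copy(); C[2, :] *= -5.0            # 2) multiply a row by a non-zero number
D = A.copy(); D[1, :] += 3.0 * A[0, :]   # 3) add a multiple of one row to another

for M in (B, C, D):
    print(np.linalg.matrix_rank(M))      # 2 in every case
```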

Suppose one of the rows of a matrix, say $e$, is a linear combination of its other rows $e_1,e_2,\ldots,e_s$:

$e=\lambda_1e_1+\lambda_2e_2+\ldots+\lambda_se_s,\qquad(3.3.1)$

where $\lambda_1,\ldots,\lambda_s$ are some numbers (some of these numbers, or even all of them, may be equal to zero). Writing $e=(a_1\;a_2\;\ldots\;a_n)$ and $e_i=(a_{i1}\;a_{i2}\;\ldots\;a_{in})$, this means that the following equalities hold between the corresponding elements of the columns:

$a_j=\lambda_1a_{1j}+\lambda_2a_{2j}+\ldots+\lambda_sa_{sj},\quad j=1,2,\ldots,n.\qquad(3.3.2)$

From (3.3.1) it follows that

$\lambda_1e_1+\lambda_2e_2+\ldots+\lambda_se_s-e=0,\qquad(3.3.3)$

i.e. a linear combination of the rows in which not all coefficients are zero (the coefficient of $e$ equals $-1$) is equal to the zero row; such rows are called linearly dependent. If, on the contrary, an equality of the form (3.3.3) is possible only when all of its coefficients are equal to zero, the rows are called linearly independent. Relation (3.3.2) thus shows that if one of the rows is linearly expressed in terms of the others, then the rows are linearly dependent.

It is easy to see the converse as well: if the rows are linearly dependent, then one of them is a linear combination of the other rows.

Indeed, let, for example, $\mu_1e_1+\mu_2e_2+\ldots+\mu_se_s=0$ be an equality of the form (3.3.3) with $\mu_s\ne0$; then $e_s=-(\mu_1/\mu_s)e_1-\ldots-(\mu_{s-1}/\mu_s)e_{s-1}$.

Definition. Let a certain minor $M$ of order $r$ be singled out in a matrix $A$, and let a minor $M'$ of order $r+1$ of the same matrix entirely contain the minor $M$. In this case we say that the minor $M'$ borders the minor $M$ (or that $M'$ is a bordering minor for $M$).

Now we will prove an important lemma.

Lemma on bordering minors. If a minor $M$ of order $r$ of a matrix $A=(a_{ij})$ is different from zero and all minors bordering it are equal to zero, then any row (column) of the matrix $A$ is a linear combination of the rows (columns) that make up $M$.

Proof. Without loss of generality we may assume that the non-zero minor $M$ of order $r$ is located in the upper left corner of the matrix $A=(a_{ij})$:

$M=\begin{vmatrix}a_{11}&\cdots&a_{1r}\\ \vdots& &\vdots\\ a_{r1}&\cdots&a_{rr}\end{vmatrix}\ne0.$

For the first $r$ rows of the matrix $A$ the statement of the lemma is obvious: it is enough to include in the linear combination the same row with a coefficient equal to one and all the other rows with coefficients equal to zero.

Let us now prove that the remaining rows of the matrix $A$ are linearly expressed through the first $r$ rows. To do this we construct a minor of order $r+1$ by adding to the minor $M$ the $k$-th row ($k>r$) and the $l$-th column ($l=1,2,\ldots,n$):

$M'=\begin{vmatrix}a_{11}&\cdots&a_{1r}&a_{1l}\\ \vdots& &\vdots&\vdots\\ a_{r1}&\cdots&a_{rr}&a_{rl}\\ a_{k1}&\cdots&a_{kr}&a_{kl}\end{vmatrix}.$

The resulting minor is equal to zero for all $k$ and $l$. If $l\le r$, it is equal to zero because it contains two identical columns. If $l>r$, the resulting minor is a bordering minor for $M$ and is therefore equal to zero by the hypothesis of the lemma.

Let us expand the minor $M'$ along the elements of its last, $l$-th column:

$a_{1l}A_1+a_{2l}A_2+\ldots+a_{rl}A_r+a_{kl}M=0,$

where $A_1,\ldots,A_r$ are the cofactors of the elements $a_{1l},\ldots,a_{rl}$; these cofactors do not depend on $l$, and the cofactor of $a_{kl}$ equals $M\ne0$. Setting $c_i=-A_i/M$, we get:

$a_{kl}=c_1a_{1l}+c_2a_{2l}+\ldots+c_ra_{rl},\quad l=1,2,\ldots,n.\qquad(3.3.6)$

Expression (3.3.6) means that the $k$-th row of the matrix $A$ is linearly expressed through its first $r$ rows.

Since the values of the minors do not change when a matrix is transposed (by the corresponding property of determinants), everything proved also holds for columns. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is non-zero, and all minors bordering it are equal to zero.

Corollary II. A determinant of order $n$ is equal to zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of rows (columns) for the determinant to be equal to zero was proved earlier as a property of determinants.

Let us prove the necessity. Suppose we are given a square matrix of order $n$ whose only minor of order $n$, its determinant, is equal to zero. It follows that the rank of this matrix is less than $n$, i.e. there is at least one row that is a linear combination of the basis rows of this matrix, and hence the rows of the matrix are linearly dependent.

Let us prove another theorem about the rank of the matrix.

Theorem. The maximum number of linearly independent rows of a matrix is equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

Proof. Let the rank of the matrix $A=(a_{ij})$ be equal to $r$. Then its $r$ basis rows are linearly independent, since otherwise the basis minor would be equal to zero. On the other hand, any $r+1$ or more rows are linearly dependent. Assuming the contrary, we could find, by Corollary II of the previous lemma, a non-zero minor of order greater than $r$, which contradicts the fact that the maximal order of non-zero minors is $r$. Everything proved for rows is also true for columns.

In conclusion, we outline another method for finding the rank of a matrix. The rank of a matrix can be determined by finding a non-zero minor of maximal order.

At first glance, this requires the calculation of a finite, but possibly very large, number of minors of this matrix.

The following theorem allows, however, to introduce significant simplifications into this.

Theorem. If a minor $M$ of order $r$ of a matrix $A$ is non-zero and all minors bordering it are equal to zero, then the rank of the matrix is equal to $r$.

Proof. It is enough to show that, under the conditions of the theorem, any subsystem of $S$ rows of the matrix with $S>r$ is linearly dependent (it will then follow that $r$ is the maximal number of linearly independent rows of the matrix, i.e. that all of its minors of order greater than $r$ are equal to zero).

Assume the contrary: let the rows $e_{i_1},e_{i_2},\ldots,e_{i_S}$ be linearly independent. By the lemma on bordering minors, each of them is linearly expressed through the rows $e_1,\ldots,e_r$ that contain the minor $M$ and that, because $M\ne0$, are linearly independent:

$e_{i_t}=c_{t1}e_1+c_{t2}e_2+\ldots+c_{tr}e_r,\quad t=1,2,\ldots,S.\qquad(3.3.7)$

Now consider the following linear combination:

$\alpha_1e_{i_1}+\alpha_2e_{i_2}+\ldots+\alpha_Se_{i_S},$

or, after substituting (3.3.7),

$\sum_{t=1}^{S}\alpha_t e_{i_t}=\sum_{j=1}^{r}\Bigl(\sum_{t=1}^{S}\alpha_t c_{tj}\Bigr)e_j.\qquad(3.3.8)$

Since $S>r$, the $S$ coefficient rows $(c_{t1},c_{t2},\ldots,c_{tr})$, regarded as vectors of an $r$-dimensional space, are linearly dependent, so there exist numbers $\alpha_1,\ldots,\alpha_S$, not all equal to zero, for which $\sum_{t=1}^{S}\alpha_t c_{tj}=0$ for every $j=1,\ldots,r$. Using (3.3.7) and (3.3.8), we obtain

$\alpha_1e_{i_1}+\alpha_2e_{i_2}+\ldots+\alpha_Se_{i_S}=0,$

which contradicts the linear independence of the rows $e_{i_1},\ldots,e_{i_S}$.

Consequently, our assumption is false, and therefore any $S>r$ rows are linearly dependent under the conditions of the theorem. The theorem is proved.

Let us now consider a rule for calculating the rank of a matrix based on this theorem: the method of bordering minors.

When calculating the rank of a matrix, one moves from minors of lower orders to minors of higher orders. If a non-zero minor of order $r$ has already been found, then only the minors of order $r+1$ that border it need to be calculated. If they are all equal to zero, the rank of the matrix is equal to $r$. This method is also used when we not only calculate the rank of a matrix but also determine which columns (rows) make up its basis minor.
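The method just described can be written out as a short procedure: starting from a non-zero element, keep looking for a non-zero bordering minor of the next order; when none exists, the current order is the rank. The sketch below is one possible implementation (the matrix at the end is a hypothetical example, not the matrix of the worked example that follows); exact `Fraction` arithmetic avoids round-off in the determinants.

```python
from fractions import Fraction
from itertools import product

def det(M):
    """Determinant by Laplace expansion along the first row (small matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def submatrix(A, rows, cols):
    return [[A[i][j] for j in cols] for i in rows]

def rank_by_bordering(A):
    m, n = len(A), len(A[0])
    # a non-zero element is a non-zero first-order minor to start from
    start = next(((i, j) for i, j in product(range(m), range(n)) if A[i][j] != 0), None)
    if start is None:
        return 0                                  # the zero matrix has rank 0
    rows, cols = [start[0]], [start[1]]
    while True:
        for i, j in product(range(m), range(n)):
            if i in rows or j in cols:
                continue
            if det(submatrix(A, rows + [i], cols + [j])) != 0:
                rows.append(i); cols.append(j)    # a non-zero bordering minor was found
                break
        else:                                     # every bordering minor vanished
            return len(rows)

A = [[Fraction(x) for x in row] for row in
     [[2, -4, 3, 1, 0],
      [1, -2, 1, -4, 2],
      [0, 1, -1, 3, 1],
      [4, -7, 4, -4, 5]]]
print(rank_by_bordering(A))   # 3 for this hypothetical matrix
```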

Example. Calculate the rank of the matrix using the bordering minors method

.

Solution. The second-order minor, located in the upper left corner of matrix A, is non-zero:

.

However, all third-order minors bordering it are equal to zero.

Therefore, the rank of the matrix $A$ is equal to two: $\operatorname{rank}A=2$.

The first and second rows and the first and second columns of this matrix are basis ones; the remaining rows and columns are linear combinations of them.

In conclusion, we note the validity of the following properties:

1) the rank of the product of matrices is not greater than the rank of each of the factors;

2) the rank of the product of an arbitrary matrix $A$ on the right or on the left by a non-singular square matrix $Q$ is equal to the rank of the matrix $A$ (a numeric check is sketched below).
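Both properties are easy to check numerically; the sketch below uses hypothetical random matrices generated with numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # a 4x5 matrix of rank 3
B = rng.standard_normal((5, 2))                                  # a 5x2 matrix of rank 2
Q = rng.standard_normal((4, 4))                                   # non-singular with probability 1

r = np.linalg.matrix_rank
print(r(A), r(B), r(A @ B))      # rank(A @ B) <= min(rank A, rank B)
print(r(Q @ A) == r(A))          # True: multiplication by a non-singular Q keeps the rank
```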

Polynomial matrices

Definition. A polynomial matrix, or $\lambda$-matrix, is a rectangular matrix whose elements are polynomials in one variable $\lambda$ with numerical coefficients.

Elementary transformations can be performed on $\lambda$-matrices. These include:

Rearranging two rows (columns);

Multiplying a row (column) by a number other than zero;

Adding to one row (column) another row (column) multiplied by any polynomial.

Two $\lambda$-matrices $A(\lambda)$ and $B(\lambda)$ of the same size are called equivalent if one can pass from the matrix $A(\lambda)$ to $B(\lambda)$ by a finite number of elementary transformations.

Example. Prove the equivalence of the $\lambda$-matrices $A(\lambda)$ and $B(\lambda)$.

1. Swap the first and second columns of the matrix $A(\lambda)$:

.

2. From the second row, subtract the first multiplied by ( ):

.

3. Multiply the second row by (–1) and note that

.

4. Subtracting from the second column the first multiplied by ( ), we get

.

The set of all $\lambda$-matrices of given size splits into disjoint classes of equivalent matrices: matrices that are equivalent to one another form one class, while matrices that are not equivalent belong to different classes.

Each class of equivalent matrices is characterized by a canonical, or normal, $\lambda$-matrix of the given size.

Definition. A canonical, or normal, $\lambda$-matrix of size $m\times n$ is a $\lambda$-matrix whose main diagonal contains the polynomials $E_1(\lambda),E_2(\lambda),\ldots,E_p(\lambda)$, where $p$ is the smaller of the numbers $m$ and $n$ ($p=\min(m,n)$), the polynomials that are not identically zero have leading coefficients equal to 1, and each subsequent polynomial is divisible by the previous one. All elements outside the main diagonal are equal to 0.

From the definition it follows that if among the polynomials there are polynomials of degree zero, then they stand at the beginning of the main diagonal, and if there are zeros, they stand at the end of the main diagonal.

The matrix of the previous example is canonical. The matrix

is also canonical.

Each class of $\lambda$-matrices contains a unique canonical $\lambda$-matrix, i.e. every $\lambda$-matrix is equivalent to a unique canonical matrix, which is called the canonical form, or normal form, of the given matrix.

The polynomials located on the main diagonal of the canonical form of a given $\lambda$-matrix are called the invariant factors of this matrix.

One method of calculating the invariant factors is to reduce the given $\lambda$-matrix to canonical form.

Thus, for the matrix of the previous example, the invariant factors are

, , , .

From the above it follows that the presence of the same set of invariant factors is a necessary and sufficient condition for the equivalence of $\lambda$-matrices.

Reducing a $\lambda$-matrix to canonical form reduces to determining its invariant factors

$E_k(\lambda)=\dfrac{D_k(\lambda)}{D_{k-1}(\lambda)},\qquad D_0(\lambda)\equiv1,\qquad k=1,2,\ldots,r,$

where $r$ is the rank of the $\lambda$-matrix and $D_k(\lambda)$ is the greatest common divisor of its minors of order $k$, taken with the leading coefficient equal to 1.
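The formula above can be applied directly with sympy: compute $D_k(\lambda)$ as the monic greatest common divisor of all order-$k$ minors and then take the successive quotients. The $\lambda$-matrix below is a hypothetical diagonal example (the matrix of the following worked example is not reproduced in the text).

```python
import functools
import itertools
import sympy as sp

lam = sp.Symbol('lamda')
A = sp.Matrix([[lam, 0, 0],
               [0, lam * (lam - 1), 0],
               [0, 0, lam * (lam - 1)**2]])

def gcd_of_minors(M, k):
    """Monic GCD D_k of all order-k minors of the lambda-matrix M (0 if they all vanish)."""
    m, n = M.shape
    minors = [M.extract(list(r), list(c)).det()
              for r in itertools.combinations(range(m), k)
              for c in itertools.combinations(range(n), k)]
    nonzero = [p for p in minors if sp.simplify(p) != 0]
    if not nonzero:
        return sp.Integer(0)
    g = functools.reduce(sp.gcd, nonzero)
    return sp.Poly(g, lam).monic().as_expr()

D = [sp.Integer(1)]                                  # D_0(lamda) = 1
for k in range(1, min(A.shape) + 1):
    D.append(gcd_of_minors(A, k))

E = [sp.factor(sp.cancel(D[k] / D[k - 1])) for k in range(1, len(D))]
print(E)   # invariant factors: lamda, lamda*(lamda - 1), lamda*(lamda - 1)**2
```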

Example. Let the following $\lambda$-matrix be given:

.

Solution. Obviously, the greatest common divisor of the first-order minors is 1, i.e. $D_1(\lambda)=1$.

Let us determine the second-order minors:

, etc.

These data are already enough to draw a conclusion about $D_2(\lambda)$, and therefore $E_2(\lambda)=D_2(\lambda)/D_1(\lambda)$ is determined.

We determine $D_3(\lambda)$, and hence $E_3(\lambda)=D_3(\lambda)/D_2(\lambda)$.

Thus, the canonical form of the given matrix is the following $\lambda$-matrix:

.

A matrix polynomial is an expression of the form

$A(\lambda)=A_0\lambda^S+A_1\lambda^{S-1}+\ldots+A_S,$

where $\lambda$ is a variable and $A_0,A_1,\ldots,A_S$ are square matrices of order $n$ with numerical elements.

If $A_0\ne0$, then $S$ is called the degree of the matrix polynomial, and $n$ is called the order of the matrix polynomial.

Any square polynomial matrix ($\lambda$-matrix) can be represented as a matrix polynomial. Obviously, the converse statement is also true, i.e. any matrix polynomial can be represented as a square $\lambda$-matrix.

The validity of these statements clearly follows from the properties of operations on matrices. Let's look at the following examples:

Example. Represent a polynomial matrix

in the form of a matrix polynomial as follows

.

Example. Matrix polynomial

can be represented as the following polynomial matrix ($\lambda$-matrix):

.
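Since the matrices of the two examples above are not reproduced, here is a sketch with a hypothetical $2\times2$ $\lambda$-matrix showing both directions of the conversion: the $\lambda$-matrix is split into numerical coefficient matrices (a matrix polynomial), and summing $A_k\lambda^{S-k}$ recovers the original matrix.

```python
import sympy as sp

lam = sp.Symbol('lamda')
P = sp.Matrix([[lam**2 + 1, 2 * lam],
               [3,          lam**2 - lam]])

# degree S of the matrix polynomial = highest power of lamda among the entries
S = max(sp.Poly(p, lam).degree() for p in P)

# coefficient matrices A_0, A_1, ..., A_S (A_0 goes with lamda**S)
coeffs = [P.applyfunc(lambda p: sp.Poly(p, lam).nth(S - k)) for k in range(S + 1)]

# summing A_k * lamda**(S - k) recovers the original lambda-matrix
recovered = sum((coeffs[k] * lam**(S - k) for k in range(S + 1)),
                sp.zeros(*P.shape))

print(coeffs)
print((recovered - P).applyfunc(sp.expand))   # the zero matrix
```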

This interchangeability of matrix polynomials and polynomial matrices plays a significant role in the mathematical apparatus of factor and component analysis methods.

Matrix polynomials of the same order can be added, subtracted, and multiplied in the same way as ordinary polynomials with numerical coefficients. One should remember, however, that multiplication of matrix polynomials is, generally speaking, not commutative, since matrix multiplication is not commutative.

Two matrix polynomials are said to be equal if their coefficients are equal, i.e. if the corresponding matrices at the same powers of the variable $\lambda$ coincide.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient at each power of the variable $\lambda$ is equal to the sum (difference) of the coefficients at the same power in the given polynomials.

To multiply one matrix polynomial by another, one multiplies each term of the first matrix polynomial by each term of the second, adds the resulting products and collects like terms.

The degree of a product of matrix polynomials is less than or equal to the sum of the degrees of the factors.

Operations on matrix polynomials can be carried out through operations on the corresponding $\lambda$-matrices.

To add (subtract) matrix polynomials it is enough to add (subtract) the corresponding $\lambda$-matrices. The same applies to multiplication: the $\lambda$-matrix of a product of matrix polynomials is equal to the product of the $\lambda$-matrices of the factors.
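A small sympy check of the last statement, with two hypothetical first-degree matrix polynomials: multiplying them as $\lambda$-matrices gives the same result as multiplying them coefficient-wise and collecting powers of $\lambda$.

```python
import sympy as sp

lam = sp.Symbol('lamda')
A0, A1 = sp.Matrix([[1, 0], [2, 1]]), sp.Matrix([[0, 1], [1, 1]])
B0, B1 = sp.Matrix([[1, 1], [0, 1]]), sp.Matrix([[2, 0], [0, 3]])

A = A0 * lam + A1          # the matrix polynomials, written as lambda-matrices
B = B0 * lam + B1

# product of the lambda-matrices
lhs = (A * B).applyfunc(sp.expand)

# product of the matrix polynomials: collect coefficients at lamda**2, lamda**1, lamda**0
rhs = (A0 * B0 * lam**2 + (A0 * B1 + A1 * B0) * lam + A1 * B1).applyfunc(sp.expand)

print(lhs == rhs)          # True
```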

On the other hand, matrix polynomials $A(\lambda)$ and $B(\lambda)$ of the same order $n$ can be written in the form

$A(\lambda)=A_0\lambda^m+A_1\lambda^{m-1}+\ldots+A_m,\qquad B(\lambda)=B_0\lambda^p+B_1\lambda^{p-1}+\ldots+B_p,$

where $B_0$ is a non-singular matrix.

When $A(\lambda)$ is divided by $B(\lambda)$ there exist a unique right quotient $Q_1(\lambda)$ and a right remainder $R_1(\lambda)$,

$A(\lambda)=Q_1(\lambda)B(\lambda)+R_1(\lambda),$

where the degree of $R_1(\lambda)$ is less than the degree of $B(\lambda)$, or $R_1(\lambda)=0$ (division without remainder), as well as a unique left quotient $Q_2(\lambda)$ and a left remainder $R_2(\lambda)$,

$A(\lambda)=B(\lambda)Q_2(\lambda)+R_2(\lambda),$

where, likewise, the degree of $R_2(\lambda)$ is less than the degree of $B(\lambda)$, or $R_2(\lambda)=0$.
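Right division can be carried out exactly like ordinary polynomial long division, using $B_0^{-1}$ to cancel the leading coefficient of the dividend at each step. The sketch below is an illustrative implementation under that assumption (the concrete polynomials are hypothetical).

```python
import sympy as sp

lam = sp.Symbol('lamda')

def degree(P):
    """Degree of a matrix polynomial given as a lambda-matrix (-1 for the zero matrix)."""
    return max((sp.Poly(e, lam).degree() for e in P if e != 0), default=-1)

def leading_coeff(P, d):
    """Numerical matrix of the coefficients at lamda**d."""
    return P.applyfunc(lambda e: sp.Poly(e, lam).nth(d))

def right_divide(A, B):
    """Return (Q, R) with A = Q*B + R and deg R < deg B (right division, B0 non-singular)."""
    dB = degree(B)
    B0_inv = leading_coeff(B, dB).inv()
    Q, R = sp.zeros(*A.shape), A.applyfunc(sp.expand)
    while degree(R) >= dB:
        dR = degree(R)
        T = leading_coeff(R, dR) * B0_inv * lam**(dR - dB)   # next term of the quotient
        Q = Q + T
        R = (R - T * B).applyfunc(sp.expand)
    return Q, R

A = sp.Matrix([[lam**2 + 1, lam],
               [0,          lam**2 - 2]])
B = sp.Matrix([[lam, 1],
               [0,   lam]])

Q, R = right_divide(A, B)
print(Q)                                         # Matrix([[lamda, 0], [0, lamda]])
print(R)                                         # Matrix([[1, 0], [0, -2]])
print(((Q * B + R) - A).applyfunc(sp.expand))    # the zero matrix
```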

Each row of the matrix $A$ is denoted by $e_i=(a_{i1}\;a_{i2}\;\ldots\;a_{in})$ (for example, $e_1=(a_{11}\;a_{12}\;\ldots\;a_{1n})$, $e_2=(a_{21}\;a_{22}\;\ldots\;a_{2n})$, etc.). Each of them is a row matrix that can be multiplied by a number or added to another row according to the general rules of operations with matrices.

A linear combination of the rows $e_1,e_2,\ldots,e_k$ is the sum of the products of these rows by arbitrary real numbers:
$e=\lambda_1e_1+\lambda_2e_2+\ldots+\lambda_ke_k$, where $\lambda_1,\lambda_2,\ldots,\lambda_k$ are arbitrary numbers (the coefficients of the linear combination).

The rows of the matrix $e_1,e_2,\ldots,e_m$ are called linearly dependent if there exist numbers $\lambda_1,\lambda_2,\ldots,\lambda_m$, not all equal to zero at the same time, such that the linear combination of the rows of the matrix with these coefficients is equal to the zero row:
$\lambda_1e_1+\lambda_2e_2+\ldots+\lambda_me_m=0$, where $0=(0\;0\;\ldots\;0)$.

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the others. Indeed, let, for definiteness, the last coefficient $\lambda_m\ne0$. Then, dividing both sides of the equality by $\lambda_m$, we obtain an expression for the last row as a linear combination of the remaining rows:
$e_m=-(\lambda_1/\lambda_m)e_1-(\lambda_2/\lambda_m)e_2-\ldots-(\lambda_{m-1}/\lambda_m)e_{m-1}$.

If a linear combination of rows is equal to zero if and only if all of its coefficients are equal to zero, i.e. $\lambda_1e_1+\lambda_2e_2+\ldots+\lambda_me_m=0\;\Leftrightarrow\;\lambda_k=0\;\forall k$, then the rows are called linearly independent.

Theorem on the rank of a matrix. The rank of a matrix is equal to the maximal number of its linearly independent rows or columns, through which all of its other rows or columns can be linearly expressed.

Let us prove this theorem. Let a matrix $A$ of size $m\times n$ have rank $r$ ($r(A)\le\min(m;n)$). Consequently, there exists a non-zero minor of order $r$. Every such minor will be called a basis minor. For definiteness, let it be the minor

$\begin{vmatrix}a_{11}&\cdots&a_{1r}\\ \vdots& &\vdots\\ a_{r1}&\cdots&a_{rr}\end{vmatrix}\ne0.$

The rows of this minor will also be called basis rows.

Let us prove that the rows of the matrix $e_1,e_2,\ldots,e_r$ are then linearly independent. Assume the opposite, i.e. that one of these rows, for example the $r$-th, is a linear combination of the others: $e_r=\lambda_1e_1+\lambda_2e_2+\ldots+\lambda_{r-1}e_{r-1}$. Then, if we subtract from the elements of the $r$-th row the elements of the 1st row multiplied by $\lambda_1$, the elements of the 2nd row multiplied by $\lambda_2$, and so on, and finally the elements of the $(r-1)$-th row multiplied by $\lambda_{r-1}$, the $r$-th row becomes the zero row. By the properties of determinants, the determinant above does not change under these operations, yet at the same time it would have to be equal to zero, since it would contain a zero row. The contradiction obtained proves the linear independence of the rows.

Now let us prove that any $r+1$ rows of the matrix are linearly dependent, i.e. that any row can be expressed in terms of the basis rows.

Let us border the previously considered minor with one more row (the $i$-th) and one more column (the $j$-th). As a result we obtain a minor of order $r+1$, which is equal to zero by the definition of the rank.