Sometimes the span of a set of vectors is “smaller” than you expect from the number of vectors, as in the picture below. This means that (at least) one of the vectors is redundant: it can be removed without affecting the span. In the present section, we formalize this idea in the notion of linear independence.
Subsection 3.2.1 The Definition of Linear Independence
Definition
A set of vectors $\{v_1, v_2, \ldots, v_k\}$ is linearly independent if the vector equation
$$x_1v_1 + x_2v_2 + \cdots + x_kv_k = 0$$
has only the trivial solution $x_1 = x_2 = \cdots = x_k = 0$. The set $\{v_1, v_2, \ldots, v_k\}$ is linearly dependent otherwise.
In other words, $\{v_1, v_2, \ldots, v_k\}$ is linearly dependent if there exist numbers $x_1, x_2, \ldots, x_k$, not all equal to zero, such that
$$x_1v_1 + x_2v_2 + \cdots + x_kv_k = 0.$$
This is called a linear dependence relation or equation of linear dependence.
Note that linear dependence and linear independence are notions that apply to a collection of vectors. It does not make sense to say things like “this vector is linearly dependent on these other vectors,” or “this matrix is linearly independent.”
A set of vectors $\{v_1, v_2, \ldots, v_k\}$ is linearly independent if and only if the vector equation
$$x_1v_1 + x_2v_2 + \cdots + x_kv_k = 0$$
has only the trivial solution, if and only if the matrix equation $Ax = 0$ has only the trivial solution, where $A$ is the matrix with columns $v_1, v_2, \ldots, v_k$:
$$A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_k \\ | & | & & | \end{pmatrix}.$$
This is true if and only if $A$ has a pivot position in every column.
Solving the matrix equation $Ax = 0$ will either verify that the columns $v_1, v_2, \ldots, v_k$ are linearly independent, or will produce a linear dependence relation by substituting any nonzero values for the free variables.
(Recall that $Ax = 0$ has a nontrivial solution if and only if $A$ has a column without a pivot: see this observation in Section 3.1.)
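To see this computationally, here is a minimal sketch using SymPy (the matrix below is an illustrative choice, not an example from the text): row reducing $A$ reveals whether every column has a pivot, and any nonzero null space vector packages a linear dependence relation among the columns.

```python
import sympy as sp

# Illustrative columns v1, v2, v3 (not from the text); note v3 = 2*v2.
A = sp.Matrix([[1, 2, 4],
               [0, 1, 2],
               [1, 0, 0]])

rref_A, pivots = A.rref()
print(len(pivots) == A.cols)  # False: some column has no pivot

# Each null space vector gives coefficients x with
# x1*v1 + x2*v2 + x3*v3 = 0 (a linear dependence relation).
for x in A.nullspace():
    print(x.T)  # Matrix([[0, -2, 1]]), i.e. -2*v2 + v3 = 0
```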
Suppose that $A$ has more columns than rows. Then $A$ cannot have a pivot in every column (it has at most one pivot per row), so its columns are automatically linearly dependent.
A wide matrix (a matrix with more columns than rows) has linearly dependent columns.
For example, four vectors in $\mathbb{R}^3$ are automatically linearly dependent. Note that a tall matrix (a matrix with more rows than columns) may or may not have linearly independent columns.
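As a quick numerical illustration of the wide matrix fact (the vectors are my own example, not from the text), the rank of a matrix is at most its number of rows, so a $3\times 4$ matrix can never have four independent columns:

```python
import numpy as np

# Four vectors in R^3 stacked as the columns of a wide (3x4) matrix.
A = np.array([[1, 0, 2, 1],
              [0, 1, 1, 3],
              [1, 1, 0, 2]])

# The rank is at most 3, strictly less than the number of columns,
# so the columns are automatically linearly dependent.
print(np.linalg.matrix_rank(A))  # 3
```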
Facts about linear independence
Two vectors are linearly dependent if and only if they are collinear, i.e., one is a scalar multiple of the other.
Any set containing the zero vector is linearly dependent.
If a subset of $\{v_1, v_2, \ldots, v_k\}$ is linearly dependent, then $\{v_1, v_2, \ldots, v_k\}$ is linearly dependent as well.
If $v_1 = cv_2$, then $v_1 - cv_2 = 0$, so $\{v_1, v_2\}$ is linearly dependent. In the other direction, if $x_1v_1 + x_2v_2 = 0$ with $x_1 \neq 0$ (say), then $v_1 = -\frac{x_2}{x_1}\,v_2$.
It is easy to produce a linear dependence relation if one vector is the zero vector: for instance, if $v_1 = 0$, then
$$1\cdot v_1 + 0\cdot v_2 + \cdots + 0\cdot v_k = 0.$$
After reordering, we may suppose that $\{v_1, v_2, \ldots, v_r\}$ is linearly dependent, with $r < k$. This means that there is an equation of linear dependence
$$x_1v_1 + x_2v_2 + \cdots + x_rv_r = 0,$$
with at least one of $x_1, x_2, \ldots, x_r$ nonzero. This is also an equation of linear dependence among $\{v_1, v_2, \ldots, v_k\}$, since we can take the coefficients of $v_{r+1}, \ldots, v_k$ to all be zero.
With regard to the first fact, note that the zero vector is a multiple of any vector, so it is collinear with any other vector. Hence facts 1 and 2 are consistent with each other.
Subsection 3.2.2 Criteria for Linear Independence
In this subsection we give two criteria for a set of vectors to be linearly independent. Keep in mind, however, that the actual definition is above.
Theorem
A set of vectors is linearly dependent if and only if one of the vectors is in the span of the other ones.
Any such vector may be removed without affecting the span.
Suppose, for instance, that $v_3$ is in $\operatorname{Span}\{v_1, v_2, v_4\}$, so we have an equation like
$$v_3 = 2v_1 - \tfrac{1}{2}v_2 + 6v_4.$$
We can subtract $v_3$ from both sides of the equation to get
$$0 = 2v_1 - \tfrac{1}{2}v_2 - v_3 + 6v_4.$$
This is a linear dependence relation.
In this case, any linear combination of $v_1, v_2, v_3, v_4$ is already a linear combination of $v_1, v_2, v_4$:
$$x_1v_1 + x_2v_2 + x_3v_3 + x_4v_4 = x_1v_1 + x_2v_2 + x_3\bigl(2v_1 - \tfrac{1}{2}v_2 + 6v_4\bigr) + x_4v_4 = (x_1 + 2x_3)v_1 + \bigl(x_2 - \tfrac{1}{2}x_3\bigr)v_2 + (x_4 + 6x_3)v_4.$$
Therefore, $\operatorname{Span}\{v_1, v_2, v_3, v_4\}$ is contained in $\operatorname{Span}\{v_1, v_2, v_4\}$. Any linear combination of $v_1, v_2, v_4$ is also a linear combination of $v_1, v_2, v_3, v_4$ (with the $v_3$-coefficient equal to zero), so $\operatorname{Span}\{v_1, v_2, v_4\}$ is also contained in $\operatorname{Span}\{v_1, v_2, v_3, v_4\}$, and thus they are equal.
In the other direction, if we have a linear dependence relation like
$$2v_1 - \tfrac{1}{2}v_2 - v_3 + 6v_4 = 0,$$
then we can move any nonzero term to the left side of the equation and divide by its coefficient:
$$v_3 = 2v_1 - \tfrac{1}{2}v_2 + 6v_4.$$
This shows that $v_3$ is in $\operatorname{Span}\{v_1, v_2, v_4\}$.
We leave it to the reader to generalize this proof for any set of vectors.
Warning
In a linearly dependent set it is not generally true that any vector is in the span of the others, only that at least one of them is.
For example, the set $\left\{\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}2\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix}\right\}$ is linearly dependent, but $\begin{pmatrix}0\\1\end{pmatrix}$ is not in the span of the other two vectors. Also see the figure below.
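A short check of this example, sketched in SymPy and assuming the reconstructed vectors above: the three vectors form a dependent set, yet asking whether the third is a combination of the first two yields an inconsistent system.

```python
import sympy as sp

v1, v2, v3 = sp.Matrix([1, 0]), sp.Matrix([2, 0]), sp.Matrix([0, 1])

# The set {v1, v2, v3} is linearly dependent (2*v1 - v2 = 0):
A = sp.Matrix.hstack(v1, v2, v3)
print(len(A.rref()[1]))  # 2 pivots for 3 columns => dependent

# But v3 is not in Span{v1, v2}: this linear system has no solution.
x1, x2 = sp.symbols('x1 x2')
print(sp.linsolve((sp.Matrix.hstack(v1, v2), v3), x1, x2))  # EmptySet
```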
The previous theorem makes precise in what sense a set of linearly dependent vectors is redundant.
Theorem (Increasing Span Criterion)
A set of vectors $\{v_1, v_2, \ldots, v_k\}$ is linearly independent if and only if, for every $j$, the vector $v_j$ is not in $\operatorname{Span}\{v_1, v_2, \ldots, v_{j-1}\}$.
It is equivalent to show that $\{v_1, v_2, \ldots, v_k\}$ is linearly dependent if and only if $v_j$ is in $\operatorname{Span}\{v_1, v_2, \ldots, v_{j-1}\}$ for some $j$. The “if” implication is an immediate consequence of the previous theorem. Suppose then that $\{v_1, v_2, \ldots, v_k\}$ is linearly dependent. This means that some $v_j$ is in the span of the others. Choose the largest such $j$. We claim that this $v_j$ is in $\operatorname{Span}\{v_1, v_2, \ldots, v_{j-1}\}$. If not, then
$$v_j = x_1v_1 + x_2v_2 + \cdots + x_{j-1}v_{j-1} + x_{j+1}v_{j+1} + \cdots + x_kv_k,$$
with not all of $x_{j+1}, \ldots, x_k$ equal to zero. Suppose for simplicity that $x_k \neq 0$. Then we can rearrange:
$$v_k = \frac{1}{x_k}\bigl(v_j - x_1v_1 - x_2v_2 - \cdots - x_{j-1}v_{j-1} - x_{j+1}v_{j+1} - \cdots - x_{k-1}v_{k-1}\bigr).$$
This says that $v_k$ is in the span of $v_1, v_2, \ldots, v_{k-1}$, which contradicts our assumption that $v_j$ is the last vector in the span of the others.
We can rephrase this as follows:
If you build a set of vectors by adding one vector at a time, and if the span gets bigger every time you add a vector, then your set is linearly independent.
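This rephrasing suggests a simple incremental test, sketched below in NumPy (the function name and example vectors are my own, not from the text): append vectors one at a time and check that the rank, which measures the dimension of the span, strictly increases at every step.

```python
import numpy as np

def independent_by_increasing_span(vectors):
    """True if the span gets bigger each time a vector is added."""
    A = np.empty((len(vectors[0]), 0))
    rank = 0
    for v in vectors:
        A = np.column_stack([A, v])
        new_rank = np.linalg.matrix_rank(A)
        if new_rank == rank:  # span did not grow: this vector lies in
            return False      # the span of the previous ones
        rank = new_rank
    return True

# Illustrative vectors in R^3:
print(independent_by_increasing_span([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))  # True
print(independent_by_increasing_span([(1, 0, 0), (2, 0, 0), (0, 1, 0)]))  # False
```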
Subsection 3.2.3 Pictures of Linear Independence
A set containing one vector $\{v\}$ is linearly independent when $v \neq 0$, since $xv = 0$ implies $x = 0$.
A set of two noncollinear vectors is linearly independent:
Neither is in the span of the other, so we can apply the first criterion.
The two vectors below are linearly independent because they are not collinear.
The three vectors below are linearly independent: the span got bigger when we added the second vector, then again when we added the third, so we can apply the increasing span criterion.
The three coplanar vectors below are linearly dependent:
Note that three vectors are linearly dependent if and only if they are coplanar. Indeed, $\{v_1, v_2, v_3\}$ is linearly dependent if and only if one vector is in the span of the other two, which is a plane (or a line) (or $\{0\}$).
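In $\mathbb{R}^3$ the coplanarity observation gives a determinant test, sketched here with illustrative vectors of my own choosing: three vectors are coplanar, hence linearly dependent, exactly when the $3\times 3$ matrix having them as columns has determinant zero.

```python
import numpy as np

# Three coplanar vectors in R^3: the third is the sum of the first two.
u, v, w = (1, 0, 1), (0, 1, 1), (1, 1, 2)
A = np.column_stack([u, v, w])

# det(A) == 0  <=>  columns coplanar  <=>  columns linearly dependent.
print(np.isclose(np.linalg.det(A), 0.0))  # True
```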
The four vectors below are linearly dependent: they are the columns of a wide matrix. Note, however, that one of them is not contained in the span of the other three. See the warning above.
Subsection 3.2.4 Linear Dependence and Free Variables
In light of this important note and this criterion, it is natural to ask which columns of a matrix are redundant, i.e., which we can remove without affecting the column span.
Theorem
Let $v_1, v_2, \ldots, v_k$ be vectors in $\mathbb{R}^n$, and consider the matrix
$$A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_k \\ | & | & & | \end{pmatrix}.$$
Then we can delete the columns of $A$ without pivots (the columns corresponding to the free variables) without changing $\operatorname{Span}\{v_1, v_2, \ldots, v_k\}$.
The pivot columns are linearly independent, so we cannot delete any more columns without changing the span.
If the matrix is in reduced row echelon form, say
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},$$
then the column without a pivot is visibly in the span of the pivot columns:
$$\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + 0\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},$$
and the pivot columns are linearly independent:
$$\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = x_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_3 \\ 0 \end{pmatrix} \quad\implies\quad x_1 = x_3 = 0.$$
If the matrix is not in reduced row echelon form, then we row reduce:
$$A = \begin{pmatrix} 1 & 2 & 2 \\ 2 & 4 & 5 \\ 2 & 4 & 5 \end{pmatrix} \quad\xrightarrow{\text{RREF}}\quad \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
The following two vector equations have the same solution set, as they come from row-equivalent matrices (see Section 2.2):
$$x_1\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} + x_2\begin{pmatrix} 2 \\ 4 \\ 4 \end{pmatrix} + x_3\begin{pmatrix} 2 \\ 5 \\ 5 \end{pmatrix} = 0 \qquad\qquad x_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_2\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = 0.$$
We conclude that
$$\begin{pmatrix} 2 \\ 4 \\ 4 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \quad\text{so}\quad \operatorname{Span}\left\{\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 4 \end{pmatrix}, \begin{pmatrix} 2 \\ 5 \\ 5 \end{pmatrix}\right\} = \operatorname{Span}\left\{\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 5 \\ 5 \end{pmatrix}\right\},$$
and that
$$x_1\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} + x_3\begin{pmatrix} 2 \\ 5 \\ 5 \end{pmatrix} = 0$$
has only the trivial solution.
Note that it is necessary to row reduce $A$ to find which are its pivot columns. However, the span of the columns of the row reduced matrix is generally not equal to the span of the columns of $A$: one must use the pivot columns of the original matrix. See the theorem in Section 3.4 for a restatement of the above theorem.
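The caveat above translates directly into code. In this minimal SymPy sketch (with an illustrative matrix of my own), row reduction is used only to locate the pivot positions; the spanning columns are then taken from the original matrix, not from its reduced form.

```python
import sympy as sp

# Illustrative matrix (not from the text); column 2 is twice column 1.
A = sp.Matrix([[1, 2, 2],
               [2, 4, 5],
               [2, 4, 5]])

rref_A, pivots = A.rref()
print(pivots)  # (0, 2): the first and third columns have pivots

# Keep the pivot columns of A itself; the columns of rref_A generally
# span a different subspace than the columns of A.
basis = sp.Matrix.hstack(*[A[:, j] for j in pivots])
print(basis)  # spans the same subspace as all three columns of A
```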
There are multiple different spans of vectors associated to any given matrix $A$. For instance, there is the span of the columns, and there is also the solution set of $Ax = 0$, which is itself a span. Each of these spans has a dimension, and these dimensions can be different. Therefore we do not speak of the dimension of a matrix, since it is ambiguous which dimension is being referred to.
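For instance (a small illustrative computation, not from the text), a $2\times 3$ matrix can have a 2-dimensional column span while the solution set of $Ax = 0$ is only 1-dimensional:

```python
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1]])

col_dim = np.linalg.matrix_rank(A)  # dimension of the column span
null_dim = A.shape[1] - col_dim     # dimension of the solution set of Ax = 0
print(col_dim, null_dim)            # 2 1
```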