
I've recently gotten into linear algebra again after a long break. I've been watching Gilbert Strang's lectures and reading his book. I understand the concepts of linear independence, basis, and span of vectors. I'm also aware that $n$ linearly independent vectors span all of $\Bbb R^n$. However, it's not clear to me, intuitively, that (for example) two linearly independent vectors give every possible vector (i.e. span the plane) in $\Bbb R^2$. If I have a $2 \times 2$ matrix that consists of the following vectors:$$\begin{pmatrix}1\\0\\\end{pmatrix}\begin{pmatrix}0\\1\\\end{pmatrix}$$

Then it's clear that I can get any other vector. However, it's not as clear if, say, the two vectors are: $$\begin{pmatrix}1\\9\\\end{pmatrix}\begin{pmatrix}11\\25\\\end{pmatrix}$$

My idea was that I can reduce this to the identity matrix; therefore, as in the first example, the linear combinations will give me every possible vector. But I'm not sure if I'm allowed to do that. Is my idea correct? Is there a more intuitive way to understand this?

Edit: to clarify what I mean by "allowed": does reducing the matrix $A$ to its reduced row echelon form $R$ and then taking linear combinations (of $R$) yield the same result as taking linear combinations (of $A$) without reducing it first? If so, why?


5 Answers


It's not at all obvious that your two vectors span all of the plane, so this is a great question.

A usual proof goes like this:

"The span of those vectors is a 2-dimensional subspace of a 2-dimensional space. Hence, by a decomposition theorem usually proved pretty early, it must equal the whole space."

That's not very helpful, even if you know the decomposition theorem I'm talking about. So here's an alternative approach.

If your two vectors both have $0$ in the first entry, then they are in fact not independent. So at least ONE of them has a nonzero first entry. Call that $a$, and the other one $b$. In your example,$$ a = \pmatrix{1\\9}, b = \pmatrix{11\\25} $$Now form a linear combination of $a$ and $b$ that looks like this:

$$ a' = \frac{1}{a_1} a, b' = b. \tag{1} $$Now $a'$ has a $1$ as its first entry. Observe that any combination of $a'$ and $b'$ is ALSO a combination of $a$ and $b$ (just substitute in formula 1).

In your example, $a' = a, b' = b$, because $a_1$ just happens to be $1$.

Now let $a'' = a'$ and$$ b'' = b' - b'_1 a' = \pmatrix{11\\25} - 11 \pmatrix{1\\9} = \pmatrix{0\\-74} $$

Now let's get that bottom entry of $b''$ nicer: replace$$ a''' = a''; b''' = \frac{1}{b''_2}b'' = \frac{1}{-74} \pmatrix{0\\-74}= \pmatrix{0\\1} $$

And finally, let $$ a'''' = a''' - a'''_2 b''' = \pmatrix{1\\9} - 9 \pmatrix{0\\1} = \pmatrix{1\\0}. $$

And now we have shown that the two standard basis vectors can be written as linear combinations of $a$ and $b$, and therefore (by back-substituting), any combination of the standard basis vectors can also be written as a combination of $a$ and $b$. Hence all of $\Bbb R^2$ is in the span of $a$ and $b$.
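The four steps above can be checked mechanically. Here is a small Python sketch (using exact `Fraction` arithmetic; the coefficient-tracking tuples are my own bookkeeping, not part of the original argument) that carries out the same elimination while recording each working vector as a combination of the original $a$ and $b$:

```python
from fractions import Fraction

# a and b are the question's two vectors; the (s, t) tuples record each
# working vector as s*a + t*b in terms of the ORIGINAL a and b.
a = [Fraction(1), Fraction(9)]
b = [Fraction(11), Fraction(25)]
a_c = (Fraction(1), Fraction(0))   # a = 1*a + 0*b
b_c = (Fraction(0), Fraction(1))   # b = 0*a + 1*b

# Step 1: a' = (1/a_1) a  (here a_1 = 1, so nothing changes).
s = 1 / a[0]
a = [s * x for x in a]
a_c = (s * a_c[0], s * a_c[1])

# Step 2: b'' = b' - b'_1 a'  (zero out b's first entry).
t = b[0]
b = [b[i] - t * a[i] for i in range(2)]
b_c = (b_c[0] - t * a_c[0], b_c[1] - t * a_c[1])

# Step 3: b''' = (1/b''_2) b''  (make b's second entry 1).
s = 1 / b[1]
b = [s * x for x in b]
b_c = (s * b_c[0], s * b_c[1])

# Step 4: a'''' = a''' - a'''_2 b'''  (zero out a's second entry).
t = a[1]
a = [a[i] - t * b[i] for i in range(2)]
a_c = (a_c[0] - t * b_c[0], a_c[1] - t * b_c[1])

print(a, a_c)  # [1, 0] with coefficients (-25/74, 9/74)
print(b, b_c)  # [0, 1] with coefficients (11/74, -1/74)
```

Each step only ever forms combinations of the current pair, so the tracked coefficients are an explicit proof that $(1,0)$ and $(0,1)$ lie in the span of $a$ and $b$.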


Row reducing to the identity matrix certainly gives you one way to think about it. In this case, we could row-reduce as follows: taking the two vectors as rows, we have$$ \left[\begin{array}{cc|cc}1 & 9&1&0\\ 11 & 25&0&1 \end{array}\right] \leadsto \left[\begin{array}{cc|cc}1 & 0&-25/74 & 9/74\\ 0 & 1 & 11/74 & -1/74 \end{array}\right]. $$You probably already know that the matrix on the right is the inverse of the matrix on the left, but the important point here is that the entries on the right keep track of the row operations performed in order to produce the identity matrix on the left. In particular, we have$$ (1,0) = \frac{-25}{74}(1,9) + \frac 9{74}(11,25), \\ (0,1) = \frac{11}{74}(1,9) - \frac 1{74}(11,25). $$With that established: if we want to write the vector $(a,b)$ as a linear combination of the vectors that we started with, then we have$$ (a,b) = a(1,0) + b(0,1) = a\left[\frac{-25}{74}(1,9) + \frac 9{74}(11,25) \right] + b\left[ \frac{11}{74}(1,9) - \frac 1{74}(11,25)\right]\\ = \left[-\frac{25}{74}a + \frac{11}{74}b \right](1,9) + \left[\frac{9}{74}a - \frac{1}{74}b \right](11,25). $$
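A sketch of the same bookkeeping in Python, assuming exact rational arithmetic is wanted; here the $2\times 2$ inverse is computed from the determinant formula rather than by row reduction, but it produces the same coefficient matrix:

```python
from fractions import Fraction

# The two rows we start with (the question's vectors).
u = (Fraction(1), Fraction(9))
v = (Fraction(11), Fraction(25))

# For M = [[p, q], [r, s]], the inverse is (1/det) * [[s, -q], [-r, p]].
det = u[0] * v[1] - u[1] * v[0]            # 1*25 - 9*11 = -74
inv = [[ v[1] / det, -u[1] / det],         # row: [-25/74,  9/74]
       [-v[0] / det,  u[0] / det]]         # row: [ 11/74, -1/74]

# Row i of the inverse holds the coefficients writing e_i in terms of u, v.
e1 = tuple(inv[0][0] * u[j] + inv[0][1] * v[j] for j in range(2))
e2 = tuple(inv[1][0] * u[j] + inv[1][1] * v[j] for j in range(2))
print(e1)  # equals (1, 0)
print(e2)  # equals (0, 1)
```

Recovering the standard basis vectors this way is exactly the content of the two displayed equations above.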


Rather than looking at the algebra explicitly, let's look at the geometry to build your intuition as to why any two linearly independent vectors can be used to span the plane.

Firstly, let's construct a name for each point using the canonical basis vectors in a geometric fashion that more readily generalizes. Assign an origin and draw two perpendicular lines through it; call them $X$ and $Y$. Mark a unit length on each line. Now choose any point in the plane. Through that point, draw a line parallel to $X$ and a line parallel to $Y$; each will intersect the other axis ($Y$ and $X$ respectively) at a unique point. Based on the unit distance we marked on that axis, we can measure its distance from the origin.

To make this more concrete: if we use the usual $x$-$y$ plane, then the lines $x=3$ and $y=2$ will intersect at $(3,2)$. Each of these lines is parallel to one axis and intersects the other axis at a unique point. These give the scalar components of our coordinates.

But now, note two things: the choice of where we marked the unit distance was arbitrary, and we didn't require orthogonality, just that lines parallel to one axis intersect lines parallel to the other. Said another way, we only need the two direction vectors to be linearly independent. You can draw these lines in the plane as usual and see that the construction works with the two non-canonical vectors you used as your example. I think the basis $(1,2)$ and $(1,1)$ makes a nice example to draw as well.


It turns out that any two $n$-dimensional vector spaces over the same field are isomorphic.

Two linearly independent vectors span a $2$-dimensional space; inside $\Bbb R^2$, that space must be all of $\Bbb R^2$.

Informally, you can't fit an $n$-dimensional space as a proper subspace of $\Bbb F^n$.


Let's say that your generic vector is $(x,y)$.

If the two linearly independent vectors are $(1,0)$ and $(0,1)$, then it's easy:$$\begin{pmatrix} x \\ y \end{pmatrix}=x\begin{pmatrix} 1 \\ 0 \end{pmatrix}+y\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$If the two linearly independent vectors are $(1,9)$ and $(11,25)$, then you can solve the system$$\begin{pmatrix} x \\ y \end{pmatrix}=a\begin{pmatrix} 1 \\ 9 \end{pmatrix}+b\begin{pmatrix} 11 \\ 25 \end{pmatrix}\quad \text{i.e.}\quad \begin{cases}a+11b=x \\ 9a+25b=y\end{cases}$$You can solve this system in several ways, e.g. by Gaussian elimination. In any case the solution is:$$a=-\frac{25}{74}x+\frac{11}{74}y,\qquad b=\frac{9}{74}x-\frac{1}{74}y$$For example, if $x=3$ and $y=5$, then:\begin{align*} a &= -\frac{75}{74}+\frac{55}{74}=-\frac{10}{37}\\ b&=\frac{27}{74}-\frac{5}{74}=\frac{11}{37} \end{align*}Indeed:$$-\frac{10}{37}\begin{pmatrix} 1 \\ 9 \end{pmatrix} +\frac{11}{37}\begin{pmatrix} 11 \\ 25 \end{pmatrix} =\begin{pmatrix}-\frac{10}{37} +\frac{121}{37} \\ -\frac{90}{37} + \frac{275}{37} \end{pmatrix}=\begin{pmatrix}3 \\ 5 \end{pmatrix} $$

