

Question

Suppose we want to find a basis for the vector space $\{0\}$.

I know that the answer is that the only basis is the empty set.

Is this answer a definition itself, or is it a result of the definitions of linearly independent/dependent sets and spanning/generating sets? If it is a result, would you mind stating the definitions from which this answer can be deduced?


Useful Links

These are the links that I found useful for answering this question. It needs some elementary background from mathematical logic. You can learn it by spending a few hours on this Wikipedia page.

Link 1, Link 2, Link 3, Link 4, Link 5, Link 6


4 Answers


The standard definition of basis in vector spaces is:


$\mathcal B$ is a basis of a space $X$ if:

  • $\mathcal B$ is linearly independent.
  • The span of $\mathcal B$ is $X$.

You can easily show both of these statements are true when $X=\{0\}$ and $\mathcal B= \{\}$. Again, you have to look at the definitions:

  • Is $\{\}$ linearly independent? A set $A$ is linearly independent if, for every nonempty finite subset $\{a_1,a_2,\dots, a_n\}$, the equation $$\alpha_1a_1 + \dots + \alpha_n a_n=0$$ implies $\alpha_i=0$ for all $i$. This condition is satisfied automatically (vacuously) in the case of the empty set. This part may be difficult to understand, but since there is no nonempty finite collection of vectors from $\{\}$, any statement about nonempty finite collections of vectors from $\{\}$ must be true: any such statement includes the assumption that a nonempty finite collection exists, which is false, so the statement has the form $F\to A$ and is automatically true. This means $\{\}$ is linearly independent.

  • Is the span of $\{\}$ equal to $\{0\}$? The span of a set $A\subseteq X$ is defined as the smallest vector subspace of $X$ that contains $A$. Since every vector subspace contains $\{\}$, and $\{0\}$ is the smallest vector subspace of all, $\{0\}$ must be the span of $\{\}$.


Alternatively, the span of $A$ is the intersection of all vector subspaces that contain $A$. Again, it should be obvious that this implies that the span of $\{\}$ is $\{0\}$.
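The vacuous-truth argument can be checked mechanically in a small finite space. The sketch below is illustrative and not part of the answer above: the choice of GF(2) and of dimension 2 is mine, made so that the definition of linear independence can be verified by exhaustive search.

```python
from itertools import combinations, product

FIELD = (0, 1)  # the two scalars of GF(2); vectors live in GF(2)^2

def is_linearly_independent(vectors):
    """Check the definition directly: every nontrivial linear
    combination of a nonempty finite subset must be nonzero."""
    for r in range(1, len(vectors) + 1):          # nonempty subsets only
        for subset in combinations(vectors, r):
            for coeffs in product(FIELD, repeat=r):
                if all(c == 0 for c in coeffs):
                    continue                       # skip the trivial combination
                combo = tuple(sum(c * v[i] for c, v in zip(coeffs, subset)) % 2
                              for i in range(2))
                if combo == (0, 0):
                    return False
    return True

# For the empty set the outer loop never runs, so the function
# returns True without checking anything: vacuous truth in code.
print(is_linearly_independent([]))          # True
print(is_linearly_independent([(0, 0)]))    # False: 1*(0,0) = 0
```

The empty set passes because the loop over nonempty subsets has nothing to iterate over, which is exactly the $F\to A$ situation described above.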


Definition 1. The span of a set of vectors $\{v_1,\ldots,v_m\}$ is the set of all linear combinations of $v_1,\ldots,v_m$. In other words, $$\text{span}\{v_1,\ldots,v_m\}=\{a_1v_1+\cdots+a_mv_m : a_1,\ldots,a_m\in\mathbb{F}\}.$$

This definition leaves out the case of $\{\}$: there is no vector to begin with! So we need to handle that case separately. But how should we define the span of $\{\}$? As $\{\}$ itself? As some arbitrary space? Here is the rationale for defining $\text{span}\{\}$ to be $\{0\}$:

Proposition. Let $V$ be a vector space. Let $S$ be a finite subset of $V$ that spans $V$. One can obtain a basis of $V$ by deleting elements from $S$.

Only by defining $\text{span}\{\}=\{0\}$ does this proposition hold for $V=\{0\}$.

To summarize, when our definition of span is as in Definition 1, we need the following extra definitions

  1. The empty set is independent;
  2. The span of the empty set is the zero space $\{0\}$

for the above proposition to be true for $V=\{0\}$. As a consequence of our definition, the empty set is a basis for the zero vector space.

(Note: my definition of linear independence is:

A set of vectors $\{v_1,\ldots,v_m\}$ is said to be linearly independent if the equation $a_1v_1+\cdots+a_mv_m=0$ always implies $a_1=\cdots=a_m=0$. Otherwise, it is said to be linearly dependent.

And I do not define the "empty sum", so the case $\{\}$ is left undetermined.)


Definition 2. The span of a set of vectors $\{v_1,\ldots,v_m\}$ is the smallest vector space containing $v_1,\ldots,v_m$.

Under this definition, indeed we do not need to additionally define the span for $\{\}$, as @5xum pointed out.


Definition 1 is more common, since elements of the set $\text{span}\{v_1,\ldots,v_m\}$ are described explicitly. The drawback of Definition 2 is that you don't know what the elements in the span look like, and you need to prove that the span of $\{v_1,\ldots,v_m\}$ indeed consists of linear combinations of $v_1,\ldots,v_m$.
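That the two definitions agree, including on the empty set, can be verified by brute force in a finite space. The sketch below is my own illustration, not from the answer; it works over GF(2)^2, where every subset and every subspace can be enumerated.

```python
from itertools import combinations, product

FIELD = (0, 1)
V = list(product(FIELD, repeat=2))  # the four vectors of GF(2)^2

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def is_subspace(S):
    # Over GF(2), a subset is a subspace iff it contains 0 and is
    # closed under addition (closure under scalars 0 and 1 is free).
    return (0, 0) in S and all(add(u, v) in S for u in S for v in S)

def span_def1(A):
    """Definition 1: all linear combinations, with the empty sum
    taken to be the zero vector."""
    out = set()
    for coeffs in product(FIELD, repeat=len(A)):
        combo = (0, 0)
        for c, v in zip(coeffs, A):
            if c:
                combo = add(combo, v)
        out.add(combo)
    return out

def span_def2(A):
    """Definition 2: intersection of all subspaces containing A."""
    out = set(V)
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            S = set(S)
            if is_subspace(S) and set(A) <= S:
                out &= S
    return out

# The two definitions agree on every tested subset, empty or not:
for A in ([], [(1, 0)], [(1, 0), (0, 1)]):
    assert span_def1(A) == span_def2(A)
print(span_def1([]))  # {(0, 0)}
```

Note that `span_def1` handles $A=\{\}$ only because the empty sum is explicitly initialized to the zero vector, which is precisely the extra convention Definition 1 requires; `span_def2` needs no such special case.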


A basis has several equivalent definitions, one of which is:

  1. A basis of a vector space is a minimal generating set

So keeping that in mind, if we look at $V = \{0\}$, the only non-empty subset of this vector space is $B = \{0\}$. This set $B$ is linearly dependent and thus cannot be a basis. Note, however, that $B$ is a generating set of $V$. By the existence of a basis for every vector space, $V$ must have a basis, and any basis would have to be a subset of $B$. Since removing from a generating set an element that is a linear combination of the others does not change the span of that set, $\emptyset$ is a generating set.

Now the definition of a linearly dependent set, in crude language, is: in a vector space $V$, a subset $A$ of $V$ is said to be linearly dependent if some element of $A$ can be written as a finite linear combination of the rest of the elements.

So, consider the set $\emptyset$: there is no element in it that can be written as a finite linear combination of the other elements, hence it is not linearly dependent, and therefore it is linearly independent.

Therefore we see that $\emptyset$ is linearly independent and generates $V$, and hence is a basis.
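The pruning step above (delete an element lying in the span of the rest; the span never changes) can be sketched by brute force. This is my own illustration over GF(2)^2, not part of the answer; the helper names are hypothetical.

```python
from itertools import product

FIELD = (0, 1)  # scalars of GF(2); vectors live in GF(2)^2

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def span(vectors):
    # All linear combinations over GF(2); the empty sum is (0, 0).
    out = set()
    for coeffs in product(FIELD, repeat=len(vectors)):
        combo = (0, 0)
        for c, v in zip(coeffs, vectors):
            if c:
                combo = add(combo, v)
        out.add(combo)
    return out

def prune_to_basis(generating_set):
    """Delete elements lying in the span of the rest; the span is
    unchanged at every step, and what remains is a basis."""
    basis = list(generating_set)
    i = 0
    while i < len(basis):
        rest = basis[:i] + basis[i + 1:]
        if basis[i] in span(rest):
            basis = rest   # removable: span is unchanged
            i = 0          # restart the scan
        else:
            i += 1
    return basis

# For V = {0}: the generating set {0} prunes down to the empty set,
# because 0 lies in span({}) = {0}.
print(prune_to_basis([(0, 0)]))  # []
```

The zero-space case works without any special handling: $0 \in \text{span}\{\}$, so the single element of $\{0\}$ is removable and the empty set is what survives as the basis.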


To answer the question of why $\text{Span}\{\}=\{0\}$ is true, I considered the following argument for myself.

I think it all traces back to the operation of addition.

Addition is a map defined as follows

$$ \begin{align} (\cdot+\cdot): & V \times V \to V \\ & (u,v) \mapsto (u+v) \end{align}$$

with the commutative and associative properties

$$ \begin{align} (u+v) &= (v+u) \\ ((u+v)+w) &= (u+(v+w)) \end{align} $$

so according to this definition, whenever we talk about addition we must provide two inputs to get one output.

From a programming point of view it is useful to have outputs when we have one or no inputs (see the Plus function in the Wolfram Language). It also turns out to be useful in proofs, like those using induction. So what is the most useful definition to make for such cases? Experience shows that it is

$$ \begin{align} (u+\text{null}) &= u\\ (\text{null}+u) &= u \\ (\text{null}+\text{null}) &= 0 \end{align} \tag{1}$$

where you can think of null meaning that no argument is provided!

Now the following definition can easily be interpreted in the special cases of a set with one element or no elements.

Linear Combination and Span. A linear combination of a set $A=\{v_1,v_2,...,v_m\} \subseteq V$ is a vector $v$ defined by $v=\sum_{j=1}^{m}a_jv_j$. The set of all linear combinations of $A$ is called the span of $A$ denoted by $\text{Span}A$.

Now, if we make the convention that $m=0$ means $A=\{\}$ and $m=1$ means $A=\{v_1\}$, then, according to $(1)$, we can interpret the definition as follows

$$ v=\sum_{j=1}^{m}a_jv_j:=s_m, \qquad s_i = \begin{cases} 0, & i=0 \\ (a_1v_1+\text{null}), & i=1 \\ (a_1v_1+a_2v_2), & i=2 \\ (s_{i-1}+a_{i}v_{i}), & \text{otherwise} \\ \end{cases} , \qquad 0 \le i \le m $$

So, we can see that $\text{Span}\{\}=\{0\}$. Note that this is a result of our own convention for the addition operation and of the definition of the span. I think the vacuous-truth argument has no advantage over this one! However, it keeps reappearing in many other examples, so it is good to learn it once and for all!
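The empty-sum convention described above is built into many programming languages. As a quick illustration in Python (my own addition, not from the answer):

```python
from functools import reduce
import math

# An empty sum is the additive identity, and an empty product is
# the multiplicative identity:
print(sum([]))        # 0
print(math.prod([]))  # 1

# reduce needs an explicit identity (the "null" above) to handle
# the empty case at all:
print(reduce(lambda u, v: u + v, [], 0))  # 0

# So the linear combination sum(a_j * v_j) over an empty index set
# is the zero vector, and Span{} = {0} follows from the definition.
coeffs, vectors = [], []
combo = sum(a * v for a, v in zip(coeffs, vectors))
print(combo)          # 0
```

In other words, the choice $\text{Span}\{\}=\{0\}$ is the same design decision as `sum([]) == 0`: the identity element is the only value that makes the general rule need no special case.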

