I am sorry if this is a stupid question, but I am really confused: why is a $2 \times 2$ matrix not a vector, while the set of all $2 \times 2$ matrices can be a vector space? According to the definition, each element of a vector space is a vector. So a $2 \times 2$ matrix cannot be an element of a vector space, since it is not even a vector.
3 Answers
That's an example of an abstract vector space. It may seem odd to call it a vector space, but since the "objects" you are looking at have the subspace properties:
- Closed under scalar multiplication;
- Closed under addition;
- Contains the zero "vector" $0$, or in this case, the null matrix.
You can affirm that it'll be a vector space.
To "visualize" that, try to imagine any matrix 2x2 added to another 2x2 matrix. You'll still be in a 2x2 matrix world. Same happens when multipling them... And because the null matrix is in the 2x2 matrix world, you can assume that it'll be a subspace, hence, a vector space.
Watch this to get a better understanding of that kind of abstraction. It's hard to visualize...
It is possible to think of e.g. $2\times 2$ matrices $$ \begin{bmatrix} a & b\\c & d \end{bmatrix} $$ as vectors $$ \begin{bmatrix} a\\b\\c\\ d \end{bmatrix}. $$ All you need of vectors is to be able to add them and to multiply them by a scalar (subject to some commutativity and distributivity properties of these operations). All of that is true for matrices too; thus, matrices are vectors. The major difference is that matrices form a much richer world than vectors do - it is even possible to multiply them by each other, which is not possible for vectors in general.
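For instance (numbers chosen only for illustration), scaling the matrix and scaling the corresponding column vector produce the same entries: $$2\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}=\begin{bmatrix}2 & 4\\ 6 & 8\end{bmatrix} \quad\longleftrightarrow\quad 2\begin{bmatrix}1\\ 2\\ 3\\ 4\end{bmatrix}=\begin{bmatrix}2\\ 4\\ 6\\ 8\end{bmatrix},$$ and the same holds for sums, so nothing about the addition and scalar-multiplication structure is lost by the reshaping.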
Put simply, if we completely disregard matrix multiplication and instead focus on adding matrices and multiplying them by a scalar, we can treat a matrix $$A=\begin{pmatrix}a_{11} &\ldots & a_{1m} \\ \vdots & \ddots &\vdots \\ a_{n1} & \ldots & a_{nm} \end{pmatrix}$$ as a "big vector" $a$ with entries $$a=\begin{pmatrix} a_{11} \\ \vdots \\a_{1m} \\ a_{21} \\ \vdots \\ a_{2m}\\ \vdots \\ a_{nm} \end{pmatrix}.$$ Since addition and multiplication by a scalar are entry-wise, the two structures are basically the same: there is no difference in writing the matrix in a "square" form or as a vertical vector if we are only interested in linear combinations. Given two matrices $A$ and $B$ and two scalars $\lambda, \mu \in K$, we can write a linear combination in matrix form as
$$\lambda A+\mu B=\begin{pmatrix}\lambda a_{11} &\ldots & \lambda a_{1m} \\ \vdots & \ddots &\vdots \\ \lambda a_{n1} & \ldots & \lambda a_{nm} \end{pmatrix}+ \begin{pmatrix}\mu b_{11} &\ldots & \mu b_{1m} \\ \vdots & \ddots &\vdots \\ \mu b_{n1} & \ldots & \mu b_{nm} \end{pmatrix}= \begin{pmatrix}\lambda a_{11}+\mu b_{11} &\ldots & \lambda a_{1m}+ \mu b_{1m} \\ \vdots & \ddots &\vdots \\ \lambda a_{n1}+\mu b_{n1} & \ldots & \lambda a_{nm}+\mu b_{nm} \end{pmatrix}$$ Clearly, if we express the two matrices in "vector" form, we obtain a vector whose entries are the same as the ones in the matrix $\lambda A+\mu B$.
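Writing $b$ for the vector form of $B$, the same linear combination in "vector" form reads $$\lambda a+\mu b=\begin{pmatrix}\lambda a_{11}+\mu b_{11}\\ \vdots\\ \lambda a_{1m}+\mu b_{1m}\\ \lambda a_{21}+\mu b_{21}\\ \vdots\\ \lambda a_{nm}+\mu b_{nm}\end{pmatrix},$$ whose entries are exactly those of $\lambda A+\mu B$, just stacked in a single column.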
If we denote the set of $n \times m$ matrices over the field $K$ by $M_{n\times m}(K)$, we are stating that there exists an isomorphism $$\Phi:M_{n\times m}(K) \to K^{nm}$$ that maps the matrix $(a_{ij})_{1\le i \le n, 1\le j \le m}$ to the vector of $K^{nm}$ with entries $a_{ij}$. This means that there exists an invertible map that respects the vector space operations (i.e. linear combinations), which, in layman's terms, means that we can treat matrices as vectors as long as we are only concerned with addition and multiplication by a scalar.
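For the $2\times 2$ case in the question, this map is simply $$\Phi\begin{pmatrix}a & b\\ c & d\end{pmatrix}=\begin{pmatrix}a\\ b\\ c\\ d\end{pmatrix},\qquad \Phi(\lambda A+\mu B)=\lambda\Phi(A)+\mu\Phi(B),$$ so $M_{2\times 2}(K)$ and $K^{4}$ are the same vector space up to relabelling of the entries.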