The many meanings of $A\cdot\boldsymbol{x} = \boldsymbol{0}$
We reviewed what we had done in the previous class and how the
matrix-vector product equation
$$
A\cdot\boldsymbol{x} = \boldsymbol{0}
\ \ \text{ [Note: the bold face "$\boldsymbol{0}$" is the zero
vector, i.e. a vector of all zeros!]}
$$
has at least three different, but equivalent interpretations
that we had discussed. That was encapsulated with this diagram
on the whiteboard:

So by studying the matrix-vector product equation, we are also studying systems of linear equations, linear combinations of column vectors, and linear transformations.
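To make the three readings concrete, here is a small illustrative example (the $2\times 2$ matrix is arbitrary, chosen only for this illustration):
$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \end{bmatrix}
\iff
\begin{cases} x_1 + 2x_2 = 0 \\ 3x_1 + 4x_2 = 0 \end{cases}
\iff
x_1\begin{bmatrix} 1 \\ 3 \end{bmatrix} + x_2\begin{bmatrix} 2 \\ 4 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \end{bmatrix}
$$
The third reading is the transformation view: the linear map $\boldsymbol{x}\mapsto A\cdot\boldsymbol{x}$ sends $\boldsymbol{x}$ to the zero vector.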
Elementary row operations
Here are three very important kinds of operations we can perform
on a matrix.
Let $A$ be an $m\times n$ matrix with row vectors
$\boldsymbol{r_1},\ldots, \boldsymbol{r_m}$. The
elementary row operations
are:
- (type i) replace $\boldsymbol{r_i}$ with $s\boldsymbol{r_i}$, where $s$ is a non-zero scalar
- (type ii) swap row $i$ with row $j$
- (type iii) replace $\boldsymbol{r_i}$ with $\boldsymbol{r_i} + s\boldsymbol{r_j}$, where $i\neq j$
Below is an example of each type of elementary row operation.
| matrix $A$ | row operation | new matrix $A'$ |
|---|---|---|
| $\begin{bmatrix} -1&0&6\\ 0&3&3\\ 2&3&-9 \end{bmatrix}$ | type i: multiply row 3 by scalar 5 | $\begin{bmatrix} -1&0&6\\ 0&3&3\\ 10&15&-45 \end{bmatrix}$ |
| $\begin{bmatrix} -1&0&6\\ 0&3&3\\ 2&3&-9 \end{bmatrix}$ | type ii: swap row 1 and row 3 | $\begin{bmatrix} 2&3&-9\\ 0&3&3\\ -1&0&6 \end{bmatrix}$ |
| $\begin{bmatrix} -1&0&6\\ 0&3&3\\ 2&3&-9 \end{bmatrix}$ | type iii: add 5 times row 1 to row 3 | $\begin{bmatrix} -1&0&6\\ 0&3&3\\ -3&3&21 \end{bmatrix}$ |
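The three example operations can be reproduced in a few lines. Here is a minimal NumPy sketch (NumPy is an illustrative assumption, not part of the notes) that applies one operation of each type to the example matrix:

```python
import numpy as np

A = np.array([[-1, 0, 6],
              [0, 3, 3],
              [2, 3, -9]], dtype=float)

# type i: multiply row 3 by the scalar 5
A1 = A.copy()
A1[2] *= 5

# type ii: swap row 1 and row 3
A2 = A[[2, 1, 0]]

# type iii: add 5 times row 1 to row 3
A3 = A.copy()
A3[2] += 5 * A[0]

print(A1)  # row 3 becomes [10, 15, -45]
print(A2)  # rows 1 and 3 trade places
print(A3)  # row 3 becomes [-3, 3, 21]
```

Note that `A[[2, 1, 0]]` uses integer-array indexing to reorder the rows in one step.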
The crucial thing about these elementary row operations is
that although you are changing the matrix $A$, you are not
changing the solutions of $A\cdot \boldsymbol{x}=\boldsymbol{0}$.
This is made precise in the following theorem.
Let $A$ be an $m\times n$ matrix, and let $A'$ be the matrix obtained from $A$ by performing one elementary row operation. Then
$\boldsymbol{x}$ is a solution to $A\cdot \boldsymbol{x}=\boldsymbol{0}$
if and only if
$\boldsymbol{x}$ is a solution to $A'\cdot \boldsymbol{x}=\boldsymbol{0}$.
ACTIVITY
Break into groups of three or four, each group going to the
board. Each group gets one of the elementary row operations.
- Consider the equation
$A\cdot \boldsymbol{x}=\boldsymbol{0}$
with the $3\times 3$ matrix from the above examples as $A$, and
the solution vector $(6, -1, 1)$. In other words:
$$
\begin{bmatrix}
-1&0&6\\
0&3&3\\
2&3&-9
\end{bmatrix}
\cdot
\begin{bmatrix}
6\\
-1\\
1
\end{bmatrix}
=
\begin{bmatrix}
0\\
0\\
0
\end{bmatrix}
$$
Apply your row operation to this matrix to produce a new
matrix $A'$, and verify that $(6, -1, 1)$ is indeed a solution
to $A'\cdot \boldsymbol{x}=\boldsymbol{0}$.
- Hopefully you gained some insight from this into why the
theorem is true for your row operation - not just for this
example matrix, but for any matrix. So now prove the
theorem for your row operation!
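The computation in the activity can also be checked mechanically. Below is a minimal NumPy sketch (NumPy is assumed; it is not part of the notes) that applies all three row operations to $A$ and confirms that $(6, -1, 1)$ still solves $A'\cdot\boldsymbol{x}=\boldsymbol{0}$ in each case:

```python
import numpy as np

A = np.array([[-1, 0, 6],
              [0, 3, 3],
              [2, 3, -9]], dtype=float)
x = np.array([6, -1, 1], dtype=float)

assert np.allclose(A @ x, 0)  # x solves the original A·x = 0

A1 = A.copy()
A1[2] *= 5              # type i: multiply row 3 by the scalar 5
A2 = A[[2, 1, 0]]       # type ii: swap row 1 and row 3
A3 = A.copy()
A3[2] += 5 * A[0]       # type iii: add 5 times row 1 to row 3

for Ap in (A1, A2, A3):
    assert np.allclose(Ap @ x, 0)  # x still solves A'·x = 0
print("all three row operations preserve the solution")
```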
So collectively, you and your classmates proved this theorem!
We prove that, for each of the three types of elementary row
operations, the solutions to $A\cdot\boldsymbol{x} = \boldsymbol{0}$
are the same before and after the operation.
Let $A$ be the original matrix, and let $A'$ be the same matrix
after one elementary row operation. We will denote the row
vectors of $A$ by $\boldsymbol{r_1},\boldsymbol{r_2},\ldots,\boldsymbol{r_m}$.
So let's consider the three types of elementary row operations:
- We modify $A$ to produce $A'$ by replacing $\boldsymbol{r_i}$
with $s\cdot\boldsymbol{r_i}$, where $s\neq 0$.
All the rows of $A' \cdot \boldsymbol{x}$ are identical to
those of $A \cdot \boldsymbol{x}$ except row $i$.
Row $i$ of
$A \cdot \boldsymbol{x}$ is
$\boldsymbol{r_i}\cdot\boldsymbol{x}$, while row $i$ of
$A' \cdot \boldsymbol{x}$ is
$(s\cdot\boldsymbol{r_i})\cdot\boldsymbol{x}$.
By point (v) of the dot product properties,
$$
(s\cdot\boldsymbol{r_i})\cdot\boldsymbol{x} =
s\cdot(\boldsymbol{r_i}\cdot\boldsymbol{x})
$$
and since $s \neq 0$,
$\boldsymbol{r_i}\cdot\boldsymbol{x} = 0$
if and only if
$s\cdot(\boldsymbol{r_i}\cdot\boldsymbol{x}) = 0$.
Note: This does require that there are no
"zero-divisors" in the underlying ring $R$, i.e. the product
of non-zero elements is never zero. However, rings in which
all non-zero elements have multiplicative inverses (which is
a requirement for vector spaces) have no zero divisors.
- We modify $A$ to produce $A'$ by swapping row $i$ and row $j$.
This changes the order of the rows, but not the rows
themselves. So all rows of $A$ have the property that
their dot products with $\boldsymbol{x}$ are zero
if and only if
all rows of $A'$ have that property.
- We modify $A$ to produce $A'$ by replacing $\boldsymbol{r_i}$
with $\boldsymbol{r_i} + s\cdot\boldsymbol{r_j}$, where $i\neq j$.
All the rows of $A' \cdot \boldsymbol{x}$ are identical to
those of $A \cdot \boldsymbol{x}$ except row $i$.
Row $i$ of
$A \cdot \boldsymbol{x}$ is
$\boldsymbol{r_i}\cdot\boldsymbol{x}$.
Row $i$ of
$A' \cdot \boldsymbol{x}$ is
$
(\boldsymbol{r_i} + s\cdot\boldsymbol{r_j})\cdot\boldsymbol{x}
=
\boldsymbol{r_i}\cdot\boldsymbol{x} + s\cdot(\boldsymbol{r_j}\cdot\boldsymbol{x})
$,
by points (iii) and (v) of the dot product properties.
If all rows of $A\cdot\boldsymbol{x}$ are zero, then row
$i$ of $A'\cdot\boldsymbol{x}$ is
$\boldsymbol{r_i}\cdot\boldsymbol{x} +
s\cdot(\boldsymbol{r_j}\cdot\boldsymbol{x}) = 0 + s\cdot 0
= 0$.
If all rows of $A'\cdot\boldsymbol{x}$ are zero, then row
$i$ of $A'\cdot\boldsymbol{x}$ is zero, so
$\boldsymbol{r_i}\cdot\boldsymbol{x} +
s\cdot(\boldsymbol{r_j}\cdot\boldsymbol{x}) = 0$.
Since $\boldsymbol{r_j}$ is still row $j$ of $A'$, we have
$\boldsymbol{r_j}\cdot\boldsymbol{x} = 0$, and therefore
$\boldsymbol{r_i}\cdot\boldsymbol{x} = 0$.
So, $A\cdot\boldsymbol{x} = \boldsymbol{0}$
if and only if $A'\cdot\boldsymbol{x} = \boldsymbol{0}$.
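As a final numerical sanity check on the theorem (a sketch, again assuming NumPy): row operations leave the row space of $A$ unchanged, and two matrices with the same row space have the same solutions to $\cdot\,\boldsymbol{x}=\boldsymbol{0}$. So stacking $A$ on top of any $A'$ should add no independent rows, i.e. no rank:

```python
import numpy as np

A = np.array([[-1, 0, 6],
              [0, 3, 3],
              [2, 3, -9]], dtype=float)

# one elementary row operation of each type
A1 = A.copy(); A1[2] *= 5         # type i: scale row 3 by 5
A2 = A[[2, 1, 0]]                 # type ii: swap rows 1 and 3
A3 = A.copy(); A3[2] += 5 * A[0]  # type iii: row 3 += 5 * row 1

rank = np.linalg.matrix_rank
for Ap in (A1, A2, A3):
    # equal row spaces: stacking A and A' adds no independent rows
    assert rank(np.vstack([A, Ap])) == rank(A) == rank(Ap)
print("rank of A:", rank(A))  # rank 2, so the solutions form a line
```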