### Systems of linear equations and matrices: Matrices

### Matrix equations

*Previously* we saw that the system of \(m\) linear equations in \(n\) unknowns \(x_1, \ldots, x_n\) \[\left\{\;\begin{array}{llllllll} a_{11}x_1 \!\!\!\!&+&\!\!\!\! \cdots \!\!\!\!&+&\!\!\!\! a_{1n}x_n\!\!\!\!&=&\!\!\!\!b_1\\ \;\;\vdots &&&& \vdots&&\!\!\!\!\vdots\\ a_{m1}x_1 \!\!\!\!&+&\!\!\!\! \cdots \!\!\!\!&+&\!\!\!\! a_{mn}x_n\!\!\!\!&=&\!\!\!\!b_m\end{array}\right.\] can be written in matrix notation as \[\begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix} \begin{pmatrix}x_1\\ \vdots \\ x_n\end{pmatrix}= \begin{pmatrix}b_1\\ \vdots \\ b_m\end{pmatrix}\]
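As a quick numerical illustration of this matrix notation, the following sketch (using `numpy`, with a made-up #2\times 2# system as the example data) builds the coefficient matrix #A# and right-hand side #b# and checks that the solution vector reproduces #b# under left multiplication by #A#:

```python
import numpy as np

# A hypothetical 2x2 system: x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # coefficient matrix
b = np.array([5.0, 11.0])    # right-hand side column vector

x = np.linalg.solve(A, b)    # solves A x = b
print(x)                     # the solution vector [1. 2.]
print(np.allclose(A @ x, b)) # True: A x reproduces b
```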

Solving the system of linear equations then amounts to finding a column vector \(x\) which, when multiplied from the left by the coefficient matrix \(A\), yields the column vector \(b\). For a given coefficient matrix \(A\), this can be done for multiple column vectors \(b\) at the same time; the computation is recorded concisely by replacing \(x\) and \(b\) by matrices \(X\) and \(B\) whose columns are the individual vectors.

If #A# is an #(m\times n)#-matrix and #B# is an #(m\times p)#-matrix, then the **linear matrix equation** with an unknown #(n\times p)#-matrix #X# \[ A\, X = B \] summarizes the problem of simultaneously solving all the systems of linear equations #A\,\vec{x}_j = \vec{b}_j# for #j=1,\ldots,p#, where #\vec{x}_j # is the #j#-th column of #X# and #\vec{b}_j # is the #j#-th column of #B#. This problem can be solved by

- setting up the augmented matrix \(\left(A\,|\,B\right)\);
- calculating the reduced echelon form of this augmented matrix;
- reading off the solution of #A\,\vec{x}_j = \vec{b}_j#, just as for a single system of linear equations, after selecting column #j# on each side of the vertical bar.

In particular, if the rank of #A# equals #n#, then the linear matrix equation has at most one solution; when, in addition, #m=n#, it has exactly one solution.
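The three steps above can be sketched as follows. This is a minimal Gauss-Jordan implementation (the helper `rref` is my own, not from the text), applied to the augmented matrix #\left(A\,|\,B\right)# of the #2\times 2# system that appears in the worked example at the end of this section:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduced row echelon form via Gauss-Jordan elimination."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))  # partial pivoting
        if abs(M[pivot, c]) < tol:
            continue                   # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]  # interchange rows
        M[r] /= M[r, c]                # scale the pivot row to 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]  # clear the rest of the column
        r += 1
    return M

# Solve A X = B by reducing the augmented matrix (A | B)
A = np.array([[0.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0, -2.0],
              [0.0, 1.0]])
R = rref(np.hstack([A, B]))
X = R[:, A.shape[1]:]  # left block is the identity, so the right block is X
print(X)               # [[-2. 5.] [1. -2.]]
```

Since the left block of the reduced augmented matrix is the identity, the right block is the unique solution #X#.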

Not only can systems of linear equations be represented as matrix equations; the row operations involved in Gaussian elimination can also be formulated in terms of matrix multiplications:

The elementary row operations which are used to solve the system of linear equations in matrix form, #A\,x = b#, correspond to multiplications from the left by the following matrices, where #E_{ij,\lambda}# (for #i\ne j#) is the matrix whose #\rv{i,j}# entry is equal to #\lambda#, whose diagonal elements are equal to #1# and all of whose other entries are equal to zero.

| elementary operation | multiplying from the left by the matrix |
|---|---|
| multiplying row #i# by a number #\lambda# distinct from zero | the diagonal matrix #D_{i,\lambda}# with #\lambda# at position #\rv{i,i}# and ones elsewhere on the main diagonal |
| adding a scalar multiple #\lambda# of row #j# to row #i# | #E_{ij,\lambda}# |
| interchanging rows #i# and #j# | the matrix #P_{(i,j)}# which has zeros everywhere except for the entries #\rv{k,k}# with #k\ne i# and #k\ne j# and the entries #\rv{i,j}# and #\rv{j,i}#, which are all equal to #1# |

Thanks to the above interpretation of row operations, we can also interpret the solving of the matrix equation \(A\,X = B\) as follows: multiply both sides of the equation sequentially from the left by matrices #C_1,\ldots,C_t# corresponding to elementary operations, in such a way that the left-hand side becomes #X# (or a matrix as close to #X# as possible: the reduced echelon form). In the special case where we can achieve #C_t\,C_{t-1}\cdots C_1\,A\, X = X# for the left-hand side, the equation reduces to the solution #X = C \,B#, where #C =C_t\,C_{t-1}\cdots C_1#. In this case the matrix #A# is called invertible with inverse #C#. We will pursue this *later*.
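This interpretation can be sketched numerically. The helper functions below are my own naming, following the definitions above (#E_{ij,\lambda}# has #\lambda# at entry #\rv{i,j}#, so left multiplication by it adds #\lambda# times row #j# to row #i#; indices are #0#-based in the code). Two elementary matrices reduce the #2\times 2# matrix #A# from the worked example to the identity, so their product #C# is the inverse of #A#:

```python
import numpy as np

def D(n, i, lam):
    """Diagonal matrix that scales row i by lam under left multiplication."""
    M = np.eye(n)
    M[i, i] = lam
    return M

def E(n, i, j, lam):
    """Identity plus lam at entry (i, j): adds lam * (row j) to row i."""
    M = np.eye(n)
    M[i, j] = lam
    return M

def P(n, i, j):
    """Permutation matrix that interchanges rows i and j."""
    M = np.eye(n)
    M[[i, j]] = M[[j, i]]
    return M

A = np.array([[0.0, 1.0],
              [1.0, 2.0]])
# Row operations that reduce A to the identity, as left multiplications:
C1 = P(2, 0, 1)        # interchange the two rows
C2 = E(2, 0, 1, -2.0)  # add -2 times the second row to the first
C = C2 @ C1            # the product of the elementary matrices
print(C @ A)           # the 2x2 identity matrix
```

Since #C\,A# is the identity, #C# is the inverse of #A#, and the solution of #A\,X=B# is simply #C\,B#.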

As an example, consider the matrix equation \[ X \matrix{0 & 1 \\ 1 & 2 \\ } = \matrix{1 & 0 \\ -2 & 1 \\ }\] Write this equation as #X\,A=B# and define #Y = X^{\top}#. Then #Y# satisfies the equation #A^{\top}\, Y = B^{\top}#, or

\[ \matrix{0 & 1 \\ 1 & 2 \\ } Y = \matrix{1 & -2 \\ 0 & 1 \\ }\] for which a solution method is known. The corresponding augmented matrix is \[ \matrix{0 & 1 & 1 & -2 \\ 1 & 2 & 0 & 1 \\ }\] The reduced echelon form of this matrix is \[ \matrix{1 & 0 & -2 & 5 \\ 0 & 1 & 1 & -2 \\ } \] Because the left square submatrix is the identity matrix, the unique solution is the right square matrix \[ Y = \matrix{-2 & 5 \\ 1 & -2 \\ }\] so \[X =Y^{\top} = \matrix{-2 & 1 \\ 5 & -2 \\ }\]
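The answer found above can be checked directly: with #A# and #B# recovered by transposing the matrices of the transposed system, the computed #X# must satisfy #X\,A = B#. A quick `numpy` check:

```python
import numpy as np

# The matrices of the equation X A = B, and the computed solution X
A = np.array([[0.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0, 0.0],
              [-2.0, 1.0]])
X = np.array([[-2.0, 1.0],
              [5.0, -2.0]])
print(np.allclose(X @ A, B))  # True
```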
