How do you solve linear systems using matrices?

Answer 1
Consider a system of #m# linear equations with #n# variables, #x_1# through #x_n#:
#{(a_(1 1)x_1+a_(1 2)x_2+...+a_(1 n)x_n=b_1),(a_(2 1)x_1+a_(2 2)x_2+...+a_(2 n)x_n=b_2),(...),(a_(m 1)x_1+a_(m 2)x_2+...+a_(m n)x_n=b_m):}#

Notice that we can swap the positions of two equations, multiply both sides of an equation by a nonzero constant, or add a multiple of one equation to another (the left-hand side to the left-hand side, and the right-hand side to the right-hand side). In each of these cases, the solution set of the system remains the same.
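
For example, in the system #{(x_1+x_2=3),(x_1-x_2=1):}#, adding #-1# times the first equation to the second gives #{(x_1+x_2=3),(-2x_2=-2):}#; both systems have the same unique solution #x_1=2, x_2=1#.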

Now, using matrix multiplication, we find that we can rewrite the system as a matrix equation:

#((a_(1 1), a_(1 2), ... , a_(1 n)),(a_(2 1), a_(2 2), ... , a_(2 n)),(..., ..., ..., ...), (a_(m 1), a_(m 2), ..., a_(m n)))((x_1),(x_2),(...),(x_n)) = ((b_1),(b_2),(...),(b_m))#
If we perform any of the three operations mentioned above, that is, swapping positions, multiplying by a nonzero constant, or adding a multiple of one equation to another, the same row operation is applied to both the coefficient matrix and the column of constant #b_i# terms; the column of variables remains unchanged. To focus on the parts that do change, we create an augmented matrix from the coefficient matrix and the #b_i# column:
#((a_(1 1), a_(1 2), ... , a_(1 n),|,b_1),(a_(2 1), a_(2 2), ... , a_(2 n),|,b_2),(..., ..., ..., ...,|,...), (a_(m 1), a_(m 2), ..., a_(m n),|,b_m))#
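
For example, the system #{(2x_1 + x_2 = 5),(x_1 + 3x_2 = 10):}# has the augmented matrix #((2,1,|,5),(1,3,|,10))#.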

Once we have the augmented matrix, we use the three corresponding row operations (swapping two rows, multiplying a row by a nonzero constant, and adding a multiple of one row to another) to transform the matrix into row echelon form or reduced row echelon form. The standard ways of doing this are Gaussian elimination and Gauss-Jordan elimination, respectively.
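
To make the procedure concrete, here is a minimal Python/NumPy sketch of Gauss-Jordan elimination built from only the three row operations above; the helper name `rref` and the partial-pivoting detail are choices made for this illustration, not part of the original answer.

```python
import numpy as np

def rref(aug, tol=1e-12):
    """Reduce an augmented matrix [A | b] to reduced row echelon form."""
    aug = aug.astype(float).copy()
    rows, cols = aug.shape
    pivot_row = 0
    for col in range(cols - 1):              # the last column holds the b_i terms
        if pivot_row >= rows:
            break
        # Row swap: bring the largest entry in this column up to the pivot position
        best = pivot_row + np.argmax(np.abs(aug[pivot_row:, col]))
        if abs(aug[best, col]) < tol:
            continue                         # no pivot in this column
        aug[[pivot_row, best]] = aug[[best, pivot_row]]
        # Scale the row by a nonzero constant so the pivot becomes 1
        aug[pivot_row] /= aug[pivot_row, col]
        # Add multiples of the pivot row to clear the rest of the column
        for r in range(rows):
            if r != pivot_row:
                aug[r] -= aug[r, col] * aug[pivot_row]
        pivot_row += 1
    return aug
```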

Finally, we need to interpret the result. We will assume that the matrix is in reduced row echelon form, so there is no need for back substitution or for converting back to the original linear equation form.

Any row consisting entirely of #0#s may be discarded, as it provides no information.
If any row contains all #0#s in the left portion, but a nonzero constant on the right, then the system has no solution (it is equivalent to saying #0x_1 + 0x_2 + ... + 0x_n = c# where #c!=0#, which is a contradiction). One system which would produce this result is #{(x_1 + x_2 = 1), (x_1 + x_2 = 2):}# (Is it clear why this cannot have a solution?)
If any row contains more than one nonzero entry in its left (coefficient) portion, and there is at least one solution (the previous situation does not occur), then there are infinitely many solutions. One system which would produce this result is #{(x_1 + x_2 + x_3 = 0), (x_1 + x_2 - x_3 = 0):}# (Is it clear why this has infinitely many solutions?)
If neither of the prior situations occurred, then after removing any all-#0# rows, the final result should look like this:
#((1,0,0,...,0,|,c_1),(0,1,0,...,0,|,c_2),(0,0,1,...,0,|,c_3),(...,...,...,...,...,|,...),(0,0,0,...,1,|,c_n))#

and the solution to the original system is

#{(x_1 = c_1), (x_2 = c_2), (...), (x_n = c_n):}#
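
As a quick check, reusing the `rref` sketch and the example augmented matrix from above (again, this is just an illustration, not part of the original answer):

```python
aug = np.array([[2.0, 1.0,  5.0],
                [1.0, 3.0, 10.0]])
print(rref(aug))
# reduces to rows [1, 0, 1] and [0, 1, 3], i.e. x_1 = 1 and x_2 = 3
```
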
Answer 2

To solve linear systems using matrices, you can use the method of Gaussian elimination or matrix inversion. In Gaussian elimination, you form an augmented matrix of the system and then use row operations to transform it into reduced row-echelon form, from which you can read off the solutions directly. In matrix inversion, you represent the system of equations as #Ax = b#, where #A# is the coefficient matrix, #x# is the column matrix of variables, and #b# is the column matrix of constants. You then find the inverse of matrix #A# (if it exists) and multiply both sides of the equation by it on the left to solve for #x#.
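
As a small illustration (NumPy is assumed here; it is not mentioned in the answer itself), the matrix-inversion approach and the library's elimination-based solver give the same result:

```python
import numpy as np

# The system  2x_1 + x_2 = 5,  x_1 + 3x_2 = 10  in matrix form Ax = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x_via_inverse = np.linalg.inv(A) @ b   # x = A^(-1) b, valid only when A is invertible
x_via_solve   = np.linalg.solve(A, b)  # elimination-based; numerically preferable

print(x_via_inverse)   # [1. 3.]
print(x_via_solve)     # [1. 3.]
```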
