For people who are too lazy to learn this wicked cool subject.
This page is meant as a short introduction and overview of basic linear algebra. Rather than attempting and likely failing at being formal and using strict
definitions and theorems, I'm going to be more casual and somewhat intuitive with my explanations. Linear algebra is more interesting when used as a lens through
which you can view mathematics, rather than as a subject in and of itself anyway. Taking linear algebra changed my perspective on mathematics, and my goal is to
produce the same sort of effect with this article.
That being said, this article will still assume that you have a basic understanding of high school algebra and set theory. I'll also emphasize again just how basic
this article will be in terms of depth. To clarify, I am not a mathematics professor, and I am writing this article using, as references, my old notes from a course I took
over a year ago at the time of writing. With all that said, have fun.
If we take a linear equation, say \(2x-4=4\), that equation has a particular solution, \(x=4\). In this instance, our equation has a single, unique solution, since our
equation is of a single variable.
In this chapter, we will be looking at systems of linear equations, where we consider the solutions of several linear equations at once. For example, consider the system
$$
2x + 3y = 2 \\
3x + 2y = 3
$$
This system contains 2 linear equations, each having 2 variables. When considering the solutions for a system of equations, we want to know what point, or maybe set of points,
satisfies each equation. In this case, the point \((1,0)\) works as a solution for each equation:
$$
2(1) + 3(0) = 2 \\
3(1) + 2(0) = 3
$$
Looking at a higher dimensional example, the system
$$
2x + 3y - 3z = 4 \\
-x + 2y + z = 0
$$
has a set of solutions, rather than a single one. In this case, the solutions form a line in three dimensions: the set of points where $$ y = \frac{x}{9} + \frac{4}{9} \quad \text{and} \quad z = \frac{7x-8}{9} $$ This line forms what
we call a solution space, a type of vector space, which is arguably the central topic of linear algebra, but one that we'll talk more about later.
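If you're wondering where that line comes from, one way to find it is to solve the second equation for \(z\) and substitute it into the first, so that every choice of \(x\) pins down both \(y\) and \(z\):
$$
\begin{aligned}
-x + 2y + z = 0 &\implies z = x - 2y \\
2x + 3y - 3(x - 2y) = 4 &\implies -x + 9y = 4 \implies y = \frac{x}{9} + \frac{4}{9} \\
z = x - 2\left(\frac{x}{9} + \frac{4}{9}\right) &\implies z = \frac{7x - 8}{9}
\end{aligned}
$$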
In the meantime, over the course of the history of mathematics, a certain notation was adopted to more efficiently
represent systems of equations, once known as arrays, but now referred to as matrices.
A matrix is a collection of numbers, arranged in a series of rows and columns. For example, we can represent the previous system of equations as follows:
$$
\left[\begin{matrix}
2 & 3 & -3 & 4 \\
-1 & 2 & 1 & 0
\end{matrix}\right]
$$
This makes doing operations on systems of equations much easier, and lets us think about matrices more abstractly. As evidence of this, the majority of the time when you see
matrices being used in other fields, they aren't representing systems of equations at all, and are instead used just for their mathematical properties. Since this is linear
algebra, however, just keep in mind the context of systems of equations going forward.
The first type of matrix operations we'll look at are called Elementary Row Operations, or EROs, which are:
1. Swapping two rows.
2. Multiplying a row by a nonzero number.
3. Adding a multiple of one row to another row.
These three operations are of notable importance because they all have something in common: performing any of these operations does not affect the solution space
of the corresponding system of equations. Intuitively, this makes some sense. If you take two lines, say \(y=x\) and \(y=-x\), then they will intersect at the same point as the
lines \(y=2x\) and \(y=-2x\), or any other nonzero multiples. Similarly, adding a multiple of one row to another row preserves the solution space, since any point that satisfies
both of the original equations also satisfies the new, combined equation.
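To make this concrete, here's what the first couple of EROs might look like on the matrix from above, first multiplying the top row by \(\frac{1}{2}\), then adding the new top row to the bottom row:
$$
\left[\begin{matrix}
2 & 3 & -3 & 4 \\
-1 & 2 & 1 & 0
\end{matrix}\right]
\longrightarrow
\left[\begin{matrix}
1 & \frac{3}{2} & -\frac{3}{2} & 2 \\
-1 & 2 & 1 & 0
\end{matrix}\right]
\longrightarrow
\left[\begin{matrix}
1 & \frac{3}{2} & -\frac{3}{2} & 2 \\
0 & \frac{7}{2} & -\frac{1}{2} & 2
\end{matrix}\right]
$$
If you translate these matrices back into equations, you can check that each one still has the exact same solution line as before.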
Being able to modify a matrix while, in some sense, preserving the information encoded by the matrix has very important use cases in linear algebra. Notably, EROs can be
used to obtain the Reduced Row Echelon Form (RREF) of the matrix. A matrix which is in RREF has the following properties: