Linear Algebra with JavaScript: System of Linear Equations, Inverse Matrix

This is part of the course “Linear Algebra with JavaScript”.

GitHub Repository with Source Code

One of the main reasons that linear algebra is so broadly applicable, and required in just about any technical discipline, is that it lets us solve systems of linear equations.

[Image: system of linear equations]

The typical way to organize this sort of equation is to gather all the variables on the left and put any lingering constants on the right.

[Image: the same system written in matrix form, Ax⃗ = v⃗]

This is a system of linear equations. We are looking for a vector x⃗ which, after applying the transformation A, lands on v⃗.

The determinant (covered in the previous part) plays an important role in systems of linear equations. If the determinant is non-zero, there is always exactly one vector that lands on v⃗, and we can find it by playing the transformation in reverse. To play the transformation in reverse, we need to find the inverse matrix that undoes whatever A did.

We can solve the system of linear equations by finding the inverse matrix.

The cumulative effect of multiplying by A and its inverse is equivalent to the identity transformation — a transformation that does nothing.
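To make this concrete, here is a minimal sketch with plain nested arrays that solves a small 2×2 system by building the inverse with the well-known 2×2 shortcut and multiplying it by v⃗. The numbers are made up for illustration; the series itself works with the Matrix class from the library, so the real code looks different.

```js
// System: 2x + 1y = 5
//         1x + 3y = 10   ->   A x⃗ = v⃗
const A = [
  [2, 1],
  [1, 3],
];
const v = [5, 10];

// Shortcut for a 2x2 inverse: [[a, b], [c, d]]⁻¹ = (1 / det) * [[d, -b], [-c, a]]
const det = A[0][0] * A[1][1] - A[0][1] * A[1][0]; // 2 * 3 - 1 * 1 = 5
const aInverse = [
  [A[1][1] / det, -A[0][1] / det],
  [-A[1][0] / det, A[0][0] / det],
];

// x⃗ = A⁻¹ v⃗
const x = [
  aInverse[0][0] * v[0] + aInverse[0][1] * v[1],
  aInverse[1][0] * v[0] + aInverse[1][1] * v[1],
];

console.log(x); // [1, 3] -> x = 1, y = 3
```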

If the determinant is zero, the transformation squishes the plane onto a line, and we cannot un-squish that line to turn it back into a plane. A solution can still exist, but only if we are lucky enough that the vector v⃗ lives somewhere on that line.
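Here is a quick sketch of that situation with a made-up singular matrix: A squishes the whole plane onto the line spanned by (1, 2), so a solution exists only when v⃗ happens to lie on that line.

```js
// det(A) = 1 * 4 - 2 * 2 = 0, so there is no inverse
const singular = [
  [1, 2],
  [2, 4],
];

// singular maps (x, y) to (x + 2y) * (1, 2), so every output lies on the line y = 2x
const liesOnImageLine = ([a, b]) => b === 2 * a;

console.log(liesOnImageLine([3, 6])); // true  -> infinitely many solutions for v⃗ = (3, 6)
console.log(liesOnImageLine([3, 5])); // false -> no solution for v⃗ = (3, 5)
```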

One of the most popular ways of solving a system of linear equations is the Gauss-Jordan elimination procedure, which converts any matrix into its reduced row echelon form, from which we can easily read off the solution (or solutions) of the system. This algorithm may be covered in one of the next parts, but not now.

Finding the Inverse Matrix

A general formula for obtaining the inverse based on the adjugate matrix:

[Image: formula for obtaining the inverse, A⁻¹ = adj(A) / det(A)]

The adjugate matrix is kind of complicated, so let’s proceed step by step. We’ll first define a few prerequisite concepts.

For each entry aᵢⱼ, the minor Mᵢⱼ is the determinant of the matrix that remains after we remove the i-th row and the j-th column of a given matrix.
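A minimal sketch of a minor helper over plain nested arrays; it needs a determinant, so a small recursive Laplace expansion is included here as a stand-in for the determinant method from the previous part.

```js
// Remove the i-th row and the j-th column of a matrix
const withoutRowAndColumn = (m, i, j) =>
  m
    .filter((_, rowIndex) => rowIndex !== i)
    .map((row) => row.filter((_, columnIndex) => columnIndex !== j));

// Determinant via cofactor expansion along the first row
const determinant = (m) => {
  if (m.length === 1) return m[0][0];
  return m[0].reduce(
    (sum, entry, j) =>
      sum + entry * (-1) ** j * determinant(withoutRowAndColumn(m, 0, j)),
    0
  );
};

// The minor Mᵢⱼ is the determinant of what remains after removing row i and column j
const minor = (m, i, j) => determinant(withoutRowAndColumn(m, i, j));

console.log(minor([[1, 2], [3, 4]], 0, 0)); // 4
```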

The sign of each entry aᵢⱼ is defined as:

[Image: sign of an entry, sign(aᵢⱼ) = (-1)^(i+j)]

The cofactor cᵢⱼ for the entry aᵢⱼ is the product of this entry’s sign and its minor:
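Building on the sketch above, the sign is (-1)^(i+j) and the cofactor is just that sign times the minor (the helper names are mine, not necessarily the ones used in the repository):

```js
// Sign of the entry aᵢⱼ: +1 when i + j is even, -1 when it is odd
const sign = (i, j) => ((i + j) % 2 === 0 ? 1 : -1);

// Cofactor cᵢⱼ = sign(i, j) * Mᵢⱼ, reusing minor() from the previous sketch
const cofactor = (m, i, j) => sign(i, j) * minor(m, i, j);
```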

Now we’re ready to describe the adjugate matrix. The adjugate matrix is defined as the transpose of the matrix of cofactors C. The matrix of cofactors is a matrix of the same dimensions as the original matrix A that is constructed by replacing each entry aᵢⱼ by its cofactor cᵢⱼ.
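In the same spirit, a sketch of the adjugate: replace every entry by its cofactor and transpose the result, again reusing the helpers above.

```js
// Matrix of cofactors: every entry aᵢⱼ replaced by its cofactor cᵢⱼ
const cofactorMatrix = (m) =>
  m.map((row, i) => row.map((_, j) => cofactor(m, i, j)));

// Transpose: rows become columns
const transpose = (m) => m[0].map((_, j) => m.map((row) => row[j]));

// The adjugate is the transpose of the matrix of cofactors
const adjugate = (m) => transpose(cofactorMatrix(m));
```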

Finally, we are ready to implement a method that will return the inverse matrix by combining the methods we implemented earlier.
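As a standalone function it might look like the sketch below: divide every entry of the adjugate by the determinant. The real implementation in the repository is a Matrix method, so its exact shape may differ.

```js
// A⁻¹ = adj(A) / det(A); only defined when the determinant is non-zero
const inverse = (m) => {
  const det = determinant(m);
  if (det === 0) {
    throw new Error('The matrix is not invertible: its determinant is zero');
  }
  return adjugate(m).map((row) => row.map((entry) => entry / det));
};

console.log(inverse([[2, 1], [1, 3]]));
// [[0.6, -0.2], [-0.2, 0.4]]
```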

We already have animated examples of linear transformations in the linear-algebra-demo project. Now we can make them more interesting by applying a transformation to the initial matrix and, when the animation is finished, applying the inverse one.

In the example below, we transform the red cube into a blue parallelepiped by using the “shear right” transformation, and then turn the parallelepiped back into the cube by using its inverse, “shear left”.

[Animation: “shear right” then “shear left”]
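The exact matrices used in the demo are not shown here, but the idea is easy to verify with a standard 3D shear (the values below are my assumption, not necessarily the demo’s): multiplying “shear right” by its inverse gives the identity, so the cube ends up exactly where it started.

```js
const shearRight = [
  [1, 1, 0],
  [0, 1, 0],
  [0, 0, 1],
];
const shearLeft = [
  [1, -1, 0],
  [0, 1, 0],
  [0, 0, 1],
]; // the inverse of shearRight

// Plain matrix multiplication over nested arrays
const multiply = (a, b) =>
  a.map((row, i) =>
    b[0].map((_, j) => row.reduce((sum, _, k) => sum + a[i][k] * b[k][j], 0))
  );

console.log(multiply(shearRight, shearLeft)); // the 3x3 identity matrix
```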

We apply the same approach in the example below with the “scale” and “shrink” transformations.

[Animation: “scale” then “shrink”]

Reach the next level of focus and productivity with increaser.org.


