Matrix decompositions are a useful tool for reducing a matrix to its constituent parts in order to simplify a range of more complex operations.

Perhaps the most used type of matrix decomposition is the eigendecomposition, which decomposes a matrix into eigenvectors and eigenvalues. This decomposition also plays a role in methods used in machine learning, such as in the Principal Component Analysis method, or PCA.

In this tutorial, you will discover the eigendecomposition, eigenvectors, and eigenvalues in linear algebra.

After completing this tutorial, you will know:

  • What an eigendecomposition is and the role of eigenvectors and eigenvalues.
  • How to calculate an eigendecomposition in Python with NumPy.
  • How to confirm a vector is an eigenvector and how to reconstruct a matrix from eigenvectors and eigenvalues.

Let’s get started.

Gentle Introduction to Eigendecomposition, Eigenvalues, and Eigenvectors for Machine Learning

Photo by Mathias Appel, some rights reserved.

Tutorial Overview

This tutorial is divided into 5 parts; they are:

  1. Eigendecomposition of a Matrix
  2. Eigenvectors and Eigenvalues
  3. Calculation of Eigendecomposition
  4. Confirm an Eigenvector and Eigenvalue
  5. Reconstruct Original Matrix





Eigendecomposition of a Matrix

Eigendecomposition of a matrix is a type of decomposition that involves decomposing a square matrix into a set of eigenvectors and eigenvalues.

One of the most widely used kinds of matrix decomposition is called eigendecomposition, in which we decompose a matrix into a set of eigenvectors and eigenvalues.

— Page 42, Deep Learning, 2016.

A vector is an eigenvector of a matrix if it satisfies the following equation.

A . v = lambda . v

This is called the eigenvalue equation, where A is the parent square matrix that we are decomposing, v is the eigenvector of the matrix, and lambda is the lowercase Greek letter that represents the eigenvalue scalar.

Or without the dot notation.


Av = lambda v

A matrix could have one eigenvector and eigenvalue for each dimension of the parent matrix. Not all square matrices can be decomposed into eigenvectors and eigenvalues, and some can only be decomposed in a way that requires complex numbers. The parent matrix can be shown to be a product of the eigenvectors and eigenvalues.


A = Q . diag(V) . Q^-1

Or, without the dot notation.


A = Q diag(V) Q^-1

Where Q is a matrix comprised of the eigenvectors, diag(V) is a diagonal matrix comprised of the eigenvalues along the diagonal (sometimes represented with a capital lambda), and Q^-1 is the inverse of the matrix comprised of the eigenvectors.

However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties of the matrix, much as decomposing an integer into its prime factors can help us understand the behavior of that integer.

— Page 43, Deep Learning, 2016.

Eigen is not a name, i.e. the method is not named after a person called “Eigen”; eigen (pronounced eye-gan) is a German word that means “own” or “innate”, as in belonging to the parent matrix.

A decomposition operation does not result in a compression of the matrix; instead, it breaks it down into constituent parts to make certain operations on the matrix easier to perform. Like other matrix decomposition methods, eigendecomposition is used as an element to simplify the calculation of other, more complex matrix operations.

Almost all vectors change direction, when they are multiplied by A. Certain exceptional vectors x are in the same direction as Ax. Those are the “eigenvectors”. Multiply an eigenvector by A, and the vector Ax is the number lambda times the original x. […] The eigenvalue lambda tells whether the special vector x is stretched or shrunk or reversed or left unchanged – when it is multiplied by A.

— Page 289, Introduction to Linear Algebra, Fifth Edition, 2016.

Eigendecomposition can also be used to calculate the principal components of a matrix in the Principal Component Analysis method, or PCA, which can be used to reduce the dimensionality of data in machine learning.

Eigenvectors and Eigenvalues

Eigenvectors are unit vectors, which means that their length or magnitude is equal to 1.0. They are often referred to as right vectors, which simply means column vectors (as opposed to row vectors or left vectors). A right vector is a vector as we usually understand them.

Eigenvalues are coefficients applied to eigenvectors that give the vectors their length or magnitude. For example, a negative eigenvalue may reverse the direction of the eigenvector as part of scaling it.

A matrix that has only positive eigenvalues is referred to as a positive definite matrix, whereas if the eigenvalues are all negative, it is referred to as a negative definite matrix.

Decomposing a matrix in terms of its eigenvalues and its eigenvectors gives valuable insights into the properties of the matrix. Certain matrix calculations, like computing the power of the matrix, become much easier when we use the eigendecomposition of the matrix.

— Page 262, No Bullshit Guide To Linear Algebra, 2017.
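For example, because A = Q . diag(V) . Q^-1, powers of the matrix collapse neatly: A^2 = Q . diag(V) . Q^-1 . Q . diag(V) . Q^-1 = Q . diag(V)^2 . Q^-1, and in general A^n = Q . diag(V)^n . Q^-1, since each interior Q^-1 . Q pair cancels and raising a diagonal matrix to a power only requires raising each eigenvalue to that power.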

Calculation of Eigendecomposition

An eigendecomposition is calculated on a square matrix using an efficient iterative algorithm, the details of which are beyond the scope of this tutorial.

Often an eigenvalue is found first, and then the corresponding eigenvector is found by solving the eigenvalue equation for a set of coefficients.

The eigendecomposition can be calculated in NumPy using the eig() function.

The example below first defines a 3×3 square matrix. The eigendecomposition is then calculated on the matrix, returning the eigenvalues and eigenvectors.
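A minimal sketch of the example is shown below; the specific matrix values are illustrative.

# eigendecomposition
from numpy import array
from numpy.linalg import eig
# define a 3x3 square matrix (illustrative values)
A = array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]])
print(A)
# calculate eigendecomposition
values, vectors = eig(A)
print(values)
print(vectors)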



Running the example first prints the defined matrix, followed by the eigenvalues and the eigenvectors. More specifically, the eigenvectors are right eigenvectors and are normalized to unit length.
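With the illustrative matrix above, the output looks approximately as follows (precision and sign conventions may vary across NumPy versions):

[[1 2 3]
 [4 5 6]
 [7 8 9]]
[ 1.61168440e+01 -1.11684397e+00 -9.75918483e-16]
[[-0.23197069 -0.78583024  0.40824829]
 [-0.52532209 -0.08675134 -0.81649658]
 [-0.8186735   0.61232756  0.40824829]]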



Confirm an Eigenvector and Eigenvalue

We can confirm that a vector is indeed an eigenvector of a matrix.

We do this by multiplying the candidate eigenvector by the parent matrix and comparing the result with the candidate eigenvector scaled by its eigenvalue.

First, we will define a matrix, then calculate the eigenvalues and eigenvectors. We will then test whether the first vector and value are in fact an eigenvector and eigenvalue for the matrix. We know they are, but it is a good exercise.

The eigenvectors are returned as a matrix with the same dimensions as the parent matrix, where each column is an eigenvector, e.g. the first eigenvector is vectors[:, 0]. Eigenvalues are returned as an array, where value indices in the returned array are paired with eigenvectors by column index, e.g. the first eigenvalue at values[0] is paired with the first eigenvector at vectors[:, 0].
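A minimal sketch of this check, reusing the same illustrative matrix:

# confirm first eigenvector and eigenvalue
from numpy import array
from numpy.linalg import eig
# define the same illustrative 3x3 matrix
A = array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]])
# calculate eigendecomposition
values, vectors = eig(A)
# multiply the matrix by the first eigenvector: A . v
B = A.dot(vectors[:, 0])
print(B)
# scale the first eigenvector by the first eigenvalue: lambda . v
C = vectors[:, 0] * values[0]
print(C)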



The example multiplies the original matrix with the first eigenvector and compares it to the first eigenvector multiplied by the first eigenvalue.

Running the example prints the results of these two multiplications, showing the same resulting vector, as we would expect.
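With the illustrative matrix, both multiplications print approximately the same vector:

[ -3.73863537  -8.46653421 -13.19443305]
[ -3.73863537  -8.46653421 -13.19443305]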



Reconstruct Original Matrix

We can reverse the process and reconstruct the original matrix given only the eigenvectors and eigenvalues.

First, the eigenvectors need to be arranged as a matrix, where each vector becomes a column; this is exactly the matrix of eigenvectors returned by the eig() function. The eigenvalues need to be arranged into a diagonal matrix. The NumPy diag() function can be used for this.

Next, we need to calculate the inverse of the eigenvector matrix, which we can achieve with the inv() NumPy function. Finally, these elements need to be multiplied together with the dot() function.
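A minimal sketch of the reconstruction, again with the illustrative matrix:

# reconstruct matrix from eigenvectors and eigenvalues
from numpy import array, diag
from numpy.linalg import eig, inv
# define the same illustrative 3x3 matrix
A = array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]])
# calculate eigendecomposition
values, vectors = eig(A)
# eigenvectors form the columns of Q
Q = vectors
# inverse of the eigenvector matrix
R = inv(Q)
# diagonal matrix of eigenvalues
L = diag(values)
# reconstruct: A = Q . diag(V) . Q^-1
B = Q.dot(L).dot(R)
print(A)
print(B)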



The example calculates the eigenvalues and eigenvectors again and uses them to reconstruct the original matrix.

Running the example first prints the original matrix, then the matrix reconstructed from the eigenvalues and eigenvectors, which matches the original matrix.
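With the illustrative matrix, the reconstruction matches the original up to floating-point precision:

[[1 2 3]
 [4 5 6]
 [7 8 9]]
[[ 1.  2.  3.]
 [ 4.  5.  6.]
 [ 7.  8.  9.]]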



Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Create 5 examples using each operation with your own data.
  • Implement each matrix operation manually for matrices defined as lists of lists.
  • Search machine learning papers and find 1 example of each operation being used.

If you explore any of these extensions, I’d love to know.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Deep Learning, 2016.
  • Introduction to Linear Algebra, Fifth Edition, 2016.
  • No Bullshit Guide To Linear Algebra, 2017.

Summary

In this tutorial, you discovered the eigendecomposition, eigenvectors, and eigenvalues in linear algebra.

Specifically, you learned:

  • What an eigendecomposition is and the role of eigenvectors and eigenvalues.
  • How to calculate an eigendecomposition in Python with NumPy.
  • How to confirm a vector is an eigenvector and how to reconstruct a matrix from eigenvectors and eigenvalues.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.



