Before going deep into the theory, if what you're looking for are practical examples, you can find them at these two links:

Example 1

Example 2

Given the endomorphism

F : V → V

x∈V → F(x) = Ax∈V

where V is an n-dimensional vector space, n > 1, over a field K (which contains all the roots of the characteristic polynomial).

We have seen that for every endomorphism there exists an n×n matrix A such that

F(v) = Av

Two matrices A and B over K are said to be similar if there exists an invertible matrix P,
also over the field K, such that

B = PAP⁻¹

A matrix A is then diagonalizable if it is similar to a diagonal matrix.
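As a quick numerical illustration (the matrix below is just an example, not taken from the text): NumPy can compute a matrix P of eigenvectors and verify that A is similar to a diagonal matrix D, i.e. A = PDP⁻¹.

```python
import numpy as np

# An illustrative 2x2 matrix with distinct eigenvalues (5 and 2),
# so it is diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors of A; the eigenvalues go on the
# diagonal of D.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# A is similar to the diagonal matrix D via P:  A = P D P^{-1}
A_reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(A, A_reconstructed))  # True
```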

The characteristic polynomial of the matrix A decomposes into k roots
λj, each with multiplicity
mj. At the same time, for each
λj there exists an eigenspace
E(λj) such that

Dim(E(λj)) = dj ≤ mj, and therefore

d1 + d2 + ... + dk ≤ m1 + m2 + ... + mk = n

mj is sometimes called the algebraic multiplicity of the eigenvalue
λj, and dj its geometric multiplicity.

If each eigenvalue λj generates an eigenspace
E(λj) whose dimension equals the algebraic
multiplicity of the eigenvalue, that is,
mj = dj for every j,
then A is a diagonalizable matrix (and F is also said to be diagonalizable).
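The distinction between the two multiplicities can be checked symbolically. The sketch below (using SymPy, with an illustrative matrix) shows a 2×2 Jordan block whose single eigenvalue has algebraic multiplicity 2 but geometric multiplicity 1, so the matrix is not diagonalizable.

```python
import sympy as sp

# Illustrative example: a 2x2 Jordan block.  Its characteristic
# polynomial is (lambda - 1)^2, so the eigenvalue 1 has algebraic
# multiplicity m = 2.
A = sp.Matrix([[1, 1],
               [0, 1]])

lam = sp.symbols('lambda')
p = A.charpoly(lam)

# Geometric multiplicity d = Dim Ker(A - 1*I)
eigenspace_basis = (A - sp.eye(2)).nullspace()
d = len(eigenspace_basis)

print(d)                       # 1, so d < m
print(A.is_diagonalizable())   # False
```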

Otherwise, the sum of the dimensions of the eigenspaces generated by the
eigenvalues is less than n, i.e., there is no basis of eigenvectors spanning
the whole space V. In this case the matrix is not diagonalizable,
but it is possible to find a basis in which the matrix of F is expressed in a
form called the
Jordan Canonical Form
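When A is not diagonalizable, SymPy can compute the Jordan canonical form directly. A minimal sketch, with an illustrative matrix:

```python
import sympy as sp

# Illustrative matrix: eigenvalue 2 has multiplicity 2 but its
# eigenspace is one-dimensional, so A is not diagonalizable.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
assert not A.is_diagonalizable()

# jordan_form returns P and J with A = P J P^{-1}.  Here J consists
# of a 2x2 Jordan block for eigenvalue 2 and a 1x1 block for
# eigenvalue 3 (the block order may vary).
P, J = A.jordan_form()
print(J)
assert A == P * J * P.inv()
```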

# Matrix Diagonalization

### Principles and basic concepts

### Extension Theory

Let V be a vector space of dimension n over K, and let F be the endomorphism

F : V → V

x∈V → F(x) = Ax∈V

And let A be the n×n matrix associated with this endomorphism.

Let

P(λ) = (λ − λ1)^m1 · (λ − λ2)^m2 ⋯ (λ − λk)^mk

be the characteristic polynomial of the matrix A, with λj ∈ K the eigenvalues of A for j = 1, 2, ..., k.

Each eigenvalue corresponds to an eigenspace, subspace of V given by

E(λj) = {v ∈ V such that F(v) = λjv} = {v ∈ V such that (F − λjId)(v) = 0} = Ker(F − λjId).

Each eigenspace is spanned by the eigenvectors associated with the eigenvalue λj. Denote a basis of the eigenspace E(λj) by

Bj = {v1, v2, ..., vdj}, where dj = Dim(E(λj)).

In general the dimension of each eigenspace is at most the multiplicity of the corresponding eigenvalue, i.e., Dim(E(λj)) ≤ mj.

However, if A is diagonalizable, the dimension of each eigenspace equals the multiplicity of the corresponding eigenvalue, as we see in the following theorem.

We have the following theorem as a criterion for matrix diagonalization.

**Theorem 1 (necessary and sufficient condition for A to be diagonalizable)**

A is diagonalizable if and only if the following hold:

1) m1 + m2 + ... + mk = n.

2) For each eigenvalue λj with multiplicity mj, the corresponding eigenspace has dimension mj, i.e.,

Dim(E(λj)) = mj
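The two conditions of the theorem can be verified mechanically. A sketch with SymPy (the matrix is an arbitrary illustration):

```python
import sympy as sp

# Illustrative matrix with eigenvalues 2 (multiplicity 2) and 3
# (multiplicity 1).
A = sp.Matrix([[2, 0, 0],
               [1, 2, 0],
               [0, 0, 3]])
n = A.shape[0]

# eigenvals() returns {eigenvalue: algebraic multiplicity}
mults = A.eigenvals()

# Condition 1: the multiplicities sum to n.
cond1 = sum(mults.values()) == n

# Condition 2: each eigenspace has dimension equal to the
# multiplicity of its eigenvalue, Dim(E(lambda_j)) = m_j.
cond2 = all(len((A - ev * sp.eye(n)).nullspace()) == m
            for ev, m in mults.items())

# Here condition 2 fails (eigenvalue 2 has a 1-dimensional
# eigenspace), so A is not diagonalizable.
print(cond1 and cond2, A.is_diagonalizable())  # False False
```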
And we have the following corollaries.

**Corollary 1**

If the matrix A is diagonalizable, then V is the direct sum of the eigenspaces of A, i.e.,

V = E(λ1) ⊕ E(λ2) ⊕ ... ⊕ E(λk)

**Corollary 2**

A matrix A is diagonalizable if and only if there is a basis of V consisting of eigenvectors of A.
Therefore we can give these sufficient conditions for diagonalization.

**Theorem 2 (sufficient conditions for A to be diagonalizable)**

1) If the characteristic polynomial has n distinct roots in the field K, then **the matrix A is diagonalizable**.

2) If the characteristic polynomial has k roots, and the eigenspace corresponding to each one has dimension equal to its multiplicity, then **the matrix A is diagonalizable**.

3) If neither 1) nor 2) holds, then A is **not diagonalizable**.

Thus, if we are in case 3) of the previous theorem, the matrix A is not diagonalizable.

In the case of symmetric matrices the situation is simpler, since all the eigenvalues are real and eigenvectors corresponding to distinct eigenvalues are orthogonal. Recall that a matrix is symmetric if it equals its transpose, i.e.,

A is symmetric ⇔ A = Aᵀ

**Theorem 3 (diagonalization of a symmetric matrix)**

If A is a symmetric matrix, then:

1) All its eigenvalues are real.

2) A is diagonalizable.

3) The eigenvectors corresponding to distinct eigenvalues are orthogonal.
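A small NumPy check of these three properties, using an illustrative symmetric matrix and `numpy.linalg.eigh`, the routine specialized for symmetric matrices:

```python
import numpy as np

# An illustrative real symmetric matrix: A == A.T
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.allclose(A, A.T)

# eigh assumes a symmetric matrix: the eigenvalues it returns are
# real, and the eigenvector matrix Q is orthogonal (Q.T == Q^{-1}).
eigenvalues, Q = np.linalg.eigh(A)
print(eigenvalues)  # [1. 3.]

# Eigenvectors for distinct eigenvalues are orthogonal:
print(np.allclose(Q.T @ Q, np.eye(2)))  # True

# Diagonalization: A = Q D Q.T
D = np.diag(eigenvalues)
print(np.allclose(A, Q @ D @ Q.T))  # True
```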

