When we handle multiple variables in linear algebra we need a generalized concept of vector, matrix and constant; this concept is called a tensor.
To define it, we first recall that every finite-dimensional vector space V is isomorphic to R^n, so from now on we can in fact consider V ≡ R^n.
Any linear map f: R^n → R has the form
f(x) = v^t x, with v, x ∈ R^n
That is, the dual space of R^n is really R^n itself (they are isomorphic via the transpose map v ↦ v^t).
Let B = {e_1, ..., e_n} be a basis of V; then B* = {e^1, ..., e^n} is a basis of the dual space V*, characterized by
e^j(e_i) = 0, if j ≠ i
e^j(e_i) = 1, if j = i
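A quick numpy check of this characterization, identifying V with R^3 and letting dual elements act by the dot product (the choice n = 3 is arbitrary, for illustration only):

```python
import numpy as np

n = 3
basis = np.eye(n)        # e_1, ..., e_n as the rows of the identity matrix
dual_basis = np.eye(n)   # e^1, ..., e^n, acting on vectors by the dot product

# e^j(e_i) is 1 when i == j and 0 otherwise (the Kronecker delta),
# so pairing every dual basis element with every basis element
# gives the n x n identity matrix.
pairings = dual_basis @ basis.T
print(pairings)
```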
This isomorphism between V and its dual V*, which we will call φ, allows us to regard any multilinear map
f: V x ...(n times)... x V → V
as a map
F: V* x V x ...(n times)... x V → R
with F = φ∘f (the composition of f and φ); that is, after applying f, we apply the dual element to obtain a scalar.
So we can obtain scalar multilinear functions simply by multiplying by elements of the dual.
Tensor definition
A multilinear map
T: V* x ...(s times)... x V* x V x ...(n times)... x V → R
is called an s-contravariant and n-covariant tensor (also: an s-times contravariant, n-times covariant tensor, or a tensor of type (s, n)).
If a tensor has type (0, n), we simply say it is n times covariant.
If a tensor has type (s, 0), we simply say it is s times contravariant.
So the endomorphisms we considered in the diagonalization and Jordan canonical form sections,
F: V → V
x ∈ V ↦ F(x) = Ax ∈ V,
are, by our definition, (1,1) tensors.
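A small numpy sketch of this (the matrix A and the vectors are arbitrary, chosen only for illustration): viewing the endomorphism x ↦ Ax as a (1,1) tensor means feeding it one dual element u and one vector x and getting back the scalar u^t A x.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # an arbitrary endomorphism of R^2

def T(u, x):
    """The (1,1) tensor associated with A: one dual slot, one vector slot."""
    return u @ A @ x

u = np.array([1.0, -1.0])    # a dual element (row vector)
x = np.array([4.0, 5.0])     # a vector
print(T(u, x))               # the scalar u^t A x  →  -2.0
```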
Also, a vector is a tensor of type (1,0), because it maps any row vector (dual element) to a scalar.
The scalar product on V is a tensor of type (0,2), because it maps two vectors to a scalar.
Finally, we consider a constant to be a (0,0) tensor.
Let T be an (r, s) tensor; we then consider
∏ = V* x ...(r times)... x V* x V x ...(s times)... x V
T: ∏ → R
(u^1, ..., u^r, v_1, ..., v_s) ↦ T(u^1, ..., u^r, v_1, ..., v_s) ∈ R
Then, if B = {e_1, ..., e_n} is a basis of V and B* = {e^1, ..., e^n} is a basis of the dual V*, we can take as basis of ∏ the set β = B* ∪ B, and we define the coordinates of T in the basis β as
T^{i_1,...,i_r}_{j_1,...,j_s} = T(e^{i_1}, ..., e^{i_r}, e_{j_1}, ..., e_{j_s})
Therefore, an (r, s) tensor on R^n has n^{r+s} components.
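This count is easy to check in numpy by representing a tensor as an array with one axis per slot (the values of n, r and s below are arbitrary):

```python
import numpy as np

n, r, s = 3, 1, 2                # a (1,2) tensor on R^3
T = np.zeros((n,) * (r + s))     # one axis of length n per tensor slot
print(T.size)                    # n**(r+s) = 3**3 = 27 components
```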
Given two bases β and β' of ∏, there are change-of-basis matrices A and B for the contravariant and covariant coordinates, respectively. Then, to change the coordinates of a tensor T from β to β':
T'^{i'_1,...,i'_r}_{j'_1,...,j'_s} = A^{i'_1}_{i_1} ⋯ A^{i'_r}_{i_r} B^{j_1}_{j'_1} ⋯ B^{j_s}_{j'_s} T^{i_1,...,i_r}_{j_1,...,j_s}
Note the Einstein summation convention: each repeated index is summed over.
There is a tensor product, denoted by ⊗, defined as follows: if T is an (r, s) tensor and S is an (m, n) tensor, their tensor product is the (r + m, s + n) tensor given by
(T⊗S)(u^1, ..., u^r, ū^1, ..., ū^m, v_1, ..., v_s, v̄_1, ..., v̄_n) = T(u^1, ..., u^r, v_1, ..., v_s) · S(ū^1, ..., ū^m, v̄_1, ..., v̄_n)
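As a sketch in numpy (the vectors T and S below are made up): for a (1,0) tensor T and a (0,1) tensor S, the components of T ⊗ S are just the outer product T^i S_j, and evaluating the product tensor factors into the two separate evaluations.

```python
import numpy as np

T = np.array([1.0, 2.0, 3.0])   # a (1,0) tensor (a vector)
S = np.array([4.0, 5.0, 6.0])   # a (0,1) tensor (a covector)

# The (1,1) tensor T ⊗ S has components (T ⊗ S)^i_j = T^i S_j.
TS = np.multiply.outer(T, S)
print(TS.shape)                 # (3, 3): 3**(1+1) components

# Evaluating T ⊗ S on a covector u and a vector v factors as T(u) * S(v).
u = np.array([1.0, 0.0, -1.0])
v = np.array([2.0, 1.0, 0.0])
assert np.isclose(u @ TS @ v, (u @ T) * (S @ v))
```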
Tensor notation can get very complicated because we are dealing with covariant and contravariant coordinates in several dimensions. For example, the Riemann tensor is a (1,3) tensor: it takes a dual element z = (z_1, z_2, z_3, z_4) of (R^4)* and three vectors
u = (u^1, u^2, u^3, u^4)
v = (v^1, v^2, v^3, v^4)
w = (w^1, w^2, w^3, w^4),
so to denote the application of the tensor to these vectors we would have to write
∑_{l=1}^{4} ∑_{i=1}^{4} ∑_{j=1}^{4} ∑_{k=1}^{4} R^l_{ijk} z_l u^i v^j w^k
With the Einstein summation convention, this expression reduces to
R^l_{ijk} z_l u^i v^j w^k
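This quadruple sum is exactly what numpy's einsum computes; note that the array R and the vectors below are random placeholders for illustration, not an actual Riemann curvature tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((4, 4, 4, 4))   # placeholder R^l_{ijk}, not real curvature
z = rng.standard_normal(4)              # a covector z_l
u, v, w = rng.standard_normal((3, 4))   # three vectors u^i, v^j, w^k

# Einstein convention: the repeated indices l, i, j, k are all summed.
scalar = np.einsum('lijk,l,i,j,k->', R, z, u, v, w)

# The same thing written as the explicit quadruple sum.
explicit = sum(R[l, i, j, k] * z[l] * u[i] * v[j] * w[k]
               for l in range(4) for i in range(4)
               for j in range(4) for k in range(4))
assert np.isclose(scalar, explicit)
```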
That is, the Einstein summation convention says that an index repeated once as a superscript and once as a subscript indicates a sum over that index. For example, v^i_j w^k_l is a (2,2) tensor, whereas v^i_k w^k_l (with k summed) is a (1,1) tensor.
Therefore we can consider a constant k to be a (0,0) tensor, because it is the contraction of a (1,0) tensor with a (0,1) tensor:
k = v^i w_i
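A minimal numpy sketch of this contraction (the components of v and w are arbitrary): summing over the repeated index i collapses a (1,0) tensor against a (0,1) tensor into a (0,0) tensor, i.e. a plain number.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # a (1,0) tensor v^i
w = np.array([4.0, 5.0, 6.0])   # a (0,1) tensor w_i

# Contracting the repeated index i yields a (0,0) tensor: a scalar.
k = np.einsum('i,i->', v, w)
print(k)   # 1*4 + 2*5 + 3*6 = 32.0
```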
On a differentiable manifold M, one can define a tensor field (or simply a tensor) of type (r, s) as a map which assigns to each point p of the manifold an (r, s) tensor
T: Tp(M)* x ...(r times)... x Tp(M)* x Tp(M) x ...(s times)... x Tp(M) → R
That is, we take the tangent space at p as the vector space on which the tensor operates.
Given two charts
C = (φ = (x^1, ..., x^m), U)
K = (ψ = (y^1, ..., y^m), V),
there exists a map ψ∘φ^{-1} which transforms the coordinates (x^1, ..., x^m) into the coordinates (y^1, ..., y^m); its differential matrix is the Jacobian (∂y^i/∂x^j), and its inverse is (∂x^i/∂y^j).
Each chart has the vector fields
![](/mathImage?image=\displaystyle\frac{{\partial }}{{\partial x^1}}, ..., \displaystyle\frac{{\partial }}{{\partial x^m}})
and
![](/mathImage?image=\displaystyle\frac{{\partial }}{{\partial y^1}}, ..., \displaystyle\frac{{\partial }}{{\partial y^m}})
respectively, and the relationship between them (with Einstein summation over j) is
1) ∂/∂x^i = (∂y^j/∂x^i) ∂/∂y^j
2) ∂/∂y^i = (∂x^j/∂y^i) ∂/∂x^j
A very important tensor in differential geometry is the Metric Tensor. We denote it by G = (g_{ij}).
Given an element v = (v^i) of V, v is a (1,0) tensor. We can apply the metric tensor to it as follows:
w_j = g_{ij} v^i
obtaining a (0,1) tensor w, i.e., an element of the dual space.
We will call v^i the contravariant coordinates of v, and w_j its covariant coordinates.
Finally, this process of matching subindexes with superindexes and summing over them (again, the Einstein summation convention) is called contraction.
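A small numpy sketch of lowering an index with the metric tensor (the matrix g and the vector v are invented for illustration; any symmetric positive-definite g would do):

```python
import numpy as np

# An invented metric tensor on R^2 (symmetric positive-definite).
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v = np.array([1.0, 2.0])           # contravariant coordinates v^i

# Lowering the index: w_j = g_{ij} v^i, a contraction over i.
w = np.einsum('ij,i->j', g, v)
print(w)                           # covariant coordinates of v
```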
Riemannian Manifolds
A Riemannian manifold is a differentiable manifold together with a metric tensor acting on it whose coefficient matrix is positive-definite.
When the coefficient matrix of the metric tensor is only positive-semidefinite, the manifold is said to be semi-Riemannian.