# Spectral theorem

In mathematics, particularly linear algebra and functional analysis, the spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, eigenvalue decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.
In this article we consider mainly the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.

In mathematics, on a finite-dimensional inner product space, a self-adjoint operator is one that is its own adjoint, or, equivalently, one whose matrix is Hermitian, where a Hermitian matrix is one which is equal to its own conjugate transpose. By the finite-dimensional spectral theorem such operators have an orthonormal basis in which the operator can be represented as a diagonal matrix with entries in the real numbers. In this article, we consider generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
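The finite-dimensional case can be sketched numerically. The snippet below, a minimal illustration using NumPy's `eigh` routine for Hermitian matrices, checks that a Hermitian matrix has real eigenvalues and an orthonormal eigenbasis in which it is diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian matrix A = (M + M*)/2.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

# eigh is specialized for Hermitian matrices: eigenvalues come back real.
eigenvalues, V = np.linalg.eigh(A)

# The columns of V form an orthonormal basis: V* V = I.
assert np.allclose(V.conj().T @ V, np.eye(4))

# In that basis A is diagonal: V* A V = diag(eigenvalues).
assert np.allclose(V.conj().T @ A @ V, np.diag(eigenvalues))
```

The second assertion is exactly the finite-dimensional spectral theorem: conjugating A by the unitary V of eigenvectors yields a real diagonal matrix.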
Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the fact that in the Dirac–von Neumann formulation of quantum mechanics, physical observables such as position, momentum, angular momentum, and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian
$H \psi = V \psi - \frac{\hbar^2}{2 m} \nabla^2 \psi$
which as an observable corresponds to the total energy of a particle of mass m in a real potential field V. Differential operators are an important class of unbounded operators.
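As a rough numerical sketch of such a differential operator, assume a particle on a one-dimensional grid in units with ℏ = m = 1 and a harmonic potential (both choices are illustrative assumptions). Discretizing the Laplacian by central differences yields a real symmetric, hence self-adjoint, matrix whose eigenvalues, the energies, are real:

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 + V(x) on a grid (units with hbar = m = 1).
n, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Central-difference approximation of the second derivative.
laplacian = (np.diag(np.full(n - 1, 1.0), -1)
             - 2.0 * np.eye(n)
             + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

V = 0.5 * x**2                      # harmonic potential (assumed example)
H = -0.5 * laplacian + np.diag(V)   # the discretized Hamiltonian

# H is real symmetric, so self-adjoint; its spectrum is real.
assert np.allclose(H, H.T)
energies = np.linalg.eigvalsh(H)
```

In these units the lowest computed energies approximate the harmonic-oscillator levels 1/2, 3/2, 5/2, …, up to discretization error.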
The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case; that is to say, operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail.

# Ehrenfest theorem

The Ehrenfest theorem, named after Paul Ehrenfest, the Austrian physicist and mathematician, relates the time derivative of the expectation value of a quantum mechanical operator to the commutator of that operator with the Hamiltonian of the system. It is

$\frac{d}{dt}\langle A \rangle = \frac{1}{i\hbar}\langle [A, H] \rangle + \left\langle \frac{\partial A}{\partial t} \right\rangle$

where A is some QM operator and $\langle A \rangle$ is its expectation value. Ehrenfest’s theorem is obvious in the Heisenberg picture of quantum mechanics, where it is just the expectation value of the Heisenberg equation of motion.

Ehrenfest’s theorem is closely related to Liouville’s theorem from Hamiltonian mechanics, which involves the Poisson bracket instead of a commutator. In fact, it is a rule of thumb that a theorem in quantum mechanics which contains a commutator can be turned into a theorem in classical mechanics by changing the commutator into a Poisson bracket and multiplying by $i\hbar$.
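The theorem can be checked numerically on a small example. The sketch below assumes a two-level system with H = σ_z and the time-independent observable A = σ_x, in units with ℏ = 1 (all of these are illustrative choices, not part of the theorem): the finite-difference time derivative of ⟨A⟩ matches ⟨[A, H]⟩/(iℏ).

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (assumption for this sketch)

# A two-level system: H = sigma_z, observable A = sigma_x (time-independent,
# so the partial-derivative term of Ehrenfest's theorem vanishes).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H, A = sz, sx

# Propagator U(t) = exp(-i H t / hbar) via the eigendecomposition of H.
w, V = np.linalg.eigh(H)
def U(t):
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)

def expect(t):
    psi = U(t) @ psi0
    return (psi.conj() @ A @ psi).real

# Left side: d<A>/dt by a central finite difference at t = 0.3.
t, h = 0.3, 1e-6
lhs = (expect(t + h) - expect(t - h)) / (2 * h)

# Right side: <[A, H]> / (i hbar) in the state psi(t).
psi = U(t) @ psi0
comm = A @ H - H @ A
rhs = (psi.conj() @ comm @ psi / (1j * hbar)).real

assert abs(lhs - rhs) < 1e-6
```

For this particular state, ⟨σ_x⟩(t) = cos(2t), so both sides equal −2 sin(2t) up to the finite-difference error.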

http://en.wikipedia.org/wiki/Ehrenfest_theorem

# Jacobian matrix and determinant

In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector- or scalar-valued function with respect to another vector. Suppose F : Rn → Rm is a function from Euclidean n-space to Euclidean m-space. Such a function is given by m real-valued component functions, y1(x1,…,xn), …, ym(x1,…,xn). The partial derivatives of all these functions (if they exist) can be organized in an m-by-n matrix, the Jacobian matrix J of F, as follows:

$J = \begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_m}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_n} \end{bmatrix}$

This matrix is also denoted by $J_F(x_1,\ldots,x_n)$ and $\dfrac{\partial(y_1,\ldots,y_m)}{\partial(x_1,\ldots,x_n)}$. If (x1,…,xn) are the usual orthogonal Cartesian coordinates, the ith row (i = 1, …, m) of this matrix corresponds to the gradient of the ith component function yi. Note that some books define the Jacobian as the transpose of the matrix given above.

The Jacobian determinant (often simply called the Jacobian) is the determinant of the Jacobian matrix.
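As a small worked example, take the polar-to-Cartesian map F(r, θ) = (r cos θ, r sin θ), whose Jacobian determinant is known to be r. The sketch below builds the Jacobian matrix column by column with central finite differences (a generic numerical approach, not tied to any particular library routine) and checks the determinant:

```python
import numpy as np

# F maps polar coordinates (r, theta) to Cartesian (x, y).
def F(p):
    r, theta = p
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def jacobian(f, p, h=1e-6):
    """m-by-n matrix of first-order partials, by central differences."""
    p = np.asarray(p, dtype=float)
    cols = []
    for j in range(p.size):
        e = np.zeros_like(p)
        e[j] = h
        cols.append((f(p + e) - f(p - e)) / (2 * h))
    return np.column_stack(cols)

p = np.array([2.0, 0.7])   # a sample point (r, theta)
J = jacobian(F, p)

# For this map the Jacobian determinant equals r.
assert abs(np.linalg.det(J) - p[0]) < 1e-8
```

Each column j of the matrix holds the partial derivatives of all components with respect to xj, matching the m-by-n layout defined above.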

These concepts are named after the mathematician Carl Gustav Jacob Jacobi. The term “Jacobian” is normally pronounced /dʒəˈkoʊbiən/, but sometimes also /jəˈkoʊbiən/.

http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant

# Isometry

For the mechanical engineering and architecture usage, see isometric projection. For isometry in differential geometry, see isometry (Riemannian geometry).
In mathematics, an isometry is a distance-preserving map between metric spaces. Geometric figures which can be related by an isometry are called congruent.
Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into M’, a quotient set of the space of Cauchy sequences on M. The original space M is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace. Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space.
An isometric surjective linear operator on a Hilbert space is called a unitary operator.
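A plane rotation is a simple concrete instance: as a real orthogonal matrix (the real case of a unitary operator) it satisfies QᵀQ = I and preserves distances. A minimal check, using an arbitrary sample angle and sample points:

```python
import numpy as np

theta = 0.9  # an arbitrary rotation angle for the check
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality (the real case of unitarity): Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(2))

# Distance preservation: ||Qu - Qv|| = ||u - v|| for sample points.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose(np.linalg.norm(Q @ u - Q @ v), np.linalg.norm(u - v))
```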
