In mathematics, a **Gaussian function** (named after Carl Friedrich Gauss) is a function of the form

f(x) = a e^{−(x − b)^{2}/(2c^{2})}

for some real constants *a*, *b*, and *c* > 0, and *e* ≈ 2.718281828 (Euler’s number).

The graph of a Gaussian is a characteristic symmetric “bell curve” shape that quickly falls off towards plus/minus infinity. The parameter *a* is the height of the curve’s peak, *b* is the position of the centre of the peak, and *c* controls the width of the “bell”.
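The roles of the three parameters can be illustrated with a minimal Python sketch (the function name `gaussian` and the sample values are arbitrary choices, not from the text):

```python
import math

def gaussian(x, a=1.0, b=0.0, c=1.0):
    """The Gaussian a * exp(-(x - b)^2 / (2 c^2)): peak height a,
    peak position b, and width parameter c > 0."""
    return a * math.exp(-((x - b) ** 2) / (2 * c ** 2))

# The peak value a is attained at x = b, and the curve is symmetric about b.
peak = gaussian(2.0, a=3.0, b=2.0, c=0.5)
left = gaussian(1.5, a=3.0, b=2.0, c=0.5)
right = gaussian(2.5, a=3.0, b=2.0, c=0.5)
```

Evaluating at points equidistant from *b* returns equal values, reflecting the bell curve’s symmetry.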

Gaussian functions are widely used in statistics, where they describe normal distributions; in signal processing, where they serve to define Gaussian filters; in image processing, where two-dimensional Gaussians are used for Gaussian blurs; and in mathematics, where they are used to solve heat equations and diffusion equations and to define the Weierstrass transform.

#### Properties

Gaussian functions arise by applying the exponential function to a general quadratic function. The Gaussian functions are thus those functions whose logarithm is a quadratic function.

The parameter *c* is related to the full width at half maximum (FWHM) of the peak according to

FWHM = 2√(2 ln 2) *c* ≈ 2.35482 *c*.

Alternatively, the parameter *c* can be interpreted by saying that the two inflection points of the function occur at *x* = *b* − *c* and *x* = *b* + *c*.
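The FWHM relation follows from solving exp(−x^{2}/(2c^{2})) = 1/2, which gives x = ±c√(2 ln 2). A quick numeric check (the value of *c* is an arbitrary choice):

```python
import math

c = 0.7
# Solve exp(-x^2 / (2 c^2)) = 1/2: x = c * sqrt(2 ln 2), so
# FWHM = 2 * sqrt(2 ln 2) * c ≈ 2.35482 * c.
x_half = c * math.sqrt(2 * math.log(2))
fwhm = 2 * x_half
half_value = math.exp(-x_half ** 2 / (2 * c ** 2))
```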

Gaussian functions are analytic, and their limit as *x* → ∞ is 0.

Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function. Nonetheless their improper integrals over the whole real line can be evaluated exactly, using the Gaussian integral

∫_{−∞}^{∞} e^{−x^{2}} dx = √π,

and one obtains

∫_{−∞}^{∞} a e^{−(x − b)^{2}/(2c^{2})} dx = *ac*√(2π).

This integral is 1 if and only if *a* = 1/(*c*√(2π)), and in this case the Gaussian is the probability density function of a normally distributed random variable with expected value μ = *b* and variance σ^{2} = *c*^{2}. These Gaussians are graphed in the accompanying figure.
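The normalization *a* = 1/(*c*√(2π)) can be confirmed numerically; the trapezoidal-rule sketch below (the grid limits and step size are arbitrary choices) integrates the density to 1.

```python
import math

b, c = 0.0, 1.0
a = 1.0 / (c * math.sqrt(2 * math.pi))   # normalizing peak height

h = 1e-3                                  # grid step for the trapezoidal rule
xs = [-12.0 + i * h for i in range(int(24.0 / h) + 1)]
ys = [a * math.exp(-((x - b) ** 2) / (2 * c ** 2)) for x in xs]
area = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))   # trapezoidal rule on [-12, 12]
```

The tail mass beyond twelve standard deviations is far below the tolerance of the check.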

Gaussian functions centered at zero minimize the Fourier uncertainty principle.

The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is again a Gaussian, with width *c* = √(*c*_{1}^{2} + *c*_{2}^{2}).
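A numerical sketch of the convolution rule (the grid spacing and the specific widths *c*_{1} = 0.8 and *c*_{2} = 1.5 are arbitrary): the second moment of the discrete convolution of two normalized Gaussians should come out close to *c*_{1}^{2} + *c*_{2}^{2}.

```python
import numpy as np

c1, c2 = 0.8, 1.5
x = np.linspace(-12.0, 12.0, 2401)        # symmetric grid, step h = 0.01
h = x[1] - x[0]
g1 = np.exp(-x**2 / (2 * c1**2)) / (c1 * np.sqrt(2 * np.pi))
g2 = np.exp(-x**2 / (2 * c2**2)) / (c2 * np.sqrt(2 * np.pi))

conv = np.convolve(g1, g2, mode="same") * h   # grid approximation of (g1 * g2)(x)
mass = conv.sum() * h                          # stays ~1: both inputs are normalized
var = (x**2 * conv).sum() * h / mass           # second moment -> c1^2 + c2^2
```

The symmetric grid with an odd number of points keeps `mode="same"` centered at *x* = 0.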

Taking the Fourier transform of a Gaussian function with parameters *a*, *b* = 0 and *c* yields another Gaussian function, with parameters *ac*, *b* = 0 and 1/*c*. So in particular the Gaussian functions with *b* = 0 and *c* = 1 are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1).
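This eigenfunction property can be illustrated with direct quadrature of the unitary Fourier transform F(ω) = (1/√(2π)) ∫ f(x) e^{−iωx} dx (a sketch; the grid limits, step, and the test frequency ω = 1 are arbitrary choices): for f(x) = e^{−x^{2}/2} one should recover F(ω) = e^{−ω^{2}/2}.

```python
import math

def ft_gauss(omega, h=1e-3, L=12.0):
    """Trapezoidal quadrature of (1/sqrt(2*pi)) * integral of
    e^{-x^2/2} * e^{-i omega x} dx.  The imaginary (sine) part of the
    integrand is odd and integrates to 0, so only the cosine part is summed."""
    n = int(2 * L / h)
    total = 0.0
    for i in range(n + 1):
        x = -L + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal endpoint weights
        total += w * math.exp(-x * x / 2) * math.cos(omega * x)
    return total * h / math.sqrt(2 * math.pi)

val = ft_gauss(1.0)    # should be close to e^{-1/2}
```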

The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows one to derive the following interesting identity from the Poisson summation formula:

∑_{k∈ℤ} exp(−π(*k*/*c*)^{2}) = *c* ∑_{k∈ℤ} exp(−π(*kc*)^{2}).

#### Multi-dimensional Gaussian function

In an *n*-dimensional space a Gaussian function can be defined as

f(x) = exp(−x^{T} *A* x),

where x is a column vector of *n* coordinates, *A* is a positive-definite *n* × *n* matrix, and ^{T} denotes transposition.

The integral of a Gaussian function over the whole *n*-dimensional space is given as

∫ exp(−x^{T} *A* x) d*x* = √(π^{n} / det *A*).

It can be easily calculated by diagonalizing the matrix *A* and changing the integration variables to the eigenvectors of *A*.
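The determinant formula can be spot-checked on a 2 × 2 example (the particular matrix and integration grid below are arbitrary choices in this sketch):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # symmetric positive-definite example
t = np.linspace(-6.0, 6.0, 601)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t)
Q = A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2   # x^T A x on the grid
numeric = np.exp(-Q).sum() * h * h           # 2-D Riemann sum of exp(-x^T A x)
exact = np.sqrt(np.pi**2 / np.linalg.det(A)) # sqrt(pi^n / det A) with n = 2
```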

More generally a shifted Gaussian function is defined as

f(x) = exp(−x^{T} *A* x + s^{T} x),

where s is the shift vector and the matrix *A* can be assumed to be symmetric, *A*^{T} = *A*. The following integral with this function can be calculated with the same technique:

∫ exp(−x^{T} *A* x + s^{T} x) d*x* = √(π^{n} / det *A*) exp((1/4) s^{T} *A*^{−1} s).

#### Applications

Gaussian functions appear in many contexts in the natural sciences, the social sciences, mathematics, and engineering. Some examples include:

- In statistics and probability theory, Gaussian functions appear as the density function of the **normal distribution**, which is a limiting probability distribution of complicated sums, according to the central limit theorem.
- Gaussian functions are the Green’s function for the (homogeneous and isotropic) diffusion equation (and, which is the same thing, the heat equation), a partial differential equation that describes the time evolution of a mass density under diffusion. Specifically, if the mass density at time *t* = 0 is given by a Dirac delta, which essentially means that the mass is initially concentrated in a single point, then the mass distribution at time *t* will be given by a Gaussian function, with the parameter *a* being linearly related to 1/√*t* and *c* being linearly related to √*t*. More generally, if the initial mass density is φ(*x*), then the mass density at later times is obtained by taking the convolution of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a Weierstrass transform.
- A Gaussian function is the wave function of the ground state of the quantum harmonic oscillator.
- The molecular orbitals used in computational chemistry can be linear combinations of Gaussian functions called Gaussian orbitals (see also basis set (chemistry)).
- Mathematically, the derivatives of the Gaussian function can be represented using Hermite functions. The *n*-th derivative of the Gaussian is the Gaussian function itself multiplied by the *n*-th Hermite polynomial, up to scale. For example, the first derivative of the Gaussian is simply the Gaussian multiplied by −*x*, up to scale.
- Consequently, Gaussian functions are also associated with the vacuum state in quantum field theory.
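The Green’s-function claim can be sanity-checked numerically. The sketch below (the diffusivity *D*, the sample point, and the finite-difference steps are all arbitrary choices) verifies that the heat kernel u(x, t) = e^{−x^{2}/(4Dt)}/√(4πDt), a Gaussian with *c* = √(2Dt), satisfies ∂u/∂t = D ∂^{2}u/∂x^{2}.

```python
import math

D = 1.0   # diffusivity (assumed value for this check)

def u(x, t):
    """Heat kernel: a Gaussian of width c = sqrt(2 D t) with unit total mass."""
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

x0, t0, eps = 0.7, 0.5, 1e-4
# Central finite differences for the two sides of the heat equation:
du_dt = (u(x0, t0 + eps) - u(x0, t0 - eps)) / (2 * eps)
d2u_dx2 = (u(x0 + eps, t0) - 2 * u(x0, t0) + u(x0 - eps, t0)) / eps ** 2
residual = du_dt - D * d2u_dx2            # should be ~0
```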

### Gaussian integral

A graph of *ƒ*(*x*) = *e*^{−x^{2}} and the area between the function and the *x*-axis, which is equal to √π.

The **Gaussian integral**, also known as the **Euler–Poisson integral** or **Poisson integral**, is the integral of the Gaussian function *e*^{−x^{2}} over the entire real line. It is named after the German mathematician and physicist Carl Friedrich Gauss. The integral is

∫_{−∞}^{∞} e^{−x^{2}} dx = √π.

This integral has wide applications. When normalized so that its value is 1, it is the density function of the normal distribution. It is closely related to the error function, which is the same integral with finite limits.

Although the error function has no elementary closed form, as can be proven by the Risch algorithm, the Gaussian integral can be evaluated analytically through the tools of calculus. That is, there is no elementary *indefinite integral* for ∫ *e*^{−x^{2}} d*x*, but the definite integral ∫_{−∞}^{∞} *e*^{−x^{2}} d*x* can be evaluated.

#### Computation

##### By polar coordinates

A standard way to solve this integral is to take the square and change to polar coordinates:

- consider the function *e*^{−(x^{2} + y^{2})} = *e*^{−r^{2}} on the plane **R**^{2}, and compute its integral two ways:
- on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: (∫ *e*^{−x^{2}} d*x*)^{2};
- on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be π.

Comparing these two computations yields the integral, though one should take care about the improper integrals involved.

###### Brief proof

Briefly, using the method above, one computes that on the one hand,

∫∫_{**R**^{2}} e^{−(x^{2} + y^{2})} dx dy = (∫_{−∞}^{∞} e^{−x^{2}} dx)^{2}.

On the other hand,

∫∫_{**R**^{2}} e^{−(x^{2} + y^{2})} dx dy = ∫_{0}^{2π} ∫_{0}^{∞} e^{−r^{2}} r dr dθ = 2π ∫_{0}^{∞} r e^{−r^{2}} dr = π,

where the factor of *r* comes from the transform to polar coordinates (*r* d*r* dθ is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking *s* = −*r*^{2}, so d*s* = −2*r* d*r*.

Combining these yields

(∫_{−∞}^{∞} e^{−x^{2}} dx)^{2} = π,

so

∫_{−∞}^{∞} e^{−x^{2}} dx = √π.
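The conclusion is easy to confirm numerically (a sketch; the integration limits and step size are arbitrary choices):

```python
import math

h, L = 1e-3, 8.0
n = int(2 * L / h)
# Trapezoidal approximation of the integral of e^{-x^2} over [-L, L];
# the tails beyond |x| = 8 are smaller than e^{-64} and can be ignored.
I = (sum(math.exp(-(-L + i * h) ** 2) for i in range(n + 1))
     - math.exp(-L * L)) * h               # subtract the two half-weight endpoints
```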

###### Careful proof

To justify the improper double integrals and the equating of the two expressions, we begin with an approximating function

I(a) = ∫_{−a}^{a} e^{−x^{2}} dx,

so that the integral may be found by

∫_{−∞}^{∞} e^{−x^{2}} dx = lim_{a→∞} I(a),

since the integrand is positive, so I(*a*) is increasing in *a* and the limit either exists or diverges to infinity.
Taking the square of I(*a*) yields

I(a)^{2} = (∫_{−a}^{a} e^{−x^{2}} dx)(∫_{−a}^{a} e^{−y^{2}} dy) = ∫_{−a}^{a} ∫_{−a}^{a} e^{−(x^{2} + y^{2})} dx dy.

Using Fubini’s theorem, the above double integral can be seen as an area integral

∫∫ e^{−(x^{2} + y^{2})} d(x, y),

taken over a square with vertices {(−*a*, *a*), (*a*, *a*), (*a*, −*a*), (−*a*, −*a*)} on the *xy*-plane.

Since the exponential function is greater than 0 for all real numbers, it follows that the integral taken over the square’s incircle must be less than I(*a*)^{2}, and similarly the integral taken over the square’s circumcircle must be greater than I(*a*)^{2}. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:

∫_{0}^{2π} ∫_{0}^{a} r e^{−r^{2}} dr dθ < I(a)^{2} < ∫_{0}^{2π} ∫_{0}^{a√2} r e^{−r^{2}} dr dθ.

(See the transformation from Cartesian to polar coordinates for help with the polar transformation.)

Integrating,

π(1 − e^{−a^{2}}) < I(a)^{2} < π(1 − e^{−2a^{2}}).

By the squeeze theorem, this gives the Gaussian integral

∫_{−∞}^{∞} e^{−x^{2}} dx = √π.
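The incircle and circumcircle bounds can be checked at a concrete value of *a* (the choice *a* = 1.5 and the quadrature resolution are arbitrary in this sketch):

```python
import math

def I(a, n=20000):
    """Trapezoidal approximation of I(a) = integral of e^{-x^2} over [-a, a]."""
    h = 2 * a / n
    total = sum(math.exp(-(-a + i * h) ** 2) for i in range(n + 1))
    total -= math.exp(-a * a)             # the two endpoints get weight 1/2 each
    return total * h

a = 1.5
lower = math.pi * (1 - math.exp(-a * a))          # incircle bound
upper = math.pi * (1 - math.exp(-2 * a * a))      # circumcircle bound
squared = I(a) ** 2                                # sits strictly between them
```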

##### By Cartesian coordinates

Georgakis^{[2]} wrote that the following is “a better alternative to the usual method of reduction to polar coordinates”.

Let *y* = *xs*, so that d*y* = *x* d*s*.

Since the limits on *s* as *y* goes to ±∞ depend on the sign of *x*, it simplifies the calculation to use the fact that *e*^{−x^{2}} is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity; that is,

∫_{−∞}^{∞} e^{−x^{2}} dx = 2 ∫_{0}^{∞} e^{−x^{2}} dx.

Thus, over the range of integration, *x* ≥ 0, and the variables *y* and *s* have the same limits. This yields

(∫_{0}^{∞} e^{−x^{2}} dx)^{2} = ∫_{0}^{∞} ∫_{0}^{∞} e^{−(x^{2} + y^{2})} dy dx = ∫_{0}^{∞} ∫_{0}^{∞} e^{−x^{2}(1 + s^{2})} x ds dx.

Then, exchanging the order of integration and evaluating the inner integral,

(∫_{0}^{∞} e^{−x^{2}} dx)^{2} = ∫_{0}^{∞} (∫_{0}^{∞} x e^{−x^{2}(1 + s^{2})} dx) ds = ∫_{0}^{∞} ds / (2(1 + s^{2})) = π/4.

Finally, ∫_{−∞}^{∞} e^{−x^{2}} dx = 2 √(π/4) = √π, as expected.

#### Relation to the gamma function

The integrand is an even function, so

∫_{−∞}^{∞} e^{−x^{2}} dx = 2 ∫_{0}^{∞} e^{−x^{2}} dx.

Thus, after the change of variable *t* = *x*^{2}, this turns into the Euler integral

2 ∫_{0}^{∞} (1/2) e^{−t} t^{−1/2} dt = Γ(1/2) = √π,

where Γ is the gamma function. This shows why the factorial of a half-integer is a rational multiple of √π. More generally,

∫_{0}^{∞} e^{−a x^{b}} dx = (1/*b*) *a*^{−1/b} Γ(1/*b*).
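Python’s `math.gamma` makes the half-integer statement easy to check: Γ(1/2) = √π and Γ(3/2) = (1/2)! = √π/2.

```python
import math

g_half = math.gamma(0.5)          # gamma(1/2) = sqrt(pi)
g_three_half = math.gamma(1.5)    # gamma(3/2) = (1/2) * gamma(1/2) = sqrt(pi)/2
```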

##### Integrals of similar form

An easy way to derive integrals of similar form, such as ∫ x^{2n} e^{−αx^{2}} dx, is by parameter differentiation (differentiating under the integral sign with respect to α).
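For instance, differentiating ∫_{−∞}^{∞} e^{−αx^{2}} dx = √(π/α) with respect to α gives ∫_{−∞}^{∞} x^{2} e^{−αx^{2}} dx = (1/2)√(π/α^{3}); the sketch below (arbitrary α and grid) confirms this numerically.

```python
import math

alpha = 2.0
# d/d(alpha) of sqrt(pi/alpha) is -(1/2) sqrt(pi) alpha^(-3/2); the minus sign
# cancels the -x^2 pulled down from the exponent, giving the closed form below.
exact = 0.5 * math.sqrt(math.pi / alpha ** 3)

h, L = 1e-3, 8.0
n = int(2 * L / h)
numeric = sum((-L + i * h) ** 2 * math.exp(-alpha * (-L + i * h) ** 2)
              for i in range(n + 1)) * h   # Riemann sum over [-L, L]
```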

##### Higher-order polynomials

Exponentials of other even polynomials can easily be solved using series. For example, the solution to the integral of the exponential of a quartic polynomial is

The *n* + *p* = 0 mod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.

The integral of an arbitrary Gaussian function is

∫_{−∞}^{∞} a e^{−(x + b)^{2}/(2c^{2})} dx = √(2π) *a* |*c*|.

An alternative form is

∫_{−∞}^{∞} k e^{−f x^{2} + g x + h} dx = *k* √(π/*f*) e^{g^{2}/(4f) + h},

where *f* must be strictly positive for the integral to converge.
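A numeric spot-check of the alternative form ∫ k e^{−fx^{2}+gx+h} dx = k√(π/f) e^{g^{2}/(4f)+h} (the values of *k*, *f*, *g*, *h* and the grid are arbitrary choices in this sketch):

```python
import math

k, f, g, c0 = 2.0, 1.2, 0.7, 0.3   # c0 plays the role of the constant h
exact = k * math.sqrt(math.pi / f) * math.exp(g * g / (4 * f) + c0)

step, L = 1e-3, 10.0
n = int(2 * L / step)
numeric = sum(k * math.exp(-f * x * x + g * x + c0)
              for x in (-L + i * step for i in range(n + 1))) * step
```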

#### Proof

The integral

∫_{−∞}^{∞} a e^{−(x + b)^{2}/(2c^{2})} dx

can be calculated by putting it into the form of a Gaussian integral. First, the constant *a* can simply be factored out of the integral. Next, the variable of integration is changed from *x* to *y* = *x* + *b*,

a ∫_{−∞}^{∞} e^{−y^{2}/(2c^{2})} dy,

and then to *z* = *y* / |*c*|,

a |c| ∫_{−∞}^{∞} e^{−z^{2}/2} dz.

Then, using the Gaussian integral identity

∫_{−∞}^{∞} e^{−z^{2}/2} dz = √(2π),

we have

∫_{−∞}^{∞} a e^{−(x + b)^{2}/(2c^{2})} dx = a |c| √(2π).

This integral is independent of the value of the mean, because we can change the variable of integration to a new variable shifted by the mean, i.e., *y* = *x* − μ:

∫_{−∞}^{∞} (x − μ)^{n} e^{−(x − μ)^{2}/(2σ^{2})} dx = ∫_{−∞}^{∞} y^{n} e^{−y^{2}/(2σ^{2})} dy.

This integral vanishes for *n* odd, because the integrand is then an odd function and the contributions from the two halves of the real line cancel.

Let us prove

∫_{−∞}^{∞} x^{2n} e^{−x^{2}/(2σ^{2})} dx = σ^{2n+1} √(2π) (2n − 1)!!

by induction. The case *n* = 0 is just the Gaussian integral, so assume the result for *n* − 1 and establish the case *n* using integration by parts.
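As a numeric sanity check of the even-moment formula σ^{2n+1}√(2π)(2n − 1)!! (the choices *n* = 2 and σ = 1 below are arbitrary):

```python
import math

sigma, n_mom = 1.0, 2                       # check the fourth moment (n = 2)
double_fact = 1
for k in range(2 * n_mom - 1, 0, -2):       # (2n - 1)!! = 3 * 1 = 3
    double_fact *= k
exact = sigma ** (2 * n_mom + 1) * math.sqrt(2 * math.pi) * double_fact

h, L = 1e-3, 10.0
m = int(2 * L / h)
numeric = sum(x ** (2 * n_mom) * math.exp(-x * x / (2 * sigma ** 2))
              for x in (-L + i * h for i in range(m + 1))) * h
```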

#### References

http://en.wikipedia.org/wiki/Gaussian_function

http://en.wikipedia.org/wiki/Gaussian_integral

http://en.wikipedia.org/wiki/Integral_of_a_Gaussian_function

http://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution

http://galileo.phys.virginia.edu/classes/252/kinetic_theory.html