# cofactor

In linear algebra, the cofactor (sometimes called adjunct; see below) describes a construction that is useful for computing both the determinant and the inverse of a square matrix. Specifically, the cofactor of the (i, j) entry of a matrix, also known as the (i, j) cofactor of that matrix, is the signed minor of that entry.

Finding the minors of a matrix A is a multi-step process:
1. Choose an entry $a_{ij}$ from the matrix.
2. Cross out the entries that lie in the corresponding row i and column j.
3. Rewrite the matrix without the marked entries.
4. Obtain the determinant $M_{ij}$ of this new matrix.
$M_{ij}$ is termed the minor for entry $a_{ij}$.
If i + j is an even number, the cofactor $C_{ij}$ of $a_{ij}$ coincides with its minor:
$C_{ij} = M_{ij}. \,$
Otherwise, it is equal to the additive inverse of its minor:
$C_{ij} = -M_{ij}. \,$
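As a minimal sketch of the steps above for a 3 × 3 matrix (with 0-based indices, so the textbook entry $a_{11}$ is `A[0][0]`), each minor reduces to a 2 × 2 determinant ad − bc:

```python
def minor(A, i, j):
    """Minor M_ij: determinant of A with row i and column j deleted (0-based)."""
    # Steps 2-3: drop row i and column j.
    sub = [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]
    # Step 4: sub is 2x2 here, so its determinant is ad - bc.
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(A, i, j):
    """Cofactor C_ij = (-1)^(i+j) * M_ij (the signed minor)."""
    return (-1) ** (i + j) * minor(A, i, j)

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(minor(A, 0, 0))     # |5 6; 8 9| = 45 - 48 = -3
print(cofactor(A, 0, 1))  # -|4 6; 7 9| = -(36 - 42) = 6
```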
The matrix of cofactors for an $n\times n$ matrix A is the matrix whose (i, j) entry is the cofactor $C_{ij}$ of A. For instance, if A is
$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$
the cofactor matrix of A is
$C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}$
where $C_{ij}$ is the cofactor of $a_{ij}$.
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression for the determinant |B| of an n × n square matrix B that is a weighted sum of the determinants of n sub-matrices of B, each of size (n–1) × (n–1). The Laplace expansion is of theoretical interest as one of several ways to view the determinant, as well as of practical use in determinant computation.
The (i, j) cofactor of B is the scalar $C_{ij}$ defined by
$C_{ij}\ = (-1)^{i+j} |M_{ij}|\,,$
where $M_{ij}$ is the (i, j) minor matrix of B, that is, the (n − 1) × (n − 1) matrix that results from deleting the i-th row and the j-th column of B.
Then the Laplace expansion is given by the following
Theorem. Suppose B = $(b_{ij})$ is an n × n matrix and i, j ∈ {1, 2, …, n}.
Then its determinant |B| is given by:
\begin{align}|B| & {} = b_{i1} C_{i1} + b_{i2} C_{i2} + \cdots + b_{in} C_{in} \\ & {} = b_{1j} C_{1j} + b_{2j} C_{2j} + \cdots + b_{nj} C_{nj}. \end{align}
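The theorem's row expansion translates directly into a recursive determinant sketch (expanding along the first row; exponential running time, so suitable only for small matrices):

```python
def det(B):
    """Determinant of a square matrix via Laplace expansion along the first row."""
    n = len(B)
    if n == 1:
        return B[0][0]
    total = 0
    for j in range(n):
        # The minor: delete row 0 and column j (0-based), then recurse.
        sub = [row[:j] + row[j + 1:] for row in B[1:]]
        # (-1) ** j is the cofactor sign (-1)^(i+j) for the first row.
        total += (-1) ** j * B[0][j] * det(sub)
    return total

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
print(det([[2, 0, 1], [1, 3, -1], [0, 5, 4]]))  # 39
```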
Suppose R is a commutative ring and A is an n×n matrix with entries from R. The definition of the adjugate of A is a multi-step process:
• Define the (i, j) minor of A, denoted $M_{ij}$, as the determinant of the (n − 1)×(n − 1) matrix that results from deleting row i and column j of A.
• Define the (i, j) cofactor of A as
$\mathbf{C}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}. \,$
• Define the cofactor matrix of A, as the n×n matrix C whose (i,j) entry is the (i,j) cofactor of A.
The adjugate of A is the transpose of the cofactor matrix of A:
$\mathrm{adj}(\mathbf{A}) = \mathbf{C}^T \,$.
That is, the adjugate of A is the n×n matrix whose (i,j) entry is the (j,i) cofactor of A:
$\mathrm{adj}(\mathbf{A})_{ij} = \mathbf{C}_{ji} \,$.
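The steps above can be sketched together in Python; this is a naive construction (each cofactor recomputes a full determinant), fine for small n:

```python
def det(M):
    """Determinant via cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cofactor_matrix(A):
    """n x n matrix C whose (i, j) entry is the (i, j) cofactor of A (0-based)."""
    n = len(A)
    return [[(-1) ** (i + j)
             * det([row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i])
             for j in range(n)]
            for i in range(n)]

def adjugate(A):
    """adj(A): the transpose of the cofactor matrix of A."""
    return [list(col) for col in zip(*cofactor_matrix(A))]

print(adjugate([[1, 2], [3, 4]]))  # [[4, -2], [-3, 1]]
```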

## Examples

### 2 × 2 generic matrix

The adjugate of the 2 × 2 matrix
$\mathbf{A} = \begin{pmatrix} {{a}} & {{b}}\\ {{c}} & {{d}} \end{pmatrix}$
is
$\operatorname{adj}(\mathbf{A}) = \begin{pmatrix} \,\,\,{{d}} & \!\!{{-b}}\\ {{-c}} & {{a}} \end{pmatrix}$.
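As a quick numeric check of this formula (with illustrative values for a, b, c, d chosen here), it also satisfies the well-known identity A · adj(A) = det(A) · I:

```python
# Hypothetical sample entries for the generic 2x2 matrix above.
a, b, c, d = 1, 2, 3, 4
A = [[a, b], [c, d]]
adj = [[d, -b], [-c, a]]  # the 2x2 adjugate formula
print(adj)  # [[4, -2], [-3, 1]]

# Check A * adj(A) = det(A) * I, where det(A) = a*d - b*c = -2.
prod = [[sum(A[i][k] * adj[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[-2, 0], [0, -2]]
```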

### 3 × 3 generic matrix

Consider the $3\times 3$ matrix
$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$.
Its adjugate is the transpose of the cofactor matrix
$\mathbf{C} = \begin{pmatrix} +\left| \begin{matrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{matrix} \right| & -\left| \begin{matrix} A_{21} & A_{23} \\ A_{31} & A_{33} \end{matrix} \right| & +\left| \begin{matrix} A_{21} & A_{22} \\ A_{31} & A_{32} \end{matrix} \right| \\ & & \\ -\left| \begin{matrix} A_{12} & A_{13} \\ A_{32} & A_{33} \end{matrix} \right| & +\left| \begin{matrix} A_{11} & A_{13} \\ A_{31} & A_{33} \end{matrix} \right| & -\left| \begin{matrix} A_{11} & A_{12} \\ A_{31} & A_{32} \end{matrix} \right| \\ & & \\ +\left| \begin{matrix} A_{12} & A_{13} \\ A_{22} & A_{23} \end{matrix} \right| & -\left| \begin{matrix} A_{11} & A_{13} \\ A_{21} & A_{23} \end{matrix} \right| & +\left| \begin{matrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{matrix} \right| \end{pmatrix} = \begin{pmatrix} +\left| \begin{matrix} 5 & 6 \\ 8 & 9 \end{matrix} \right| & -\left| \begin{matrix} 4 & 6 \\ 7 & 9 \end{matrix} \right| & +\left| \begin{matrix} 4 & 5 \\ 7 & 8 \end{matrix} \right| \\ & & \\ -\left| \begin{matrix} 2 & 3 \\ 8 & 9 \end{matrix} \right| & +\left| \begin{matrix} 1 & 3 \\ 7 & 9 \end{matrix} \right| & -\left| \begin{matrix} 1 & 2 \\ 7 & 8 \end{matrix} \right| \\ & & \\ +\left| \begin{matrix} 2 & 3 \\ 5 & 6 \end{matrix} \right| & -\left| \begin{matrix} 1 & 3 \\ 4 & 6 \end{matrix} \right| & +\left| \begin{matrix} 1 & 2 \\ 4 & 5 \end{matrix} \right| \end{pmatrix}$
so that
$\operatorname{adj}(\mathbf{A}) = \begin{pmatrix} +\left| \begin{matrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{matrix} \right| & -\left| \begin{matrix} A_{12} & A_{13} \\ A_{32} & A_{33} \end{matrix} \right| & +\left| \begin{matrix} A_{12} & A_{13} \\ A_{22} & A_{23} \end{matrix} \right| \\ & & \\ -\left| \begin{matrix} A_{21} & A_{23} \\ A_{31} & A_{33} \end{matrix} \right| & +\left| \begin{matrix} A_{11} & A_{13} \\ A_{31} & A_{33} \end{matrix} \right| & -\left| \begin{matrix} A_{11} & A_{13} \\ A_{21} & A_{23} \end{matrix} \right| \\ & & \\ +\left| \begin{matrix} A_{21} & A_{22} \\ A_{31} & A_{32} \end{matrix} \right| & -\left| \begin{matrix} A_{11} & A_{12} \\ A_{31} & A_{32} \end{matrix} \right| & +\left| \begin{matrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{matrix} \right| \end{pmatrix} = \begin{pmatrix} +\left| \begin{matrix} 5 & 6 \\ 8 & 9 \end{matrix} \right| & -\left| \begin{matrix} 2 & 3 \\ 8 & 9 \end{matrix} \right| & +\left| \begin{matrix} 2 & 3 \\ 5 & 6 \end{matrix} \right| \\ & & \\ -\left| \begin{matrix} 4 & 6 \\ 7 & 9 \end{matrix} \right| & +\left| \begin{matrix} 1 & 3 \\ 7 & 9 \end{matrix} \right| & -\left| \begin{matrix} 1 & 3 \\ 4 & 6 \end{matrix} \right| \\ & & \\ +\left| \begin{matrix} 4 & 5 \\ 7 & 8 \end{matrix} \right| & -\left| \begin{matrix} 1 & 2 \\ 7 & 8 \end{matrix} \right| & +\left| \begin{matrix} 1 & 2 \\ 4 & 5 \end{matrix} \right| \end{pmatrix}$
where
$\left| \begin{matrix} A_{im} & A_{in} \\ \,\,A_{jm} & A_{jn} \end{matrix} \right|= \det\left( \begin{matrix} A_{im} & A_{in} \\ \,\,A_{jm} & A_{jn} \end{matrix} \right)$.
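Evaluating each of these 2 × 2 determinants for the numeric matrix above gives
$\operatorname{adj}(\mathbf{A}) = \begin{pmatrix} -3 & 6 & -3 \\ 6 & -12 & 6 \\ -3 & 6 & -3 \end{pmatrix}$.
(Note that this particular A has determinant 0, so it illustrates the adjugate but has no inverse.)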
Note that the adjugate is the transpose of the cofactor matrix: for instance, the (3, 2) entry of the adjugate is the (2, 3) cofactor of A. The adjugate is especially useful because of its relation to the inverse of A:
$\mathbf{A}^{-1} = \frac{1}{\det \mathbf{A}} \mbox{adj}(\mathbf{A})$
The matrix of cofactors
$\begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}$
when transposed becomes
$\mathrm{adj}(A) = \begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}.$
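Combining the pieces, the inverse formula above can be sketched with exact rational arithmetic (again a naive cofactor-expansion implementation, intended only for small matrices):

```python
from fractions import Fraction

def det(M):
    """Determinant via cofactor expansion along the first row."""
    if not M:
        return 1  # determinant of the empty (0 x 0) matrix
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def inverse(A):
    """A^{-1} = adj(A) / det(A); the (i, j) entry of adj(A) is the (j, i) cofactor."""
    n, d = len(A), det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    # Entry (i, j) of the inverse: (j, i) cofactor of A, divided by det(A).
    return [[Fraction((-1) ** (i + j)
                      * det([row[:i] + row[i + 1:]
                             for r, row in enumerate(A) if r != j]), d)
             for j in range(n)]
            for i in range(n)]

print(inverse([[2, 1], [5, 3]]))  # [[3, -1], [-5, 2]] (as Fractions)
```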

## A remark about different notations

In some books, including the so-called “bible of matrix theory”,[1] the term adjunct is used instead of cofactor. Moreover, it is denoted $A_{ij}$ and defined in the same way as the cofactor:
$\mathbf{A}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}$
In this notation, the inverse matrix is written as:
$\mathbf{A}^{-1} = \frac{1}{\det(A)}\begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{bmatrix}$