I'm looking for examples of uses of the incidence matrix in graphs for my class (apart from proving that for a graph $G=(V,E)$ we have $2|E|=\underset{v\in V}{\sum}d(v)$). Can you think of anything interesting?
Thanks in advance for any help.
Examples of theorems in graph theory
-
0@Qiaochu: Oops, I was talking about the incidence matrix, but the answer below by lhf caught me off guard, since we talked about the adjacency matrix as well. Sorry for the confusion :) – 2019-04-13
4 Answers
There is also the classical result that the powers of the adjacency matrix count walks: the $(i,j)$ entry of $A^k$ is the number of walks of length $k$ from vertex $i$ to vertex $j$.
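A quick numerical illustration of this (the graph is my own choice, using numpy):

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 with vertices 0-1-2-3-0.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

# (A^k)[i, j] counts walks of length k from vertex i to vertex j.
A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)

print(A2[0, 0])  # 2: walks of length 2 from 0 back to 0 (via 1 or via 3)
print(A3[0, 1])  # 4: e.g. 0-1-0-1, 0-3-0-1, 0-1-2-1, 0-3-2-1
```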
-
0Oops, this is about the *adjacency* matrix, not the *incidence* matrix. – 2011-05-03
A classic example is the Friendship Theorem of Erdős, Rényi, and Sós.
-
0I think this one has proofs using both the incidence and the adjacency matrix, but I was thinking of the proof using the adjacency matrix when I posted this! – 2011-05-03
Let $R$ be a commutative ring and $R(V), R(E)$ the free $R$-modules on the vertices and edges, respectively. Pick an orientation of the graph (an arrow attached to each edge pointing from one vertex to the other). The oriented incidence matrix is the matrix of an $R$-module homomorphism
$\partial : R(E) \to R(V)$
sending an edge $e$ to $v - w$, where $v, w$ are the vertices in the appropriate orientation. This morphism, and in particular its kernel, is a basic invariant of the graph: when $R = \mathbb{Z}$ or $R = \mathbb{F}_2$ its kernel is the integral (resp. mod $2$) cycle space of the graph, which is a useful tool in answering certain questions in graph theory (the Wikipedia article has references). Abstractly this is a simple example of a simplicial homology group. (Note that you don't have to pick an orientation if $R$ has characteristic $2$. This is familiar behavior from algebraic topology.)
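For concreteness, here is a small numerical sketch of this boundary map over $R = \mathbb{R}$ (the graph, a triangle, and its orientation are my own choices):

```python
import numpy as np

# Oriented incidence matrix of a triangle on vertices {0, 1, 2},
# edges oriented 0->1, 1->2, 0->2.  Rows index vertices, columns
# index edges; the column for an edge v -> w is the vector v - w:
# +1 at v, -1 at w, matching the boundary map ∂ : R(E) -> R(V).
B = np.array([
    [ 1,  0,  1],   # vertex 0
    [-1,  1,  0],   # vertex 1
    [ 0, -1, -1],   # vertex 2
])

# Over R, dim ker(∂) = m - rank(B) = m - n + 1 for a connected graph.
m = B.shape[1]
print(m - np.linalg.matrix_rank(B))  # 1: one independent cycle

# The triangle traversed 0 -> 1 -> 2 -> 0 is the element e1 + e2 - e3
# of R(E); it lies in the kernel, i.e. in the cycle space.
z = np.array([1, 1, -1])
print(B @ z)  # [0 0 0]
```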
The transpose of the incidence matrix is the matrix of a homomorphism
$d : R^V \to R^E$
in the other direction (where $R^S$ is the $R$-module of functions $S \to R$) dual to the above one. It sends a function $f(v)$ on vertices to the function $df(e) = f(v) - f(w)$ on edges and can be thought of as a simple analogue of the derivative. In fact this whole setup is a simple analogue of de Rham cohomology and Hodge theory and one can get the Laplacian naturally this way as well.
The operators above naturally occur if one thinks of graphs as circuits. In fact, Kirchhoff's laws can be naturally stated using them when $R = \mathbb{R}$: an element $I \in \mathbb{R}(E)$ can be interpreted as a list of currents if and only if
$\partial I = 0$
and an element $e \in \mathbb{R}^E$ can be interpreted as a list of potential differences if and only if there exists $\phi \in \mathbb{R}^V$ (the potential) such that
$d \phi = e.$
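A numerical sketch of both laws on the triangle from the boundary-map discussion (the current and potential values are made up for illustration):

```python
import numpy as np

# Oriented incidence matrix of a triangle (edges 0->1, 1->2, 0->2).
B = np.array([
    [ 1,  0,  1],
    [-1,  1,  0],
    [ 0, -1, -1],
])

# Kirchhoff's current law: I is a valid list of currents iff ∂I = 0.
I = np.array([2.0, 2.0, -2.0])   # 2 units circulating around the triangle
print(B @ I)                     # [0. 0. 0.]: current conserved at each vertex

# Kirchhoff's voltage law: e is a list of potential differences iff
# e = dφ for some potential φ on the vertices.
phi = np.array([5.0, 3.0, 0.0])  # hypothetical vertex potentials
e = B.T @ phi                    # dφ(e) = φ(v) - φ(w) along each edge
print(e)                         # [2. 3. 5.]
```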
The material about circuits should be in Bollobás' Modern Graph Theory, and for more information Doyle and Snell's Random Walks and Electrical Networks is also worth checking out.
Eigenvalues of the adjacency matrix provide bounds on the chromatic number $\chi(G)$.
For a graph $G$, let $\mu_\max(G)$ and $\mu_\min(G)$ be the maximum and minimum eigenvalues of the adjacency matrix. Note $\mu_\min(G) < 0 < \mu_\max(G)$ if $G$ is not empty.
All nonempty graphs satisfy $1 - \frac{\mu_\max(G)}{\mu_\min(G)} \leq \chi(G) \leq \mu_\max(G) + 1.$
This is proven in Bollobás' Modern Graph Theory.
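A quick numerical check of both bounds (often attributed to Hoffman for the lower bound and Wilf for the upper one) on the 5-cycle, an example of my own choosing with $\chi(C_5) = 3$:

```python
import numpy as np

# Adjacency matrix of the 5-cycle C5.
n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Eigenvalues in ascending order; the spectrum of C5 is
# {2} ∪ {2 cos(2πk/5) : k = 1, ..., 4}.
mu = np.linalg.eigvalsh(A)
mu_min, mu_max = mu[0], mu[-1]

lower = 1 - mu_max / mu_min   # ≈ 2.236
upper = mu_max + 1            # = 3.0
print(lower, upper)           # χ(C5) = 3 lies in [lower, upper]
```

Note that the upper bound is tight here, while the lower bound shows $\chi(C_5) > 2$, i.e. that $C_5$ is not bipartite.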
-
0Whoops, you were looking for uses of the incidence matrix, and I provided something regarding the adjacency matrix. I will try to contribute something relevant, too. – 2011-05-03