This is a question by a bygone asker who was apparently satisfied with the answer it got, but for the sake of not leaving the question half-answered (from my perspective), I'll point out that a notion specific to semigroups does appear in Howie's Fundamentals of Semigroup Theory, in an exercise on p. 42. I'll reproduce it here in Howie's notation; he writes functions and relations to the right of their arguments. If $\rho$ is a relation between $A$ and $B$ (i.e. a subset of $A\times B$), you get a notion of "mapping" along the lines of a (union-of-outputs) multi-valued function by setting
$$a\rho = \{b\in B : (a,b)\in \rho\}$$
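For instance, here's a minimal Python illustration of this multi-valued application (the relation and names are mine, purely for demonstration):

```python
# A relation as a set of pairs; a·rho is the set of all outputs paired with a.
rho = {(0, 'x'), (0, 'y'), (1, 'y')}   # a relation between {0, 1} and {'x', 'y'}

def image(a, rho):
    """The set a·rho, written `a rho` in Howie's right-handed notation."""
    return {b for (x, b) in rho if x == a}

print(image(0, rho))   # {'x', 'y'}
print(image(1, rho))   # {'y'}
```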
If $S$ and $T$ are semigroups, define a relational morphism $\mu$ from $S$ to $T$ to be a relation satisfying both of the following:
RM1: $[\forall a\in S]\, a\mu\neq \emptyset$
RM2: $[\forall a,b\in S]\, (a\mu)(b\mu) \subseteq (ab)\mu$.
Furthermore, this relational morphism is said to be injective if it also satisfies
RM3: $[\forall a,b\in S]\, a\mu\cap b\mu \neq \emptyset \Rightarrow a\mu = b\mu$.
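Since everything here is finitely checkable, a short sketch may help; this is my own rendering of RM1-RM3 as executable checks on finite semigroups (all names are mine, not Howie's):

```python
# A semigroup is given as a carrier plus a binary operation; mu is a set of pairs.
def image(a, mu):
    return {t for (s, t) in mu if s == a}

def is_relational_morphism(S, op_S, op_T, mu):
    rm1 = all(image(a, mu) for a in S)                 # RM1: a·mu is non-empty
    rm2 = all(op_T(x, y) in image(op_S(a, b), mu)      # RM2: (a·mu)(b·mu) inside (ab)·mu
              for a in S for b in S
              for x in image(a, mu) for y in image(b, mu))
    return rm1 and rm2

def is_injective(S, mu):
    return all(image(a, mu) == image(b, mu) or not (image(a, mu) & image(b, mu))
               for a in S for b in S)                  # RM3: images that meet coincide

# Sanity check: the identity relation on Z2 is an injective relational morphism.
Z2 = range(2)
add2 = lambda a, b: (a + b) % 2
identity = {(a, a) for a in Z2}
print(is_relational_morphism(Z2, add2, add2, identity), is_injective(Z2, identity))  # True True
```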
It is left as an exercise for the reader to show that every relational morphism, regarded as a subset of $S\times T$, is a subsemigroup of the direct product $S\times T$ (seen as a semigroup).
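In fact RM2 alone forces this closure, and one can spot-check it on a small example of my own: the full relation on $\mathbb{Z}_2$ is a relational morphism that is not a function, and its graph is closed under the componentwise operation:

```python
# RM2 implies the graph is closed under the componentwise operation of S x T.
def graph_closed(op_S, op_T, mu):
    return all((op_S(a, c), op_T(b, d)) in mu
               for (a, b) in mu for (c, d) in mu)

add2 = lambda a, b: (a + b) % 2
full = {(a, b) for a in range(2) for b in range(2)}   # the full relation on Z2
print(graph_closed(add2, add2, full))                  # True
```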
A closer look at the definitions above shows that only RM2 actually has something to do with the semigroup operations; RM1 and RM3 only deal with properties of the multivalued-function-like application of $\rho$, which by itself does not get baptized in any way by Howie.
Anyhow, Howie goes on to define divisibility of semigroups in the usual manner, i.e. using run-of-the-mill function-based [homo]morphisms: $S$ divides $T$ if there exist a subsemigroup $U$ of $T$ and a [run-of-the-mill, function-based] [homo]morphism $\psi$ from $U$ onto $S$; equivalently, $S$ is a quotient of a subsemigroup of $T$.
As the final point of the exercise (relating the two notions of morphism), you are asked to show that $S$ divides $T$ if and only if there exists an injective relational morphism from $S$ to $T$.
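For a concrete finite instance (my example, not Howie's): $\mathbb{Z}_2$ divides $\mathbb{Z}_4$ via the surjective homomorphism $\psi(b) = b \bmod 2$ (taking $U = \mathbb{Z}_4$ itself), and $\mu = \psi^{-1}$ is the corresponding injective relational morphism from $\mathbb{Z}_2$ to $\mathbb{Z}_4$:

```python
S, T = range(2), range(4)
op_S = lambda a, b: (a + b) % 2
op_T = lambda a, b: (a + b) % 4
mu   = {(a, b) for a in S for b in T if b % 2 == a}   # mu = psi^{-1}
img  = lambda a: {b for (x, b) in mu if x == a}

print(all(img(a) for a in S))                                      # RM1: True
print(all(op_T(x, y) in img(op_S(a, b))
          for a in S for b in S for x in img(a) for y in img(b)))  # RM2: True
print(all(img(a) == img(b) or not (img(a) & img(b))
          for a in S for b in S))                                  # RM3: True
```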
Alas, from this I conclude that the notion of relational morphism doesn't appear particularly fruitful, since all we used it for is to say something we could say just as nicely with function-based [homo]morphisms.
Howie later notes on p. 44:
> The idea of a relational morphism appears in Eilenberg (1976) within the chapters written by Tilson, and is much used in applications of semigroups to language theory—see Pin (1986). It might have been more logical to call it a morphic relation, but the term 'relational morphism' is now standard.
The full references cited are:
- Eilenberg, S. (1976). Automata, languages and machines, Vol. B. Academic Press, New York.
- Pin, J.-E. (1986). Varieties of formal languages. North Oxford Academic Publishers, London. (A translation of Variétés de langages formels, Masson, Paris, 1984.)
I haven't looked at those references in depth to see why/how this notion of relational morphism is fruitful.
However, Pin notes that any relational morphism as above factorizes through its "inverse projections", i.e. if $\mu$ is a relational morphism from $S$ to $T$, then the graph of $\mu$, i.e. $\{(a,b) : b\in a\mu\}$, is a subsemigroup of $S \times T$ (as already said). If we denote this subsemigroup by $R$, then there exist [function-based] morphisms $\alpha : R \to S$ and $\beta : R \to T$ such that $\alpha$ is surjective and $\mu = \alpha^{-1}\beta$, where $\alpha$ is inverted as a relation. Given that this decomposition is possible for every relational morphism, Pin says:
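On the running mod-2 example, this factorization looks as follows (a sketch, reusing my earlier toy relation):

```python
S, T = range(2), range(4)
mu = {(a, b) for a in S for b in T if b % 2 == a}
R  = mu                              # the graph of mu, a subsemigroup of S x T
alpha = lambda r: r[0]               # projection onto S, surjective by RM1
beta  = lambda r: r[1]               # projection onto T

def alpha_inv_beta(a):
    """Invert alpha as a relation, then apply beta: the composite alpha^{-1} beta."""
    return {beta(r) for r in R if alpha(r) == a}

# The composite recovers the multi-valued application of mu itself.
print(all(alpha_inv_beta(a) == {b for (s, b) in mu if s == a} for a in S))   # True
```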
> Thus, if one has trouble thinking in terms of relational morphisms, one can always go back to usual morphisms.
Tilson introduced some additional terminology and made some insightful observations. He calls a relation that satisfies RM1 a fully defined relation. As we've already noted, RM3 can be applied to relations in general, which can then be called injective. If a relation is injective, then its inverse is a partial function. Furthermore, if an injective relation is fully defined, then its inverse is a surjective partial function.
Relations satisfying RM1 have also been called left-total, and the injective ones left-unique, by Kilp, Knauer and Mikhalev. Another way of putting it is that the relations satisfying RM1+RM3 are exactly the injective multi-valued functions.
A deeper result involving relational morphisms appears in Lambek's paper "The Butterfly and the Serpent", which I really enjoyed reading.
First I'll recast the above definitions in Lambek's terminology. Lambek defines a homomorphic relation to be a binary relation $\rho \subseteq A\times B$ whose graph is a subalgebra of the direct product. If $\rho$ additionally meets RM1, Lambek calls it universally defined, which he expresses using the equivalent condition $1_A \subseteq \rho^\vee\rho$, where $\rho^\vee$ is the inverse relation; Lambek writes relations in infix notation with respect to their arguments and composes them right-to-left: $c (\sigma\rho) a$ iff $[\exists b]\, c\sigma b \wedge b\rho a$. Note that unlike the other authors mentioned, Lambek does not include RM1 in his definition of homomorphic relation, so Lambek's definition is the only one mentioned here that coincides with the one that @Qiaochu Yuan gave in his answer above.
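This relation calculus is easy to play with on finite sets; here's a sketch in Lambek's conventions (storing "$b\rho a$" as the pair $(b,a)$, output on the left; the example relation is mine), which confirms that "universally defined" is exactly RM1:

```python
def comp(sigma, rho):
    """Right-to-left composition: c (sigma rho) a iff there is b with c sigma b and b rho a."""
    return {(c, a) for (c, b1) in sigma for (b2, a) in rho if b1 == b2}

def inv(rho):
    """The inverse relation rho^v."""
    return {(a, b) for (b, a) in rho}

A, B = range(4), range(2)
rho = {(b, a) for a in A for b in B if a % 2 == b}   # graph of the mod-2 map
idA = {(a, a) for a in A}

# Universally defined, 1_A inside rho^v rho, unwinds to: every a in A is
# related to at least one b in B -- which is exactly RM1.
print(idA <= comp(inv(rho), rho))   # True
```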
And finally we get to non-trivial facts. Lambek gives the following equivalent characterizations of a Maltsev variety (of which examples are plentiful, as you probably know, including groups, quasigroups, Heyting algebras, complemented lattices):
- (the usual one) there exists a ternary term $f\, xyz$ such that $f\, xyy = x$ and $f\, yyz = z$ (for groups, see the sketch after this list)
- (what Maltsev proved) any two congruences permute; equivalently their join in the lattice of congruences is equal to their composition, so $\rho\sigma=\sigma\rho=\rho\sqcup\sigma$
- (finally we get to) any homomorphic relation is difunctional, meaning that $\rho\rho^\vee\rho = \rho$. This property can be restated as $\rho$ being a regular element using the usual notion from semigroup theory, although Lambek expresses his displeasure with this latter terminology.
- (one more) any reflexive homomorphic relation is a congruence relation.
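To make the first and third items concrete (the code is mine; groups are the standard example of a Maltsev variety, with Maltsev term $f(x,y,z) = xy^{-1}z$):

```python
# The Maltsev term for groups, f(x, y, z) = x y^{-1} z; in additive Z4 this
# is f(x, y, z) = x - y + z mod 4.
f = lambda x, y, z: (x - y + z) % 4
Z4 = range(4)
print(all(f(x, y, y) == x for x in Z4 for y in Z4))   # f xyy = x: True
print(all(f(y, y, z) == z for y in Z4 for z in Z4))   # f yyz = z: True

# Difunctionality of a homomorphic relation between Z4 and Z2: take the
# subgroup {0,2} x Z2 of Z4 x Z2, stored as pairs (b, a), output on the left.
comp = lambda s, r: {(c, a) for (c, b1) in s for (b2, a) in r if b1 == b2}
inv  = lambda r: {(a, b) for (b, a) in r}
rho  = {(b, a) for a in (0, 2) for b in (0, 1)}
print(comp(comp(rho, inv(rho)), rho) == rho)          # rho rho^v rho = rho: True
```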
Homomorphic relations are perhaps not that exciting by themselves, but a derived notion surely gets interesting. Define a subcongruence to be a homomorphic relation from $A$ to $A$ that is symmetric ($\rho^\vee \subseteq \rho$) and transitive ($\rho\rho \subseteq \rho$), but not necessarily reflexive ($1_A \subseteq \rho$).
The notion of subcongruence allows a nice characterization of a generalization of Maltsev varieties called Goursat varieties, which can be defined by any of the following equivalent statements:
- there exist two ternary terms $f\, xyz$ and $g\, xyz$ such that $f\, xyy = x$, $f\, yyz = g\, yzz$, and $g\, yyz = z$
- any two congruences 3-permute, meaning that $\rho\sigma\rho=\sigma\rho\sigma=\rho\sqcup\sigma$, i.e. that this common value is also their join in the lattice of congruences
- any homomorphic relation satisfies $\rho\rho^\vee\rho\rho^\vee = \rho\rho^\vee$, which is saying that $\rho\rho^\vee$ is idempotent (a finite check follows this list)
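Since groups form a Maltsev, hence a fortiori Goursat, variety, the idempotency can be spot-checked on my earlier example relation:

```python
# For the same homomorphic relation rho as in the previous sketch,
# rho rho^v is idempotent, as the third bullet predicts.
comp = lambda s, r: {(c, a) for (c, b1) in s for (b2, a) in r if b1 == b2}
inv  = lambda r: {(a, b) for (b, a) in r}
rho  = {(b, a) for a in (0, 2) for b in (0, 1)}
e = comp(rho, inv(rho))
print(comp(e, e) == e)   # rho rho^v rho rho^v = rho rho^v: True
```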
The punchline of Lambek's paper is that one can state a general version of Goursat's theorem in a Goursat variety. Furthermore, in the variety of groups, one recovers Zassenhaus' butterfly lemma as a consequence of this formulation of Goursat's theorem.
Before I end this (rather long post!) with Goursat's theorem, we need one more definition, which alas Lambek only gives as a formula without attaching a name to it. Given a subcongruence $\sigma$ on $A$, define $\mathrm{Dom}\,\sigma = \{a\in A : a \sigma a\}$. Since $\sigma$ is not necessarily reflexive, it makes good sense to consider this set. I suppose this notation can be read as the "domain" of $\sigma$, but that seems a little misleading given that $\sigma$ relates elements of $A$ on both sides; perhaps reading it as the [sub]diagonal of $\sigma$ makes more sense.
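A tiny instance (my example): on $\mathbb{Z}_4$, the relation $\sigma = \{0,2\}\times\{0,2\}$ is a subcongruence (its graph is the subgroup $2\mathbb{Z}_4 \times 2\mathbb{Z}_4$ of $\mathbb{Z}_4\times\mathbb{Z}_4$; it is symmetric and transitive but not reflexive), and $\mathrm{Dom}\,\sigma = \{0,2\}$:

```python
sigma = {(a, b) for a in (0, 2) for b in (0, 2)}   # symmetric, transitive, not reflexive on Z4
Z4 = range(4)
print({a for a in Z4 if (a, a) in sigma})          # Dom sigma = {0, 2}
```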
Anyway, Goursat's Theorem:
> If $\rho$ is any homomorphic relation from $A$ to $B$ such that $\rho^\vee\rho$ and $\rho\rho^\vee$ are subcongruences of $A$ and $B$ respectively (in particular, if $\rho$ is any homomorphic relation whatsoever between two algebras in a Goursat variety), then $\mathrm{Dom}\,\rho\rho^\vee / \rho\rho^\vee \simeq \mathrm{Dom}\, \rho^\vee\rho / \rho^\vee\rho$.
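Not a proof, but one can sanity-check the statement on a toy instance (mine): taking $\rho$ to be the graph of the mod-2 map from $\mathbb{Z}_4$ onto $\mathbb{Z}_2$, Goursat's theorem specializes to the first isomorphism theorem $\mathbb{Z}_4/\ker \simeq \mathbb{Z}_2$:

```python
comp = lambda s, r: {(c, a) for (c, b1) in s for (b2, a) in r if b1 == b2}
inv  = lambda r: {(a, b) for (b, a) in r}

Z4, Z2 = range(4), range(2)
rho = {(b, a) for a in Z4 for b in Z2 if a % 2 == b}   # b rho a, output on the left

ker_A = comp(inv(rho), rho)   # rho^v rho: here, "same parity" on Z4
ker_B = comp(rho, inv(rho))   # rho rho^v: here, equality on Z2

def classes(theta, X):
    """The quotient Dom(theta)/theta as a set of equivalence classes."""
    dom = {x for x in X if (x, x) in theta}
    return {frozenset(y for y in dom if (x, y) in theta) for x in dom}

CA, CB = classes(ker_A, Z4), classes(ker_B, Z2)
print(len(CA), len(CB))   # 2 2 -- the two quotients have the same size

# rho itself induces the isomorphism: it sends each class of ker_A onto
# exactly one class of ker_B.
print(all(frozenset(b for (b, a) in rho if a in c) in CB for c in CA))   # True
```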
Since long MathJax posts get horribly slow to edit (some $O(n^2)$ algorithm at work, probably), I'll stop here by just mentioning that Lambek shows that one can derive the Snake Lemma from Goursat's Theorem as well.