2

On page 38 of Elementary Set Theory with a Universal Set by Randall Holmes (2012), which can be found here, the following appears:

We give a semi-formal definition of complex names (this is a variation on Bertrand Russell's Theory of Descriptions):

Definition. A sentence $\psi [(\text{the }y\text{ such that }\phi)/x]$ is defined as $\begin{align*}&\big((\text{there is exactly one }y\text{ such that }\phi)\text{ implies }(\text{for all }y, \phi\text{ implies }\psi[y/x])\big)\\&\text{ and }\\&\Big(\big(\text{not}(\text{there is exactly one }y\text{ such that }\phi)\big)\text{ implies }\\&\qquad\big(\text{for all }x,(x\text{ is the empty set})\text{ implies }\psi\big)\Big)\;.\end{align*}$ Renaming of bound variables may be needed.
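
For readability, here is my own symbolic rendering of the quoted definition (writing $\exists! y\,\phi$ for "there is exactly one $y$ such that $\phi$"; I hope I am not misreading it): $$\big(\exists! y\,\phi \rightarrow \forall y\,(\phi \rightarrow \psi[y/x])\big)\;\wedge\;\big(\neg\,\exists! y\,\phi \rightarrow \forall x\,(x=\emptyset \rightarrow \psi)\big).$$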

The definition of the form "$\phi[y/x]$" is:

Definition. When $\phi$ is a sentence and $y$ is a variable, we define $\phi[y/x]$ as the result of substituting $y$ for $x$ throughout $\phi$, but only in case there are no bound occurrences of $x$ or $y$ in $\phi$. (We note for later, when we allow the construction of complex names $a$ which might contain bound variables, that $\phi[a/x]$ is only defined if no bound variable of $a$ occurs in $\phi$ (free or bound) and vice versa).
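
(As far as I understand the substitution notation, this is my own illustration, not from the book: if $\phi$ is "$x \in z$", then $\phi[y/x]$ is "$y \in z$"; but if $\phi$ is "for all $y$, $x \in y$", then $\phi[y/x]$ is not defined, because $y$ occurs bound in $\phi$ and substituting would capture it, so one would first have to rename the bound variable.)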

I can't understand why $\psi[(\text{the }y\text{ such that }\phi)/x]$ is defined as it is. In particular, "((not(there is exactly one $y$ such that $\phi$)) implies (for all $x$, ($x$ is the empty set) implies $\psi$))" seems to come out of nowhere.

Feel free to retag this question; I'm not sure whether other disciplines, like elementary set theory or linguistics, are more closely related to it.

  • @RParadox: Thank you for your link. The problem is that it's equally elusive. (2012-12-10)

2 Answers

3

Good question. A guess coming up.

General issue: How should we regard expressions of the form "the $\varphi$" or better "the $y$ such that $\varphi(y)$".

Option one: as mere "syntactic sugar" that can be parsed away. This is Russell's line. "The $y$ such that $\varphi(y)$" isn't really a complex name, but vanishes on analysis, because (i) $\psi$(the $y$ such that $\varphi(y)$) is equivalent to (ii) there is at least one thing which is $\varphi$ and at most one thing which is $\varphi$ and whatever is $\varphi$ is $\psi$.
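
In symbols, (ii) comes out as something like (my paraphrase) $$\exists y\,\big(\varphi(y)\wedge\forall z\,(\varphi(z)\rightarrow z=y)\wedge\psi(y)\big),$$ and the apparent name has disappeared entirely.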

Option two: descriptions are complex names. "The $y$ such that $\varphi(y)$" is a complex name of the one and only one thing that is $\varphi$ if there is such a thing, and takes a default value, say the empty set, if there isn't. This was Frege's line.

Both treatments are logically workable. Or we can mix them, which seems to be what Holmes is doing here.

We do parsing away (a la Russell): but treat the cases where there is and where there isn't a unique $\varphi$ differently, in effect supplying a default value when there isn't (a la Frege). So, roughly speaking, $\psi$(the $y$ such that $\varphi(y)$) says that whatever is $\varphi$ is $\psi$ if there is a unique $\varphi$, but becomes [equivalent to] $\psi(\emptyset)$ when there is no unique $\varphi$.
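
A toy example of my own, not Holmes's: take $\varphi(y)$ to be "$y \ne y$". There is no unique such $y$, so on this mixed treatment $\psi(\text{the }y\text{ such that }y \ne y)$ unpacks to "for all $x$, if $x$ is the empty set then $\psi$", i.e. it just says $\psi(\emptyset)$, whereas on Russell's pure treatment it would simply come out false. If instead $\varphi(y)$ is "$y = \emptyset$", there is exactly one such $y$, and both treatments agree that the whole thing says $\psi(\emptyset)$.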

But I am making this up as I go along, you understand: caveat lector!

  • @RParadox: Thank you for your suggestions. (2012-12-11)
-1

Note: if you want to downvote this, please read it first and then leave a comment.

The way Holmes presents this matter is not clear at all. How are the theory of descriptions and logic related?

A good example is the word "and". We use "and" in the English language, and there is the symbol "$\wedge$" of symbolic logic. Consider the mapping $f$ from "and" to "$\wedge$" and $g$ from "$\wedge$" to "and":

$f: $ "and" $\rightarrow "\wedge" $

$g: "\wedge" \rightarrow $ "and"

Now, Frege and Russell introduced symbolic logic so that we can clearly distinguish between "and" and "$\wedge$", because they are not at all the same. Consider the expression "two and two is four". The expression is best translated as $2+2=4$. Translation really means taking the first expression and putting it into a proper system of logic. For instance, here the word "and" was translated into the obvious symbol for addition, "+", and not "$\wedge$", although a naive translator would not have known what "and" should stand for.
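
For instance (my own illustration): alongside "two and two is four" going to $2+2=4$, the sentence "two is even and two is prime" would instead be translated as $\mathrm{Even}(2)\wedge\mathrm{Prime}(2)$; the same English word "and" is sent to different symbols depending on context, which is exactly why the map $f$ is not straightforward.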

This matter is not at all trivial, and it is not linguistics but logic proper (philosophy, if you will). We want to know how the symbols "+" and "$\wedge$" operate. This study is what we call logic in the first place. For instance, a few days ago people downvoted my elementary logic proof, probably because they thought real mathematicians use lots of strange symbols. However, when we are concerned with logic, we can't be so presumptuous. We can't simply throw around symbols and hope that it will all make sense in the end.

Where this kind of analysis comes from is thinking about propositions. What does it mean to talk about anything? Well, if we talk about a thing, we are referring to its existence or non-existence. That is why a mathematical expression will start with the phrase $\exists x \ldots$ or $\nexists x \ldots$

In his theory of denoting, Russell explains why every statement refers to all other things:

  1. Definition of all: $C(E) \leftrightarrow \forall x C(x)$
  2. Definition of nothing: $C(N) \leftrightarrow \forall x \neg C(x)$
  3. Definition of exists: $C(S)\leftrightarrow \neg \forall x \neg C(x)$

E="Everything",N="Nothing",S="Something". Taken from wikipedia. See also http://en.wikipedia.org/wiki/Theory_of_descriptions

What this achieves is that it exhibits a certain map, as explained above, not for "and" but for the expression "exists". So we could say we have explained the map

$h:$ "exists $x$" $\rightarrow$ "$\exists x$",

although there are some remaining issues. What one should realize is that essentially all of mathematics is built on this theory, although very few mathematicians realize it.

Holmes's sentence is an awkward variant of this theory of descriptions. You arrive at it by applying the given definitions. A much better way to understand the operations is to look at the axioms (see Metamath: PL) and play around with them.

  • If I have a proof, it is expected that people who are in the field can understand it and verify it. But if we are talking about logic, there is no such knowledge that can be assumed. Using abstract algebra to prove an elementary theorem in logic is just nonsensical. We are talking about the most fundamental notions, such as those in predicate logic. And anyone who uses complicated language to express elementary concepts is misusing the language. This is certainly true in this case. (2012-12-11)