564 Lecture 6 Sept. 9 1999

 

So far, in constructing our formal representation of English, we've managed to analyze sentences and certain sentential connectives. We've seen how to translate some sentences into propositional logic, and we're able to predict their truth value depending on the truth values of their sentential subparts – that is, we know the truth-conditions for sentences built up with certain sentential connectives, and can formulate inferences about when two sentences entail a third, and when two sentences are logically equivalent. This is satisfactorily close to what happens when I utter a sentence and you interpret it: I have provided you with a set of truth-conditions, and asserted something about them, i.e. that they hold. We have the following syntax for our propositional logic:

1. Syntax of Propositional Logic:

I. Any atomic statement is a wff (p,q,r,s,...)

II. Any wff preceded by ~ (negation, "it is not the case that") is a wff.

III. Any two wffs can be made into another wff by writing the symbol "&" (conjunction - and), "v" (disjunction - or), "-->" (conditional - if-then), or "<->" (biconditional - if and only if) between them and enclosing the result in parentheses.

And the following semantics:

2. Semantics of Propositional Logic:

I. [| ~P |] = 1 iff [| P |] = 0

II. [| P & Q |] = 1 iff [| P |] = 1 and [| Q |] = 1

III. [| P v Q |] = 1 iff [| P |] = 1 or [| Q |] = 1

IV. [| P --> Q |] = 1 iff [| P |] = 0 or [| Q |] = 1

V. [| P <-> Q |] = 1 iff [| P |] = [| Q |]

And we have rules-of-thumb for doing translations: a single-clause S corresponds to an atomic statement. [S and S] corresponds to P&Q. [It is not the case that S] corresponds to ~P. [S or S] corresponds to PvQ. [If S, then S] corresponds to P-->Q, and similarly for some other conjunctions, disjunctions and conditionals.
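To make the semantics in (2) concrete, here is a small sketch in Python (purely illustrative; representing a valuation as a dictionary is my own choice, not anything from the text): each connective is a truth function, and a complex formula's value is computed from the values of its atomic parts.

# A sketch of the semantics in (2): each connective as a truth function.
def neg(p): return not p                    # ~P
def conj(p, q): return p and q              # P & Q
def disj(p, q): return p or q               # P v Q
def cond(p, q): return (not p) or q         # P --> Q: false only if P = 1 and Q = 0
def bicond(p, q): return p == q             # P <-> Q
# Example: evaluate (p & q) --> r under one valuation.
v = {'p': True, 'q': True, 'r': False}
print(cond(conj(v['p'], v['q']), v['r']))   # False: antecedent true, consequent false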

However, it's clear that we're missing some crucial elements. So far, we're treating Ss as unanalysable wholes, rather like words, and yet it's clear that Ss are made up of smaller meaningful parts. Even if I utter something that's not an S (e.g. /kæt/), something is communicated, even if it's not a truth condition. More importantly, the meanings of elements within an S can affect the entailment relations between Ss in a way we are not yet equipped to capture, and since what we've been trying to do so far is make predictions about entailment relations, this is a problem. Consider the classic:

3. Every man is mortal.

Socrates is a man.

(Therefore) Socrates is mortal.

This is a valid argument, but we can only treat it if we are able to compose the meanings of "mortal" and "man" and "Socrates" and "every" separately, and are not forced to treat each sentence as an unanalysable whole. (Other things we're missing have been listed earlier: capturing the differences between "and" and "but" (presupposition of contrast) and between "if" and "only if" (temporal and causal relations), and the interpretation of expressions that aren't statements (questions, imperatives, etc.). These are problems for a later date, or, more likely, another course.)

The first person to develop a formal system for capturing the type of intuition in 3 above was Frege, who invented first-order predicate logic (the predicate calculus) in the late nineteenth century. We're going to develop rules-of-thumb for translating subparts of sentences into predicate calculus. Ultimately, we want a fully saturated, well-formed expression of the predicate calculus to denote a truth-value, and we want entailment relations like that in (3) to fall out.

For starters, we need to develop a translation into predicate calculus of proper names. Proper names pick out individuals in the universe of discourse: in any given discourse, "Fanny" will be one particular individual who is unique. (DeSwart notes that there's lots more to say about individuation, but it's not something we're going to talk about here). Let's treat the universe of discourse as a set, and assert that interpreting a proper name picks out a particular element of that set, that is, a proper name denotes an entity in the real world.

4. What the interpretation function does when applied to a proper name:

[Diagram omitted: the interpretation function maps the name "Chris" to a particular individual in the universe of discourse U.]

More colloquially, we can say that [| Chris |] = Chris.

That much is easy. Now, what about something like "(is a) man" or "(is) mortal"? These seem to denote concepts, that is, we have a notion of what "man" means independently of an instantiation of it (while we don't have a notion of what "Chris" means independently of an instantiation of it). That is, we have an intensional notion of the meaning of "man". In terms of predicate logic, though, that intensional notion won't do us much good. Rather, we can think of "man" as the set of individuals with the property "man" – in effect, the way we model the meaning of "man" in predicate calculus is as the set of all men. Similarly, the meaning of "mortal" is the set of all mortal things, "purple" the set of all purple things, etc. This is termed an extensional interpretation of the meaning of predicates, and (as you might suspect) it just ain't good enough. For the moment, however, it'll allow us to translate a bigger fragment of English fairly painlessly, so we'll stick with it for a while. We can see what happens when the interpretation function latches onto a predicate like "mortal" in the diagram below:

5. What the interpretation function does when applied to a predicate:

[Diagram omitted: the interpretation function maps "Mortal" to a subset of the universe of discourse U.]

More colloquially, again, we can say that [| Mortal |] = {x | x is Mortal} or

{x | Mortal(x)}
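To see the extensional picture at work, here is a quick sketch (the universe and the individuals in it are invented for illustration): the universe of discourse is a set, a proper name denotes one of its elements, a one-place predicate denotes a subset, and "Chris is mortal" comes out true just in case the denotation of "Chris" is a member of the denotation of "mortal".

# A toy extensional model (the individuals are invented for illustration).
U = {'Chris', 'Pat', 'Zeus'}            # universe of discourse
chris = 'Chris'                         # [| Chris |]: a name denotes an element of U
mortal = {'Chris', 'Pat'}               # [| Mortal |] = {x | Mortal(x)}: a subset of U
print(chris in mortal)                  # True: "Chris is mortal"
print('Zeus' in mortal)                 # False: Zeus is not in the set of mortal things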

The set {x | Mortal(x)} is a good representation of a one-place predicate, that is, a predicate that applies to only one argument, or term (a sentence like "Chris is mortal Pat" makes no sense because "mortal" is a one-place predicate which is being given two arguments). Some obvious examples of two-place predicates are "love", "eat", "touch", etc. Two-place predicates can be thought of as denoting a set, too, but it's not a set of individuals; rather, it's a set of ordered pairs whose members bear a certain relation to each other.

Ordered pairs can be created from two sets, A and B, by taking an element of A as the first member of the pair and an element of B as the second member. The Cartesian product of A and B (AxB) is the set of all such pairs, and in the set notation it's written: AxB = {<x,y> | x ∈ A and y ∈ B}.

Where do we get the set of ordered pairs that a natural language predicate like "love" can pick from? Recall that for one-place predicates, the set of elements denoted by that predicate was a subset of U. Similarly, the set of ordered pairs that is denoted by a two-place predicate is taken from UxU, that is, all possible ordered pairs given the number of elements in U.

Let's say the universe of discourse has three people in it, Mary, John, and Sue. We could represent the set of ordered pairs UxU as in (6) below, and if everybody loves themselves, and Mary loves Sue, Sue loves John, and Mary loves John, the interpretation function applied to the Love predicate will pick out the set shown, because in each of those pairs the first member bears the Love relation to the second:

6.

[Diagram omitted: the Love relation shown as a subset of UxU.]

UxU = {<Mary, Mary>, <Mary, John>, <Mary, Sue>, <Sue, Sue>, <Sue, Mary>, <Sue, John>, <John, John>, <John, Mary>, <John, Sue>}

[| Love |] = {<Mary, Mary>, <Sue, Sue>, <John, John>, <Mary, Sue>, <Sue, John>, <Mary, John>}

And, again, we can say that [| Love |] = {<x,y> | x loves y} or

{<x,y> | Love(x,y)}
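Here is the same toy model extended to a two-place predicate, as a sketch (the Love facts are the ones assumed in the discussion of (6)): UxU is built with a comprehension, [| Love |] is a subset of it, and "Mary loves John" is true just in case the pair <Mary, John> is in that subset.

# Two-place predicates as sets of ordered pairs (toy model from (6)).
U = {'Mary', 'John', 'Sue'}
UxU = {(x, y) for x in U for y in U}    # the Cartesian product: nine pairs here
love = {('Mary', 'Mary'), ('John', 'John'), ('Sue', 'Sue'),
        ('Mary', 'Sue'), ('Sue', 'John'), ('Mary', 'John')}   # [| Love |], a subset of UxU
print(love <= UxU)                      # True: every loving pair is drawn from UxU
print(('Mary', 'John') in love)         # True: "Mary loves John"
print(('John', 'Mary') in love)         # False: "John loves Mary"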

Clearly, for three-place predicates, we'd be considering a set of ordered triples picked out of UxUxU, etc. Predicates can be of any valence (i.e. can have any number of terms) in predicate logic.

7. New primitive elements in our formal system (previously we only had atomic statements and connectives):

Predicates: A, B, Love, Father of, W, etc.

Terms (type 1):

Constants: Chris, f, j, m, Sue, p

The combination of predicate and the appropriate number of terms creates the equivalent of an atomic proposition, which can be combined with the connectives to form wffs as specified in the propositional logic.

Properties of (two-place) relations

So, now that we've got the notion of a relation down, we can define some of the properties we discussed earlier in a systematic way.

a. Converse predicates

A relation R is the converse of another relation S iff, for any x and any y, R(x,y)<->S(y,x). Examples were given earlier (above/below, grandmother of/granddaughter of, etc.); English often expresses the converse of a relation with a passive: Sue was touched by Mary/Mary touched Sue.

b. Symmetric relations

A relation R is symmetric iff, for any x and any y, R(x,y)-->R(y,x). Again, examples were given earlier (spouse, roommate, twin of, sibling of, resemble); English can express symmetry with a plural subject and "each other" in object position: Sue and Mary like each other.

c. Reflexivity

A relation R is reflexive iff, for any x, R(x,x). That is, reflexive properties are tautologous when the same term is taken to fill both argument positions. Examples include "identical to", "the same as", "as happy as" (as anything as, really), "resemble", etc. English can encode reflexivity with (drum roll) a reflexive anaphor, as in, Pat liked herself.

d. Transitivity

This is the same notion we're familiar with from propositional logic, only now we can apply it to more relations than conditionals. A relation R is transitive iff, for any x, y and z such that R(x,y) and R(y,z), then R(x,z). Some transitive relations are older than, north of, sister of, etc.
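These four properties are easy to check mechanically for a finite relation given as a set of ordered pairs. The sketch below is just an illustration of the definitions above; the "older than" relation and the three-element universe are invented.

# Checking the properties in (7) for relations given as sets of pairs over a finite U.
def is_converse(R, S):
    return all((y, x) in S for (x, y) in R) and all((y, x) in R for (x, y) in S)
def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)
def is_reflexive(R, U):
    return all((x, x) in R for x in U)
def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if w == y)
U = {1, 2, 3}
older_than = {(3, 2), (3, 1), (2, 1)}                      # invented example relation
print(is_transitive(older_than))                           # True
print(is_symmetric(older_than))                            # False
print(is_reflexive(older_than, U))                         # False
print(is_converse(older_than, {(2, 3), (1, 3), (1, 2)}))   # True: "younger than"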

So, point one for predicate logic is that we can talk formally about a few lexical-semantic properties of predicates, which also happen to be mathematical properties of relations (there are a few more which don't seem to be so relevant to natural language).

Quantification

So far, we've got constants (which correspond to proper names in natural language) and predicates (which correspond to most verbs, adjectives, and nouns). Predicates can take constants as arguments, so we can translate sentences like "Phil ponders" into predicate logic as the wff Ponders(p). We've also still got everything we had from propositional logic, i.e., the connectives. Saturated predicates like Ponders(p) correspond to the atomic propositions in propositional logic.

We still can't express the inference instantiated in 3, though, and we can't until we introduce the notion of a variable and binding of variables by quantifiers. Expressions like "all men", "nobody" and "some man" don't refer to a particular individual, and are hence called "non-referential" NPs. (Proper names are referential). Rather, they pick out some quantity of the set denoted by the predicate they're attached to (hence they're called quantifiers).

In the predicate notation in set theory, we've seen something analogous to a variable, that is, the x's and y's we use to stand in for all potential members of a certain set. We're now going to state firmly that variables, which we'll represent with the letters x, y, z, (sometimes v, w if necessary), are terms (arguments) which can range across any element in U. When they're used as an argument of a predicate, they create an open proposition. The predicate logic formula Ponders(x) can be evaluated as true iff x ponders – but who is x? Quantifiers, attached to the front of the formula, specify who x is, and close the proposition.


8. a. Universal Quantifier

"Âx Read "for all x"

b. Existential Quantifier

∃x   Read "There is an x"

Intuitively, ∀x just means that for every x that you encounter in the wff which follows it, you have to check to be sure that each possible instantiation of x (i.e. every x in the universe of discourse) has the property which is predicated of it. E.g. in the formula ∀x(Ponders(x)) (you can use brackets around your predicates or not; keep in mind they're atomic propositions), which translates to English as "Everything ponders", you have simply stated that everything in the universe of discourse is in the set of ponderers. Similarly, with ∀x(Love(x,John)) you have stated that everything in the universe loves John.

∃x simply means that there is at least one thing in the universe of discourse which has the property predicated of x in the wff that follows. ∃x(Ponders(x)) says "Something ponders" and claims that there is at least one thing in the universe which ponders; ∃x(Love(x, John)) says "Something loves John" and claims that there is at least one thing in the universe which loves John.
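For a finite universe of discourse, the two quantifiers amount to exhaustive checks, which the following sketch illustrates (the universe and the pondering/loving facts are made up):

# Quantifiers over a finite universe (the universe and facts are invented).
U = {'John', 'Mary', 'Sue'}
ponderers = {'John', 'Mary', 'Sue'}
love = {('Mary', 'John'), ('Sue', 'John')}
print(all(x in ponderers for x in U))           # ∀x(Ponders(x)): every element of U ponders
print(any((x, 'John') in love for x in U))      # ∃x(Love(x, John)): at least one thing loves John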

9. Everyone loves someone.

(i) ∃y∀x(Love(x,y))

(ii) ∀x∃y(Love(x,y))

We're now ready to treat intuitively the two readings which are attached to the English sentence in 9. (9) could mean that there is a single person in the universe who is loved by every person in the universe. Or, it could mean (more usually) that for every person in the universe, there is at least one person (not necessarily the same one for everybody) whom they love. The distinction can be captured by the two representations in 9 above. Because in (i) the existential quantifier comes first, it has scope over the universal quantifier: it picks one person out of the universe of discourse, and then you check to see whether every person in the universe of discourse loves that one person. In (ii), though, you check every person in the universe of discourse to see if there is one person in the universe of discourse that they love, and it doesn't have to be the same person each time.

The ability to treat this kind of scope ambiguity is another point for predicate calculus.
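Here is a sketch of the two readings of (9) over a small invented universe; notice that only the order of the nested checks changes:

# The two readings of (9) over a small invented universe.
U = {'Mary', 'John', 'Sue'}
love = {('Mary', 'John'), ('Sue', 'John'), ('John', 'Sue')}   # invented facts
reading_i = any(all((x, y) in love for x in U) for y in U)    # (i)  ∃y∀x Love(x,y)
reading_ii = all(any((x, y) in love for y in U) for x in U)   # (ii) ∀x∃y Love(x,y)
print(reading_i)    # False: no single person is loved by everyone
print(reading_ii)   # True: each person loves somebody (not the same somebody)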

Now, note that for any wff, a quantifier taking scope over it quantifies over the whole wff, so ∀x(P(x) & Q(x)) asserts that everything in U is both P and Q. However, if some variable occurs in a formula and there is no quantifier that takes scope over that formula, the variable is said to be free (as opposed to bound). A natural language example of a free variable (whose reference is fixed by the discourse) would be something like "Susan thought that Laura liked him." A bound occurrence of a variable in natural language would be "Every girl knew that her mother liked her" (there are two bound variables in this sentence, on one interpretation).

10. a. Bound variables:

"Every girl knew that her mother liked her."

b. Free variable:

"Susan thought that Laura liked him."

Finally, let's consider the interaction of quantifiers with other connectives. First, let's look at something like "All men are mortal."

11. Which representation of the following sentences is correct?

a. All men are mortal

∀x(Man(x) & Mortal(x))

∀x(Man(x)-->Mortal(x))

b. Some man is mortal.

∃x(Man(x)&Mortal(x))

∃x(Man(x)-->Mortal(x))

Essentially, this breaks the sentence down into two separate atomic propositions, "x is a man" and "x is mortal", and states a connection between the set of men and the set of mortal things.

Very important!! The universal quantifier goes with the conditional, while the existential quantifier goes with the conjunction.
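A sketch of why the pairing matters, using an invented universe containing a non-man: with the conjunction, the universal claim wrongly requires everything to be a man, and with the conditional, the existential claim comes out true far too easily.

# Why the universal goes with --> and the existential with & (invented universe).
U = {'Socrates', 'Plato', 'Fido'}
man = {'Socrates', 'Plato'}
mortal = {'Socrates', 'Plato', 'Fido'}
print(all((x not in man) or (x in mortal) for x in U))   # ∀x(Man(x) --> Mortal(x)): True
print(all((x in man) and (x in mortal) for x in U))      # ∀x(Man(x) & Mortal(x)): False (Fido!)
print(any((x in man) and (x in mortal) for x in U))      # ∃x(Man(x) & Mortal(x)): True
print(any((x not in man) or (x in mortal) for x in U))   # ∃x(Man(x) --> Mortal(x)): True, but
                                                         # it holds as soon as anything is not a man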

Finally, consider the relationship between the following sentence and its representations:

12. a. Not everyone is happy.

~∀x(Happy(x))

∃x~(Happy(x))

b. Everyone is not happy.

∀x~(Happy(x))

~∃x(Happy(x))

Equivalences like this one indicate that negation interacts interestingly with quantifiers (just as it does with propositional logic connectives; recall that all of the connectives can be defined in terms of v and ~). This allows us to formulate our first predicate calculus equivalence: quantifier negation

13. Laws of Quantifier Negation

(i) ~∀xP(x) ⇔ ∃x~P(x)

(ii) ~∃xP(x) ⇔ ∀x~P(x)

(iii) ∀xP(x) ⇔ ~∃x~P(x)

(iv) ∃xP(x) ⇔ ~∀x~P(x)
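Over a finite universe these laws can be spot-checked mechanically; the sketch below (an illustration only, with an arbitrary three-element universe) verifies all four laws for every possible extension of P.

# Spot-checking the quantifier negation laws in (13) over a finite universe.
from itertools import combinations
U = {1, 2, 3}
def laws_hold(P):                      # P: the extension of a one-place predicate
    i   = (not all(x in P for x in U)) == any(x not in P for x in U)
    ii  = (not any(x in P for x in U)) == all(x not in P for x in U)
    iii = all(x in P for x in U) == (not any(x not in P for x in U))
    iv  = any(x in P for x in U) == (not all(x not in P for x in U))
    return i and ii and iii and iv
subsets = [set(c) for n in range(len(U) + 1) for c in combinations(U, n)]
print(all(laws_hold(P) for P in subsets))    # True: the laws hold for every extension of P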

Homework:

1. de Swart, p. 94, problem 2. (Do this as a series of equivalences, as in the simplification problem last time.)

2. de Swart, p. 94, problem 3.

3. de Swart, p. 94, problem 4.

4. de Swart, p. 94, problem 5. For sentences (vi) and (vii), substitute the following:

(vi) All that glitters is not gold.

Give the key, and both possible readings (i.e. two different possible predicate-logical representations). Indicate which representation corresponds to the normal use of the idiom.

5. In each of the following expressions, identify all bound and free occurrences of variables, and underline the scope of the quantifiers.

(a) ∀xP(x) v Q(x,y)

(b) ∀y(Q(x)-->∀zP(y,z))

(c) ∀x~(P(x)-->∃y∀zQ(x,y,z))

(d) ∃xQ(x,y) & P(y,x)

(e) ∀x(P(x)-->∃y(Q(y)-->∀zR(y,z)))

6. Bonus question: Give a predicate-logical representation of "A man is mugged every 3 seconds". This one I haven't quite shown you how to treat yet, but have a stab at it and try to get the scope ambiguity to fall out. Indicate which representation corresponds to the funny reading.