564 Lecture 9 Sept. 21 1999

1 Problem set answers:

1. The main point of this question was to show the need for a predicate like Person to restrict quantification in expressions like "everyone" and "nobody": in the human-centric worldview we all share, it's easy to assume that everything of any consequence is a person, especially when a quantified expression like "everyone" is written as one word. The secondary point was to provide practice in translation; it's a fairly tricky little set of translations.

(a) No one answered every question

Regular reading (people might have answered some questions, but no one person answered all the questions)

~∃x(Person(x) & ∀y(Question(y)-->Answer(x,y)))

Here's a version of the regular reading in prenex normal form (PNF), with the inner conditional rewritten using &:

∀x∃y(~Person(x) v (Question(y)&~Answer(x,y)))

"For all x, there is a y, such that either x is not a person or y is a question and x didn't answer y." (i.e. for non-persons, there isn't (necessarily) a question that they didn't answer, but for persons, there is at least one question that they didn't answer).

By contrast, the following formula means that no one answered any question, and that's not really an available reading for (a):

∀x(Question(x)-->~∃y(Person(y)&Answer(y,x)))
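To see the contrast concretely, here's a minimal Python sketch (my illustration, not part of the answer key) that evaluates both formulas in a tiny invented model; the quantifiers become any/all over the domain. The regular reading comes out true, the stronger one false.

```python
# Toy model (invented for illustration): two people, two questions.
people = {"ann", "bob"}
questions = {"q1", "q2"}
answered = {("ann", "q1"), ("bob", "q2")}  # ann answered q1, bob answered q2

domain = people | questions

def person(x): return x in people
def question(x): return x in questions
def answer(x, y): return (x, y) in answered

# Regular reading: ~∃x(Person(x) & ∀y(Question(y) --> Answer(x,y)))
regular = not any(person(x) and all((not question(y)) or answer(x, y)
                                    for y in domain)
                  for x in domain)

# Stronger reading: ∀x(Question(x) --> ~∃y(Person(y) & Answer(y,x)))
stronger = all((not question(x)) or
               not any(person(y) and answer(y, x) for y in domain)
               for x in domain)

print(regular)   # True: no single person answered every question
print(stronger)  # False: some questions were answered by someone
```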

(b) Everyone likes Mary except Mary herself

∀x(Person(x)-->(~x=m <-> Like(x,m)))

Note: this would have been wrong:

∀x(Person(x)-->(Like(x,m)&~x=m))

because Mary's a person, this creates a contradiction for one possible assignment to x: it says that if Mary's a person, then Mary likes Mary and Mary isn't Mary. That gives the whole formula the value 0 when we want it to be true in the situation described.
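Here is the same point run mechanically, as a small Python sketch over an invented model in which everyone except Mary likes Mary: the biconditional version comes out true, while the flawed conjunction version is falsified by the assignment of Mary herself to x.

```python
# Toy model (invented): Mary plus two other people; everyone but Mary likes Mary.
domain = {"mary", "sue", "tom"}
m = "mary"

def person(x): return True                 # everything in U is a person here
def like(x, y): return y == m and x != m   # exactly the non-Marys like Mary

# Intended: ∀x(Person(x) --> (~x=m <-> Like(x,m)))
good = all((not person(x)) or ((x != m) == like(x, m)) for x in domain)

# Flawed: ∀x(Person(x) --> (Like(x,m) & ~x=m))
bad = all((not person(x)) or (like(x, m) and x != m) for x in domain)

print(good)  # True: the biconditional handles Mary's own case correctly
print(bad)   # False: the assignment x = mary falsifies the conjunction
```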

(c) No one answered any question that everyone attempted.

∀x((Question(x) & ∀y(Person(y)-->Attempted(y,x)))-->~∃z(Person(z) & Answered(z,x)))

This is tricky because you have to introduce the Person predicate twice, once for each quantified person expression ("no one" and "everyone"), and the "everyone attempted" part has to sit inside the antecedent as its own universally quantified conditional.

(d) ∀x∀y((Person(x)&Question(y)&Attempt(x,y))-->Answer(x,y)) (i.e., everyone answered every question he or she attempted)

2. (a) ∀x∃y(F(y,x))&∀z(O(z)-->I(z))

(i) ∀z∀x∃y(F(y,x) & (O(z)-->I(z)))

This is equivalent to 2(a). In example 16 in handout #7, I showed you that you can move a quantifier from the second conjunct to the front of the formula, by a long series of equivalences, as long as the relevant variable does not occur free in the first conjunct. Here's that proof again, applied to this example (notice that the ∀x∃y only has scope over the first conjunct, so it must stay inside the brackets around the whole formula!):

∀x∃y(F(y,x))&∀z(O(z)-->I(z))

~~(∀x∃y(F(y,x))&∀z(O(z)-->I(z))) Double negation

~(~∀x∃y(F(y,x))v~∀z(O(z)-->I(z))) De Morgan's

~(~∀x∃y(F(y,x))v∃z~(O(z)-->I(z))) Quantifier negation

~(∀x∃y(F(y,x))-->∃z~(O(z)-->I(z))) Conditional law

~∃z(∀x∃y(F(y,x))-->~(O(z)-->I(z))) Quantifier movement

∀z~(∀x∃y(F(y,x))-->~(O(z)-->I(z))) Quantifier negation

∀z~(~∀x∃y(F(y,x))v~(O(z)-->I(z))) Conditional law

∀z(~~∀x∃y(F(y,x))&~~(O(z)-->I(z))) De Morgan's

∀z(∀x∃y(F(y,x))&(O(z)-->I(z))) Double negation

(You could have just cited the proof in the handout rather than working through it all again, and given the final form there with the ∀z outside.) Now all we need to do is move the ∀x∃y outside and Bob's your uncle:

∀z~~(∀x∃y(F(y,x))&~~(O(z)-->I(z))) Double negation

∀z~(∀x∃y(F(y,x))-->~(O(z)-->I(z))) Conditional law c

∀z~∃x(∃y(F(y,x))-->~(O(z)-->I(z))) Quantifier movement

∀z∀x~(∃y(F(y,x))-->~(O(z)-->I(z))) Quantifier negation

∀z∀x~∀y(F(y,x)-->~(O(z)-->I(z))) Quantifier movement

∀z∀x∃y~(F(y,x)-->~(O(z)-->I(z))) Quantifier negation

∀z∀x∃y~~(F(y,x)&~~(O(z)-->I(z))) Conditional law c

∀z∀x∃y(F(y,x)&(O(z)-->I(z))) Double negation
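As a sanity check on the whole derivation, here is a short Python sketch (my addition, not part of the problem set) that brute-forces every model with domain {0, 1}: (a) and (i) agree in all of them, while (ii), discussed just below, already comes apart.

```python
from itertools import product

# Exhaustively generate all interpretations of F (binary), O and I (unary)
# over the domain {0, 1} and compare truth values of (a), (i), and (ii).
D = [0, 1]
pairs = [(x, y) for x in D for y in D]

agree_i, agree_ii = True, True
for F_bits, O_bits, I_bits in product(product([0, 1], repeat=4),
                                      product([0, 1], repeat=2),
                                      product([0, 1], repeat=2)):
    F = {p for p, b in zip(pairs, F_bits) if b}
    O = {x for x, b in zip(D, O_bits) if b}
    I = {x for x, b in zip(D, I_bits) if b}

    # (a)  ∀x∃y(F(y,x)) & ∀z(O(z)-->I(z))
    a = (all(any((y, x) in F for y in D) for x in D)
         and all((z not in O) or (z in I) for z in D))
    # (i)  ∀z∀x∃y(F(y,x) & (O(z)-->I(z)))
    i = all(any((y, x) in F and ((z not in O) or (z in I)) for y in D)
            for z in D for x in D)
    # (ii) ∀z∃y∀x(F(y,x) & (O(z)-->I(z)))
    ii = all(any(all((y, x) in F and ((z not in O) or (z in I)) for x in D)
                 for y in D)
             for z in D)

    agree_i = agree_i and (a == i)
    agree_ii = agree_ii and (a == ii)

print(agree_i)   # True: (a) and (i) agree in every size-2 model
print(agree_ii)  # False: some model separates (a) from (ii)
```

A finite check like this is only corroboration, of course; the chain of equivalences above is what establishes the claim for all models.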

 

(ii) ∀z∃y∀x(F(y,x) & (O(z)-->I(z)))

This is not equivalent to (i) above (which is equivalent to the original formula), because ∀x and ∃y have been reversed, and reversing them doesn't preserve truth conditions (note the unidirectionality of the implication in the third law of Quantifier (in)Dependence). In English, this says: "There is one thing that is the father of all things, and all odd numbers are integers."

(iii) ∀x∀z∃y(F(y,x) & (O(z)-->I(z)))

This is equivalent to (i), which is equivalent to the original formula, because of the first law of Quantifier Independence: the order of two adjacent universal quantifiers is immaterial.

(b) B(a)-->~∀x(M(x)-->H(x))

(i) ∀x(B(a)-->~(M(x)-->H(x)))

This is not equivalent to (b). If we apply the first law of quantifier movement to this formula, we get

(i') B(a)-->∀x~(M(x)-->H(x))

and since we know that ~∀x(M(x)-->H(x)) is not equivalent to ∀x~(M(x)-->H(x)), we know the two formulas aren't equivalent. What does (i) actually say in English?

It says: for all x, if Adam is a bachelor, then it is not the case that if x is a man, then x is a husband. It's like the example with ∀x~(Glitter(x)-->Gold(x)) that I showed you last week. Let's manipulate the consequent of the matrix conditional to get a better idea of what it really says:

(i') B(a)-->∀x~(M(x)-->H(x))

B(a)-->∀x~~(M(x) & ~H(x)) Conditional law c

B(a)-->∀x(M(x) & ~H(x)) Double negation

In other words, it says, "If Adam is a bachelor, then everything is a man and nothing is a husband".

(ii) ∃x(B(a)-->~(M(x)-->H(x)))

This is equivalent. Starting with (b), here's the series to get (ii):

B(a)-->~∀x(M(x)-->H(x))

B(a)-->∃x~(M(x)-->H(x)) Quantifier negation

∃x(B(a)-->~(M(x)-->H(x))) Quantifier movement

(iii) ~(B(a)-->∀x(M(x)-->H(x)))

This is not equivalent. It says, "It is not the case that if Adam is a bachelor, then all men are husbands"; that is, it negates the main conditional rather than the embedded formula for "all men are husbands". The original (b) is false if Adam is a bachelor and all men are husbands, and true if Adam isn't a bachelor or if not all men are husbands. (iii) is false if Adam is a bachelor and all men are husbands, just like (b), but it's also false if Adam isn't a bachelor (because the inner conditional is true whenever its antecedent is false, and (iii) negates it). Transforming it into an either/or statement, we get:

(iii') ~(~B(a) v ∀x(M(x)-->H(x))) Conditional law a

(iii'') ~~B(a) & ~∀x(M(x)-->H(x)) De Morgan's

(iii''') B(a) & ~∀x(M(x)-->H(x)) Double negation

Adam is a bachelor and not all men are husbands.

(iv) B(a)-->∃x(M(x)&~H(x))

This is equivalent. Starting with (b), here's the series to get to (iv):

B(a)-->~∀x(M(x)-->H(x))

B(a)-->∃x~(M(x)-->H(x)) Quantifier negation

B(a)-->∃x~~(M(x) & ~H(x)) Conditional law c

B(a)-->∃x(M(x)&~H(x)) Double negation
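Again, these equivalence claims can be checked mechanically. The Python sketch below (my addition, not from the problem set) enumerates every model with domain {0, 1} and a = 0, and confirms that (ii) and (iv) agree with (b) everywhere while (i) and (iii) do not.

```python
from itertools import product

# Exhaustive check over all models with domain {0, 1}, with a denoting 0.
D = [0, 1]
a = 0
cond = lambda p, q: (not p) or q
results = {"i": True, "ii": True, "iii": True, "iv": True}

for B_bits, M_bits, H_bits in product(product([0, 1], repeat=2), repeat=3):
    B = {x for x, bit in zip(D, B_bits) if bit}
    M = {x for x, bit in zip(D, M_bits) if bit}
    H = {x for x, bit in zip(D, H_bits) if bit}

    # (b) B(a)-->~∀x(M(x)-->H(x))
    b = cond(a in B, not all(cond(x in M, x in H) for x in D))
    candidates = {
        "i":   all(cond(a in B, not cond(x in M, x in H)) for x in D),
        "ii":  any(cond(a in B, not cond(x in M, x in H)) for x in D),
        "iii": not cond(a in B, all(cond(x in M, x in H) for x in D)),
        "iv":  cond(a in B, any(x in M and x not in H for x in D)),
    }
    for name, val in candidates.items():
        results[name] = results[name] and (val == b)

print(results)  # {'i': False, 'ii': True, 'iii': False, 'iv': True}
```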

3. (a)

1. ~∃x(P(x)&Q(x)) premise

2. ∃x(P(x)&R(x)) premise

3. P(d)&R(d) 2, Existential Instantiation

4. ∀x~(P(x)&Q(x)) 1, Quantifier negation

5. ~(P(d)&Q(d)) 4, Universal Instantiation

6. ~P(d) v ~Q(d) 5, De Morgan's

7. P(d) 3, Simplification

8. ~Q(d) 6,7, Disjunctive Syllogism

9. R(d) 3, Simplification

10. R(d) & ~Q(d) 8,9, Conjunction

11. ∃x(R(x) & ~Q(x)) 10, Existential Generalization

(b)

1. ∀x(P(x)-->Q(x)) premise

2. P(a) premise

3. R(a) premise

4. P(a)-->Q(a) 1, U.I.

5. Q(a) 2,4, M.P.

6. R(a) & Q(a) 3,5, Conjunction

7. ∃x(R(x)&Q(x)) 6, Existential Generalization

(c)

1. ∀x((P(x)vQ(x))-->R(x)) premise

2. ∀x((R(x)vS(x))-->T(x)) premise

3. (P(c)vQ(c))-->R(c) 1, U.I.

4. (R(c)vS(c))-->T(c) 2, U.I.

5. | P(c) Auxiliary premise

6. | P(c)vQ(c) 5, Addition

7. | R(c) 3,6, M.P.

8. | R(c)vS(c) 7, Addition

9. | T(c) 4,8, M.P.

10. P(c)-->T(c) 5-9, Conditional Proof

11. ∀x(P(x)-->T(x)) 10, Universal Generalization
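Since conditional proof is new, here is an independent semantic spot-check (a Python sketch of mine, not part of the assignment): brute-forcing all interpretations of the five predicates over the domain {0, 1} finds no model where both premises hold and the conclusion fails.

```python
from itertools import product

# Search for a size-2 countermodel to the argument in 3(c).
D = [0, 1]
cond = lambda p, q: (not p) or q
found_countermodel = False

for bits in product([0, 1], repeat=10):
    # Each predicate gets two bits, one per domain element.
    P, Q, R, S, T = ({x for x, b in zip(D, bits[2 * k:2 * k + 2]) if b}
                     for k in range(5))
    prem1 = all(cond(x in P or x in Q, x in R) for x in D)
    prem2 = all(cond(x in R or x in S, x in T) for x in D)
    concl = all(cond(x in P, x in T) for x in D)
    if prem1 and prem2 and not concl:
        found_countermodel = True

print(found_countermodel)  # False: the premises entail the conclusion here
```

Naturally, the proof above is what shows validity in general; the search just confirms there is no small counterexample.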

4.

(i) U = {x| x is a person}

A(x) = " x get an A for x's exam"

S(x) = "x in semantics class"

j = Jim

1. ∀x(S(x)-->A(x)) premise

2. ~A(j) premise

3. S(j)-->A(j) 1, U.I.

4. ~S(j) 2,3, M.T.

(ii) P(x) = x is a philosopher

A(x) = x is absent-minded

s = Socrates

∃x(P(x) & A(x))

P(s)

A(s)

This is not valid. Imagine a situation where Socrates is a philosopher but is not absent-minded, while Aristotle is a philosopher and is absent-minded. Then both premises would be true, but the conclusion false.
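That countermodel can be written down directly; the short Python sketch below (an illustration of mine, not part of the answer key) evaluates the premises and the conclusion in it.

```python
# The countermodel described above, spelled out: Socrates is a philosopher
# but not absent-minded; Aristotle is both.
domain = {"socrates", "aristotle"}
P = {"socrates", "aristotle"}   # philosophers
A = {"aristotle"}               # the absent-minded
s = "socrates"

premise1 = any(x in P and x in A for x in domain)  # ∃x(P(x) & A(x))
premise2 = s in P                                  # P(s)
conclusion = s in A                                # A(s)

print(premise1, premise2, conclusion)  # True True False
```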

(iii) L(x) = x is a linguist

PP(x) = x believes in the parity principle

B(x) = x is a behaviorist

D(x) = x is a dietician

a = My aunt

1. ~∃x(L(x) & PP(x)) premise

2. ∀x(PP(x) v B(x)) premise

3. ∀x(D(x)-->~B(x)) premise

4. D(a) premise

5. ∀x~(L(x)&PP(x)) 1, Q.N.

6. ~(L(a)&PP(a)) 5, U.I.

7. PP(a)vB(a) 2, U.I.

8. D(a)-->~B(a) 3, U.I.

9. ~B(a) 4,8, M.P.

10. PP(a) 7,9, D.S.

11. ~L(a) v ~PP(a) 6, De Morgan's

12. ~~PP(a) 10, Double negation

13. ~L(a) 11,12, D.S.

14. ~L(a) & ~B(a) 9,13, Conj.

15. ∃x(~L(x) & ~B(x)) 14, Existential Generalization

16. ∃x~(L(x)vB(x)) 15, De Morgan's

(iv) F(x,y) = x forgives y

S(x) = x is a saint

1. ∀x(∃y(F(x,y))-->S(x)) premise

2. ~∃x(S(x)) premise

3. ∀x~(S(x)) 2, Quantifier negation

4. ~S(c) 3, Universal Instantiation

5. ∃y(F(c,y))-->S(c) 1, Universal Instantiation

6. ~∃y(F(c,y)) 4,5, M.T.

7. ∀x~∃y(F(x,y)) 6, U.G.

(v) B(x)= x is a baby

I(x)=x is illogical

D(x)=x is despised

M(x)=x can manage a crocodile

1. ∀x(B(x)-->I(x)) premise

2. ∀x(D(x)-->~M(x)) premise

3. ∀x(I(x)-->D(x)) premise

4. B(c)-->I(c) 1, U.I.

5. I(c)-->D(c) 3, U.I.

6. B(c)-->D(c) 4,5, H.S.

7. D(c)-->~M(c) 2, U.I.

8. B(c)-->~M(c) 6,7, H.S.

2 Predicate logic and the structure of natural language

Predicate logic provides a good formalism for representing the different readings of ambiguous strings like that in 1. below:

1. Semantic ambiguity

Someone likes everybody.

(assuming U contains only people):

a) ∃x∀y(Like(x,y))

b) ∀y∃x(Like(x,y))

a), of course, means that there is a particular person who likes all people; b) means that for each person, there is someone who likes them. (Note that a) entails b), but not the other way around, as we've seen before.)
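That one-way entailment is easy to confirm by brute force; here is a small Python sketch (mine, for illustration) that checks both directions over every model with domain {0, 1}.

```python
from itertools import product

# a) ∃x∀y(Like(x,y)) entails b) ∀y∃x(Like(x,y)), but not conversely.
D = [0, 1]
pairs = [(x, y) for x in D for y in D]
a_implies_b, b_implies_a = True, True

for bits in product([0, 1], repeat=4):
    Like = {p for p, bit in zip(pairs, bits) if bit}
    a = any(all((x, y) in Like for y in D) for x in D)
    b = all(any((x, y) in Like for x in D) for y in D)
    a_implies_b = a_implies_b and ((not a) or b)
    b_implies_a = b_implies_a and ((not b) or a)

print(a_implies_b)  # True: every model making a) true makes b) true
print(b_implies_a)  # False: Like = {(0,0),(1,1)} makes b) true but a) false
```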

And, of course, not only quantifiers bear scope; negation does too. So the sentence in 2) has two readings, parallel to the two readings in 1) above:

2. No one has seen a unicorn

a) ~∃x∃y(Unicorn(y) & See(x,y))

"There is no person for whom there is a unicorn such that that person saw it"

b) ∃y(Unicorn(y) & ~∃x(See(x,y)))

"There is a unicorn such that there is no person who's seen it"

So if we translate our sentence in 1) into a single predicate-logical structure, we'll have represented one and only one of the possible readings of that string. Essentially, we'll have captured the scopal ambiguity of 1) if we propose that it has two different translations into the formal language.

Aha! Translations into the formal language! That was our original goal: to provide adequate formal translations. So far we don't have any concrete, piece-by-piece way to take a sentence of English and represent it in a formal language; ultimately we will. For the moment we're going to stick with our rules of thumb, but, just to give you a feel for the job, we do want to capture the distinction between the two possible readings of a single string in a principled way. Recall that we're adopting a compositional approach to the representation of meaning, which takes the meanings of the words and the way in which they're put together and yields a particular translation into predicate logic.

The trick is to explain when a sentence will have ambiguous scope, and when it will not. If we have a good procedure for taking a sentence of natural language and representing it in predicate logic, then a sentence like "Everybody loves somebody" will necessarily have two translations into predicate logic, predicted by the fact that it contains two different quantifiers, and this predicts that the sentence should have two different readings.

So, for starters, let's assume that "someone" and "everyone" are not ambiguous; that is, that they mean the same thing in every possible translation into predicate logic: "someone" means ∃x(P(x)) and "everyone" means ∀x(P(x)). So if we want a compositional account of the translation into predicate logic, we're forced to locate the ambiguity of "Someone likes everyone" somewhere else, in particular, somewhere in the representation or somewhere in the derivation.

First, let's consider the structure of the sentence "Someone likes everyone":

3. [S [NP Someone] [VP [V likes] [NP everyone]]]

There is no structural ambiguity here of the type present in something like (4a) and (4b):

4. Structural ambiguity

"Someone likes intelligent students and faculty":

a) [S [NP Someone] [VP [V likes] [NP [AdjP intelligent] [NP [NP students] and [NP faculty]]]]]

b) [S [NP Someone] [VP [V likes] [NP [NP [AdjP intelligent] [NP students]] and [NP faculty]]]]

This type of ambiguity can arise essentially because if [NP and NP] is itself an NP (which it must be), and structures of the type [Adj NP] are also possible NPs, then [[Adj NP] and NP] should be possible as well as [Adj [NP and NP]]. Essentially, the recursive rules that govern syntactic structure predict the existence of this type of structural ambiguity (which of course translates directly into semantic ambiguity, assuming that what the word "intelligent" modifies is determined by the structure).
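To make the point concrete, here is a toy Python sketch (my illustration; the grammar and the function np_parses are a deliberately tiny stand-in for the recursive rules just described) that enumerates the NP bracketings of "intelligent students and faculty" and finds exactly the two in 4a) and 4b).

```python
from functools import lru_cache

# Toy grammar: NP -> NP 'and' NP, NP -> Adj NP, NP -> 'students' | 'faculty',
# with 'intelligent' as the only Adj.
words = ("intelligent", "students", "and", "faculty")

@lru_cache(maxsize=None)
def np_parses(i, j):
    """All NP bracketings for words[i:j], as nested strings."""
    span = words[i:j]
    parses = []
    if len(span) == 1 and span[0] in ("students", "faculty"):
        parses.append(span[0])
    if span and span[0] == "intelligent":        # NP -> Adj NP
        for rest in np_parses(i + 1, j):
            parses.append(f"[intelligent {rest}]")
    for k in range(i + 1, j - 1):                # NP -> NP 'and' NP
        if words[k] == "and":
            for left in np_parses(i, k):
                for right in np_parses(k + 1, j):
                    parses.append(f"[{left} and {right}]")
    return parses

for p in np_parses(0, len(words)):
    print(p)
# [intelligent [students and faculty]]   <- 4a)
# [[intelligent students] and faculty]   <- 4b)
```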

So, one way of approaching the semantic ambiguity of "Someone likes everyone" is to propose that there are two different structures for the two interpretations. Since, in English, there don't seem to be two different structures for the sentence, at least on the surface, we have to find a way to create two different structures which can then be interpreted appropriately.

One possible approach to this problem is to propose a level of "Logical Form", which itself is a syntactic representation of the sentence, but is a level at which ambiguities that exist in the surface form are structurally disambiguated. The syntactic rules which create this level are the same as those which apply to create any other form of syntactic representation, and this level is the one which is subject to interpretation. The derivation will look like this:

5.

{set of lexical items} -->(some syntactic rules)--> common structure, which feeds both (1) and (2)

(1) -->(more syntax)--> Logical Form

(2) -->(phonology)--> Phonological Form

3 Syntactic movement

Let's say you take your basic lexical items, "someone", "likes", and "everyone", and you begin building the tree for the sentence "Someone likes everyone". Up to the point at which you have the form you utter, "Someone likes everyone", there is no difference between the derivation for this tree and the derivation for "John likes Mary". However, imagine that you utter the sentence before you're finished building it. There is an additional step in the building, which disambiguates the scope of the quantifiers, and the resulting final form is the one our interpretation principle applies to. There will be two distinct structures for the two distinct interpretations.

What happens to produce these two structures? In order to understand how that single form could be mapped onto two different forms, we have to understand the notion of syntactic movement.

Consider the sentences in 6:

6. a) Mary likes Fanny.

a') Who does Mary like?

b) Sam thinks Mary likes Fanny.

b') Who does Sam think likes Fanny?

a) and b) are possible answers to the questions a') and b') respectively. The question in a') has to do with the object of "like", while the question in b') has to do with the subject of "like". In essence, a') and b') could be better represented by the following:

7. a") Mary likes who?

b") Sam thinks who likes Fanny?

The idea is that the "who", which starts out in argument position (just like "Fanny" and "Mary"), is moved to the front of the sentence by a syntactic rule, giving a representation something like (8) for a'):

8. Wh-movement

[S' Who_i (does) [S [NP Mary] [VP [V like] [NP t_i]]]]

This explains why "who", which occurs at the front of the sentence, is interpreted as an object: it started off as an object, but was moved away from there. In its former position, it left an invisible element, or trace, which behaves like a variable bound by the question word.

Recall that so far we've talked about pronouns being the natural-language counterparts of variables. A supporting piece of evidence for claiming that the left-behind trace is a variable is that in some languages, a pronoun occurs in the spot that's empty in English. Even in English, a pronoun is sometimes acceptable (although not standard):

9. Pronouns as variables in the position of traces

Who_i does John want to know if she_i likes him?

Now, given the possibility of this type of structure building, let's consider what might happen if quantifiers can move in the same way that question words can. (The idea is that both question words and quantifiers need to take scope at LF, so they both can undergo movement. Non-quantified NPs (like "John") don't need to take scope, so they don't undergo any movement). First, let's imagine that "everyone" moves to the front of the sentence and leaves a trace, which is interpreted as a bound variable:

10. Quantifier movement

[S' Everyone_i [S [NP Someone] [VP [V likes] [NP t_i]]]]

Now, let's imagine that "someone" does the same thing:

11.

[S" Someonej [S' Everyonei [S [NP tj] [vp [v likes][NPti] VP] S] S'] S"]

Now we have something that is very easy to translate into predicate logic. The quantifiers are at the front, in a particular order, and the traces represent the variables that they bind. The structural difference between subject and object represents who likes whom, which in predicate logic is represented by the order of the ordered pair <x,y>.

If we had done the same pair of movements in a different order, we would have gotten the following structure:

[S" Everyonej [S' Someonei [S [NP ti] [vp [v likes][NPtj] VP] S] S'] S"]

And this will translate into the other reading in predicate logic. So if these are the two "Logical Forms" of the sentence "Someone likes everyone", and these two logical forms are made possible by the simple claim that quantifiers move to the front of the sentence, leaving a variable behind, we predict that a simple sentence with two quantifiers in it will have two readings. That is, the semantic ambiguity is due to an unseen structural ambiguity, which arises from the premise that quantifiers move; and this movement happens after the stage of the derivation at which we pronounce the sentence.
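As a last illustration (mine, not from the lecture), here is a tiny Python sketch of how a formula can be read directly off such an LF: each moved quantifier contributes a prefix in order, and the traces fix the argument positions. The tree encoding and the function translate are invented simplifications for the example.

```python
# Reading predicate-logic formulas off LF structures like 11 and 12 above.
# An LF is encoded as a list of moved quantifiers plus a core clause.
PREFIX = {"someone": "∃", "everyone": "∀"}

def translate(lf):
    """lf = ([(quantifier_word, variable), ...], (subj, predicate, obj))."""
    movers, (subj, pred, obj) = lf
    prefix = "".join(PREFIX[word] + var for word, var in movers)
    return f"{prefix}({pred}({subj},{obj}))"

# 11: [Someone_j [Everyone_i [t_j likes t_i]]]
print(translate(([("someone", "x"), ("everyone", "y")], ("x", "Like", "y"))))
# ∃x∀y(Like(x,y))

# 12: [Everyone_i [Someone_j [t_j likes t_i]]]
print(translate(([("everyone", "y"), ("someone", "x")], ("x", "Like", "y"))))
# ∀y∃x(Like(x,y))
```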