564 Lecture 2 Aug. 26 1999

1. A structured lexicon

When asked about meaning, most people immediately think of dictionary meaning, that is, word meaning (although, as we saw last time, there are plenty of other types of meaning). That’s understandable. The first thing anyone does when learning a language, as a child or an adult, is acquire a list of words, mainly concrete nouns. (The amazing thing about children is that even though they don’t know the meaning of the surrounding words, they’re able to pick the appropriate string out of a "That’s a ___" frame and associate it with the appropriate object, and not, e.g., with a subpart of that object, or with an action associated with the object rather than the object itself, or whatever. Adult second-language learners have the advantage of already knowing a list of concrete nouns; they start off by learning a bunch of new names for their existing list, setting up a correspondence between English names and the names in the language they’re learning.)

So clearly one thing semanticists need to do is develop a list (or lexicon) of word meanings. In fact, not just word meanings, but morpheme meanings, a morpheme usually being defined as the smallest meaningful unit. So, for example, "dog+s" has two meaningful morphemes in it: one denoting a dog and the other denoting plurality. "Mean+ing+ful" has three morphemes in it: one denoting the verb to mean, one indicating a nominalization of that verb (referring to the action of meaning or the result of meaning), and a third indicating a certain type of adjectivalization, predicating the property of being "full of" that noun (meaning) of whatever noun the resulting adjective is applied to. A satisfactory lexicon will have lexical entries for all of these items.

(1) A lexicon, containing lexical entries for morphemes, e.g.:

    1. "dog" = dog
    2. "-s" = plural
    3. "mean" = to mean
    4. "-ing" = the result of the action it’s applied to
    5. "-ful" = the property of being full of the noun which it’s applied to.

Lexical items may have particular properties, some of which you’re familiar with, that help organize the lexicon and some of which will turn out to be important later.

A word may be ambiguous, that is, it may have two or more meanings. (Another way of putting this is that a word may be homophonous with another word. English spelling being the perverse thing it is, you might imagine that ambiguity is distinct from homophony; but since writing is not a necessary part of language, and since, even when it is, distinguishing some but not all homophonous words by spelling conventions is not a cross-linguistic practice, we won’t press the distinction.) Some examples of ambiguous words (or pairs of homophonous words) are:

(2) ambiguity

(a) bank

(b) bunk

(c) pitcher

Ambiguity in word interpretation is usually resolved clearly by the context, but there are types of jokes that rely entirely on word ambiguity. Consider the following actual headlines:

(3) Safety Experts Say School Bus Passengers Should Be Belted

Drunk Gets Nine Months in Violin Case

Iraqi Head Seeks Arms

Farmer Bill Dies in House

Stud Tires Out

Prostitutes Appeal to Pope

British Left Waffles on Falkland Islands

Reagan Wins on Budget, But More Lies Ahead

Red Tape Holds Up New Bridge

Deer Kill 17,000

Man Struck by Lightning Faces Battery Charge

Ban on Soliciting Dead in Trotwood

Lansing Residents Can Drop Off Trees

Prosecutor Releases Probe into Undersheriff

Some Pieces of Rock Hudson Sold at Auction

Other pairs of words are essentially the same in meaning, such that if one truly applies in a given context, the other does as well:

(4) Synonymy

(a) beautiful/lovely

(b) mercury/quicksilver

(c) big/large

(d) under/beneath

(e) cut/slice

Other pairs of words are opposite in meaning, such that if one truly applies in a certain context, the other must not:

(5) Antonymy

(a) dead/alive

(b) happy/sad

(c) tall/short

(d) under/over

(e) inside/outside

Some words, when truly predicated of something, entail that other words truly apply to it as well. Such words are said to be hyponyms of the other words. A string of such words can categorize things, creating a taxonomy, also known as an "isa" hierarchy (a toy version is sketched in code after (6)).

(6) Hyponymy

(a) a German shepherd is a dog, is a mammal, is a living being

(b) a dandelion is a flower, is a plant, is a living being

(c) a pen is a writing instrument, is a tool, is a man-made thing, is inanimate

(d) I live at 4401 E 7th St., Tucson, AZ, the United States, North America, the Earth, the Solar System, the Milky Way, the Universe.
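Here is a small sketch of such an "isa" hierarchy as a data structure, with a transitive lookup; the entries are hand-built toy examples, not a claim about any particular lexical database.

```python
# A toy, hand-built "isa" hierarchy and a transitive lookup: if X is a
# hyponym of Y and Y of Z, then "X is a Z" should come out true as well.
ISA = {
    "german shepherd": "dog",
    "dog": "mammal",
    "mammal": "living being",
    "dandelion": "flower",
    "flower": "plant",
    "plant": "living being",
}

def is_a(x, y):
    """Follow the chain of parents upward from x; True if y is reached."""
    while x in ISA:
        x = ISA[x]
        if x == y:
            return True
    return False

print(is_a("german shepherd", "living being"))  # True
print(is_a("dandelion", "dog"))                 # False
```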

Predicates which take two or more arguments can have the property that they are symmetric — that is, if a P b is true, then b P a is also true. (This notion will crop up again soon).

(7) Symmetric predicates

(a) Alex is married to Bertha ↔ Bertha is married to Alex.

(b) Max is dancing with Sue ↔ Sue is dancing with Max.

(c) The plane is beside the cliff ↔ The cliff is beside the plane.

(d) The broccoli is touching the potato ↔ The potato is touching the broccoli.

Finally, some predicates which take two or more arguments can entail the truth of other predicates which take the same arguments in another order. That is, if a P b, then b Q a. If this is the case, then P is the converse of Q. (This notion will also crop up again soon).

(8) Converse predicates

(a) Alex is the husband of Bertha ↔ Bertha is the wife of Alex.

(b) Sue sold a book to Bill ↔ Bill bought a book from Sue.

(c) The plane is over the mountain ↔ The mountain is under the plane.

(d) Bill taught Sue Spanish ↔ Sue learned Spanish from Bill.
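As a small sketch, two-place predicates can be represented as sets of ordered pairs, and the symmetric and converse properties in (7) and (8) can be checked directly; the "facts" below are invented toy examples.

```python
# Sketch: two-place predicates as sets of ordered pairs (a toy model with
# invented facts), plus checks for the symmetric and converse properties
# illustrated in (7) and (8).
married    = {("Alex", "Bertha"), ("Bertha", "Alex")}    # symmetric
husband_of = {("Alex", "Bertha")}                        # Alex is the husband of Bertha
wife_of    = {("Bertha", "Alex")}                        # Bertha is the wife of Alex

def is_symmetric(rel):
    return all((b, a) in rel for (a, b) in rel)

def is_converse(p, q):
    return q == {(b, a) for (a, b) in p}

print(is_symmetric(married))             # True
print(is_converse(husband_of, wife_of))  # True
```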

Computational linguists, especially, who want to model a speaker’s knowledge of their language, often represent a lexicon as a network, where items are linked to each other depending on the types of properties they have (as above), rather than as a simple dictionary.

Sometimes such modeling involves breaking words down into "basic" concepts and treating them as composed of features plus some descriptive material. This allows one to capture relations between items like man/boy (+ADULT, +MALE) or kill/die (+CAUSE). It can also help explain what’s wrong with such sentences as "My uncle is pregnant", "My chair is talking" or (more subtly) "This oak tree has a nest" (Headline: Two Soviet Ships Collide; One Dies). That is, there are entailment relations and implications of words that seem to lend themselves to a formal treatment. We’re not going to worry about such things. More on why not in a minute.
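A hedged sketch of what such a feature-based treatment might look like; the features and entries below are hypothetical, not drawn from any real system.

```python
# A hedged sketch of feature-based clash detection (hypothetical features,
# not any real system): "pregnant" selects a -MALE (i.e. female) subject,
# while "uncle" is lexically +MALE, so the combination is flagged as odd.
FEATURES = {
    "uncle":    {"MALE": True,  "ADULT": True},
    "aunt":     {"MALE": False, "ADULT": True},
    "pregnant": {"requires": {"MALE": False}},
}

def clashes(noun, predicate):
    required = FEATURES[predicate]["requires"]
    return any(FEATURES[noun].get(f) != v for f, v in required.items())

print(clashes("uncle", "pregnant"))  # True  -> "My uncle is pregnant" is anomalous
print(clashes("aunt",  "pregnant"))  # False
```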

(skipping section on generative semantics).

2. Structured strings of words.

We are going to worry about such things at the sentence level, though.

First, a lot of the notions we just saw applied to words apply to sentences too. Sentences can be synonymous:

(9) (a) It is easy to please John.

(b) John is easy to please.

(Is there such a thing as true synonymy? Perhaps not with that sort of example — something must help us decide when we’re going to say (a) and when we’re going to say (b), usually information/discourse structure. Passives are also often given as examples of synonymous sentences, because they "describe the same situation" as the active. However,

(10) (a) Mary kicked the ball

(b) Mary’s foot came in contact with the ball and propelled it forward.

also describe the same situation but are certainly not synonymous. Perhaps the closest thing to true synonymy is embodied by such sentences as (11):

(11) All cats are furry/If a thing is a cat, it is furry/No cats are not furry.

This is more than synonymy, it’s logical equivalence, and we’ll be seeing lots more of it.)

Sentences can be ambiguous in a few different ways. They can be ambiguous because they contain ambiguous lexical items; they can also be structurally ambiguous, where a single string of words (some of which may be lexically ambiguous, as in some headlines above) maps onto two different structures. Here are some structurally ambiguous headlines:

(12)

(a) Two Sisters Reunited after 18 Years in Checkout Counter

(b) Enraged Cow Injures Farmer With Ax

(c) Squad Helps Dog Bite Victims

(d) Stolen Painting Found by Tree

(e) Drunken Drivers Paid $1000 in 1984.

(f) Air Head Fired

(g) Hospitals are Sued by 7 Foot Doctors

(h) Killer Sentenced to Die for Second Time in 10 Years.

(Important: remember that the reason that two different structures predict two different meanings is that meaning is structure-dependent).

Now, to the crucial type of sentential ambiguity, involving neither ambiguous lexical items nor ambiguous sentence structure:

(13) (a) A student accompanied every visitor.

(b) All that glitters is not gold.

(c) Everyone didn’t see Summer of Sam.

3. Inference between sentences

Just as we can infer that if beautiful is synonymous with lovely and ugly is an antonym of beautiful, then ugly should be an antonym of lovely, we can make inferences about the relationships between sentences too. At the word level, though, inference is often a fuzzy thing (if Susan caused Bill to die, did she kill Bill?). At the sentence level, inference can be so forceful that it should be captured by the theory.

Sentences, unlike words, express messages that can be true or false when interpreted in relation to a situation in the world. While the antonymy or synonymy of certain words depends on their actual content, there are sentences whose antonymy or synonymy falls out simply from containing certain key function words. Consider (14):

Truth-functional connectives and entailment

(14) (a) Colorless green ideas sleep furiously.

(b) Colorless green ideas don’t sleep furiously.

(c) The moon is made of green cheese.

(d) The earth is flat.

(e) The moon is made of green cheese, and the earth is flat.

We know (a) and (b) are antonyms, independently of what "Colorless green ideas sleep furiously" means. In particular, we know that (a) and (b) cannot both be true at the same time. We know that if (e) is true, then (c) is true and (d) is also true, independently of what "The earth is flat" means or "The moon is made of green cheese" means. (Here comes propositional logic!) The relation of "if X, then it must be true that Y" is called entailment.
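To see how little the content matters, here is a small brute-force check that conjunction alone guarantees these entailments; the atomic sentences p and q are just placeholders.

```python
# Brute-force check that the entailments from (14e) to (14c) and (14d) hold
# in virtue of "and" alone: the atomic sentences p and q are placeholders.
from itertools import product

def entails(premise, conclusion):
    """True if every assignment making the premise true makes the conclusion true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if premise(p, q))

e = lambda p, q: p and q   # (e) "p and q"
c = lambda p, q: p         # (c)
d = lambda p, q: q         # (d)

print(entails(e, c))  # True: (e) entails (c)
print(entails(e, d))  # True: (e) entails (d)
print(entails(c, e))  # False: (c) does not entail (e)
```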

What about (15)?

(15) The moon is made of green cheese, but the earth is spherical.

As far as the truth of (15) goes, it’s true if both its conjuncts are true, just like (14e). What makes "but" different from "and"? (Suggestions?)

Answer: there’s some sort of presupposition of contrast going on, which is an extra, on top of the truth-functionality of the sentence. More on that in a moment:

First, consider (16):

Entailment isn’t structural, but semantic

(16) (a) The moon is green.

(b) The moon is a cheese.

(c) The moon is a green cheese.

(17) (a) Ally is a former cellist.

(b) Ally is a cellist.

(c) Ally is former.

Adjectives like former, while structurally adjectives, are semantically different from adjectives like green, resulting in different entailment patterns for the identical structure.

(18) Entailment: P entails Q iff the truth of P guarantees the truth of Q.
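Anticipating notation we’ll adopt once we have propositional logic (the double turnstile is standard, though we haven’t introduced it yet), the definition can be put as:

```latex
P \models Q
\quad\text{iff}\quad
\text{every situation that makes } P \text{ true also makes } Q \text{ true}
```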

The difference between entailment and presupposition:

Now, consider the following pair:

(19) (a) Paul stopped smoking.

(b) Paul smoked.

Does (a) entail (b), or just presuppose it? We can test it. Presuppositions are preserved in certain embedded contexts, but not entailments. For example:

(20) (a) It was Jane who brought a cake.

(b) It wasn’t Jane who brought a cake.

(c) Was it Jane who brought a cake?

(d) Someone brought a cake.

(e) Jane brought a cake.

(a) entails (d), because if (a) is true, (d) must be true (given the way clefts work). (b) and (c), however, only presuppose (d): if (c) is uttered, for example, that doesn’t mean that (d) is true – it’s only assumed (i.e. presupposed) that (d) is true, else (c) wouldn’t be uttered. There’s nothing about the truth conditions of (c), however, that entails (d).

Presupposition is sort of like an auxiliary hypothesis about the world we make when we hear a given sentence to help it make sense in context, and hence forms part of pragmatic study. Entailment is purely a truth-conditional relationship between sentences. The difference is also illustrated by (e), in which "Jane brought a cake" entails (d), but does not presuppose it – that is, you don’t have to assume (d) before you can make (e) make sense.

So, let’s consider the meaning of the connective "but":

(15) The moon is made of green cheese, but the earth is spherical.

What has to be true in the real world for this to be true? The moon has to be made of green cheese and the earth has to be spherical. If either or both of these situations doesn’t hold, then (15) is false. The entailment patterns for (15) are the same as for "and". However, in order for (15) to be felicitous in a given utterance situation, we have to invent some preexisting assumption in the discourse that these two states of affairs shouldn’t both hold. It’s maybe a little clearer if you consider a situation where there are twins, Click and Clack, who normally like all the same things, and both speaker and hearer know this. Then (20′a) might sound odd, out of the blue, but (20′b) would be fine.

(20′) (a) Click likes peas and Clack doesn’t.

(b) Click likes peas, but Clack doesn’t.

Or notice the contrast implied in the sequence in (21a) but not in (21b):

(21) (a) 2+2 is 4, and 2x2 is 4. 3+3 is 6, but 3x3 is 9.

(b) 2+2 is 4, and 2x2 is 4. 3+3 is 6, and 3x3 is 9.

Right now, we’re only interested in the truth-functionality of connectives like "but", and when we’re doing our translation, we’ll abstract away from any presuppositions sentences might have, and stick with entailments. Presuppositions are more a pragmatic matter – or at least they will remain so for a while.
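A tiny sketch of that last point: as far as truth values go, "but" can be modeled exactly like "and"; the contrast it signals has to live somewhere other than the truth table.

```python
# Sketch: truth-functionally, "but" behaves exactly like "and"; the felt
# contrast is presuppositional and doesn't show up in the truth table.
def and_(p, q):
    return p and q

def but_(p, q):
    return p and q   # same truth conditions as "and"

pairs = [(p, q) for p in (True, False) for q in (True, False)]
print(all(and_(p, q) == but_(p, q) for p, q in pairs))  # True: identical tables
```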

4. The difference between lexical implication and sentence-level entailment

(22) (a) Pat is my father-in-law / chicken.

(b) Pat is not pregnant

Certainly (22a) implies (22b) – that is, if you hear (22a), you can guess (22b). What do you have to know to know this? You need to know that "father", the head of "father-in-law", necessarily applies to male entities, and you need to know that "pregnant" necessarily applies only to female entities (female mammals, in fact – replace "father-in-law" with "chicken" and you get the same implication). In fact, you have to know quite a bit about the world. But consider (23a) and (23b):

(23) (a) Pat is my father-in-law.

(b) Either Pat is my father-in-law or Pat is Sue’s father-in-law.

(c) Every woman is a farmer.

(d) If Pat is a woman, then Pat is a farmer.

You don’t need to know anything about the world to know that if (23a) is true, then (23b) is true. In fact, (23a) could be any sentence at all, and we would know that (23b) is true, as long as the first disjunct of (23b) is (23a) and (23a) is true. (E.g. "The moon is made of green cheese. Either the moon is made of green cheese, or colorless green ideas sleep furiously".) Similarly, if (23c) is true, then (23d) must be true, independently of what "woman" or "farmer" actually means. Before trying to integrate lexical implication into the theory (the job of lexical semanticists), we’re going to try to model semantic entailment, like that in (23).
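Previewing the notation of the formal languages mentioned below (propositional and predicate logic), these two content-independent patterns look like this; the predicate names are just stand-ins:

```latex
P \;\models\; P \lor Q
\qquad\qquad
\forall x\,\big(\mathrm{woman}(x) \rightarrow \mathrm{farmer}(x)\big)
\;\models\;
\mathrm{woman}(\mathrm{pat}) \rightarrow \mathrm{farmer}(\mathrm{pat})
```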

5. So, what are we talking about?

As the discussion above suggests, we will be talking about the circumstances under which a sentence may be said to be true or not – also known as its truth conditions. We’ll say that we know the meaning of a sentence when we know what it takes to make it true; when we know its truth conditions. We’ll be trying to specify truth conditions for each sentence of English that we translate into our formal language. Essentially, that’s the way we make our interpretation algorithm that models the mind’s interpretation algorithm.

(24) Truth Conditions: (A description of) the state the world must be in for a given sentence to be true.

(Knowing when a sentence sounds appropriate in a given context involves partly knowing what it means, and partly knowing what it presupposes or implies, and partly knowing what the speaker is concerned about when it’s uttered. We’re not worried about when a sentence may be said; we’re only worried about what it means when it’s said).

The difference between Sense and Reference (Sinn and Bedeutung)

We’ve seen one example of this:

(25) Sense: "Here" means an area around the speaker.

Reference (in a particular situation): Douglass 206

Other (famous) examples from Frege:

(26) (a) Hesperus is Phosphorus.

(b) Hesperus is Hesperus.

(Both Hesperus and Phosphorus denote Venus: Hesperus in its incarnation as the Evening Star, Phosphorus as the Morning Star. The ancient Babylonian astronomers, however, didn’t know they were the same planet.) Frege invented the distinction between Sense and Reference to account for the fact that although (26a) is informative (because the two names have different senses), (26b) is not. The idea is that if the only relevant notion were reference, (26a) and (26b) would be saying exactly the same thing, and hence (26a) would be no more informative than (26b). In fact, (26a) and (26b) end up being true in the same circumstances, but have different relevant truth conditions.

More generally, we know the sense of a sentence if we know its truth conditions. We know whether it’s true or not (the truth value, or in some sense the reference of the sentence) when we compare the truth conditions to the real world and see if they hold. Another example distinguishing between sense and reference is in (27):

(27) Everyone named Ann smokes.

In order to know the meaning of this sentence, we don’t need to know if it’s true or not. We just need to know how the world would have to be in order to make it true. If all we were talking about was reference, we’d have to go find out whether everyone that we could determine was named Ann also smoked.
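One way to make this concrete (a toy sketch, with an invented miniature situation): model the sense of (27) as a function from situations to truth values; its reference in a particular situation is just the value of that function there.

```python
# A toy sketch: the sense of (27) modeled as a function from situations to
# truth values; its reference in a given situation is the value it returns
# there. The miniature situation below is invented for illustration.
def everyone_named_ann_smokes(situation):
    return all(situation["smokes"][x]
               for x in situation["people"]
               if situation["name"][x] == "Ann")

actual = {
    "people": {"a1", "a2", "b1"},
    "name":   {"a1": "Ann", "a2": "Ann", "b1": "Bill"},
    "smokes": {"a1": True,  "a2": False, "b1": True},
}
print(everyone_named_ann_smokes(actual))  # False: the reference in this situation
```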

6. Compositionality

A central principle we’ll be dealing with, already mentioned, is the idea that meanings are built up from their parts, the parts being syntactic substructures. That is, if you know the meaning of "Jane", "Susan" and "likes", the syntactic rules for putting categories together to make sentences, and the semantic rules which give you the meanings of the put-together categories, then you also know the meaning of "Jane likes Susan" and "Susan likes Jane" (and, crucially, how they are different).

(28) Principle of Compositionality: The meaning of a whole sentence is determined by the meaning of the individual parts (morphemes) of the sentence and the way in which they are combined.

(29) (a) Jane likes Susan.

(b) Susan likes Jane.

(c) Jane thought Susan liked Jane

(d) Susan thought Jane thought Susan liked Jane

(e) Jane thought Susan thought Jane thought Susan liked Jane

(f) The tapir is hiding in the barn with the aardvark

Not only that, if you know, in addition, the meaning of "thought", then you know the meaning of "Jane thought Susan liked Jane" and "Susan thought Jane thought Susan liked Jane" and "Jane thought Susan thought Jane thought Susan liked Jane" and so on. That is, you know the meaning of an infinite number of sentences. This is what makes our theory of semantics generative. You know the meaning of "The tapir is hiding in the barn with the aardvark" even though you’ve never heard it before.
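Here is a deliberately tiny sketch of compositional interpretation in Python, with an invented one-fact model; the point is only that the same words in different structures get different meanings.

```python
# A deliberately tiny sketch of compositional interpretation over a toy
# one-fact model (everything here is invented for illustration): the meaning
# of the whole is computed from the meanings of the parts and how they combine.
LIKES = {("Jane", "Susan")}   # in this model Jane likes Susan, not vice versa

MEANING = {
    "Jane":  "Jane",
    "Susan": "Susan",
    "likes": lambda obj: (lambda subj: (subj, obj) in LIKES),
}

def interpret(tree):
    """A tree is a word or a pair of subtrees; one daughter applies to the other."""
    if isinstance(tree, str):
        return MEANING[tree]
    left, right = interpret(tree[0]), interpret(tree[1])
    return left(right) if callable(left) else right(left)

print(interpret(("Jane", ("likes", "Susan"))))   # True in the toy model
print(interpret(("Susan", ("likes", "Jane"))))   # False: same words, different structure
```

A real fragment would need more machinery for a verb like "thought", whose argument is a whole sentence, but the recursive shape of interpret is exactly what lets a finite lexicon and finitely many rules assign meanings to unboundedly many sentences.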

The idea that meanings of bigger things are built up out of the meanings of littler things according to how the littler things are put together is called the Principle of Compositionality, and it’s Frege’s idea too. More on this later. However, it’s worth noting that there are different ways of applying this principle. You could either say, Let’s take the whole tree and take it apart into bits as we interpret it (what we’re going to do) – that’s an interpretive semantics. Or, you could do the interpretation as you build the tree, applying a rule for interpretation every time you apply a rule for building a tree; this type of interpretation theory is derivational. We’ll see some important differences a bit later.

(30)

Interpretive theory of semantic interpretation: Build the tree, then evaluate it.

Derivational theory of semantic interpretation: Interpret it in parallel while building it.

7. Object language and metalanguage: code and message

The distinction between an object language and a metalanguage can be clearly seen in the examples in (31):

(31) (a) January has 31 days.

(b) January has 7 letters.

(31a) is a normal statement about the abstract object that the string "January" refers to. (31b) is a statement about the string "January" itself: a metalinguistic statement, taking the string "January" as an object in its own right and stating something about that object.
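In code, the same use/mention split looks like this (a throwaway sketch; the variable name is made up):

```python
# Sketch of the use/mention contrast in (31); the variable name is made up.
days_in_january = 31        # (31a): a fact about the month January
print(len("January"))       # (31b): 7, a fact about the string "January" itself
```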

Note the classic "Liar’s paradox"-style problem that arises when an object language is used as the metalanguage:

(32) This sentence is false.

The problem arises because "This sentence" refers to itself and invokes the truth predicate "is false". Tarski showed that the problem goes away as long as the boundaries between the object language and the metalanguage are clearly drawn, and the object language is defined in such a way that it can’t talk about itself and in particular can’t talk about its own truth or falsity.

We’ve got to respect the difference between metalanguage and object language too. It’s this difference which will make sentences like (33) sensible and informative:

(33) "The earth is flat" is true iff the earth is flat.

The statement "is true" is in the meta language. Obviously "The earth is flat" is the string of the object language we’re trying to provide a meaning for. the earth is flat, after the iff, is the meaning we’re providing for it.

This may look trivial, but it’s not. What we’re trying to do is convey the state of affairs that must obtain if the string "The earth is flat" is a true description of the world. The best way to do this would be to telepathically convey the mental image of a universe in which the earth is flat ("The earth is flat" is true if (concentrate).) We can’t do that. Drawing a picture of that state of affairs is beyond my skill as an artist. The best way I have to communicate to you the state of affairs in which the earth is flat is simply to say, the earth is flat. In a sense, the language after the iff corresponds to the picture of the turtle in last time’s handout. If you spoke French, we could use French as our metalanguage:

(34) "The earth is flat" est vrai sii la Terre est plate.

But we don’t, we all speak English, so we’re going to use English to convey things about the states of affairs that make sentences true (or false). At least initially, that’s all the English we’re going to use. Anything else we use will be in the languages of propositional calculus or predicate logic, formal languages which give us the condition of defined full interpretability that we said we needed last time. So in some sense English is part of our metalanguage, and it is also our object language. What we’ve got to do is define meanings for "The", "earth", "is" and "flat" such that when they’re combined in a certain structure, the structure denotes "True" iff the earth is flat. (Use multicolored markers.) More on this anon. First, we need to learn propositional logic and predicate calculus, so that we all know our basic metalanguage.
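Here’s a rough sketch of the division of labor, with Python standing in for the metalanguage and a hand-invented toy model standing in for the world; the "world" and its facts are illustrative only.

```python
# A rough sketch: Python plays metalanguage, stating what a hand-invented toy
# "world" must be like for an object-language string to be true. The world
# and its facts are purely illustrative.
WORLD = {
    "earth flat": False,
    "moon made of green cheese": False,
}

def is_true(sentence, world):
    """ "The earth is flat" is true iff the earth is flat (a lookup in the toy world)."""
    if sentence == "The earth is flat":
        return world["earth flat"]
    if sentence == "The moon is made of green cheese":
        return world["moon made of green cheese"]
    raise ValueError("sentence not in this toy fragment")

print(is_true("The earth is flat", WORLD))  # False in this toy world
```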

************

(Aside: "iff" is read "if and only if". It’ll come up again. Basically it’s a way of saying that A is true if B, and B is true if A. (So above, "P entails Q if the truth of P guarantees the truth of Q, and the truth of P guarantees the truth of Q if P entails Q". Or with easier predicates: John loves Mary iff Mary loves John = John loves Mary if Mary loves John and Mary loves John if John loves Mary. The "only if" part comes from reversing the conditional by using "only if" instead of "if", so "If Susan wants to impress you, then she eats peas" is supposedly interpreted the same as "Susan eats peas only if she wants to impress you" which is the same as "Susan wants to impress you if she eats peas". (I’ll give you a bunch of paraphrases of conditionals later on).