Review of Is the best good enough? Optimality and competition in syntax, edited by Pilar Barbosa, Danny Fox, Paul Hagstrom, Martha McGinnis, and David Pesetsky. Cambridge, MA: MIT Press, 1998. Pp. vii, 450.

D. Terence Langendoen, Department of Linguistics, University of Arizona

Prepublication version of a review that appeared in Language 77.842-844, 2001.

This book is the proceedings of a workshop held at MIT in 1995; it consists of sixteen papers, a brief acknowledgement and a short introduction by the editors (1-14), and an even shorter index. Eight papers deal with syntax and Optimality Theory (OT):

  1. 'WHOT?' by Peter Ackema and Ad Neeleman (15-33); [NOTE 1]
  2. 'Optimality and inversion in Spanish' by Eric Baković (35-58);
  3. 'Morphology competes with syntax: Explaining typological variation in weak crossover effects' by Joan Bresnan (59-92);
  4. 'Anaphora and soft constraints' by Luigi Burzio (93-113);
  5. 'Optimal subjects and subject universals' by Jane Grimshaw and Vieri Samek-Lodovici (193-219);
  6. 'When is less more? Faithfulness and minimal links in wh-chains' by Géraldine Legendre, Paul Smolensky, and Colin Wilson (249-89);
  7. 'On the nature of inputs and outputs: A case study of negation' by Mark Newson (315-36); and
  8. 'Some optimality principles of sentence pronunciation' by David Pesetsky (337-83).

Four papers deal with the principle of Economy in the Minimalist Program (MP):

  1. 'Some observations on economy in generative grammar' by Noam Chomsky (115-27);
  2. 'Locality in variable binding' by Danny Fox (129-55);
  3. 'Reference set, minimal link condition, and parameterization' by Masanori Nakamura (291-313); and
  4. 'Constraints on local economy' by Geoffrey Poole (385-98).

The remaining four papers deal with various topics other than syntax:

  1. 'Optimality Theory and human sentence processing' by Edward Gibson and Edward Broihier (157-91);
  2. 'Semantic and pragmatic context-dependence: The case of reciprocals' by Yookyung Kim and Stanley Peters (221-47);
  3. 'The logical problem of language acquisition in Optimality Theory' by Douglas Pulleyblank and William J. Turkel (399-420); and
  4. 'Error-driven learning in Optimality Theory via the efficient computation of optimal forms' by Bruce B. Tesar (421-35).

In their introduction, the editors make a heroic effort to show how all these papers are related, by contrasting two views concerning explanation in linguistics: (1) a 'standard scenario' in which the status of a linguistic 'object' is determined by how independent principles analyze it and it alone, and (2) an 'optimality scenario' in which the status of an object is determined by how interacting principles analyze it in relation to other objects. They note that linguistic explanation has traditionally favored the standard scenario (whence the editors' choice of labels). Next, they review some earlier uses of the optimality scenario, beginning with Pāṇini's principle that a general rule of grammar applies only where a more specific rule fails to apply. They show that explanations within the optimality scenario require at minimum the determination of a 'reference set' of competing objects and a 'metric' for selecting one or more members of that set. This framework is general enough to cover all of the papers in this volume, [NOTE 2] even Kim and Peters's, for which the reference set consists of the possible meanings of the reciprocal anaphor and the metric yields the strongest meaning consistent with world knowledge and contextual assumptions. As this example shows, the optimality scenario can be used for explanations in the domain of linguistic performance as well as of competence.
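To make the schema concrete (the formulation is mine, not the editors'): given an input I, an optimality-scenario explanation supplies a reference set Ref(I) of competing objects and a metric, which can be stated as an ordering ≻ over those objects; the objects the grammar sanctions are then just those that no competitor betters,

\[ \mathrm{Opt}(I) \;=\; \{\, o \in \mathrm{Ref}(I) : \neg\exists\, o' \in \mathrm{Ref}(I)\ [\, o' \succ o \,] \,\}. \]

OT's generator and evaluator, MP's economy comparisons over reference sets, and Kim and Peters's strongest-meaning selection can all be read as instances of this schema, differing only in how Ref and ≻ are defined.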

The editors' historical survey omits perhaps the most striking recent use of an optimality scenario prior to the advent of the Minimalist Program and of Optimality Theory, namely the use of 'transderivational constraints' within Generative Semantics (Lakoff 1973, Hankamer 1973). [NOTE 3] The standard critical response to such devices (as in Langendoen 1975; see also Harris 1993: 181), that they needlessly increase the descriptive power of the theory of grammar, is easily countered within both MP and OT by the observation that the old rule-based theory has been replaced by, not augmented with, the new principle- or constraint-based one.

If MP and OT both make use of optimality scenarios, what are the essential differences between these approaches to syntax? One candidate is their treatment of recursion. MP 'takes the recursive procedure literally, assuming that one (abstract) property of the language faculty is that it forms expressions step-by-step' (Chomsky: 126). OT syntax, on the other hand, appears to treat recursion representationally; its generator makes available the reference set for a given input (or 'Index', as suggested by Legendre, Smolensky, and Wilson: 254) using a recursive procedure that is 'replaceable without loss by an explicit definition in the standard way' (Chomsky: 126), and its evaluator provides the metric for selecting the winning output(s) for that input (or Index). However, it is possible to construe OT syntax as operating step-by-step, as a series of mappings in which the input to a particular stage is an output of previous stages, so that the treatment of recursion need not constitute a definitive difference between MP and OT.

A better candidate is their treatment of the relation between sound and meaning. MP, like all previous generative syntactic models dating back to the Standard Theory (ST), adopts the Saussurean principle that basic linguistic forms (e.g. lexical items) are pairings of sound and meaning. At some point in a successful derivation of a complex expression, these strands are separated, one terminating in phonetic form (PF) and the other in logical form (LF). In ST, this separation takes place at the level of Deep Structure; in MP, it takes place at Spell-Out. Thus in MP, as in the earlier generative models, there is no direct mapping between PF and LF. Given a PF π, one must reverse the derivation from Spell-Out to π to determine its pairing with LF, and conversely given an LF λ. In OT, on the other hand, direct mappings between sound and meaning are possible, and perhaps even definitional (Prince and Smolensky 1997). For example, sound and meaning could be generated separately, with elementary (i.e. idiomatic) sound-meaning pairings determined by lexical constraints, extending the idea of Direct OT (Golston 1996), and complex pairings recursively built up out of elementary ones (see the conclusion of the preceding paragraph).

Finally, the editors note a tendency within both MP and OT to blur the distinction between competence and performance by permitting systematic performance constraints to be incorporated into grammar. For example, they point out that there are currently 'efforts to unify [the sentence processing] literature with the literature on grammatical theory (e.g. Phillips 1997)' (11). Moreover, given Chomsky's (1965: 12-4) well-known and widely accepted argument that computational complexity such as is found with multiple center embedding is a matter of linguistic performance, not competence, it is noteworthy that he is here prepared to maintain that '[c]onsiderations of computational complexity matter for a cognitive system (a "competence system" in the technical sense of this term)' (126). The implications of this change are enormous, among them that grammars for natural languages are describable as finite-state devices.

References

Ackema, Peter, and Ad Neeleman. 1995. Optimal questions. Unpublished manuscript.

Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Golston, Christopher. 1996. Direct Optimality Theory: Representation as pure markedness. Language 72.713-48.

Hankamer, Jorge. 1973. Unacceptable ambiguity. Linguistic Inquiry 4.17-68.

Harris, Randy Allen. 1993. The linguistics wars. Oxford, UK: Oxford University Press.

Lakoff, George. 1973. Some thoughts on transderivational constraints. Issues in linguistics: Papers in honor of Henry and Renée Kahane, ed. by Braj B. Kachru and others, 442-52. Urbana: University of Illinois Press.

Langendoen, D. Terence. 1975. Acceptable conclusions from unacceptable ambiguity. Testing linguistic hypotheses, ed. by David Cohen and Jessica R. Wirth, 111-27. Washington, DC: Hemisphere Press.

Phillips, Colin. 1997. Order and structure. Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge.

Prince, Alan S., and Paul Smolensky. 1997. Optimality: From neural networks to universal grammar. Science 275.1604-10.

Notes

1. The longer version of this paper (Ackema and Neeleman 1995) has a less obscure title.

2. However, Pesetsky raises the possibility that some syntactic phenomena are better analyzed in terms of the standard scenario. His argument strikes me as flawed in two respects, but space does not permit me to discuss the matter further here.

3. The central idea of transderivational constraints is due to Chomsky (1965: 126-27), who suggested that 'stylistic inversion of "major constituents" ... is tolerated up to ambiguity - that is, up to the point where a structure is produced that might have been generated independently by the grammatical rules'.