We aim to build a system for processing lexical ambiguities that arise
from complex forms of polysemy. Our main objective is robustness: the
system must be able to interpret ambiguous words in open context, even
when contextual information is degraded.
Starting from questions such as "How can lexical ambiguities be
described linguistically?" and "Which computational treatments of
ambiguity are available?", we make a twofold choice: we model the
processing of complex forms of polysemy (the least studied in computer
science, reputed to be non-computable) within the framework of
differential semantics. We focus in particular on usage polysemy, which
has never been fully studied, either linguistically or computationally,
and we propose a systematic characterisation of it. We conclude that
the vagueness of these ambiguities is necessary to the semantic
cohesion of the statements in which they appear: they must not be
resolved.
Building on existing computational systems, we propose a model of a
dynamic lexicon inspired by the EDGAR model. Our model, PELEAS,
integrates ambiguity into its structures. For a given ambiguous
occurrence, it computes, from a database of attested usages, an
analysis of their contributions to the meaning of the occurrence in
context. This lexicon is a hybrid between a symbolic system (lexical
structures) and a connectionist network (algorithm).
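To make the general idea concrete, the following is a minimal, purely illustrative sketch of ranking attested usages of an ambiguous word by their overlap with the current context. All names, the sample data, and the scoring scheme are hypothetical simplifications and do not reproduce the PELEAS algorithm or its lexical structures.

```python
# Toy sketch only: weigh attested usages of an ambiguous word against its
# context. The data and scoring below are invented for illustration; they
# are not the PELEAS model.

def score_usages(attested_usages, context_words):
    """Rank attested usages by lexical overlap with the context."""
    context = {w.lower() for w in context_words}
    scores = {}
    for label, usage_words in attested_usages.items():
        overlap = context & {w.lower() for w in usage_words}
        # Normalise by usage size so richer usage entries are not favoured.
        scores[label] = len(overlap) / max(len(usage_words), 1)
    # Highest-scoring usage first; ties keep dictionary order.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical attested usages of the ambiguous word "bank".
usages = {
    "financial": ["money", "account", "deposit", "loan"],
    "river": ["water", "shore", "erosion", "flood"],
}
ranking = score_usages(usages, ["the", "flood", "reached", "the", "shore"])
```

In this toy setting, each usage keeps its score rather than being discarded, loosely echoing the idea that ambiguity is preserved in the structures instead of being resolved outright.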
It is implemented as a software package (a set of ActiveX controls),
whose development involves formal specification techniques, advanced
object-oriented design, and distributed programming.
PELEAS has been validated through a test phase on this software
package. The model proved to be fully robust while maintaining a
reasonable level of relevance and efficiency. Results show that it is
most effective on "joker" (all-purpose) words, plays on words, and
double meanings.
Keywords: natural language understanding and interpretation of written
text; lexical semantics; ambiguity; polysemy; usage