Saturday, September 6, 2008

Convergent Grammar

[convert into an essay or review article at Linguistic Exploration]

CVG Course at ESSLLI 08, Day 1 Slides

(23) Convergent Grammar (CVG): A Look Ahead

• CVG is closely related to both ACG and HPSG.
• Like ACG—but unlike other frameworks descended from EMG—
CVG uses Curry-Howard proof terms (which we will explain)
to denote NL syntactic entities.
• This makes it easy to connect CVG to mainstream generative
grammar because the proof terms are really just a more precise
version of EST/GB-style labelled bracketings.
• Like HPSG—but unlike other frameworks descended from EMG—
the relation between syntax and semantics is not a function, but
rather is one-to-many.

Curry-Howard Correspondence

(30)
The basic ideas of CH are that, if you let the atomic formulas be
the types of a TLC, then
1. A formula is the same thing as a type.
2. A formula A has a proof iff there is a combinator (closed
term containing no basic constants) of type A.
• Hence the Curry-Howard slogan:
formulas = types, proofs = terms
(34)
• Variables correspond to hypotheses.
• Basic constants correspond to nonlogical ax-
ioms.
• Derivability of Γ ⊢ a : A corresponds to A being
provable from the hypotheses in Γ.
• Application corresponds to Modus Ponens.
• Abstraction corresponds to Hypothetical Proof.
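The correspondence in (34) can be made concrete in any typed functional language. The following Python sketch is mine, not the course's; the function names are illustrative. Each inhabited type plays the role of a provable formula, application plays Modus Ponens, and lambda abstraction plays Hypothetical Proof.

```python
# Formulas = types, proofs = terms (a sketch; names are mine, not the slides').

# A proof of A -> A: the identity combinator (a closed term, no basic constants).
def identity(x):
    return x

# A proof of A -> (B -> A): the K combinator, curried.
def const_proof(x):
    return lambda _y: x

# Application = Modus Ponens: from proofs of A -> B and of A, get a proof of B.
def modus_ponens(f, x):
    return f(x)

# Abstraction = Hypothetical Proof: assume A (the variable x is the hypothesis),
# derive C through the hypotheses f : A -> B and g : B -> C, then discharge x.
# The resulting lambda is a proof of A -> C.
def hypothetical(g, f):
    return lambda x: g(f(x))
```

Since `identity` and `const_proof` are closed terms built without basic constants, they are combinators in the slide's sense, and their types A → A and A → (B → A) are theorems.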

(31) Notation for ND Proof Theory

• An ND proof theory consists of inference rules,
which have premisses and a conclusion.
• An n-ary rule is one with n premisses, and a
0-ary rule is called an axiom.
• Premisses and conclusions have the format of a
judgment:
                        Γ ⊢ a : A
read ‘a is a proof of A with hypotheses Γ’.
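The judgment format Γ ⊢ a : A can be prototyped directly. Below is a minimal sketch in Python (my own encoding, not from the course): a context Γ is a dict of hypotheses, and `infer` returns the A for which Γ ⊢ a : A is derivable, or `None`.

```python
# A minimal sketch of ND judgments  Γ ⊢ a : A  for a typed lambda calculus.
# (Illustrative encoding; names are mine, not the course's.)
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:            # atomic formula/type
    name: str

@dataclass(frozen=True)
class Arr:             # implication A -> B, i.e. a function type
    dom: object
    cod: object

@dataclass(frozen=True)
class Var:             # variable = hypothesis
    name: str

@dataclass(frozen=True)
class Con:             # basic constant = nonlogical axiom
    name: str
    ty: object

@dataclass(frozen=True)
class App:             # application = Modus Ponens
    fun: object
    arg: object

@dataclass(frozen=True)
class Lam:             # abstraction = Hypothetical Proof
    var: str
    ty: object
    body: object

def infer(ctx, term):
    """Return A such that ctx ⊢ term : A, or None if not derivable."""
    if isinstance(term, Var):
        return ctx.get(term.name)
    if isinstance(term, Con):
        return term.ty
    if isinstance(term, App):
        fty = infer(ctx, term.fun)
        if isinstance(fty, Arr) and infer(ctx, term.arg) == fty.dom:
            return fty.cod
        return None
    if isinstance(term, Lam):
        body_ty = infer({**ctx, term.var: term.ty}, term.body)
        return Arr(term.ty, body_ty) if body_ty is not None else None
    return None
```

For instance, `infer({}, Lam("x", Atom("A"), Var("x")))` returns `Arr(Atom("A"), Atom("A"))`, i.e. ⊢ λx.x : A → A, while a free variable with no hypothesis in Γ is underivable.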

Autonomy of syntax is possible, but not at the granularity of word strings. The syntactic parallel to semantic hypotheses must be both words and constructions.

A deterministic functional interface from syntax to semantics seems a poorer fit to real language than a non-deterministic relational interface. [find slide to quote]

"Variables correspond to hypotheses" — does this mean the granularity is that established by referential indexes? Does it make sense to consider a finer granularity? Does this tie in with DRT?

Perhaps we retain the granularity of referential indexes, but consider implicit relations contributed by the constructions (and also by the individual words). This explains why schema instances in working memory have the granularity they have. There may be some finer granularity less accessible to consciousness, but the level of folk semantics demands an explanatory account.

(38) ND-Style Syntax

• The inference rules are the syntax rules.
• The formulas/types are the syntactic categories.
• The proofs/terms are the syntactic expressions.
• The basic constants are the syntactic words.
• The variables are traces.
• The context of a judgment is the list of traces still
unbound at that point in the proof.

This slide confirms the granularity of variables as referential indexes or traces. Can at least some of the syntax rules (for phrase structure) be treated as types and hypotheses from the lexicon, rather than as inference rules for constructing results? You still need inference rules for how to combine the word level and the construction level. And the elements of the construction semantics may be implicit, below the surface.

(40) Basic Categories

• To get started: S, NP, and N. Others will be
added as needed.
• Here we ignore morphosyntactic details such as
case, agreement, and verb inflection.
• In a more detailed CVG, these would be handled
(much as in pregroup grammars) by subtyping.

"Pregroup grammars"? Does this subtyping refer to feature structures at a finer granularity?

(41) Function Categories

• As in many frameworks (RG, HPSG, LFG, DG,
traditional grammar) grammatical functions (gram-
funs) like subject and complement are treated
as theoretical primitives.
• To start we just assume the gramfuns subject
(s) and complement (c). Others will be added
as needed.
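Slides (38)–(41) together invite a toy implementation of ND-style syntax. The sketch below is my own encoding, not the official CVG formalization; TV (transitive verb) and VP are assumed extra categories standing in for detail slide (40) defers, and the gramfuns s and c label the two inference rules.

```python
# ND-style syntax with categories from slide (40) plus assumed TV and VP,
# and the gramfuns s (subject) and c (complement) as the two inference rules.
# (My own toy encoding, not the official CVG formalization.)

# Words are basic constants; each carries its category.
LEXICON = {"Kim": "NP", "Sandy": "NP", "likes": "TV"}

def word(w):
    return ("word", w, LEXICON[w])

def c(head, comp):          # gramfun c: head combined with its complement
    return ("c", head, comp)

def s(subj, pred):          # gramfun s: subject combined with its predicate
    return ("s", subj, pred)

def cat(expr):
    """Check a proof term against the rule schemas; return its category or None."""
    tag = expr[0]
    if tag == "word":
        return expr[2]
    if tag == "c":          # TV + NP complement => VP
        if cat(expr[1]) == "TV" and cat(expr[2]) == "NP":
            return "VP"
        return None
    if tag == "s":          # NP subject + VP => S
        if cat(expr[1]) == "NP" and cat(expr[2]) == "VP":
            return "S"
        return None
    return None
```

Here `cat(s(word("Kim"), c(word("likes"), word("Sandy"))))` yields `"S"`, mirroring a proof term like (s Kim (likes Sandy c)); ill-formed combinations come back `None`.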

Can gramfuns be extended to handle thematic roles? Word-specific participant roles? This could be a finer-grained semantics, capturing the insights from lexical semantics and its data.

(56) An Embedded Constituent Question

⊢ [what fill t (s Kim (likes t c))] : Q
• Here what is an operator of type NP^Q_S: it combines with an S containing an unbound NP trace to form a Q, while binding the trace.
• Notice that what is not analyzed as a “projection” of a “functional category”: there is no null complementizer with respect to which the operator is a “specifier”.
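The binding behavior of an operator like what can be mimicked with a higher-order function: under the Curry-Howard reading of (34), something that combines with an S containing an unbound NP trace to form a Q looks like a term of type (NP → S) → Q. A toy semantic sketch (the domain and denotations are my illustrative assumptions, not from the slides):

```python
# Toy model of 'what' as an operator of type (NP -> S) -> Q:
# it takes an S with an unbound NP trace (represented as a function
# of the trace) and binds that trace.
# The domain and denotations below are illustrative assumptions.

DOMAIN = ["Kim", "Sandy", "book"]

def likes(agent, patient):
    # toy denotation: Kim likes the book, and nothing else holds
    return (agent, patient) == ("Kim", "book")

def what(scope):
    """Bind the NP trace: collect the entities that make the S-body true."""
    return [x for x in DOMAIN if scope(x)]

# "what Kim likes": the trace t sits in the complement position of likes.
embedded_q = what(lambda t: likes("Kim", t))
```

The lambda-bound variable plays exactly the role of the unbound trace t in the slide's proof term; `what` discharges it, just as abstraction discharges a hypothesis.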

Can something like this be used to analyze "ang" in "babae ang bumili ng lasones"?


Day 2

(2) Some Examples of Overt Movement

a. John_i, Fido bit t_i. [Topicalization]
b. I wonder [who_i Fido bit t_i]. [Indirect Question]
c. Who_i did Fido bite t_i? [Direct Question]
d. The neighbor [who_i Fido bit t_i] was John. [Relative Clause]
e. Felix bit [who(ever)_i Fido bit t_i]. [Free Relative]
f. It was John [who_i Fido bit t_i]. [Cleft]
g. [Who_i Fido bit t_i] was John. [Plain Pseudocleft]
h. [Who_i Fido bit t_i] was he bit John. [Amalgamated Pseudocleft]
i. [[The more cats]_i Fido bit t_i], [[the more dogs]_j Felix scratched t_j].
[Left and right sides of Correlative Comparatives]
In all these examples, the expression on the left periphery that
is coindexed with the trace is called the filler, or extractee, or
dislocated expression.

It seems likely that the dislocated "ang" in the Tagalog construction above can be analyzed as Overt Movement in this sense.

This list seems to capture what Goldberg 1995 referred to as nonbasic constructions:

... it is not being claimed that all clause-level constructions encode scenes basic to human experience. Nonbasic clause-level constructions such as cleft constructions, question constructions, and topicalization constructions (and possibly passives) are primarily designed to provide an alternative information structure of the clause by allowing various arguments to be topicalized or focused. Thus children must also be sensitive to the pragmatic information structure of the clause (Halliday 1967) and must learn additional constructions which can encode the pragmatic information structure in accord with the message to be conveyed. These cases are not discussed further here (cf. Lambrecht 1987, 1994).

This would hint that passives are not to be analyzed the same way. Also, the intentional "design" might be glossed as "statistically selected to fill the social function." What representation might be suitable for this pragmatic information structure?

The dislocated "ang" introduces a trace for the initiator of the event of the specified VP, and the predicative noun characterizes that initiator with a common-noun type. It is a bit like the Free Relative construction, but with a nominal-type predicate instead of a subject-specified action-verb predicator. So the pragmatic information might be glossed: "[(it) (was) a woman]i [whoeveri bought the lanzones fruit]"


