Thursday, February 28, 2008

Notes on "Memory" in SEP, part 1

Notes on "Memory" by John Sutton in Stanford Encyclopedia of Philosophy
<John.SUTTON@scmp.mq.edu.au>

  1. Distinguish memory (or learning or plasticity) in animals from memory in human cognition. Models for animals need to be consistent with more sophisticated models for human cognition. Logical approaches to modeling, such as CG, Situation Theory, DRT and MRS, are rather specialized models of human cognition that still need to be grounded in models for animals.
  2. Memory is the capacity to "retain information and reconstruct past experiences, usually for present purposes." From situation theory, we can say memory is to retain over time the attunement to a usually past situation. The typical case is that a situation that is presently experienced enters memory as it passes into the past. For long-term memory, the process may require a good night's sleep, and what is retained is attunement to a discrete representation of what is salient in an experienced situation, perhaps only a fragmentary representation much less detailed than what was experienced. We can also have prospective memories, remembering plans or intentions for the future that have not actually occurred; what is remembered is the conscious experience of planning or intending. "For present purposes" implies that memories are activated into current mental experience for present action.
  3. Memory is not pure imagination. Conscious experience can come in many forms: cognized immediate perception (perhaps animals without consciousness require us to consider levels of perception that are not cognized, or we can just call the subcognitive phenomena sensation or basal perception), plans and intentions, pure imagination and mental imagery, verbal daydreaming and song lyrics, and conscious memory. Conscious memory can be of past perceptual experience, or of past plans and intentions. Are there veridicality conditions on memory, so that false memories are not really memories at all but memory-like experiences that fail some essential criterion of what memory is? What about readily inferable knowledge, which a person never really knew but inferred from known facts and constraints, with the inference occurring only at the time of "recall"? Is it knowledge, but not memory, that I know Nelson Mandela has a liver? Searle talks about "unconscious" beliefs and other intentional mental states.
  4. The term memory is used very broadly. I am most concerned with the remembering of facts and knowledge as intentional mental states: things like beliefs, desires, prior intentions, intentions-in-action, perhaps even cognized perception. This excludes phonological memory and musical memory, which are below the lexical-factual level of mental states, and basic visual memory, which is not yet discretely classified, although it could include memory of visual relations (the red cylinder is on top of the green cube). A big chunk of this is done in verbal schemes of individuation, but visuo-spatial facts can also be remembered. The edge cases of Deaf sign language and language-isolated deaf children are also interesting for testing the models. This mental level is too broad for thesis study, so I would like to narrow down to lexical-phrasal memory in a specific language, and related conceptual memory that is language independent. This could go slightly below lexical items, to the discrete qualia proposed by Pustejovsky.
  5. What is the relation of memory and belief? We remember facts (at least) and we believe facts to be true. But we can remember indeterminate states of affairs (a doubtful "fact-nonfact"), or remember an optical illusion we know is not actually true. Perhaps a belief is like a speech act in that it involves a commitment, a mind-to-world direction of fit.
  6. I am interested in the aspects of remembering that can be communicated. It is possible that there is no overarching model of memory and mental states, there are only thousands of memory games with family resemblances.
  7. ... [continue with section 1.1]

Wednesday, February 27, 2008

Tractatus

I have been reading Wittgenstein's Tractatus Logico-philosophicus, the German as well. I also have Alfred Nordmann's Wittgenstein's Tractatus: An Introduction. His first page talks about W's three kinds of sentence.


I am interested in comparing the Tractarian picture theory, as well as the later language games approach, to a model of conceptual and lexical memory that I am developing. My posts "On Memory" are partly to develop a foundation for such a theory. Tractarian logic is probably an inspiration for Sowa's Conceptual Graphs, Kamp's Discourse Representation Theory, Barwise's Situation Theory and Minimal Recursion Semantics, all of which I think are interesting approaches to build on. I am sure the language games critique will also be helpful, especially since I believe there are confusions about meaning related to a theory of consciousness (or lack thereof).



On Memory 1 - sensori-behavioral control cycles

Reflections on Memory



Organisms with nervous systems are able to perceive regularities in their environment within the reach of their senses, and modulate their movement, including the further control of sensory organs. Movement results in different perceptions, and provides the basis for control of movement. This feedback cycle with the environment is selected by evolution since it permits differential reproductive success.
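The feedback cycle described above can be sketched minimally in code. This is a toy illustration of my own, not a model from the literature: an agent senses a made-up environmental gradient, moves, and uses the change in sensation to modulate its next movement.

```python
# Hedged sketch of a sensori-behavioral control cycle. The environment,
# the gradient peak, and all names here are illustrative assumptions.

def environment(position):
    """Toy regularity: sensation peaks at position 10."""
    return -abs(position - 10)

def control_cycle(position=0, steps=30):
    """Perception -> movement -> new perception feedback loop."""
    direction = 1
    prev = environment(position)
    for _ in range(steps):
        position += direction          # movement
        cur = environment(position)    # movement yields a new perception
        if cur < prev:                 # perception worsened,
            direction = -direction     # so modulate the movement
        prev = cur
    return position
```

Starting from any position, the loop settles into oscillation around the peak: a minimal case of perception controlling movement and movement in turn changing perception.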


Animals have evolved with multiple sense organs: chemosensory (smell and taste, related to successful feeding), tactile (at the interface of the skin with the environment), visual, and auditory (and, for some animals that produce sounds, phonological). Kinesthetic proprioception, including balance, provides neural feedback on movement. The integration of sensory signals allows the individuation of regularities in the environment through multiple modalities, and more effective classification of individuals into some kind of unified scheme of individuation.


Animals are able to perceive other animals, and to classify them in ways that modulate their own behavior. This has obvious selectional value for reproductive success, including mating behavior and predator-prey interactions. Interaction between adult organisms and their juvenile offspring leads to survival success, and many larger animals have elaborate maternal behavior that plays an important and even essential role in the survival of offspring. A large number of animals have evolved social behavior which is necessarily regulated by their perception and classification of the presence and behavior of conspecific individuals in their shared perceived environment. Since conspecific organisms are genotypically very similar (phylogenetic evolution), their schemes of individuation are innately similar. Organisms are also able to develop highly elaborated schemes of individuation in the course of their phenotypic growth (ontogenetic development). Especially for social animals, there is selectional value for the convergence of these acquired schemes of individuation into a shared scheme of individuation that allows them to classify their shared environment, including the behavior of other animals such as conspecifics.



  • Attention.
  • Continuous variation, discrete classification.
  • Experience, planning. Origins of memory in shared schemes.
  • Social communication and shared schemes. Interpreting the behavior of conspecifics (and other animals) in terms of perception and memory of other individuals. Displays.
  • Thought experiment grounded in observation: chimp troop and Deaf village. Iconic signs, arbitrary signs.
  • Vocal displays and shared schemes. Phonic displays and phonemic schemes. Discrete individuation.
  • Thought experiments: autistic village (non-linguistic Asperger's syndrome). Role of consciousness, scheme for other minds. Mentally retarded village, with language.
  • Models of conceptual memory, logical models like Sowa's conceptual graphs, situation theory
  • Models of lexical memory, semantics

Copestake on Minimal Recursion Semantics

Ann Copestake, Dan Flickinger, Ivan Sag and Carl Pollard. Minimal Recursion Semantics: An introduction. Journal of Research on Language and Computation, 3(2-3), pages 281-332, 2005.


I have read earlier drafts about MRS, and I find this published paper a lot clearer. Earlier drafts had confusing features called HANDEL and LISZT. The new presentation seems better motivated. The first-pass presentation is more like predicate logic with generalized quantifiers, but using elementary predications (EPs) as atoms, usually corresponding to a single lexeme. This is then translated to Typed Feature Structures, before being incorporated into HPSG. The resulting structures can be quite elaborate even for a short sentence like "Every dog probably sleeps," but that is typical of HPSG. If this is psychologically realistic, and realizable in a biological neural network as well as a symbolic program, I think the brain has enough neural pathways to handle the typed structures. This approach steps back from using very specific relations, like CORPSE for a participant in DIE, using "semantically bleached" ARG1 instead. They do mention the possibility of using more specific semantic (thematic) roles, like the ACTOR and UNDERGOER of Anthony Davis. I may want to come back to this, since Davis develops his theories to cover Austronesian languages I am interested in. As an aside, I wonder what work on thematic roles has been done for Chinese?
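The first-pass presentation can be sketched directly in code. Below is a toy encoding of the EP bag for the paper's example sentence "Every dog probably sleeps"; the handle names (h1...), the variable x, and the exact constraint set are my own illustration, not copied from the published analysis.

```python
from collections import namedtuple

# An elementary predication: a label (handle), a relation name, and a
# dict of arguments. Handle and variable names here are illustrative.
EP = namedtuple("EP", ["label", "relation", "args"])

every_dog_probably_sleeps = [
    EP("h1", "every_q",  {"ARG0": "x", "RSTR": "h2", "BODY": "h3"}),
    EP("h4", "dog",      {"ARG0": "x"}),
    EP("h5", "probably", {"ARG1": "h6"}),
    EP("h7", "sleep",    {"ARG1": "x"}),
]

# qeq ("equality modulo quantifiers") constraints: the handle on the left
# is eventually filled by the label on the right, with only quantifiers
# allowed to intervene.
qeq = {"h2": "h4", "h6": "h7"}

def relations(mrs):
    """The bag of relation names, ignoring scope: the 'flat' view."""
    return sorted(ep.relation for ep in mrs)
```

The flat view is what makes the representation attractive for transfer: the relation bag can be manipulated without first committing to a scope.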


The motivation of this approach is to produce "flatter" representations, in the spirit of "shake-and-bake" translation approaches but more precise. An explicit goal is to support semantic transfer approaches to machine translation. MRS structures were originally developed during the Verbmobil project, and may be one of the enduring legacies of that work.


Many of the examples are related to quantifier scoping, the old donkey anaphora problem. I never felt that problem was very exciting, but a lot of analysis has been poured into it. There is some mention of Discourse Representation Theory, but I still don't see how it fits in with MRS. And now there is dynamic semantics and Segmented DRT as well. So much to do, so little time. I have a feeling that the MRS work is more general than just scope resolution, but the details are not yet clear to me.
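One way to see what scope resolution buys: a single flat EP bag underspecifies several fully scoped readings. The sketch below is a deliberate simplification of my own, not the paper's algorithm; it just permutes the floating scopal elements of "Every dog probably sleeps" and nests each ordering over the fixed core.

```python
from itertools import permutations

# Floating scopal elements, each with a HOLE to be plugged.
# The string notation is invented for this sketch.
SCOPAL = ["every_q(x, dog(x), HOLE)", "probably(HOLE)"]
CORE = "sleep(x)"

def readings():
    """Enumerate scoped readings by trying every operator ordering."""
    out = set()
    for order in permutations(SCOPAL):
        formula = CORE
        for op in reversed(order):  # innermost operator wraps first
            formula = op.replace("HOLE", formula)
        out.add(formula)
    return out
```

For this sentence the two orderings yield the two readings: every_q(x, dog(x), probably(sleep(x))) and probably(every_q(x, dog(x), sleep(x))). Real MRS resolution additionally filters candidate pluggings by the qeq constraints rather than permuting freely.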


The paper gives an example of how the MRS approach can be applied more generally, to phrases like "allegedly difficult problem." This section talks about intersective and scopal modifiers. They also claim that MRS solves problems with Cooper storage, which is part of HPSG that I never found convincing.
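The intersective/scopal contrast can be made concrete. In this hedged sketch (my own notation and examples; only "allegedly difficult" is from the paper), an intersective modifier shares a label and a variable with its head, while a scopal modifier takes a handle argument that outscopes the head.

```python
# "white dog": intersective -- modifier and head share label h1 and
# variable x, so the meaning is roughly white(x) & dog(x).
WHITE_DOG = [
    ("h1", "white", {"ARG1": "x"}),
    ("h1", "dog",   {"ARG0": "x"}),
]

# "allegedly difficult": scopal -- allegedly takes a handle argument
# (h3) that, via a qeq constraint, outscopes difficult.
ALLEGEDLY_DIFFICULT = [
    ("h2", "allegedly", {"ARG1": "h3"}),
    ("h4", "difficult", {"ARG1": "x"}),
]
QEQ = {"h3": "h4"}

def has_scopal_modifier(eps):
    """True iff some argument value is a handle (names starting 'h')."""
    return any(value.startswith("h")
               for _, _, args in eps
               for value in args.values())
```

The test in `has_scopal_modifier` is the crude version of the distinction: scopal modifiers are exactly the ones that mention handles in their arguments.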


MRS has been implemented in the English Resource Grammar of the LinGO (Linguistic Grammars Online) project. Copestake has been working on the DELPH-IN initiative, which distinguishes deep and shallow processing. I would be interested in seeing whether MRS could be applied to a Chinese corpus (shallow) and treebank (deep).


I found the paper interesting, and worth rereading, perhaps after reading Copestake's draft on Robust MRS, which is designed for shallow processing. At that time, I may make comments on the references, some of which I would like to follow up on.

Reviewing the literature on computational semantics

I'll be journaling my readings and own research on language. I am specifically interested, for now, in lexical semantics. I am also studying Chinese, so expect details about that to appear.
I have been downloading some papers by Ann Copestake, Tim Baldwin at U Melbourne, James Pustejovsky, and Manfred Krifka.