Research Statement

1. Goal: Aligning parsing and generation

Humans use their grammatical knowledge in at least two ways: in production and in comprehension. Presumably, the grammatical knowledge used during speaking and understanding is the same, because what we can produce and what we can understand are systematically identical. However, questions remain as to whether these two 'modes' of constructing the same grammatical object are carried out by a single system or by two distinct systems. My research program aims to examine whether it is possible to design a single system that builds structure both in production (generation) and in comprehension (parsing), thereby building an integrated model of syntactic processing that can be smoothly linked to a model of grammar.

2. Why care?

Promising lines of research in comprehension have demonstrated that many subtle grammatical constraints (e.g., island constraints, binding conditions) are respected faithfully in online processing. This type of data supports the view that the grammar-processing relation is transparent: processing operations are direct reflections of grammatical knowledge, or, more strongly, constraints on processing operations simply are grammatical knowledge. Such a view is theoretically and methodologically appealing. Theoretically, a transparent linking theory is a parsimonious linking theory. Methodologically, a transparent linking theory ensures that linguistic and psycholinguistic research are relevant to each other. However, these theoretical and methodological benefits hold only if the grammar is also transparently connected to generation. If the relation between grammar and generation is indirect and opaque, we are left with a rather odd model of language in which parsing directly reflects grammatical knowledge but generation does not. Thus, alignment between parsing and generation is a necessary prerequisite for any attempt to link grammatical and processing theories transparently.

3. Approaches

A major approach in my research is to test whether specific properties of the parser and the generator that could in principle be different are nevertheless the same. Here are some potential misalignments:

  1. Grain size of structure building. Many parsing models build structure at a small grain size of analysis (e.g., Lewis & Vasishth, 2005), while many generation models build structure at a much larger grain size of planning (e.g., Garrett, 1975; F. Ferreira, 2000). If this misalignment is real, it challenges the view that parsing and generation are the same, because it suggests that the parser and the generator have different sets of structural building blocks.
  2. Manner of structure building. Many parsing models build structure both ahead of lexical access (top-down structure building) and after lexical access (bottom-up structure building), as in left-corner parsing (e.g., Johnson-Laird, 1986; Lewis & Vasishth, 2005). Most generation models build structure either ahead of lexical access (structure-driven sentence production; e.g., Bock et al., 2004) or after lexical access (lexically-driven sentence production; e.g., Kempen, 1987). If this misalignment is real, it challenges the view that parsing and generation are the same, because it suggests that the parser and the generator build structure in different ways.
  3. Interactions between structural and lexical processes. Though most modern parsing and generation models agree that structural processes are constrained by the information stored in the lexical items involved (e.g., Kim & Trueswell, 1998; Melinger & Dobel, 2005), parsing and generation models rarely specify whether or how structural processes constrain lexical access. It is entirely possible that the way that structural processes affect lexical processes is different between parsing and generation. If this misalignment is real, it challenges the view that parsing and generation are the same because it suggests that the parser and the generator involve distinct mechanisms for binding structural and lexical representations when assembling a sentence.

My research so far has examined whether these potential misalignments can be resolved, and the preliminary answer is yes. I have argued that the following claims are plausible:

  1. The grain size of structure building is the same in parsing and generation. We know that the grain size of analysis is small in parsing, and I showed that the grain size of planning in production is equally small. Using syntactic priming, I estimated the point at which speakers make a structural decision between two alternative ditransitive structures (prepositional object vs. double object dative). Syntactic priming is usually defined as an increase in the proportion of primed structures, but I showed that structural repetition also causes a speed-up in the duration of speaking. I applied a speech-to-text alignment algorithm to measure the duration of each word, in addition to measuring speech onset latency, and showed that the speed-up due to syntactic priming emerged only once speakers began uttering the verb. This suggests that, contrary to what is assumed in some models of sentence production (F. Ferreira, 2000; Garrett, 1975), the grain size of structure building in sentence production is smaller than a single verb phrase. Since the grain size of parsing is comparably small, parsing and generation plausibly share the same grain size of structure building.
  2. Both parsing and generation involve a mix of top-down and bottom-up structure building. We know that both top-down and bottom-up structure building are used in parsing, and I showed that the same is true for generation. I compared the timing of syntactic vs. lexical repetition priming in sentence production, and showed that the production speed-up due to syntactic priming occurs earlier in time than that due to lexical priming of verb arguments. This pattern shows that verb phrase structure building precedes lexical access of verb-phrase-internal arguments. Because we already know that structural processes can occur both ahead of lexical access (e.g., Staub & Clifton, 2006; Wright & Garrett, 1985) and after lexical access (e.g., at the initial word), both parsing and generation can be successfully characterized as involving both top-down and bottom-up structure building, as in left-corner parsing and its variants.
  3. Lexical access in both parsing and generation is constrained by projected syntactic structure in much the same way. We do not know how structural processes constrain subsequent lexical processes in parsing or in generation, and parsing and generation could differ in this regard. I have provided evidence that they are in fact the same. Using event-related potentials in comprehension and a novel experimental paradigm that I named the sentence-picture interference task, I showed that lexical access during both parsing and generation is strongly constrained by the syntactic categories of structural slots projected ahead of lexical items. In brief, the results of these experiments suggest that the search space of lexical access at a given point in a sentence is restricted to lexical items matching the predicted/planned syntactic category. For instance, when the syntactic category verb is predicted/planned, lexical access will retrieve only verbs. Somewhat independently, I showed that verbs are planned selectively before the articulation of internal arguments but not before the articulation of external arguments, both in English and Japanese. The syntactic constraint on lexical access might explain why: verb retrieval is initiated only after the verb category node is projected in syntactic structure, which must occur before articulating internal arguments but not before articulating external arguments.
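The left-corner mix of top-down and bottom-up steps invoked above can be made concrete with a small sketch. The grammar, lexicon, and function names below are illustrative toys of my own construction, not part of any cited model: each word is scanned bottom-up and projected to its category, and the remaining sisters of the rule it triggers are then predicted top-down.

```python
# Minimal left-corner recognizer over a toy grammar (illustrative sketch).
GRAMMAR = {
    "S": [["NP", "VP"]],   # sentence -> subject + verb phrase
    "NP": [["Det", "N"]],  # noun phrase -> determiner + noun
    "VP": [["V", "NP"]],   # verb phrase -> verb + object
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def _connect(goal, cat, words):
    """Connect a completed category `cat` upward toward `goal` (the left-corner step)."""
    if cat == goal:
        yield words
    for lhs, expansions in GRAMMAR.items():
        for rhs in expansions:
            if rhs[0] == cat:                           # `cat` is this rule's left corner
                for rest in _parse_seq(rhs[1:], words): # predict remaining sisters top-down
                    yield from _connect(goal, lhs, rest)

def _parse_goal(goal, words):
    """Recognize `goal` at the front of `words`; yield any remaining words."""
    if words and words[0] in LEXICON:
        # Bottom-up step: scan the word and project its lexical category.
        yield from _connect(goal, LEXICON[words[0]], words[1:])

def _parse_seq(goals, words):
    """Recognize a sequence of predicted categories left to right."""
    if not goals:
        yield words
    else:
        for rest in _parse_goal(goals[0], words):
            yield from _parse_seq(goals[1:], rest)

def recognize(sentence):
    return any(rest == [] for rest in _parse_goal("S", sentence.split()))
```

For example, `recognize("the dog chased the cat")` succeeds, while `recognize("dog the chased")` fails: structure is neither built purely before lexical access nor purely after it, which is the property the comparison above attributes to both parsing and generation.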

4. Future research

My primary future goal is to develop a detailed model of a single-system structure builder. To do so, I would like to extend my domain of study. So far, I have strategically focused on what happens within a single clause. One problem I would like to work on is how long-distance dependencies (e.g., agreement, wh-dependencies, ellipsis, pronominal resolution, and scrambling) are processed in comprehension and production. This is relevant to my overall research goal of aligning parsing and generation. In principle, establishing a long-distance relation like a filler-gap dependency can be done in at least two ways in both comprehension and production: either in a forward or a backward fashion. These modes of long-distance dependency processing could easily differ between (or even within) parsing and generation.

There has been a large body of work in comprehension, and we have a fairly good model of how long-distance dependencies are processed there. But the analogous research in production is extremely sparse, and detailed mechanistic models of long-distance dependency processing in production are almost nonexistent (with some rare exceptions, e.g., Badecker & Lewis, 2007). Thus, in addition to pursuing each line of research discussed above, I would like to start developing a detailed model of long-distance dependency processing in sentence production and see whether it is possible to unify it with the existing model of long-distance dependency processing in parsing. This requires overcoming a number of methodological challenges. For instance, eliciting utterances containing long-distance dependencies in a controlled fashion is itself not a trivial matter. It is also challenging to assess the local processing load at specific points during production. However, I believe both of these challenges can be overcome, and I have indeed partially overcome them already. First, I have designed a task combining sentence recall and interference to test whether recalling a sentence from a message generally follows the same time-course of processing as when it was first processed (in accordance with the utterance regeneration hypothesis; Potter & Lombardi, 1990). If I can establish that this sentence recall task engages the same syntactic processes as natural production, I can use it to elicit utterances with complex structures (within the limits of speakers' memory). Second, as briefly described above, I have been developing a technique that automatically measures the duration of each word in a produced sentence. I have already shown that such measures are sensitive to local processing load, just as self-paced reading measurements are.
When these two methodologies are combined, I can assess the local processing load of complex utterance generation. In sum, an integrated model of processing is much needed when attempting to link grammar and processing, and my research will be devoted to developing such a model.

Papers

Momma, S., Slevc, L. R., & Phillips, C. (to appear). Unaccusativity in sentence production. Linguistic Inquiry.

Chow, W., Momma, S., Smith, C., Lau, E., & Phillips, C. (2016). Prediction as memory retrieval: timing and mechanisms. Language, Cognition, and Neuroscience.

Momma, S., Slevc, L. R., & Phillips, C. (2015). The timing of verb planning in Japanese sentence production. Journal of Experimental Psychology: Learning, Memory and Cognition. [pdf]

Chacón, A. D., Momma, S., & Phillips, C. (2016). Linguistic representations and memory architectures: The devil is in the details. Behavioral and Brain Sciences (commentary on target article by Christiansen & Chater). [pdf]

Momma, S., Sakai, H., & Phillips, C. (in prep). Give me several hundred more milliseconds: Temporal dynamics of verb prediction.

Momma, S., Slevc, L. R., & Phillips, C. (in prep). The timing of verb planning in English active and passive sentence production.

Momma, S., Luo, Y., Sakai, H., & Phillips, C. Prediction as syntactically-constrained memory retrieval: Evidence from EEG.

Momma, S., Bowen, Y., & Ferreira, V. Non-linear lexical planning in sentence production.

Momma, S., Kraut, B., Slevc, L. R., & Phillips, C. Timing of syntactic and lexical priming reveals structure-building mechanisms in production.

Presentations

Momma, S., Bowen, Y., & Ferreira, V. (2017). Non-linear lexical planning in sentence production. Talk to be given at the 30th annual CUNY Conference on Human Sentence Processing, Cambridge, MA. March 30-April 1.

Momma, S., Kraut, R., Slevc, L. R., & Phillips, C. (2017). Timing of syntactic and lexical priming reveals structure-building mechanisms in production. Poster to be given at the 30th annual CUNY Conference on Human Sentence Processing, Cambridge, MA. March 30-April 1.

Schlueter, Z., Momma, S., & Lau, E. (2017). No grammatical illusion with L2-specific memory retrieval cues in agreement processing. Poster to be given at the 30th annual CUNY Conference on Human Sentence Processing, Cambridge, MA. March 30-April 1.

Momma, S., Sakai, H., Luo, Y., & Phillips, C. (2016). Lexical predictions and the structure of semantic memory: EEG evidence from case changes. Talk to be given at the 29th annual CUNY Conference on Human Sentence Processing, Gainesville, FL. March 3-5.

Momma, S., Slevc, L. R., & Phillips, C. (2016). Split intransitivity modulates look-ahead effects in sentence planning. Poster to be presented at the 29th annual CUNY Conference on Human Sentence Processing, Gainesville, FL. March 3-5.

Momma, S., Slevc, L. R., Buffinton, J., & Phillips, C. (2016). Similar words compete, but only when they're from the same category. Poster to be presented at the 29th annual CUNY Conference on Human Sentence Processing, Gainesville, FL. March 3-5.

Momma, S., Slevc, L. R., Buffinton, J., & Phillips, C. (2016). Grammatical category limits lexical selection in language production. Talk to be given at the 90th annual meeting of the Linguistic Society of America, Washington, DC. January 7-10.

Momma, S., Slevc, L. R., & Phillips, C. (2015). A grammatically conditioned semantic interference effect in a "picture-sentence" interference study. Poster presented at the 21st annual Architecture and Mechanisms for Language Processing Conference, Valletta, Malta. September 3-5. [pdf]

Slevc, L. R., & Momma, S. (2015). Noisy evidence and plausibility influence structural priming. Poster presented at the 21st annual Architecture and Mechanisms for Language Processing Conference, Valletta, Malta. September 3-5. [pdf]

Momma, S., Sakai, H., & Phillips, C. (2015). Give me several hundred more milliseconds: the temporal dynamics of verb prediction. Talk given at the 28th annual CUNY Conference on Human Sentence Processing, Los Angeles, CA. March 19-21. [pdf]

Momma, S., Slevc, L. R., & Phillips, C. (2015). The timing of verb planning in active and passive sentence production. Poster presented at the 28th annual CUNY Conference on Human Sentence Processing, Los Angeles, CA. March 19-21. [pdf]

Momma, S., & Phillips, C. (2014). Looking ahead to verbs in comprehension and production. Poster presented at Gradient Symbolic Computation Workshop, Baltimore, MD. November 14-15. [pdf]

Momma, S., Slevc, L. R., & Phillips, C. (2014). The timing of verb retrieval in English passive and active sentences. Talk given at Mental Architecture for Processing and Learning of Language 2014, Tokyo, Japan. August 12-13. [pdf]

Momma, S., Slevc, L. R., & Phillips, C. (2014). The effect of syntactic category on advance planning in sentence production. Poster presented at the 27th annual CUNY Conference on Human Sentence Processing, Columbus, OH. March 13-15. [pdf]

Momma, S., Slevc, L. R., & Phillips, C. (2013). Advance selection of verbs in head-final language production. Poster presented at the 26th annual CUNY Conference on Human Sentence Processing, Columbia, SC. March 21-23. [replaced by the paper above]

Invited Talks 

Momma, S. (2016). Aligning Generation and Parsing. Talk given at the 9th International Workshop on Language Production (IWLP 2016), San Diego, CA, July 27.

Momma, S. (2016). Prediction as memory retrieval. Talk given at Northwestern University, Department of Linguistics. Evanston, IL, March 10.

Momma, S. (2016). How awful are verb-final languages for production? Talk given at Northwestern University, Department of Linguistics. Evanston, IL, March 11.

Momma, S. (2015). Fast and Slow Linguistic Prediction. Talk given at the Waseda University 1st BLIT Colloquium, Tokyo, Japan. July 28.

Momma, S. (2014). Incrementality and Advance Planning in Sentence Production. Talk given at the Hiroshima University 91st Kagamiyama Language Science Colloquium, Hiroshima, Japan. July 17.