I am interested in how the slow acquisition of abstract patterns from speech/text can be facilitated by manipulating factors like token frequency ("How often do you encounter this particular form?"), type frequency ("How many different candidate forms do you usually see in this particular sentence context?"), and input skewness ("For this kind of sentence context, to what extent is the input disproportionately oriented towards a specific form?").
Beyond informing psycholinguistic theory, this work can provide tangible suggestions for language educators by addressing questions such as: Does encountering a form more frequently necessarily make it easier to recognize and produce? Is it better to teach new grammar using a wide variety of words in the examples, or by sticking to a few familiar vocabulary items? Should teachers approximate the input frequencies encountered "in the real world," or is it possible to tailor classroom input to facilitate L2 acquisition? The answers to such questions would be relevant at the level of designing a curriculum, at the level of planning a class activity, and even at the level of adjusting how a teacher uses the target language when speaking face-to-face with their students.
More recently, I have become interested in how L2 morphosyntax acquisition is affected by implicit/unintentional/non-attentive and explicit/willful/attentive processes, which can potentially be pulled apart from the EEG signal using new decoding methods. The ultimate goal is to find the trade-off between deliberate rule-based learning and sheer practice/exposure that is most conducive to language learning, such that individual students can deliberately change the way they pay attention to real-time speech/text in order to maximize comprehension (and, thus, long-term learning) of the L2. If we can tailor these kinds of recommendations to individuals' particular cognitive profiles (which may vary along dimensions of working memory, executive control, etc.), then even better!
Beyond EEG methods, I am also interested in corpus-based methods. Recent projects include: