All models are wrong, but some are useful.
— George Box

PROJECT AIMS

Wide Incremental learning with Discrimination nEtworks

Principal Investigator: R. Harald Baayen (Professor of Quantitative Linguistics)

This five-year project aims to deepen our understanding of how we produce and understand words in everyday speech.

It is almost universally assumed that language use involves a form of mental calculus, in which alphabets of elementary symbols and rules define well-formed sequences. This calculus is usually believed to operate at two distinct levels, the level of phonology and the level of morphology and syntax. The phonological alphabet consists of letter-like units of sound called phonemes. Strings of phonemes build the atomic meaningful units of the language, known as morphemes. Rules and constraints define which sequences of phonemes can form legal morphemes. These morphemes in turn comprise the alphabet of a second calculus, with morphological and syntactic rules defining the legal sequences of morphemes (and thus the words and sentences of a language).

This pairing of a meaning-free phonological calculus with a morpheme-based morphological and syntactic calculus is widely regarded as a fundamental design feature of language, one that structuralist linguistics referred to as the dual articulation of language. Psychologists have followed linguists in positing that phonemes and morphemes exist as real mental units, and a large body of research has sought to show how these units are strung together in production and how, in comprehension, visual or auditory input is first segmented into these elementary units, which are subsequently re-assembled into hierarchical structures.

In this project, we are investigating whether the comprehension and production of words truly require sub-word units such as phonemes and morphemes. The realization of phonemes is known to vary tremendously with the context in which they occur: for distinguishing a 'p' from a 't' or a 'k', for example, changes in the first and second formants of adjacent vowels are crucial. Furthermore, the theoretical construct of the morpheme, as the smallest linguistic sign, is perhaps attractive for agglutinating languages such as Turkish, but is not helpful at all for understanding the structure of words in fusional languages such as Latin. The central hypothesis under investigation in this project is that the relation between words' forms and their meanings can be modeled computationally in an insightful and cognitively valid way without using the theoretically problematic constructs of the phoneme and the morpheme.

Recent advances in machine learning and natural language engineering have shown that much can be achieved without these constructs. How far current natural language processing technology has moved away from concepts in classical (psycho)linguistic theory is exemplified by Hannun et al. (2014), who announced that they "... do not need a phoneme dictionary, nor even the concept of a 'phoneme' ". Importantly, the construct of the morpheme has also been heavily criticized within theoretical morphology. For inflectional morphology, many scientists now agree that inflectional features (such as person, number, and tense) are realized in sound without there being a one-to-one mapping between bits of sound and individual feature values. In fact, one morphological theory, Word and Paradigm Morphology (Blevins, 2016), holds that words, and not sublexical units such as stems and affixes, are the fundamental units. According to this theory, proportional analogies between whole words drive morphological cognition.

The first goal of the WIDE project is to show that indeed the relation between words' forms and meanings can be computationally modeled without using phonemes and morphemes. In other words, we aim to develop a computational implementation of Word and Paradigm Morphology that provides, at the functional level, a cognitively valid characterization of the comprehension and production of complex words.

The second goal of the WIDE project is to clarify how much progress can be made with, and what the limits are of, wide learning networks, i.e., networks with very large numbers of input and output nodes, but no hidden layers. The mathematics of these networks is well understood: from a statistical perspective, wide learning is closely related to multivariate multiple regression. In this respect, wide learning differs from deep learning. Deep learning networks, however impressive their performance, are still largely black boxes when it comes to understanding why they work, and how exactly they work for a given problem.
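The core computation is easy to make concrete. The following toy sketch (with made-up data; the project's actual cue and semantic representations are far richer) shows a wide network as a single linear mapping from form vectors to semantic vectors, estimated exactly as in multivariate multiple regression:

```python
import numpy as np

# Toy data: 4 words, 5 binary form features (cues), 3 semantic dimensions.
# Both matrices are invented for illustration.
C = np.array([[1, 0, 1, 0, 0],
              [0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1]], dtype=float)   # form (cue) vectors
S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])                # semantic vectors

# A wide network has no hidden layers: predicted semantics are simply C @ W.
# W is the least-squares solution, as in multivariate multiple regression.
W, *_ = np.linalg.lstsq(C, S, rcond=None)

print(np.allclose(C @ W, S))  # the mapping reproduces the toy data: True
```

Because the whole network is a single weight matrix, every connection weight can be inspected directly, which is the interpretational transparency at issue here.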

There are three main reasons for studying wide networks. The first is their interpretational transparency. The second is that they turn out to perform surprisingly well, especially when their input and output features are carefully designed against the background of what we know about language and the brain. The third is that if we can show that wide networks can perform speech production and language comprehension with a high degree of accuracy, similar to that of listeners and speakers, then we have the strongest possible proof for the existence of algorithms that can carry out comprehension and production without the help of phoneme and morpheme units. This is of crucial importance in the context of deep learning networks. The units on the hidden layers of deep learning networks applied to natural language processing have been interpreted as "fuzzy" variants of phonemes and morphemes, and hence as evidence that the classical hierarchical linguistic models must be correct after all. For instance, Hannagan et al. (2014) proposed a deep learning network explaining lexical learning in baboons, and attributed hidden units at various levels of granularity to different parts of the ventral pathway in the primate brain. However, as shown by Linke et al. (2017), much better predictions of baboon learning behavior are obtained with a wide learning network.

REFERENCES

Arnold, D., Tomaschek, F., Sering, K., Lopez, F., and Baayen, R. H. (2017). Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit. PLoS ONE 12(4): e0174623, 1-16.

Baayen, R. H., Chuang, Y. Y., Shafaei-Bajestan E., and Blevins, J. P. (2019). The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity, 2019, 1-39.

Birkholz, P. (2013). Modeling consonant-vowel coarticulation for articulatory speech synthesis. PLoS ONE, 8.

Blevins, J. P. (2016). Word and paradigm morphology. Oxford University Press.

Hannagan, T., Ziegler, J. C., Dufau, S., Fagot, J., and Grainger, J. (2014). Deep learning of orthographic representations in baboons. PLoS ONE, 9.

Hannun, A., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., Prenger, R., Satheesh, S., Sengupta, S., Coates, A., et al. (2014). Deep speech: Scaling up end-to-end speech recognition. arXiv:1412.5567.

Linke, M., Bröker, F., Ramscar, M., and Baayen, R. H. (2017). Are baboons learning "orthographic" representations? Probably not. PLoS ONE, 12 (8): e0183876.

Shafaei-Bajestan, E., and Baayen, R. H. (2018). Wide Learning for Auditory Comprehension. In Yegnanarayana, B. (Chair) Proceedings of Interspeech 2018, 966-970. Hyderabad, India: International Speech Communication Association (ISCA).

PROJECT

Project Packages

The WIDE research programme comprises three subprojects, one addressing language comprehension, one addressing speech production, and one focusing on how to best model word use and lexical semantics.

A synthesis of some recent results is presented in Baayen et al. (2019). An outreach article on this and related research carried out in the quantitative linguistics lab is available in the leading science communication publication, Scientia.

LANGUAGE COMPREHENSION

The project on language comprehension focuses on the understanding of natural spontaneous speech. Building on previous work on speech comprehension (Arnold et al. 2017), we are studying auditory word recognition with wide learning.

Wide learning networks trained on low-level acoustic features extracted from the audio signal of words occurring in corpora of spontaneous speech (such as the vast repository of multimodal TV news broadcasts of the Distributed Little Red Hen Lab) perform surprisingly well, outperforming deep learning networks on the task of isolated word recognition by a factor of two (Shafaei-Bajestan & Baayen, 2018). Deep learning networks, however, are amazingly good at recognizing words in continuous speech, and an important challenge for this project is to show that wide learning can also be made to work for continuous speech.

SPEECH PRODUCTION

The project on speech production addresses the question of how to model the learning of articulation.

We have started working with the Vocal Tract Lab (VTL) model developed by Birkholz and collaborators at the TU Dresden. VTL provides a 3-dimensional model of the vocal tract, and generates speech sounds based on simulated articulator and vocal fold motion. The model has 20 parameters, including parameters for velic opening, horizontal jaw position, tongue root position, parameters for tongue body and tongue tip, lip parameters, and velum shape. The challenge here is to learn how to modulate these parameters over time to produce words, given the lexical semantics to be expressed and the feedback the speaker receives from the audio signal and her own articulators.
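The learning problem can be illustrated with a deliberately simplified sketch. Here a fixed linear map stands in for the synthesizer (the real VTL is a nonlinear 3D simulation, and none of the names below come from its API), and a trajectory of control parameters is adjusted by error-driven updates until the simulated acoustic output matches a target:

```python
import numpy as np

# Toy illustration of the control problem (not the actual VTL interface):
# iteratively adjust a trajectory of articulatory parameters so that a
# stand-in linear "synthesizer" A maps it onto a target acoustic trajectory.
A = np.array([[1.0, 0.0],       # hypothetical 3-parameter -> 2-feature map
              [0.0, 1.0],
              [0.5, 0.5]])
target = np.array([[0.2, 0.8],  # desired acoustic features at 3 time steps
                   [0.5, 0.5],
                   [0.9, 0.1]])

params = np.zeros((3, 3))       # time steps x articulatory parameters
lr = 0.1
for _ in range(500):
    error = params @ A - target     # acoustic feedback: prediction error
    params -= lr * error @ A.T      # error-driven update of the trajectory

print(np.abs(params @ A - target).max() < 1e-6)  # trajectory converged: True
```

In the actual project the synthesizer is the nonlinear vocal tract simulation, and the feedback includes the audio signal and simulated articulator states rather than a simple linear prediction error.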

SPEECH IN CONTEXT

The third project is concerned with how to represent words' meanings, and how to model the effect of words' contexts on comprehension and production.

Spoken words can be very difficult to make sense of without context. For instance, in conversational German, 'wuerden' ('would') is often realized as 'wuen' instead of 'wuerdn'; in spontaneous Dutch, 'natuurlijk' ('of course') reduces to 'tuuk'; English 'hilarious' becomes 'hlεrəs'; and in Mandarin informal speech, all that may be left of the three-syllable word '要不然' ('jaʊpuʐan', 'otherwise') is 'ʊɪ'. We will therefore examine different statistical models that predict words' probabilities given their context, as these will be informative for extending our current system for auditory comprehension so that it can deal not only with single-word recognition but also with the understanding of continuous speech. A better understanding of the role of context is also essential for modeling how exactly words are articulated. This project also addresses the question of the optimal representation of words' meanings, focusing on the meanings of morphologically complex words on the one hand, and exploring the potential of wide learning networks on the other.
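As a minimal illustration of such context models (not the project's actual model), a bigram model estimates a word's probability given the immediately preceding word from corpus counts:

```python
from collections import Counter

# Toy corpus; in practice the counts come from large speech corpora.
corpus = "of course of course it is of course".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
context_counts = Counter(corpus[:-1])        # counts of each context word

def p_next(word, context):
    """P(word | context) estimated by relative frequency."""
    if context_counts[context] == 0:
        return 0.0
    return bigrams[(context, word)] / context_counts[context]

print(p_next("course", "of"))  # 'course' always follows 'of' here: 1.0
```

Richer models condition on longer histories or on semantic context, but they all supply the same quantity: an expectation for the upcoming word that can compensate for heavily reduced acoustic input.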

ERC-WIDE

Publications

ARTICLES

Baayen, R. H., Fasiolo, M., Wood, S., Chuang, Y.-Y. (2022). A note on the modeling of the effects of experimental time in psycholinguistic experiments. The Mental Lexicon, 1-35.

Denistia, K., Shafaei-Bajestan, E. and Baayen, R. H. (2021). Exploring semantic differences between the Indonesian prefixes PE- and PEN- using a vector space model. Corpus Linguistics and Linguistic Theory, 1-26.

Heitmeier, M., Chuang, Y-Y., Baayen, R. H. (2021). Modeling morphology with Linear Discriminative Learning: considerations and design choices. Frontiers in Psychology, 12, 4929.

Nixon, J. S., and Tomaschek, F. (2021). Prediction and error in early infant speech learning: A speech acquisition model. Cognition, 212, 1-15.

Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H. (2021). LDL-AURIS: A computational model, grounded in error-driven learning, for the comprehension of single spoken words. Language, Cognition and Neuroscience, 1-28.

Sun, K., Wang, R., and Xiong, W. (2021). Investigating genre distinctions through discourse distance and discourse network. Corpus Linguistics and Linguistic Theory, 1-26.

Tomaschek, F., Tucker, B.V., Ramscar, M., and Baayen, R. H. (2021). Paradigmatic enhancement of stem vowels in regular English inflected verb forms. Morphology, 1-29.

Baayen, R. H., and Smolka, E. (2020). Modeling morphological priming in German with naive discriminative learning. Frontiers in Communication, section Language Sciences, 1-40.

Chuang, Y-Y., Bell, M. J., Banke, I., and Baayen, R. H. (2020). Bilingual and Multilingual Mental Lexicon: A Modeling Study With Linear Discriminative Learning. Language Learning, 1-73.

Chuang, Y-Y., Vollmer, M-l., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H. (2020). The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using Linear Discriminative Learning. Behavior Research Methods, 1-51.

Linke, M., and Ramscar, M. (2020). How the Probabilistic Structure of Grammatical Context Shapes Speech. Entropy, 22(1):90, 1-23.

Nixon, J. S. (2020). Of mice and men: Speech sound acquisition as discriminative learning from prediction error, not just statistical tracking. Cognition, 197, 104081, 1-16.

Sun, K., and Baayen, R. H. (2020). Hyphenation as a compounding technique in English. Language Sciences, 83, 1-20.

Sun, K., Liu, H., and Xiong, W. (2020). The evolutionary pattern of language in scientific writings: A case study of Philosophical Transactions of Royal Society (1665–1869). Scientometrics, 1-30.

Baayen, R. H. (2019). Understanding and producing words with high-school maths. Open Access Government, 23, ICT, 424-425.

Baayen, R. H. (2019). Are You Listening? Teaching a Machine to Understand Speech. Scientia, 2019, 1-5.

Baayen, R. H., Chuang, Y. Y., Shafaei-Bajestan E., and Blevins, J. P. (2019). The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity, 2019, 1-39.

Cassani, G., Chuang, Y. Y., and Baayen R. H. (2019). On the semantics of non-words and their lexical category. Journal of Experimental Psychology. Learning, Memory and Cognition, July 18, 1-49.

Hendrix, P., Ramscar, M., and Baayen, R. H. (2019). NDRA: A single route model of response times in the reading aloud task based on discriminative learning. PLoS ONE, 14 (7), e0218802.

Baayen, R. H., Chuang, Y. Y., and Blevins, J. P. (2018). Inflectional morphology with linear mappings. The Mental Lexicon, 13 (2), 232-270.

Sering, K., Milin, P., and Baayen, R. H. (2018). Language comprehension as a multi-label classification problem. Statistica Neerlandica, 72, 339-353.

Sun, K., and Wang, R. (2018). Frequency distributions of punctuation marks in English: Evidence from large-scale corpora. English Today, 35 (4), 23-35.

BOOK CHAPTERS

Chuang, Y.-Y., Lõo, K., Blevins, J. P., and Baayen, R. H. (2020). Estonian case inflection made simple. A case study in Word and Paradigm morphology with Linear Discriminative Learning. In Körtvélyessy, L., and Štekauer, P. (Eds.) Complex Words: Advances in Morphology, (pages 119–141).

Pirrelli, V., Marzi, C., Ferro, M., Cardillo, F. A., Baayen, R. H, and Milin, P. (2020). Psycho-computational modelling of the mental lexicon. In Pirrelli, V., Plag, I., and Dressler, W. U. (Eds.) Word Knowledge and Word Usage. A Cross-Disciplinary Guide to the Mental Lexicon, (pages 23-82).

CONFERENCE PAPERS

Nixon, J. S., Poelstra, S., and Rij, J. van (2022). Does error-driven learning occur in the absence of cues? Examination of the effects of updating connection weights to absent cues. In Culbertson, J., Perfors, A., Rabagliati, H., and Ramenzoni, V. (Eds.), Proceedings of the 44th Annual Meeting of the Cognitive Science Society, virtual meeting, USA, 2590-2597. Merced, USA: eScholarship.

Saito, M., Tomaschek, F., and Baayen, R. H. (2021). Relative functional load determines co-articulatory movements of the tonguetip. In Tiede, M., Whalen, D. H., and Gracco, V. (Eds.) Proceedings of the 12th International Seminar on Speech Production (ISSP 2020), virtual meeting, USA, 210-213. New Haven, USA: Haskins Press.

Schmidt-Barbo, P., Shafaei-Bajestan, E., and Sering, K. (2021). Predictive articulatory speech synthesis with semantic discrimination. In Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2021, Berlin, Germany, 177-184. Dresden, Germany: TUDpress.

Sering, K., Saito, M., and Tomaschek, F. (2021). Anticipatory coarticulation in predictive articulatory speech modeling. In Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2021, Berlin, Germany, 208-215. Dresden, Germany: TUDpress.

Sering, K., Schmidt-Barbo, P., Otte, S., Butz, M. V., and Baayen, R. H. (2021). Recurrent Gradient-based Motor Inference for Speech Resynthesis with a Vocal Tract Simulator. In Tiede, M., Whalen, D. H., and Gracco, V. (Eds.) Proceedings of the 12th International Seminar on Speech Production (ISSP 2020), virtual meeting, USA, 72-75. New Haven, USA: Haskins Press.

Nixon, J. S., and Tomaschek, F. (2020). Learning from the acoustic signal: Error-driven learning of low-level acoustics discriminates vowel and consonant pairs. In Denison, S., Mack, M., Xu, Y., and Armstrong, B. C. (Eds.) Proceedings of the 42nd Annual Conference of the Cognitive Science Society, Toronto, Canada, 585-591. Austin, USA: Cognitive Science Society.

Sering, K., Tomaschek, F. (2020). Comparing KEC Recordings with Resynthesized EMA Data. In Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2020, Magdeburg, Germany, 77-84. Dresden, Germany: TUDpress.

Chuang, Y. Y., Sun, C. C., Fon, J., and Baayen, R. H. (2019). Geographical variation of the merging between dental and retroflex sibilants in Taiwan Mandarin. In Calhoun, S., Escudero, P., Tabain, M. and Warren, P. (Eds.) Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia, 274-276. Canberra, Australia: Australasian Speech Science and Technology Association Inc.

Chuang, Y. Y., Vollmer, M. L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H. (2019). On the processing of nonwords in word naming and auditory lexical decision. In Calhoun, S., Escudero, P., Tabain, M. and Warren, P. (Eds.) Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia, 1432-1436. Canberra, Australia: Australasian Speech Science and Technology Association Inc.

Denistia, K., Shafaei-Bajestan, E., and Baayen, R. H. (2019). Semantic Vector Model on the Indonesian Prefixes pe- and peN-. In Proceedings of the 11th International Conference on the Mental Lexicon, Edmonton, Canada, 1-4. Edmonton, Canada: ERA (Education and Research Archive).

Sering, K., Stehwien, N., Gao, Y., Butz, M. V., and Baayen, R. H. (2019). Resynthesizing the GECO speech corpus with VocalTractLab. In Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2019, Dresden, Germany, 95-102. Dresden, Germany: TUDpress.

Boll-Avetisyan, N., Nixon, J. S., Lentz, T. O., Liu, L., van Ommen, S., Çöltekin, Ç., and van Rij, J. (2018). Neural response development during distributional learning. In Yegnanarayana, B., et al. (Eds.) Proceedings of Interspeech 2018, 1432-1436. Hyderabad, India: International Speech Communication Association (ISCA).

Nixon, J. S. (2018). Effective acoustic cue learning is not just statistical, it is discriminative. In Yegnanarayana, B., et al. (Eds.) Proceedings of Interspeech 2018, 1447-1451. Hyderabad, India: International Speech Communication Association (ISCA).

Shafaei-Bajestan, E., and Baayen, R. H. (2018). Wide Learning for Auditory Comprehension. In Yegnanarayana, B. (Chair) Proceedings of Interspeech 2018, 966-970. Hyderabad, India: International Speech Communication Association (ISCA).

SOFTWARE

Sering, K., Weitz, M., Künstle, D.-E., and Schneider, L. (2020). Pyndl: Naive discriminative learning in python. Genève, Switzerland: Zenodo.

Sering, K., Stehwien, N., and Gao, Y. (2019). Create_vtl_corpus: Synthesizing a speech corpus with VocalTractLab. Genève, Switzerland: Zenodo.


Project Presentations

2022

Baayen, R. H., Modeling lexical processing with linear mappings, International Seminar on Language Culture and Cognition (part of the series from the National Coordination of the National Institute for Anthropology and History), Mexico City, Mexico, May 31, 2022 (virtual keynote).

Baayen, R. H., Heitmeier, M., and Chuang, Y.-Y., Word learning never stops - evidence from computational modeling, Colloquium Research Training Group "Dynamics and stability of linguistic representations", Marburg, Germany, May 20, 2022.

Chuang, Y.-Y., Modeling variation with Generalized Additive Mixed Models (GAMMS), Statistics Workshop "Modelling Diversity in Language and Cognition", Freiburg, Germany, May 5, 2022 (training).

Linke, M., Empirical Distributions in Conversational Speech, Czech National Corpus, Prague, Czech Republic, April 26, 2022.

Baayen, R. H., Understanding what word embeddings understand, Groningen Spring School Cognitive Modeling, Groningen, Netherlands, April 7, 2022 (keynote).

Baayen, R. H., Chuang, Y.-Y., and Heitmeier, M., Discriminative Learning and the Lexicon: NDL and LDL, Groningen Spring School Cognitive Modeling, Groningen, Netherlands, April 4 - 8, 2022 (training).

Baayen, R. H., Modeling lexical processing with linear mappings, Surrey Linguistics Circle, Guildford, UK, March 29, 2022 (virtual talk).

Baayen, R. H., Modeling lexical processing with linear mappings, UCL (University College London) Language & Cognition seminar series, London, UK, March 16, 2022 (virtual talk).

Sering, K., and Schmidt-Barbo, P., Articubench – An articulatory speech synthesis benchmark, 33rd Conference on Electronic Speech Signal Processing (ESSV 2022), Sønderborg, Denmark, March 2, 2022 (virtual talk).

Schmidt-Barbo, P., and Sering, K., Using semantic embeddings to start and plan articulatory speech synthesis, 33rd Conference on Electronic Speech Signal Processing (ESSV 2022), Sønderborg, Denmark, March 2, 2022 (virtual talk).

Linke, M., and Ramscar, M., How communicative constraints shape the structure of lexical distributions, 44th Annual Conference of the German Linguistic Society (DGfS 2022), Tübingen, Germany, February 23, 2022 (virtual talk).

Baayen, R. H., Shafaei-Bajestan, E., Chuang, Y.-Y., and Heitmeier, M., Productivity in inflection, 44th Annual Conference of the German Linguistic Society (DGfS 2022), Tübingen, Germany, February 23, 2022 (virtual talk).

Baayen, R. H., and Gahl, S., Time and thyme again: Connecting spoken word duration to models of the mental lexicon, Morphology in Production and Perception (MPP2022), Düsseldorf, Germany, February 7, 2022 (virtual talk).

Baayen, R. H., Chuang, Y.-Y., Hsieh, S.-K., Tseng, S., Chen, J., and Shen, T., Conceptualising for compounding: Mandarin two-syllable compounds and names, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 18, 2022 (virtual talk).

Shahmohammadi, H., Heitmeier, M., Lensch, H., Shafaei-Bajestan, E., and Baayen, R. H., Visual grounding of word embeddings, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 18, 2022 (virtual talk).

Shen, T., and Baayen, R. H., Productivity and semantic transparency: An exploration of compounding in Mandarin, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 18, 2022 (virtual talk).

Brown, D., Chuang, Y.-Y., Evans, R., and Baayen, R. H., Case and number in Russian, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 17, 2022 (virtual talk).

Nikolaev, A., Chuang, Y.-Y., and Baayen, R. H., Case and number in Finnish, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 17, 2022 (virtual talk).

Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H., Semantic properties of English nominal pluralization: Insights from word embeddings, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 17, 2022 (virtual talk).

Stupak, I., and Baayen, R. H., An inquiry into the productivity of German particle verbs, Workshop on Morphology and Word Embeddings, Tübingen/York, Germany/UK, January 17, 2022 (virtual talk).

Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H., Semantic properties of English nominal pluralization: Insights from word embeddings, FOR 2373 colloquium, Düsseldorf, Germany, January 14, 2022 (virtual talk).

2021

Baayen, R. H., Explorations into gesture, 2021 International Conference on Multimodal Communication: Emerging Computational and Technical Methods (ICMC2021), Changsha, China, December 11, 2021 (virtual talk).

Heitmeier, M., Chuang, Y.-Y., and Baayen, R. H., Modeling German nonword plural productions with Linear Discriminative Learning, Words in the World 2021, Montreal, Canada, November 26, 2021 (virtual poster presentation).

Shafaei-Bajestan, E., Moradipour-Tari, M., Uhrig, P., and Baayen, R. H., Inflectional analogies with word embeddings: there is more than the average, Words in the World 2021, Montreal, Canada, November 26, 2021 (virtual talk).

Baayen, R. H., and Chuang, Y. Y., An introduction to data analysis with the generalized additive model, Mannheim, Germany, November 8-9, 2021 (virtual training).

Baayen, R. H., and Chuang, Y.-Y., Modeling morphology with multivariate multiple regression, Workshop Recent Approaches to the Quantitative Study of Language: Rules and Un-rules, Neuchatel, Switzerland, October 14, 2021 (virtual talk).

Linke, M., and Ramscar, M., The communicative efficiency in conversational English, QUALICO 2021, Tokyo, Japan, September 9, 2021 (virtual talk).

Shen, T., and Baayen, R. H., Productivity and semantic transparency: An exploration of compounding in Mandarin, Workshop Perspectives on productivity, Leuven, Belgium, May 26, 2021 (virtual talk).

Baayen, R. H., Chuang, Y. Y., Luo, X., Heitmeier, M., and Shafaei-Bajestan, E., An introduction to vector space morphology and morphological processing using linear discriminative learning (LDL), Edmonton, Canada, May 17-21, 2021 (virtual training).

Baayen, R. H., Chuang, Y. Y., and Hendrix, P., An introduction to data analysis with the generalized additive model, Edmonton, Canada, May 10-14, 2021 (virtual training).

Sering, K., Predictive articulatory speech synthesis utilizing lexical embeddings (paule), Spoken Morphology Colloquium, Düsseldorf, Germany, April 16, 2021 (virtual talk).

Baayen, R. H., and Gahl, S., Thyme and time again: Semantics all the way down, Internal Workshop FOR2373, Düsseldorf, Germany, March 18, 2021 (virtual talk).

Linke, M., and Ramscar, M., How Distributional Context Solves the Variance Problem in Speech Sampling, International Conference on Error-Driven Learning in Language (EDLL 2021), Tübingen, Germany, March 11, 2021 (virtual poster presentation).

Luo, X., Chuang, Y-Y., and Baayen, R. H., Linear Discriminative Learning in Julia, International Conference on Error-Driven Learning in Language (EDLL 2021), Tübingen, Germany, March 11, 2021 (virtual poster presentation).

Nixon, J. S., and Tomaschek, F., Infant speech acquisition through error-driven learning of the acoustic speech signal, International Conference on Error-Driven Learning in Language (EDLL 2021), Tübingen, Germany, March 11, 2021 (virtual poster presentation).

Poelstra, S., Nixon, J. S., and Rij, J. van, Does learning occur in the absence of cues?, International Conference on Error-Driven Learning in Language (EDLL 2021), Tübingen, Germany, March 11, 2021 (virtual talk).

Schmidt-Barbo, P., Shafaei-Bajestan, E., and Sering, K., Predictive articulatory speech synthesis with semantic discrimination, the 32nd Conference on Electronic Speech Signal Processing (ESSV 2021), Berlin, Germany, March 4, 2021 (virtual talk).

Sering, K., Saito, M., and Tomaschek, F., Anticipatory coarticulation in predictive articulatory speech modeling, the 32nd Conference on Electronic Speech Signal Processing (ESSV 2021), Berlin, Germany, March 4, 2021 (virtual poster presentation).

2020

Saito, M., Tomaschek, F., and Baayen, R. H., Relative functional load determines co-articulatory movements of the tonguetip, the 12th International Seminar on Speech Production (ISSP 2020), New Haven, USA, December 18, 2020 (virtual poster presentation).

Sering, K., Schmidt-Barbo, P., Otte, S., Butz, M. V., and Baayen, R. H., Recurrent Gradient-based Motor Inference for Speech Resynthesis with a Vocal Tract Simulator, the 12th International Seminar on Speech Production (ISSP 2020), New Haven, USA, December 14, 2020 (virtual poster presentation).

Baayen, R. H., A multivariate multiple regression approach to the mental lexicon, the 28th International Conference on Computational Linguistics (COLING’2020), Barcelona, Spain, December 8, 2020 (virtual invited talk).

Baayen, R. H., Quantitative Cognitive Linguistics, Hindustan Institute of Technology & Science, Chennai, India, November 23, 2020 (virtual talk).

Baayen, R. H., A discriminative perspective on learning a new language, 7th International Scientific Interdisciplinary Conference on Research and Methodology, Moscow, Russia, November 20, 2020 (virtual keynote).

Baayen, R. H., and Chuang, Y.-Y., How long you make your words crucially depends on their meanings, PACLIC 2020 - The 34th Pacific Asia Conference on Language, Information and Computation, Hanoi, Vietnam, October 24, 2020 (virtual talk).

Linke, M., and Ramscar, M., How the Empirical Distribution of Words Solves the Variability Problem in Child-Directed Speech, Many Paths To Language, Nijmegen, Netherlands, October 23, 2020 (virtual talk).

Li, J., Chuang, Y.-Y., and Baayen, R. H., Tonal (ir)regularity and word frequency in Mandarin bisyllabic compounds, Words in the World International Conference 2020, St. Catharines, Canada, October 18, 2020 (virtual talk).

Saito, M., Tomaschek, F., and Baayen, R. H., Co-articulation between stem vowels and suffixes: semantics all the way down, Words in the World International Conference 2020, St. Catharines, Canada, October 18, 2020 (virtual talk).

Luo, X., Chuang, Y.-Y., and Baayen, R. H., Implementation of Linear Discriminative Learning in Julia, Words in the World International Conference 2020, St. Catharines, Canada, October 16, 2020 (virtual talk).

Baayen, R. H., Chuang, Y.-Y., and Shafaei-Bajestan, E., Using discriminative learning to model comprehension and production of inflectional morphology without morphemes and without inflectional classes, Workshop How to fill a cell: computational approaches to inflectional morphology, Sheffield, United Kingdom, September 16, 2020 (virtual talk).

Nixon, J. S., and Tomaschek, F., Learning from the acoustic signal: Error-driven learning of low-level acoustics discriminates vowel and consonant pairs, 42nd Annual Conference of the Cognitive Science Society, Toronto, Canada, July 29, 2020 (virtual talk).

Nixon, J. S., Cue weighting as a result of cue competition and prediction error. Oral presentation at Cue weighting: thinking outside the box, Satellite workshop of LabPhon, Vancouver, Canada, July 5, 2020.

Nixon, J. S., and Tomaschek, F., Development of first language cue weights from error-driven learning of the speech signal. Poster presentation at Cue weighting: thinking outside the box, Satellite workshop of LabPhon, Vancouver, Canada, July 5, 2020.

Baayen, R. H., A blueprint for discriminative learning of simple utterances, TÜling Linguistics Lectures, Tartu, Estonia, May 5, 2020 (virtual talk).

Baayen, R. H., and Chuang, Y. Y., Statistics and computational modeling, Tartu, Estonia, May 4-7, 2020 (virtual training).

Sering, K., and Tomaschek, F., Comparing KEC Recordings with Resynthesized EMA Data, Conference Elektronische Sprachsignalverarbeitung (ESSV 2020), Magdeburg, Germany, March 5, 2020.

Tomaschek, F. and Nixon, J. S., Learning theory as linguistic theory, Linguistic Evidence 2020, Tübingen, Germany, February 13, 2020.

Chuang, Y.-Y., Lõo, K., Blevins, J. P., and Baayen, R. H., Estonian case inflection made simple. A case study in Word and Paradigm morphology with Linear Discriminative Learning, IMM19: PsyComMT, Vienna, Austria, February 7, 2020.

Heitmeier, M., and Baayen, R. H., Simulating phonological and semantic impairment of English tense inflection with linear discriminative learning, IMM19: PsyComMT, Vienna, Austria, February 7, 2020.

Shafaei-Bajestan, E., and Baayen, R. H., Wide learning of the comprehension of morphologically complex words: from audio signal to semantics, IMM19: PsyComMT, Vienna, Austria, February 7, 2020.

2019

Baayen, R. H., A dynamic approach to lexical processing, The 1st NTü Linguistic Workshop, Taipei, Taiwan, November 28, 2019 (keynote).

Chuang, Y. Y., Do nonwords have meaning? Making sense of nonwords with linear discriminative learning, The 1st NTü Linguistic Workshop, Taipei, Taiwan, November 28, 2019 (invited).

Sering, K., Learning vocal tract control parameters to synthesize speech, The International Morphological Processing Conference (MoProc 2019), Tübingen, Germany, November 7, 2019.

Baayen, R. H., and Smolka, E., Modeling morphological priming in German with naive discriminative learning, The International Morphological Processing Conference (MoProc 2019), Tübingen, Germany, November 5, 2019.

Nixon, J. S., Ramscar, M., and Tomaschek, F., The emergence of morphological structure from a continuous signal on the basis of error-driven learning, The International Morphological Processing Conference (MoProc 2019), Tübingen, Germany, November 4, 2019.

Baayen, R. H., Construction morphology, linear discriminative learning, and cognitive reality, workshop The Constructionist Challenge – empirical and theoretical aspects, Erlangen, Germany, October 18, 2019 (invited).

Sering, K., Learning vocal tract control parameters to synthesize speech, Neural Information Processing Group, Tübingen, Germany, October 15, 2019.

Chuang, Y. Y., Workshop Linear Discriminative Learning, Spoken Morphology: Phonetics and phonology of complex words (DFG Research Unit FOR 2373), Düsseldorf, Germany, September 23-24, 2019 (invited).

Nixon, J. S., and Tomaschek, F., Infant speech sound acquisition as error-driven discriminative learning of the speech signal, The 25th Architectures and Mechanisms of Language Processing Conference (AMLaP 2019), Moscow, Russia, September 6, 2019.

Chuang, Y. Y., Analyzing speech data with GAMs, Colloquium National Taiwan University, Taipei, Taiwan, August 22, 2019 (invited).

Chuang, Y. Y., Vollmer, M. L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H., On the processing of nonwords in word naming and auditory lexical decision, International Congress of Phonetic Sciences (ICPhS2019), Melbourne, Australia, August 8, 2019.

Chuang, Y. Y., Sun, C. C., Fon, J., and Baayen, R. H., Geographical variation of the merging between dental and retroflex sibilants in Taiwan Mandarin, International Congress of Phonetic Sciences (ICPhS2019), Melbourne, Australia, August 5, 2019.

Nixon, J. S., and Tomaschek, F., Learning speech cues from the input, Interdisciplinary Advances in Statistical Learning, Donostia San Sebastian, Spain, June 27, 2019.

Tomaschek, F., and Nixon, J. S., Emerging Structures in Random Data Result in Naive Learning, Psycholinguistics in Iceland – Parsing and Prediction, Reykjavik, Iceland, June 20, 2019.

Baayen, R. H., Wide learning in language modeling, Colloquium ICCLS - Interdisciplinary Centre for Cognitive Language Studies, München, Germany, June 17, 2019.

Sun, K., A Regression Model for Simulating and Predicting the Use of Periods by Chinese Natives, Interpunktion international, Regensburg, Germany, May 4, 2019.

Baayen, R. H., Throwing off the shackles of the morpheme with simple linear transformations, Colloquium for Computational Linguistics and Linguistics in Stuttgart, Stuttgart, Germany, April 29, 2019 (invited).

Chuang, Y. Y., Making sense of auditory nonwords, Groningen Spring School on Cognitive Modeling, Groningen, The Netherlands, April 11, 2019 (keynote).

Baayen, R. H., Wide learning in language modeling, Vienna University of Economics and Business, Vienna, Austria, March 15, 2019 (invited).

Sering, K., Stehwien, N., Gao, Y., Butz, M. V., and Baayen, R. H., Resynthesizing the GECO speech corpus with VocalTractLab, 30th Conference on Electronic Speech Signal Processing (ESSV), Dresden, Germany, March 7, 2019.

Baayen, R. H., and Chuang, Y. Y., The WpmWithLdl package for R: A tutorial introduction to modeling with linear discriminative learning, Cambridge, UK, February 15, 2019 (training).

Chuang, Y. Y., and Baayen, R. H., Making sense of auditory nonwords, Workshop - "Models of Computational Morpho(phono)logy", Cambridge, UK, February 15, 2019 (invited).

Baayen, R. H., Linear discriminative learning and the bilingual lexicon, A Language Learning Roundtable, Fribourg, Switzerland, February 11, 2019 (invited).

2018

Baayen, R. H., Throwing off the shackles of the morpheme with simple linear mappings, Annual Meeting of the Society for Computers in Psychology (SCiP), New Orleans, USA, November 15, 2018 (keynote).

Cassani, G., Chuang, Y.-Y., and Baayen, R. H., On the Semantics of Non-words and their Lexical Categories, Eleventh International Conference on the Mental Lexicon, Edmonton, Canada, September 27, 2018 (poster presentation).

Nixon, J. S., The Kamin Blocking Effect in Speech Acquisition: Non-native Acoustic Cue Learning is Blocked by Already-learned Cues, Eleventh International Conference on the Mental Lexicon, Edmonton, Canada, September 27, 2018 (poster presentation).

Sun, K., Diachronic and Qualitative Analysis of English Hyphenated Compounds in the Last Two Hundred Years, Eleventh International Conference on the Mental Lexicon, Edmonton, Canada, September 27, 2018 (poster presentation).

Baayen, R. H., Speech Production in the Discriminative Lexicon, Eleventh International Conference on the Mental Lexicon, Edmonton, Canada, September 26, 2018 (poster presentation).

Chuang, Y.-Y., and Baayen, R. H., Computational Modeling of the Role of Phonology in Silent Reading, Eleventh International Conference on the Mental Lexicon, Edmonton, Canada, September 26, 2018.

Denistia, K., Shafaei-Bajestan, E., and Baayen, R. H., A Semantic Vector Model for the Indonesian Prefixes pe- and peN-, Eleventh International Conference on the Mental Lexicon, Edmonton, Canada, September 26, 2018 (poster presentation).

Baayen, R. H., Word and Paradigm Morphology with Linear Discriminative Learning, University of Sheffield, Sheffield, UK, September 12, 2018 (invited).

Nixon, J. S., Prior learning of acoustic cues blocks learning of new cues in non-native speech acquisition, AMLaP conference, Berlin, Germany, September 6-8, 2018.

Boll-Avetisyan, N., Nixon, J. S., Lentz, T. O., Liu, L., van Ommen, S., Çöltekin, Ç., and van Rij, J., Neural response development during distributional learning, Interspeech 2018 – The 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, September 2-6, 2018.

Nixon, J. S., Effective acoustic cue learning is not just statistical, it is discriminative, Interspeech 2018 – The 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, September 2-6, 2018.

Shafaei-Bajestan, E., and Baayen, R. H., Wide Learning for Auditory Comprehension, Interspeech 2018 – The 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, September 2-6, 2018.

Baayen, R. H., Participant in a discussion on the lifespan development of the mental lexicon, Symposium on the Aging Lexicon, Basel, Switzerland, June 7-9, 2018 (invited).

Steiner, I., Tomaschek, F., Bolkart, T., Hewer, A., and Sering, K., Simultaneous Dynamic 3D Face Scanning and Articulography, SimPhon.Net workshop 5, Stuttgart, Germany, June 6, 2018.

Baayen, R. H., and Shafaei-Bajestan, E., A discriminative perspective on lexical access in auditory comprehension, Basque Center for Applied Mathematics, Bilbao, Spain, April 10, 2018 (invited).

Baayen, R. H., and Shafaei-Bajestan, E., A discriminative perspective on lexical access in auditory comprehension and speech production, Basque Center on Cognition, Brain and Language, San Sebastian, Spain, April 9, 2018 (invited).

2017

Baayen, R. H., Tomaschek, F., Ernestus, M., and Plag, I., Explaining the acoustic durations of s in conversational English with naive discriminative learning, Workshop Current Approaches to Morphology, Edmonton, Canada, December 20, 2017 (invited).

Baayen, R. H., Trial-by-trial discrimination learning in the lexical decision task, CLiPS (Computational Linguistics and Psycholinguistics) Colloquium, Antwerp, Belgium, October 16, 2017 (invited).