Exploiting multiple sources of information in learning an artificial language: human data and modeling

Pierre Perruchet, Barbara Tillmann

    Research output: Contribution to journal › Article › peer-review

    33 Citations (Scopus)

    Abstract

    This study investigates the joint influences of three factors on the discovery of new word-like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial wordlikeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word-like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of different models of word segmentation to account for these results. PARSER (Perruchet & Vinter, 1998) is compared with the view that word segmentation relies on the exploitation of transitional probabilities between successive syllables, and with models based on the Minimum Description Length principle, such as INCDROP. The authors present arguments suggesting that PARSER has the advantage of accounting for the whole pattern of data without ad hoc modifications, while relying exclusively on general-purpose learning principles. This study strengthens the growing notion that nonspecific cognitive processes, mainly based on associative learning and memory principles, are able to account for a larger part of early language acquisition than previously assumed.
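
    The abstract contrasts PARSER with the view that learners segment speech by tracking transitional probabilities (TPs) between successive syllables. The sketch below is a minimal, hypothetical illustration of that TP-based baseline only (not of the authors' PARSER model): the syllable inventory, function names, and the "local dip" boundary criterion are assumptions introduced here for illustration.

```python
import random
from collections import defaultdict

# Illustrative artificial "words" and a continuous stream built by
# concatenating them in random order without pauses (assumed stimuli).
WORDS = [("bu", "pa", "da"), ("du", "ta", "bi"), ("pi", "go", "la")]
random.seed(0)
STREAM = [syl for _ in range(60) for syl in random.choice(WORDS)]

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from bigram counts."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment_at_tp_dips(syllables, tps):
    """Posit a word boundary wherever the TP is lower than both neighbours."""
    tp_seq = [tps[(a, b)] for a, b in zip(syllables, syllables[1:])]
    cuts = [i + 1 for i in range(1, len(tp_seq) - 1)
            if tp_seq[i] < tp_seq[i - 1] and tp_seq[i] < tp_seq[i + 1]]
    words, start = [], 0
    for cut in cuts:
        words.append("".join(syllables[start:cut]))
        start = cut
    words.append("".join(syllables[start:]))
    return words

tps = transitional_probabilities(STREAM)
print(segment_at_tp_dips(STREAM, tps)[:5])  # mostly recovers 'bupada', 'dutabi', 'pigola'
```

    In this toy setup within-word TPs approach 1.0 while between-word TPs fall near 1/3, so boundaries emerge at the TP dips; it does not capture the wordlikeness or contextual factors that the study shows also shape segmentation.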
    Original language: English
    Pages (from-to): 255-285
    Number of pages: 31
    Journal: Cognitive Science
    Volume: 34
    Issue number: 2
    DOIs
    Publication status: Published - 2010

