Abstract: Dependency length minimization (DLM, also called dependency distance minimization) has been studied by many authors and identified as a property of natural languages. In this paper we show that DLM can be interpreted as flux size minimization and study the advantages of such a view. First, it allows us to understand why DLM is cognitively motivated and how it relates to constraints on sentence processing. Second, it opens the door to the definition of a wide range of variations of DLM that take into account other characteristics of the flux, such as nested constructions and projectivity.
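The equivalence underlying the flux-based view can be sketched concretely: the length of a dependency linking word positions i and j is |i − j|, and the flux at the gap between words k and k+1 is the number of dependencies spanning that gap. Since a dependency of length d crosses exactly d inter-word gaps, total dependency length equals the sum of flux sizes over all positions. The following minimal Python sketch illustrates this on a hypothetical toy tree (the function names and the example sentence are ours, not from the paper):

```python
# Hedged sketch of the DLM / flux-size equivalence.
# A dependency is a pair (i, j) of 1-based word positions; its length is |i - j|.

def total_dependency_length(deps):
    """Sum of dependency lengths, the quantity minimized under DLM."""
    return sum(abs(i - j) for i, j in deps)

def flux_sizes(deps, n):
    """Flux size at each of the n-1 inter-word gaps:
    number of dependencies spanning the gap between words k and k+1."""
    return [sum(1 for i, j in deps if min(i, j) <= k < max(i, j))
            for k in range(1, n)]

# Hypothetical toy dependency tree for "the cat sleeps":
# det(cat -> the) = (2, 1), subj(sleeps -> cat) = (3, 2); root link omitted.
deps = [(2, 1), (3, 2)]
print(total_dependency_length(deps))   # 2
print(flux_sizes(deps, 3))             # [1, 1]
print(sum(flux_sizes(deps, 3)))        # 2 — equals total dependency length
```

Minimizing total dependency length is thus the same as minimizing the summed flux sizes, which is why constraints on the flux (e.g. bounds motivated by sentence processing) translate directly into DLM-style constraints.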
https://hal-univ-paris10.archives-ouvertes.fr/hal-02291684
Sylvain Kahane, Chunxiao Yan. Advantages of the flux-based interpretation of dependency length minimization. First international conference on Quantitative Syntax (Quasy), Aug 2019, Paris, France. ⟨hal-02291684⟩