Fusion is great, and interpretable fusion could be exciting for theory generation
Lisa Pearl
September 2018
 

Response to “Generative linguistics and neural networks at 60: foundation, friction, and fusion” by Joe Pater.

From my perspective, Pater’s (2018) target article does a great service both to researchers who work in generative linguistics and to researchers who use neural networks, and especially to researchers who want to do both by harnessing the insights of each tradition. The fusion of theories of linguistic representation with probabilistic learning techniques has certainly led to many valuable insights about the nature of both linguistic representation and the language acquisition process. To me, however, the most exciting prospect raised by Pater’s article is the increasing interpretability of neural network models, especially when combined with insights from the generative linguistics theoretical framework. That interpretability opens up the possibility that neural networks could be used to generate new theories of representation, and I describe how I think this theory generation process might work with interpretable neural networks.
Reference: lingbuzz/004142 (please use that when you cite this article)
Published in: submitted to the Perspectives section of Language
keywords: generative linguistics, neural networks, probabilistic learning, language acquisition, theory generation, Bayesian inference, learnability, syntax, phonology, semantics, morphology
previous versions: v1 [July 2018]