A model comparison across orthography, phonology, and semantics

Abstract

We compared the performance of three neural network models: perceptrons, back-propagation networks, and attractor networks. A perceptron is a two-layer model with no hidden layer, whereas a back-propagation network is a three-layer model with one hidden layer. An attractor network, in addition to the hidden layer, has a cleanup layer connected to and from the output layer. We trained all of the models on the data sets of Hinton & Shallice (1991), Plaut & Shallice (1993), and Tyler et al. (2000). Language processing is divided here into three components: orthography, phonology, and semantics. The comparison across models and data sets revealed each model's adequacy as a model of dyslexic patients, and showed that the cleanup layer plays an important role in processing all of the data sets. Category specificity, defined as the inter-concept correlation matrix, could be simulated as well.
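The abstract contrasts three architectures; the sketch below illustrates them side by side. It is a minimal sketch, not the paper's implementation: the tanh units, the layer sizes, the synchronous settling rule for the attractor network, and the toy category-specificity computation are all assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceptron(x, W):
    """Two-layer model: input units connect directly to output units (no hidden layer)."""
    return np.tanh(W @ x)

def backprop_net(x, W_ih, W_ho):
    """Three-layer model: input -> hidden -> output, trainable by back-propagation."""
    h = np.tanh(W_ih @ x)
    return np.tanh(W_ho @ h)

def attractor_net(x, W_ih, W_ho, W_oc, W_co, steps=10):
    """Back-propagation architecture plus a cleanup layer connected to and from
    the output layer; the output settles toward an attractor over several steps."""
    h = np.tanh(W_ih @ x)
    o = np.tanh(W_ho @ h)
    for _ in range(steps):
        c = np.tanh(W_oc @ o)               # output -> cleanup
        o = np.tanh(W_ho @ h + W_co @ c)    # cleanup feeds back into output
    return o

# Hypothetical layer sizes, chosen only for illustration: an orthographic
# input layer mapping onto a semantic output layer.
n_in, n_hid, n_out, n_clean = 28, 40, 68, 60
x = rng.standard_normal(n_in)                       # one orthographic input pattern

W    = rng.standard_normal((n_out, n_in)) * 0.1     # perceptron weights
W_ih = rng.standard_normal((n_hid, n_in)) * 0.1     # input -> hidden
W_ho = rng.standard_normal((n_out, n_hid)) * 0.1    # hidden -> output
W_oc = rng.standard_normal((n_clean, n_out)) * 0.1  # output -> cleanup
W_co = rng.standard_normal((n_out, n_clean)) * 0.1  # cleanup -> output

print(perceptron(x, W).shape)
print(backprop_net(x, W_ih, W_ho).shape)
print(attractor_net(x, W_ih, W_ho, W_oc, W_co).shape)

# "Category specificity" as an inter-concept correlation matrix: rows are
# concepts, columns are binary semantic features (toy data, not the paper's).
S = (rng.random((10, n_out)) > 0.7).astype(float)   # 10 concepts x 68 features
C = np.corrcoef(S)                                  # 10 x 10 concept correlations
print(C.shape)
```

Running the script prints the output shapes of the three models; the last lines show one reading of "category specificity as an inter-concept correlation matrix": each entry of C measures how similar two concepts' semantic feature vectors are.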

