In this work, to increase the capacity of a recurrent neural network, we present a model for extracting common features and sharing them across data. Using this model, the extracted principal components of the data are invariant to unwanted variations. The recurrent connections of the network remove noise using a continuous attractor formed during the training phase. The defined speaker codes are transformed into the information needed for switching the continuous attractor in the input space. As a result, speaker variations can be compensated and recognition is performed as if a clean signal were available. We compared the performance of this method with a reference network. The results show that the proposed model performs better in removing noise and unwanted variations: it increased the phoneme recognition accuracy by about 5% at a signal-to-noise ratio of 0 dB.
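The denoising role of the recurrent dynamics can be illustrated with a toy sketch. The snippet below is a hypothetical, simplified stand-in: it uses a single-pattern Hopfield-style point attractor rather than the trained continuous attractor described in the abstract, but it shows the same principle of recurrent updates pulling a corrupted input back toward a stored clean representation.

```python
import numpy as np

# Hypothetical toy illustration (not the paper's model): a Hopfield-style
# point attractor stands in for the continuous attractor. Recurrent updates
# pull a noise-corrupted input back to the stored clean pattern.
rng = np.random.default_rng(0)

pattern = np.sign(rng.standard_normal(64))     # "clean" stored feature vector (+/-1)
W = np.outer(pattern, pattern) / 64.0          # Hebbian recurrent weights
np.fill_diagonal(W, 0.0)                       # no self-connections

noisy = pattern.copy()
flip = rng.choice(64, size=12, replace=False)  # corrupt ~20% of the components
noisy[flip] *= -1

state = noisy
for _ in range(10):                            # iterate until the state settles
    state = np.sign(W @ state)

recovered = np.array_equal(state, pattern)
print(recovered)                               # the attractor restores the clean pattern
```

In this simplified setting a single update already projects the noisy state onto the stored pattern; the trained continuous attractor in the proposed model plays the analogous role for speech features, with speaker codes selecting where on the attractor the dynamics settle.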