DC Field | Value | Language
dc.creator | Sivakumar, Seshadri |
dc.creator | Sivakumar, Shyamala C. |
dc.date.accessioned | 2018-03-16T16:00:36Z |
dc.date.available | 2018-03-16T16:00:36Z |
dc.date.issued | 2017-09-26 |
dc.identifier.issn | 2168-2267 |
dc.identifier.uri | http://library2.smu.ca/handle/01/27363 |
dc.description | Post-print |
dc.description.abstract | This paper introduces a discrete-time recurrent neural network architecture that uses triangular feedback weight matrices to allow a simplified approach to ensuring network and training stability. The triangular structure of the weight matrices is exploited to readily ensure that the eigenvalues of the feedback weight matrix, which are determined by its block-diagonal elements, lie on the unit circle in the complex z-plane; these weights are updated based on the differential of the angular error variable. Such placement of the eigenvalues, together with the extended close interaction between state variables facilitated by the nondiagonal triangular elements, enhances the learning ability of the proposed architecture. Simulation results show that the proposed architecture is highly effective in time-series prediction tasks associated with nonlinear and chaotic dynamic systems with underlying oscillatory modes. This modular architecture, with dual upper and lower triangular feedback weight matrices, mimics fully recurrent network architectures while maintaining learning stability through a simplified training process. During training, the block-diagonal weights (and hence the eigenvalues) of the dual triangular matrices are constrained to the same values during weight updates to minimize the possibility of overfitting. The dual triangular architecture also parses the input and selectively applies the parsed inputs to the two subnetworks to further enhance learning performance. | en_CA
dc.description.provenance | Submitted by Betty McEachern (betty.mceachern@smu.ca) on 2018-03-16T16:00:36Z. No. of bitstreams: 1. Sivakumar_Shyamala_C_article_2017.pdf: 2271977 bytes, checksum: 81a971bafcc7bd64fff067881f3762e0 (MD5) | en
dc.description.provenance | Made available in DSpace on 2018-03-16T16:00:36Z (GMT). No. of bitstreams: 1. Sivakumar_Shyamala_C_article_2017.pdf: 2271977 bytes, checksum: 81a971bafcc7bd64fff067881f3762e0 (MD5). Previous issue date: 2017 | en
dc.language.iso | en | en_CA
dc.publisher | IEEE | en_CA
dc.relation.uri | https://dx.doi.org/10.1109/TCYB.2017.2751005 |
dc.rights | Article is made available in accordance with the publisher’s policy and is subject to copyright law. Please refer to the publisher’s site. Any re-use of this article is to be in accordance with the publisher’s copyright policy. This posting is in no way granting any permission for re-use to the reader/user. |
dc.subject.lcsh | Neural networks (Computer science) |
dc.subject.lcsh | Chaotic behavior in systems |
dc.title | Marginally stable triangular recurrent neural network architecture for time series prediction | en_CA
dc.type | Text | en_CA
dcterms.bibliographicCitation | IEEE Transactions on Cybernetics, PP(99), 1-15 (2017). | en_CA
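
The sketch below is an illustrative reading of the construction summarized in the abstract, not the authors' implementation: it builds a block lower-triangular feedback matrix whose 2x2 diagonal blocks are plane rotations, so the matrix eigenvalues exp(+/- j*theta) lie on the unit circle of the complex z-plane (marginal stability), and runs a plain discrete-time state update. The helper names (rotation_block, triangular_feedback, step), the tanh nonlinearity, and the coupling scale are assumptions; the paper's angular-error-based weight update, the dual upper/lower subnetworks with shared block-diagonal weights, and the input-parsing scheme are only noted in comments.

# Illustrative sketch only (hypothetical names; not the authors' code).
import numpy as np


def rotation_block(theta: float) -> np.ndarray:
    """2x2 rotation block whose eigenvalues exp(+/- j*theta) lie on the unit circle."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])


def triangular_feedback(thetas, coupling_scale=0.1, lower=True, rng=None):
    """Block-triangular feedback matrix: rotation blocks on the block diagonal,
    small couplings only in the strictly lower (or upper) block triangle."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 * len(thetas)
    W = np.zeros((n, n))
    for i, th in enumerate(thetas):
        W[2 * i:2 * i + 2, 2 * i:2 * i + 2] = rotation_block(th)
    mask = np.tril(np.ones((n, n)), k=-1) if lower else np.triu(np.ones((n, n)), k=1)
    for i in range(len(thetas)):               # keep the 2x2 diagonal blocks untouched
        mask[2 * i:2 * i + 2, 2 * i:2 * i + 2] = 0.0
    return W + coupling_scale * rng.standard_normal((n, n)) * mask


def step(x, u, W, V, b):
    """One discrete-time recurrent update: x[k+1] = tanh(W x[k] + V u[k] + b)."""
    return np.tanh(W @ x + V @ u + b)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thetas = np.array([0.1, 0.4, 0.9])              # angular parameters of the 2x2 blocks
    W_lower = triangular_feedback(thetas, rng=rng)  # lower-triangular subnetwork
    W_upper = triangular_feedback(thetas, lower=False, rng=rng)  # dual subnetwork, same angles
    print(np.round(np.abs(np.linalg.eigvals(W_lower)), 6))       # all magnitudes are 1

    n = W_lower.shape[0]
    V = 0.1 * rng.standard_normal((n, 1))           # input weights for a scalar input
    b = np.zeros(n)
    x = np.zeros(n)
    for k in range(5):                              # a few toy updates on a sine input
        x = step(x, np.array([np.sin(0.2 * k)]), W_lower, V, b)

Because the eigenvalues of a block-triangular matrix are those of its diagonal blocks, the printed magnitudes stay at 1 regardless of the off-block couplings, which is the marginal-stability property the abstract attributes to the triangular feedback matrices.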