TY - GEN
T1 - Matrix neural networks
AU - Gao, Junbin
AU - Guo, Yi
AU - Wang, Zhiyong
PY - 2017
Y1 - 2017
AB - Traditional neural networks assume vectorial inputs, as the network is arranged as layers of a single line of computing units called neurons. This special structure requires non-vectorial inputs such as matrices to be converted into vectors, a process that is problematic because it loses spatial information and inflates the solution space. To address these issues, we propose matrix neural networks (MatNet), which take matrices directly as inputs. Each layer summarises and passes information through bilinear mapping. Under this structure, the combination of back propagation and gradient descent can be utilised to obtain network parameters efficiently. Furthermore, it can be conveniently extended for multi-modal inputs. We apply MatNet to MNIST handwritten digit classification and image super resolution tasks to show its effectiveness. Without much tweaking, MatNet achieves performance comparable to state-of-the-art methods in both tasks with considerably reduced complexity.
KW - computational complexity
KW - high resolution imaging
KW - matrices
KW - neural networks (computer science)
KW - optical character recognition
UR - http://handle.westernsydney.edu.au:8081/1959.7/uws:44573
U2 - 10.1007/978-3-319-59072-1_37
DO - 10.1007/978-3-319-59072-1_37
M3 - Conference Paper
SN - 9783319590714
SP - 313
EP - 320
BT - Advances in Neural Networks - ISNN 2017: 14th International Symposium, ISNN 2017, Sapporo, Hakodate, and Muroran, Hokkaido, Japan, June 21-26, 2017, Proceedings, Part I
PB - Springer
T2 - International Symposium on Neural Networks
Y2 - 21 June 2017
ER -
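
The abstract's core idea, a layer that maps a matrix input to a matrix output via bilinear mapping rather than flattening it to a vector, can be illustrated with a minimal sketch. This assumes the commonly cited bilinear form Y = sigma(U X V^T + B); the variable names, layer sizes, and sigmoid activation are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bilinear_layer(X, U, V, B):
    """Map an m x n input matrix X to a p x q output via Y = sigma(U X V^T + B).

    U (p x m) mixes rows, V (q x n) mixes columns, B (p x q) is a bias matrix,
    so spatial structure is summarised without vectorising X.
    """
    return sigmoid(U @ X @ V.T + B)

# Example: a 28 x 28 MNIST-sized input summarised to a 10 x 10 matrix
# (sizes chosen for illustration only).
rng = np.random.default_rng(0)
X = rng.random((28, 28))
U = rng.standard_normal((10, 28)) * 0.1  # left (row) projection
V = rng.standard_normal((10, 28)) * 0.1  # right (column) projection
B = np.zeros((10, 10))                   # bias matrix
Y = bilinear_layer(X, U, V, B)
print(Y.shape)  # (10, 10)

Compared with a dense layer on the flattened 784-vector, the two projection matrices here hold 2 x 10 x 28 = 560 parameters rather than 784 x 100 = 78,400, which is the kind of complexity reduction the abstract alludes to.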