Recursive neural network

A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, producing a structured prediction over variable-size input structures, or a scalar prediction over them, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly continuous representations of phrases and sentences based on word embeddings. RvNNs were first introduced to learn distributed representations of structure, such as logical terms.[1] Models and general frameworks have been developed in subsequent work since the 1990s.[2][3]
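
The following is a minimal illustrative sketch of the idea described above, not an implementation from any of the cited papers: a single shared weight matrix composes fixed-size child vectors into parent vectors while a binary tree is traversed bottom-up (i.e., in topological order). The names `compose`, `encode`, the dimension `DIM`, and the toy embeddings are assumptions made for the example.

```python
# Minimal sketch of a recursive neural network over a binary tree (assumed
# example, not from the cited papers): the same weights are applied at every
# internal node, visiting nodes bottom-up so children are encoded before parents.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                                   # size of every node representation
W = rng.standard_normal((DIM, 2 * DIM))   # shared composition weights
b = np.zeros(DIM)                         # shared bias

def compose(left, right):
    """Combine two child vectors into one parent vector using the shared weights."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def encode(tree, embeddings):
    """Recursively encode a tree given as a leaf word or a (left, right) pair."""
    if isinstance(tree, str):             # leaf: look up its word embedding
        return embeddings[tree]
    left, right = tree
    return compose(encode(left, embeddings), encode(right, embeddings))

# Toy word embeddings and a small parse tree: ((the, cat), sat)
embeddings = {w: rng.standard_normal(DIM) for w in ["the", "cat", "sat"]}
sentence_vector = encode((("the", "cat"), "sat"), embeddings)
print(sentence_vector)                    # fixed-size representation of the phrase
```

Because the same `compose` function is reused at every internal node, the network handles trees of arbitrary shape and size while keeping a fixed number of parameters; in practice the output vector would feed a classifier or regressor for the structured or scalar prediction mentioned above.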

  1. ^ Goller, C.; Küchler, A. (1996). "Learning task-dependent distributed representations by backpropagation through structure". Proceedings of International Conference on Neural Networks (ICNN'96). Vol. 1. pp. 347–352. CiteSeerX 10.1.1.52.4759. doi:10.1109/ICNN.1996.548916. ISBN 978-0-7803-3210-2. S2CID 6536466.
  2. ^ Sperduti, A.; Starita, A. (1997-05-01). "Supervised neural networks for the classification of structures". IEEE Transactions on Neural Networks. 8 (3): 714–735. doi:10.1109/72.572108. ISSN 1045-9227. PMID 18255672.
  3. ^ Frasconi, P.; Gori, M.; Sperduti, A. (1998-09-01). "A general framework for adaptive processing of data structures". IEEE Transactions on Neural Networks. 9 (5): 768–786. CiteSeerX 10.1.1.64.2580. doi:10.1109/72.712151. ISSN 1045-9227. PMID 18255765.
