Recursive Tree Grammar Autoencoders are recursive neural networks that can auto-encode tree data if a grammar is known. Their autoencoding accuracy and optimization performance are generally higher than those of autoencoders that encode trees sequentially or do not use grammar knowledge.
Reservoir Memory Machines extend Echo State Networks with an explicit memory. This enables them to solve computational tasks, such as losslessly copying data, that are difficult or impossible for standard recurrent neural networks (even deep ones). The memory extension also raises the computational power of ESNs from below Chomsky-3 (regular languages) to above it. Reference Paper
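To make the distinction concrete, here is a toy sketch (all sizes and names are illustrative): a standard ESN state update has fading memory, so reservoir states cannot retain arbitrary inputs exactly, whereas an explicit memory that stores inputs verbatim makes the copy task trivial. This is only the intuition behind the extension, not the actual RMM architecture, which learns when to write and read:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo state network state update (hypothetical sizes).
n_res, n_in = 50, 1
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: echo state property
W_in = rng.normal(size=(n_res, n_in))

def esn_step(h, u):
    """One reservoir update; states have fading memory of past inputs."""
    return np.tanh(W @ h + W_in @ u)

def copy_with_memory(inputs):
    """Copy task via an explicit memory (heavily simplified sketch)."""
    memory = []
    h = np.zeros(n_res)
    for u in inputs:
        h = esn_step(h, u)   # reservoir still runs, e.g. to decide write/read
        memory.append(u)     # write phase: store input verbatim
    return memory            # read phase: replay memory losslessly

data = [np.array([x]) for x in [0.3, -1.2, 0.7]]
out = copy_with_memory(data)
assert all((o == d).all() for o, d in zip(out, data))
```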
Linear Supervised Transfer Learning provides a simple expectation-maximization scheme that learns a mapping from a target space to a source space, based on a labelled Gaussian mixture model in the source space and very few labelled target-space data points. Reference Paper
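A rough sketch of such an EM scheme under strong simplifying assumptions (unit covariances, 2-D spaces, a synthetic rotation as the ground-truth transformation; none of these details are from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical labelled Gaussian mixture in the 2-D source space
# (unit covariances for simplicity): one mean and class label per component.
means  = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 1, 1])

# Very few labelled target points: here the target space is the source space
# rotated by 0.3 rad, so the ground-truth map back is the inverse rotation.
t = 0.3
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
X = means @ R.T + 0.05 * rng.normal(size=means.shape)  # target points
y = labels.copy()                                      # target labels

H = np.eye(2)  # linear map target -> source, initialized to identity
for _ in range(10):
    # E-step: responsibilities, restricted to label-matching components
    Z = X @ H.T
    G = np.zeros((len(X), len(means)))
    for j in range(len(X)):
        for k in range(len(means)):
            if y[j] == labels[k]:
                G[j, k] = np.exp(-0.5 * np.sum((Z[j] - means[k]) ** 2))
        G[j] /= G[j].sum()
    # M-step: closed-form weighted least squares for H
    H = (means.T @ G.T @ X) @ np.linalg.inv(X.T @ X)

assert np.allclose(H, R.T, atol=0.15)  # recovers the inverse rotation
```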
Python3 Software Packages
edist implements a variety of edit distances between sequences and trees, including backtracing and metric learning (Paaßen et al., 2018), in Cython. In particular, the library contains implementations for the Levenshtein distance, dynamic time warping, the affine edit distance, and the tree edit distance, as well as support for further edit distances via algebraic dynamic programming (Giegerich, Meyer, and Steffen, 2004). The library is available on PyPI via pip3 install edist (currently only for Linux).
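For reference, the Levenshtein distance the library computes is the classic dynamic program; a self-contained pure-Python sketch (the library's Cython implementations are much faster and also provide backtracing):

```python
def levenshtein(x, y):
    """Standard Levenshtein DP: d[i][j] = edit distance of x[:i] and y[:j]."""
    d = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(len(x) + 1):
        d[i][0] = i                      # delete all of x[:i]
    for j in range(len(y) + 1):
        d[0][j] = j                      # insert all of y[:j]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            sub = d[i-1][j-1] + (x[i-1] != y[j-1])   # match or substitute
            d[i][j] = min(sub, d[i-1][j] + 1, d[i][j-1] + 1)
    return d[len(x)][len(y)]

assert levenshtein("kitten", "sitting") == 3
```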
proto-dist-ml implements prototype-based machine learning for distance data, in particular relational neural gas, relational generalized learning vector quantization, and median generalized learning vector quantization. It is available on PyPI via pip3 install proto-dist-ml.
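For a flavor of learning vector quantization in general, here is a heavily simplified sketch: plain LVQ1 in a 1-D vector space, rather than the relational and median variants the package actually implements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D, two-class data (synthetic, for illustration only).
X = np.concatenate([rng.normal(0.0, 0.5, 30), rng.normal(4.0, 0.5, 30)])
y = np.array([0] * 30 + [1] * 30)

protos  = np.array([1.0, 3.0])  # one prototype per class, rough init
plabels = np.array([0, 1])
lr = 0.1
for _ in range(5):
    for x, label in zip(X, y):
        k = np.argmin(np.abs(protos - x))          # nearest prototype
        step = lr * (x - protos[k])
        # LVQ1 rule: attract on correct label, repel on wrong label
        protos[k] += step if plabels[k] == label else -step

pred = plabels[np.argmin(np.abs(X[:, None] - protos[None, :]), axis=1)]
assert (pred == y).all()
```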
The TCS Alignment Toolbox provides edit distances and derivatives thereof. It also supports custom comparison functions for sequence elements, custom derivatives, and new sequence edit distances via algebraic dynamic programming. Reference Paper
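The custom-comparator idea can be sketched in a few lines of Python (illustrative only; the toolbox's own API differs): a dynamic-time-warping alignment that accepts any element comparison function:

```python
import math

def dtw(x, y, cost):
    """Dynamic time warping with a pluggable element comparison function."""
    d = [[math.inf] * (len(y) + 1) for _ in range(len(x) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            # local cost plus cheapest of diagonal, vertical, horizontal step
            d[i][j] = cost(x[i-1], y[j-1]) + min(d[i-1][j-1], d[i-1][j], d[i][j-1])
    return d[len(x)][len(y)]

# a custom comparison function on numbers; the repeated 2 warps for free
assert dtw([1, 2, 3], [1, 2, 2, 3], lambda a, b: abs(a - b)) == 0
```

Swapping in a different `cost` (e.g. on strings or vectors) changes the alignment semantics without touching the dynamic program.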
Relational Neural Gas provides an efficient clustering algorithm based solely on pairwise dissimilarities. Data points are clustered by assigning each point to the cluster of its closest prototype, where each prototype is a convex combination of data points. Reference Paper
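The key trick is that for a prototype w = Σⱼ αⱼ xⱼ with convex coefficients α, the squared Euclidean distance to any point can be computed from the matrix D² of pairwise squared dissimilarities alone: d²(xᵢ, w) = (D²α)ᵢ − ½ αᵀD²α. A sketch using this identity in a relational k-means loop — a simplified relative of relational neural gas, which additionally uses rank-based neighborhood cooperation (all data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated synthetic point clouds; the algorithm only ever sees
# their pairwise squared Euclidean distances, never the coordinates.
A = rng.normal(loc=0.0, scale=0.5, size=(20, 2))
B = rng.normal(loc=5.0, scale=0.5, size=(20, 2))
X = np.vstack([A, B])
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)

# Each prototype w_k = sum_i alpha[k, i] * x_i is a convex combination of
# data points; its squared distance to x_i follows from D2 alone:
#   d2(x_i, w_k) = (D2 @ alpha[k])_i - 0.5 * alpha[k] @ D2 @ alpha[k]
n, K = len(X), 2
alpha = np.zeros((K, n))
alpha[0, 0] = 1.0         # init each prototype on one data point,
alpha[1, n - 1] = 1.0     # as in k-means-style initialization
for _ in range(10):
    dist = D2 @ alpha.T - 0.5 * np.einsum("ki,ij,kj->k", alpha, D2, alpha)
    assign = np.argmin(dist, axis=1)
    for k in range(K):
        mask = assign == k
        if mask.any():
            alpha[k] = mask / mask.sum()  # move prototype to cluster mean

# Points in the same cloud end up in the same cluster
assert len(set(assign[:20])) == 1 and len(set(assign[20:])) == 1
assert assign[0] != assign[20]
```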