As researchers apply Machine Learning to increasingly complex tasks, there is mounting interest in strategies for combining multiple simple models into more powerful algorithms. In this post we will explore some of these techniques. We will use a little bit of language from Category Theory, but not much.

In the following discussion we will use this notation and terminology: Machine Learning models are functions of the form $M: D \to (X \to Y)$, where $D$ is a dataset and $M(D): X \to Y$ is a function that maps samples in $X$ to samples in $Y$. The dataset $D$ may contain pairs of samples (supervised learning), just samples (unsupervised learning), or anything else. This is of course a very limited perspective on Machine Learning models, and this post will focus mainly on supervised and unsupervised learning, but there are many more examples of composition in reinforcement learning and beyond.
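
To make this concrete, here is a minimal sketch of this perspective written as Python type signatures. The names Model, SupervisedDataset and UnsupervisedDataset are purely illustrative, not references to any particular library:

```python
from typing import Callable, Iterable, Tuple, TypeVar

X = TypeVar("X")  # input sample space
Y = TypeVar("Y")  # output sample space

# A dataset may contain (input, label) pairs (supervised learning) or just
# inputs (unsupervised learning).
SupervisedDataset = Iterable[Tuple[X, Y]]
UnsupervisedDataset = Iterable[X]

# A Machine Learning model, in the sense used in this post: a function that
# maps a dataset to a trained inference function from X to Y.
Model = Callable[[SupervisedDataset], Callable[[X], Y]]
```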

Side-by-Side Composition

The most general way to combine Machine Learning models is to just place them “side-by-side”. There are a few ways to do this:

Product

Given models of the forms $f: D_f \to (X \to Y)$ and $g: D_g \to (A \to B)$, we can attach them in parallel to get a model $f \times g: D_f \times D_g \to (X \times A \to Y \times B)$. At both training and inference time, the composite model independently executes the component models. We can think of this sort of composition as zooming out our perspective to see the two separate and noninteracting models as part of the same whole. In Backprop as Functor the authors define this sort of composition to be the monoidal product in their category $\mathbf{Learn}$.

For example, say we have a software system that contains two modules: one that trains a linear regression on driving records to predict insurance premiums and one that trains a decision tree on credit histories to predict mortgage approvals. We can think of this system as containing a single composite module that trains the linear regression and the decision tree together on pairs of driving records and credit histories to predict pairs of insurance premiums and mortgage approvals.
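
As a rough sketch, assuming each component model is represented as a training function that returns an inference function (as in the notation above), product composition might look like this:

```python
def product_model(train_f, train_g):
    """Side-by-side (product) composition: each component model is trained on
    its own dataset and executed independently at inference time."""
    def train(dataset_pair):
        data_f, data_g = dataset_pair            # one dataset per component
        predict_f = train_f(data_f)              # train the components separately
        predict_g = train_g(data_g)

        def predict(sample_pair):
            x, a = sample_pair                   # e.g. (driving record, credit history)
            return predict_f(x), predict_g(a)    # e.g. (premium, approval)

        return predict
    return train
```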

Ensemble

Given a set of Machine Learning models that accept the same input, there are a number of side-by-side composition strategies, called ensemble methods, that involve running each model on the same input and then applying some kind of aggregation function to their output. For example, if the models in our set all produce outputs in the same space, we could simply train them independently and average their outputs. The models in an ensemble are generally trained in concert, perhaps on different slices of the same dataset.
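
Here is a minimal sketch of the simplest case, an averaging ensemble, under the same representation of models. It assumes the component models all produce numeric outputs in the same space:

```python
import numpy as np

def averaging_ensemble(trainers):
    """Train every component model on the same dataset and average their
    numeric predictions at inference time."""
    def train(dataset):
        predictors = [fit(dataset) for fit in trainers]

        def predict(x):
            return np.mean([p(x) for p in predictors], axis=0)

        return predict
    return train
```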

Input-Output Composition

Another way to combine Machine Learning models is to use the output of one model as the input to another. That is, say we have two models $f: D_f \to (X \to Y)$ and $g: D_g \to (Y \to Z)$ that we combine into a model $g \circ f$. At inference time, $g \circ f$ operates on some $x \in X$ by first running the trained version of $f$ on $x$ to get a $y \in Y$ and then running the trained version of $g$ on $y$ to get the output $z \in Z$. Within this framework, there are a number of ways that we can train $f$ and $g$:

Unsupervised Feature Transformations

The most straightforward form of input-output composition is the class of unsupervised learned feature transformations. In this case $D_f$ is a dataset of samples from $X$ and $f$ is an unsupervised Machine Learning algorithm. In unsupervised feature transformations the learning processes of $f$ and $g$ proceed sequentially: $g$ is trained on the output of $f$, and this training does not begin until $f$ is fully trained. There are no assumptions on the structure of $g$, and the dataset $D_g$ is a set of samples that we create from the trained version of $f$ and a dataset of samples in $X \times Z$.

Some examples of this include:

  • Standardization: $f$ learns the mean and variance of each component of $X$ and transforms samples from $X$ by rescaling them to be zero-mean and unit-variance.
  • PCA: $f$ learns a linear projection from $X$ to a lower-dimensional subspace.
  • GMM: $f$ learns a mapping from $X$ to the space of vectors of posterior probabilities for each mixture component.
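
As a small illustration, here is a sketch of an unsupervised feature transformation built with scikit-learn (assuming it is available): PCA plays the role of $f$, a logistic regression plays the role of $g$, and the two are trained strictly in sequence:

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def pca_then_classifier(X_train, z_train, n_components=10):
    # f: unsupervised feature transformation, trained only on the inputs.
    f = PCA(n_components=n_components).fit(X_train)

    # D_g: the dataset for g, built by running the trained f over the inputs
    # and pairing the transformed features with the original targets.
    Y_train = f.transform(X_train)

    # g: trained on the transformed features, only after f is fully trained.
    g = LogisticRegression().fit(Y_train, z_train)

    def predict(x):
        # x is a 2-D array of samples; inference runs f and then g.
        return g.predict(f.transform(x))
    return predict
```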

Supervised Feature Transformations

A similar but slightly more complex form of input-output composition is the class of supervised learned feature transformations. In this case $D_f$ is a dataset of samples from $X \times Z$ and $f$ is a Machine Learning algorithm that transforms samples from $X$ into a form that may be more convenient for a model that aims to generate predictions in $Z$ to consume. Just like in unsupervised feature transformations, the learning processes of $f$ and $g$ proceed sequentially, and we construct $D_g$ from the trained version of $f$ and a dataset of samples in $X \times Z$.

Some simple examples of this include:

  • Feature Selection: $f$ transforms samples from $X$ by removing features that are not useful for predicting $Z$
  • Supervised Discretization: $f$ learns to represent the samples from $X$ as vectors of one-hot encoded bins, where the bins are chosen based on the relationship between the distributions of the components of $X$ and $Z$
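
Here is a sketch of a supervised feature transformation along the lines of the feature selection example, again using scikit-learn purely for illustration: $f$ keeps the $k$ features most associated with the target, and $g$ is trained on the selected features once $f$ is fully trained:

```python
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression

def select_then_regress(X_train, z_train, k=5):
    # f: supervised feature selection, trained on both the inputs and targets.
    f = SelectKBest(score_func=f_regression, k=k).fit(X_train, z_train)

    # g: trained on the selected features after f is fully trained.
    g = LinearRegression().fit(f.transform(X_train), z_train)

    def predict(x):
        return g.predict(f.transform(x))
    return predict
```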

A more complex example of a supervised feature transformation is the vertical composition of decision trees. If we have two sets of decision rules from which we can build decision trees, we can combine them to form a composite decision tree that first applies all of the rules in the first group and then applies all of the rules in the second group.

End-to-End Training

End-to-End training is probably both the most complex and most studied form of input-output composition of Machine Learning models. This paper and this paper and this paper all build categories on top of this kind of composition.

In end-to-end training, we train $f$ and $g$ at the same time from a set of samples in $X \times Z$. We never explicitly construct the datasets $D_f$ or $D_g$. In general, we need our Machine Learning models to have a special structure in order to employ this strategy. For example, the Backprop as Functor paper defines the notions of request and update functions to characterize this. Because of the chain rule, we can define these functions and employ end-to-end training whenever our models are parametric and differentiable.

Naturally, the clearest example of end-to-end training is the composition of layers in a neural network, which we train with Backpropagation.
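
As a toy illustration outside of any particular framework, here is a NumPy sketch that jointly trains two composed differentiable maps with gradient descent, using the chain rule to push gradients from $g$ back through $f$. The architecture, loss, and hyperparameters are arbitrary choices for the sketch:

```python
import numpy as np

def end_to_end_train(X, Z, hidden=8, lr=0.1, steps=500):
    """Jointly train the composition g . f by gradient descent,
    backpropagating through both models via the chain rule."""
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], Z.shape[1]

    # f: X -> Y, a linear map followed by a tanh nonlinearity.
    W_f = rng.normal(scale=0.1, size=(n_in, hidden))
    # g: Y -> Z, a linear map.
    W_g = rng.normal(scale=0.1, size=(hidden, n_out))

    for _ in range(steps):
        Y = np.tanh(X @ W_f)            # forward pass through f
        Z_hat = Y @ W_g                 # forward pass through g
        dZ = 2 * (Z_hat - Z) / len(X)   # gradient of the squared error

        # Chain rule: push the gradient back through g, then through f.
        dW_g = Y.T @ dZ
        dY = dZ @ W_g.T
        dW_f = X.T @ (dY * (1 - Y ** 2))

        W_g -= lr * dW_g
        W_f -= lr * dW_f

    return lambda x: np.tanh(x @ W_f) @ W_g
```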

Meta-Learning

In meta-learning, or learning to learn, the training or “update” function for one Machine Learning model is defined by another Machine Learning model. In certain cases, like those described in this paper, we can define a notion of composition in which the composite of the two models has an inference function equivalent to that of the base model and a training function defined in terms of the meta-model's inference and training functions. This is described in more detail for the parametric and differentiable case here.
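
As a very rough toy sketch (not the construction from either paper), here is one way the idea might look in code: the base model is a one-parameter linear regression, and its update rule is supplied by another model's inference function:

```python
import numpy as np

def meta_trained_model(meta_update, steps=100):
    """A toy base learner (1-D linear regression) whose update rule is
    supplied by another model: meta_update maps (parameter, gradient) to a
    new parameter. Training the composite runs the meta-model's inference
    function inside the base model's training loop."""
    def train(dataset):
        x, z = dataset                              # arrays of inputs and targets
        w = 0.0                                     # base model parameter
        for _ in range(steps):
            grad = np.mean(2 * (w * x - z) * x)     # d/dw of the squared error
            w = meta_update(w, grad)                # learned update step
        return lambda new_x: w * new_x
    return train

# A hand-written update rule stands in here for a learned meta-model.
train = meta_trained_model(meta_update=lambda w, g: w - 0.01 * g)
predict = train((np.arange(10.0), 3.0 * np.arange(10.0)))
```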

Conclusion

This is just a small sample of techniques for building complex models from simple components. Machine Learning is growing rapidly, and there are many more strategies for model composition that we will not discuss here. Thanks for reading!