Can linear ensembles yield non-linear boundaries when the base learner is a linear model?

On the surface, the answer is surely “no”. How could linear models, combined in a linear combination, yield anything other than a linear boundary?
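Indeed, a quick numerical check (a throwaway sketch with made-up coefficients) confirms that intuition: a weighted average of purely linear models is itself a single linear model, so on its own it cannot bend the boundary.

set.seed(1)
x <- matrix(rnorm(20), ncol = 2)                 # ten random 2-D points

# two purely linear models and their simple average
f1 <- function(x) x %*% c(0.5, 0.5) + 0.5
f2 <- function(x) x %*% c(-1, 0.4) + 0.7
avg <- (f1(x) + f2(x)) / 2

# the same average rewritten as one linear model
# (averaged weights and averaged intercept)
one_model <- x %*% c((0.5 - 1) / 2, (0.5 + 0.4) / 2) + (0.5 + 0.7) / 2

all.equal(as.vector(avg), as.vector(one_model))  # TRUE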

On closer inspection, the answer lies in how we ensemble and, in particular, in how we may introduce non-linearities, much like neural networks do in order to represent non-linear decision boundaries.

Thinking about neural networks

If we have simple linear models that are ensembled by another linear operator, this can be interpreted as a two-layer neural network: the first layer represents the linear models and the second layer represents the simple ensemble.

In this representation, the only way to obtain non-linear decision boundaries, as we know from neural networks, is to include a non-linear activation function between the two layers.

This naturally answers the question: if we add a non-linear activation function after the first layer, the ensemble of linear models can indeed yield a non-linear boundary!
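To make the analogy concrete, here is a minimal sketch of such a two-layer network, using the same hand-picked weight values that appear in the code example below: each row of the first-layer weight matrix is one linear base model, and the second layer simply averages their outputs. Passing the identity as the activation collapses the whole thing back into a single linear model; passing a sigmoid does not.

# First layer: one row of weights per linear base model (2 inputs -> 3 units)
W1 <- rbind(c(0.5, 0.5),
            c(-0.5, 0.5),
            c(-1.0, 0.4))
b1 <- c(0.5, -0.5, 0.7)

# Second layer: an equally weighted average of the three units
w2 <- rep(1/3, 3)

two_layer <- function(x, activation) {
  # forward pass: linear layer, activation, then the averaging layer
  h <- activation(x %*% t(W1) + matrix(b1, nrow(x), length(b1), byrow = TRUE))
  as.vector(h %*% w2)
}

x <- rbind(c(0, 0), c(1, -1))                    # two example inputs
two_layer(x, identity)                           # linear in x
two_layer(x, function(z) 1 / (1 + exp(-z)))      # non-linear in x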

Code example

The easiest way to demonstrate this is to consider an ensemble of three arbitrary (generalised) linear models, each a logistic regression. In this setting, the non-linearity comes from the logistic function applied at the end of each base learner.

library(dplyr)   # loaded for the %>% pipe
library(ggplot2)

# Logistic (sigmoid) function: squashes a linear score into (0, 1)
logistic <- function(x) {
  return(1 / (1 + exp(-x)))
}

# Each base learner is a logistic regression: the linear score
# w1*x1 + w2*x2 + b passed through the logistic function.
# The ensemble is the simple average of the three base learners.
ensemble <- function(x) {
  x <- as.matrix(x)
  m1 <- (x %*% c(0.5, 0.5) + 0.5) %>% logistic
  m2 <- (x %*% c(-0.5, 0.5) - 0.5) %>% logistic
  m3 <- (x %*% c(-1, 0.4) + 0.7) %>% logistic
  return(rowMeans(cbind(m1, m2, m3)))
}

# Evaluate the ensemble on a regular grid and plot the predicted probabilities
lgrid <- expand.grid(x1 = seq(-4, 2, by = 0.05),
                     x2 = seq(-4, 2, by = 0.05))
lgrid$pred <- ensemble(lgrid)

ggplot(data = lgrid) +
  geom_point(aes(x = x1, y = x2, color = pred), alpha = 0.7) +
  scale_color_gradient2(midpoint = 0.5, low = "blue", mid = "white",
                        high = "red", space = "Lab")