Stacking

Stacking follows a two-stage process. First, several base models are trained on the same task, and each one produces its own predictions. Those predictions are then used as inputs to a second model, often called a meta model, which learns how to combine them and how much weight to give each base model in different situations.
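The two stages can be sketched in a few lines of scikit-learn. This is a minimal illustration, not a canonical recipe: the choice of base models (a decision tree and a k-nearest-neighbors classifier), the meta model (logistic regression), and the synthetic dataset are all assumptions made for the example. Here the data is simply split in half so the meta model never trains on predictions the base models made over their own training rows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_base, X_meta, y_base, y_meta = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Stage 1: train the base models on one half of the data.
base_models = [DecisionTreeClassifier(random_state=0),
               KNeighborsClassifier()]
for model in base_models:
    model.fit(X_base, y_base)

# Stage 2: base-model predicted probabilities on the held-out half
# become the input features of the meta model.
meta_features = np.column_stack(
    [model.predict_proba(X_meta)[:, 1] for model in base_models])
meta_model = LogisticRegression().fit(meta_features, y_meta)

# The meta model learns one coefficient per base model, i.e. how much
# weight to give each one.
print(meta_model.coef_)
```

The simple holdout split above wastes half the data for each stage; the cross-validated approach described below is the more common way to get the same separation without that cost.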

This approach works well when the base models have different strengths, since one model may capture signals that another misses. The key is avoiding data leakage when training the meta model. If the meta model learns from predictions the base models made on their own training data, those predictions are unrealistically accurate, and the stacked ensemble looks better than it really is. Teams avoid this by generating the meta model's training data from predictions on data the base models did not see, such as out-of-fold (cross-validated) predictions.
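The cross-validated version can be sketched with `cross_val_predict`, which returns each row's prediction from a model that never saw that row during training. The model choices and dataset below are again illustrative assumptions, not a prescribed setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=8, random_state=1)

base_models = [DecisionTreeClassifier(random_state=1),
               KNeighborsClassifier()]

# Out-of-fold predictions: for each row, the predicting model was trained
# on the other folds, so the meta model trains on honest inputs.
oof_features = np.column_stack(
    [cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
     for model in base_models])

meta_model = LogisticRegression().fit(oof_features, y)

# The base models are then refit on the full training set so they can
# produce features for new data at prediction time.
for model in base_models:
    model.fit(X, y)
```

scikit-learn also packages this whole pattern as `StackingClassifier`, which handles the cross-validated feature generation and the final refit internally.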
