Just like all the other H2O algos, we should be able to perform k-fold cross-validation of the ensemble. However, a full cross-validation of the base model training plus the ensembling step is a very computationally heavy operation, and our Stacked Ensemble code assumes the base models have already been trained. So the only practical option right now is to cross-validate the metalearner only.
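To make the idea concrete, here is a minimal numpy sketch of metalearner-only cross-validation. It assumes `Z` is the matrix of the base models' (already computed) cross-validated predictions, one column per base model, and uses ordinary least squares as a stand-in metalearner; the function name and metalearner choice are illustrative, not H2O's actual implementation.

```python
import numpy as np

def kfold_cv_metalearner(Z, y, k=5, seed=42):
    """Cross-validate only the metalearner.

    Z: (n_rows, n_base_models) matrix of base-model CV predictions.
    y: (n_rows,) target vector.
    Returns the mean out-of-fold MSE of the metalearner.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    mses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Illustrative metalearner: ordinary least squares on the
        # base models' predictions (H2O would use GLM by default).
        coef, *_ = np.linalg.lstsq(Z[train], y[train], rcond=None)
        pred = Z[test] @ coef
        mses.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(mses))
```

Note that this estimates the metalearner's generalization error conditional on the fixed base models; it does not account for variance from retraining the base models themselves, which is exactly the limitation described above.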
If we were to add support for k-fold cross-validation of the full ensemble (not just the metalearner) in the future, we would probably set it up as a separate API where the user specifies only the parameters for the base models (instead of the pre-trained models themselves), similar to the `SuperLearner::CV.SuperLearner()` function in R.