
Deep-ensemble fine-tune of a Pillar 4 foundation encoder
Source: R/foundation_posterior.R

Trains K_ens independent heads on the same encoder with different random
seeds and collects them into a single object. Each member is a full
foundation_fit_classifier() or foundation_fit_regressor() fit.
Usage
foundation_finetune_ensemble(
encoder,
x,
y,
task = c("classification", "regression"),
K_ens = 5L,
base_seed = 301L,
...
)

Arguments
- encoder
A MoCo/SimCLR encoder as returned by foundation_weights_load() (or an
equivalent nn_module).

- x, y
Training data; same shape requirements as the base fine-tune functions
(4-D array (N, C, H, W) plus vector y).

- task
"classification" or "regression".

- K_ens
Integer; number of ensemble heads. Defaults to 5L.

- base_seed
Integer; ensemble member k uses seed base_seed + k - 1L.

- ...
Additional arguments forwarded verbatim to foundation_fit_classifier() or
foundation_fit_regressor() (e.g. epochs, batch_size, lr, dropout, hidden,
freeze_backbone, device, verbose).
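Examples

A minimal sketch of the per-member seeding scheme, followed by an illustrative call. The seed arithmetic is plain base R; the commented call assumes a loaded encoder and training data (`x_train`, `y_train` and the weights path are placeholders, not objects shipped with the package):

```r
# Member k of the ensemble is fitted with seed base_seed + k - 1L,
# so the defaults K_ens = 5L, base_seed = 301L yield seeds 301..305.
base_seed <- 301L
K_ens <- 5L
seeds <- base_seed + seq_len(K_ens) - 1L
print(seeds)  # 301 302 303 304 305

# Illustrative call (requires the package, an encoder, and data):
# enc <- foundation_weights_load("moco_encoder.pt")  # path is a placeholder
# ens <- foundation_finetune_ensemble(
#   encoder = enc,
#   x = x_train, y = y_train,        # 4-D array (N, C, H, W) + vector y
#   task = "classification",
#   K_ens = 5L, base_seed = 301L,
#   epochs = 20L, freeze_backbone = TRUE  # forwarded via ...
# )
```

Because every member shares the encoder and differs only in its head's random seed, refitting with the same base_seed reproduces the same ensemble.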