
Fine-tune or linearly probe a Pillar 4 encoder for regression
Source: R/foundation_finetune.R
foundation_fit_regressor.Rd

Regression counterpart of foundation_fit_classifier(). Attaches
a scalar-output head (linear or MLP) on top of a self-supervised
encoder and trains it against a numeric target y. Supports the
same two regimes — linear probing with a frozen backbone or full
fine-tuning with a two-group learning rate — and the same
device ∈ {"cpu", "mps", "cuda"} dispatch.
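For example, the two regimes differ only in freeze_backbone and, for full fine-tuning, the backbone learning-rate multiplier. A minimal sketch using only documented arguments (enc, x and y stand for an encoder, patch array and numeric target you already have):

# Linear probing: backbone stays frozen, only the head is trained
probe <- foundation_fit_regressor(enc, x, y,
                                  freeze_backbone = TRUE, head = "linear")

# Full fine-tuning: backbone trains at lr * backbone_lr_mult, head at lr
tuned <- foundation_fit_regressor(enc, x, y,
                                  freeze_backbone = FALSE, head = "mlp",
                                  lr = 1e-3, backbone_lr_mult = 0.1)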
Usage
foundation_fit_regressor(
  encoder,
  x,
  y,
  freeze_backbone = TRUE,
  head = c("linear", "mlp"),
  hidden = c(64L, 32L),
  dropout = 0,
  epochs = 30L,
  batch_size = 32L,
  lr = 0.001,
  weight_decay = 0,
  backbone_lr_mult = 0.1,
  loss = c("mse", "huber"),
  val_split = 0.2,
  device = c("cpu", "mps", "cuda"),
  seed = NULL,
  verbose = FALSE
)

Arguments
Value
An edaphos_foundation_regressor list with the same slots
as the classifier counterpart plus y_mean, y_sd (target
normalisation constants) and val_rmse_history.
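A sketch of inspecting these extra slots on a fitted object (assuming fit was produced as in the Examples; only base R is used here):

# Target-normalisation constants stored on the fit
fit$y_mean   # mean of y used to centre the target
fit$y_sd     # standard deviation of y used to scale the target

# Validation RMSE recorded per epoch; plot to check convergence
plot(fit$val_rmse_history, type = "l",
     xlab = "epoch", ylab = "validation RMSE")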
Details
Target normalisation is handled internally: y is centred and
scaled before training and un-scaled at predict() time so the
user never has to think about the numerical range of the head.
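Conceptually this amounts to the following base-R arithmetic (a sketch of the behaviour, not the actual implementation; raw_pred stands for the head's output on the normalised scale):

# Forward transform applied to the target before training
y_mean   <- mean(y)
y_sd     <- sd(y)
y_scaled <- (y - y_mean) / y_sd

# Inverse transform applied to head outputs at predict() time
pred_original_scale <- raw_pred * y_sd + y_mean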
Examples
if (FALSE) { # \dontrun{
moco <- foundation_weights_load("edaphos-cerrado-moco-v1")
ds <- readRDS("tools/pretrain/cerrado_dataset.rds")
patches <- array(rnorm(300 * ds$n_channels * 16 * 16),
                 dim = c(300, ds$n_channels, 16, 16))
soc <- rnorm(300, mean = 15, sd = 6)
fit <- foundation_fit_regressor(
  moco, patches, soc,
  freeze_backbone = TRUE, head = "linear",
  epochs = 40L, device = "mps", seed = 1L
)
predict(fit, patches[1:10, , , , drop = FALSE])
} # }