Cross validation logic used by LightGBM

lgb.cv(params = list(), data, nrounds = 10, nfold = 3,
  label = NULL, weight = NULL, obj = NULL, eval = NULL,
  verbose = 1, record = TRUE, eval_freq = 1L, showsd = TRUE,
  stratified = TRUE, folds = NULL, init_model = NULL,
  colnames = NULL, categorical_feature = NULL,
  early_stopping_rounds = NULL, callbacks = list(),
  reset_data = FALSE, ...)

Arguments

params

List of parameters

data

an lgb.Dataset object, used for training

nrounds

number of training rounds

nfold

the original dataset is randomly partitioned into nfold equal-size subsamples.

label

vector of response values. Should be provided only when data is an R matrix.
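
For instance, a raw matrix can be passed together with its labels; a minimal sketch, assuming lgb.cv builds the lgb.Dataset internally when given raw matrix input, as described above:

library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
params <- list(objective = "binary", metric = "binary_logloss")
# data is a raw matrix here, so the response is passed via `label`
model <- lgb.cv(params, data = train$data, label = train$label,
  nrounds = 10, nfold = 3)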

weight

vector of weights. If not NULL, it will be attached to the dataset.
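
A sketch of per-row weights (uniform weights here, purely illustrative; dtrain and params as constructed in the Examples section below):

# one weight per training row; attached to the dataset internally
w <- rep(1, length(train$label))
model <- lgb.cv(params, dtrain, nrounds = 10, nfold = 3, weight = w)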

obj

objective function; can be a character string or a custom objective function. Examples include "regression", "regression_l1", "huber", "binary", "lambdarank", "multiclass".
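
A minimal sketch of a custom objective (squared error), assuming the usual signature function(preds, dtrain) returning per-row gradients and hessians, and the getinfo() accessor for dataset labels:

my_l2_obj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  grad <- preds - labels           # first derivative of 0.5 * (preds - labels)^2
  hess <- rep(1.0, length(preds))  # second derivative is constant
  list(grad = grad, hess = hess)
}
model <- lgb.cv(list(metric = "l2"), dtrain, nrounds = 10, nfold = 3,
  obj = my_l2_obj)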

eval

evaluation function; can be a character string, a list of character strings, or a custom evaluation function
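
Similarly, a sketch of a custom evaluation function (RMSE), assuming it must return a list with name, value, and higher_better fields:

my_rmse <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  list(name = "rmse",
       value = sqrt(mean((preds - labels)^2)),
       higher_better = FALSE)  # lower RMSE is better
}
model <- lgb.cv(params, dtrain, nrounds = 10, nfold = 3, eval = my_rmse)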

verbose

verbosity for output; if <= 0, printing of evaluation results during training is also disabled

record

Boolean; if TRUE, iteration messages will be recorded to booster$record_evals

eval_freq

evaluation output frequency; only takes effect when verbose > 0

showsd

boolean, whether to show the standard deviation of the cross-validation metrics

stratified

a boolean indicating whether sampling of folds should be stratified by the values of outcome labels.

folds

a list of pre-defined CV folds (each element must be a vector of the test fold's indices). When folds are supplied, the nfold and stratified parameters are ignored.
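
A sketch of building such a list by hand (the three-way random split here is illustrative, not the stratified scheme used internally):

n <- length(train$label)
idx <- sample(seq_len(n))
# three folds; each element is the vector of that fold's test indices
my_folds <- unname(split(idx, rep_len(1:3, n)))
model <- lgb.cv(params, dtrain, nrounds = 10, folds = my_folds)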

init_model

path to a model file or an lgb.Booster object; training will continue from this model
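
For example, continued training from a previously saved booster (a sketch; the file name is hypothetical and assumes a model written by lgb.save()):

# resume cross-validation training from a saved model file
model <- lgb.cv(params, dtrain, nrounds = 10, nfold = 3,
  init_model = "lightgbm_model.txt")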

colnames

feature names; if not NULL, these will overwrite the column names of the dataset

categorical_feature

list of str or int. An int value is interpreted as a column index; a str value as a feature name.
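
Both forms, as a sketch (the feature names here are hypothetical):

# by column index ...
model <- lgb.cv(params, dtrain, nrounds = 10, nfold = 3,
  categorical_feature = c(1L, 3L))
# ... or by feature name
model <- lgb.cv(params, dtrain, nrounds = 10, nfold = 3,
  categorical_feature = c("color", "region"))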

early_stopping_rounds

int. Activates early stopping. Requires at least one validation dataset and one metric. If there is more than one, all of them will be checked except the training data. Returns the model with (best_iter + early_stopping_rounds) iterations. If early stopping occurs, the model will have a best_iter field.
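
A sketch of reading the result back, assuming the best_iter field described above:

model <- lgb.cv(params, dtrain, nrounds = 100, nfold = 5,
  early_stopping_rounds = 10)
# iteration at which the validation metric stopped improving
model$best_iter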

callbacks

list of callback functions that are applied at each iteration.

reset_data

Boolean; setting it to TRUE (not the default) will transform the booster model into a predictor model, which frees up memory and the original datasets

...

other parameters; see Parameters.rst for more information. A few key parameters (a sketch of passing them follows this list):

  • boosting: Boosting type. "gbdt" or "dart"

  • num_leaves: number of leaves in one tree. Defaults to 127

  • max_depth: Limit the max depth of the tree model. This is used to deal with overfitting when #data is small. Trees still grow leaf-wise.

  • num_threads: Number of threads for LightGBM. For the best speed, set this to the number of real CPU cores, not the number of threads (most CPUs use hyper-threading to provide 2 threads per core).
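
For instance, these can be passed directly as extra arguments (a sketch; the values are illustrative, not recommendations):

model <- lgb.cv(params, dtrain, nrounds = 10, nfold = 3,
  boosting = "gbdt", num_leaves = 31L, max_depth = -1L, num_threads = 2L)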

Value

a trained lgb.CVBooster model.
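
When record = TRUE, per-iteration results can be read back from the returned booster; a sketch, assuming record_evals is keyed by dataset and metric name:

# mean and standard deviation of the "l2" metric across folds, per iteration
unlist(model$record_evals$valid$l2$eval)
unlist(model$record_evals$valid$l2$eval_err)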

Examples

library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "regression", metric = "l2")
model <- lgb.cv(
  params,
  dtrain,
  10,
  nfold = 5,
  min_data = 1,
  learning_rate = 1,
  early_stopping_rounds = 10
)
#> [1]: valid's l2:0.000460829+0.000921659
#> [2]: valid's l2:0.000460829+0.000921659
#> [3]: valid's l2:0.000460829+0.000921659
#> [4]: valid's l2:0.000460829+0.000921659
#> [5]: valid's l2:0.000460829+0.000921659
#> [6]: valid's l2:0.000460829+0.000921659
#> [7]: valid's l2:0.000460829+0.000921659
#> [8]: valid's l2:0.000460829+0.000921659
#> [9]: valid's l2:0.000460829+0.000921659
#> [10]: valid's l2:0.000460829+0.000921659