We have a `gpus` parameter in `ImageClassificationPredictConfig`, but it's never used anywhere:

```
gpus (int): Number of GPUs to use for inference.
    Defaults to all of the available GPUs found on the machine.
```
`pl.Trainer` isn't used in `predict()`:

```python
with torch.no_grad():
    y_hat = (
        torch.softmax(classifier_module(input_data.unsqueeze(0)), dim=1)
        .squeeze(0)
        .numpy()
    )

predictions.append((filepath, detection_category, detection_conf, bbox, y_hat))
```
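For reference, here is a minimal sketch of how an inference loop like the one above could honor a `gpus` setting by moving the module and inputs to a device before the forward pass. `predict_on_device` is a hypothetical helper for illustration, not zamba's actual API, and it assumes a single-GPU use case:

```python
# A minimal sketch (not zamba's actual API) of device-aware inference:
# move the module and inputs onto a GPU when one is requested and available.
import torch


def predict_on_device(classifier_module, input_data, gpus=0):
    # pick CUDA only when the caller asked for GPUs and one is present
    use_cuda = bool(gpus) and torch.cuda.is_available()
    device = torch.device("cuda:0" if use_cuda else "cpu")
    classifier_module = classifier_module.to(device).eval()
    with torch.no_grad():
        logits = classifier_module(input_data.unsqueeze(0).to(device))
        # softmax over the class dimension, then back to a numpy vector on CPU
        return torch.softmax(logits, dim=1).squeeze(0).cpu().numpy()
```

This keeps the existing `torch.no_grad()` / `softmax` logic intact and only adds the `.to(device)` calls.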
In training, GPUs are configured based on `devices` and `accelerator`, and then passed to the trainer in `train()`:

```
accelerator (str): PyTorch Lightning accelerator type ('gpu' or 'cpu').
    Defaults to 'gpu' if CUDA is available, otherwise 'cpu'.
devices (Any): Which devices to use for training. Can be int, list of ints, or 'auto'.
    Defaults to 'auto'.
```

```python
train_trainer = pl.Trainer(
    max_epochs=config.max_epochs,
    logger=mlflow_logger,
    callbacks=callbacks,
    devices=config.devices,
    accelerator=config.accelerator,
    strategy=strategy,
    log_every_n_steps=log_every_n_steps,
    accumulate_grad_batches=accumulate_n_batches,
)
```
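One way to reconcile the two configs would be to map the legacy `gpus` count onto the same `accelerator`/`devices` pair the trainer already accepts, so prediction and training share one code path. A hedged sketch; `resolve_inference_devices` is a hypothetical helper, not zamba's API:

```python
# Hypothetical helper: translate a legacy `gpus` count into the
# accelerator/devices pair that pl.Trainer already understands.
def resolve_inference_devices(gpus=None, cuda_available=False):
    if gpus is None:
        # default promised by the docstring: use all available GPUs
        return ("gpu", "auto") if cuda_available else ("cpu", "auto")
    if gpus > 0 and cuda_available:
        return "gpu", gpus
    return "cpu", "auto"
```

The returned pair could then be forwarded to `pl.Trainer(accelerator=..., devices=...)` in `predict()`, exactly as `train()` does today.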
Relevant code (at commit d4fe633):

- `gpus` parameter in `ImageClassificationPredictConfig`: `zamba/zamba/images/config.py`, lines 83 to 84
- `predict()` without `pl.Trainer`: `zamba/zamba/images/manager.py`, lines 82 to 88
- `devices`/`accelerator` training config: `zamba/zamba/images/config.py`, lines 248 to 251
- trainer setup in `train()`: `zamba/zamba/images/manager.py`, lines 315 to 324