Hyperparameter Tuning with Ray Tune
Hyperparameter tuning can make the difference between an average model and a highly accurate one. Often, simple things like choosing a different learning rate or changing a network layer size can have a dramatic impact on your model's performance.
Fortunately, there are tools that help find the best combination of parameters. Ray Tune is an industry-standard tool for distributed hyperparameter tuning. Ray Tune includes the latest hyperparameter search algorithms, integrates with various analysis libraries, and natively supports distributed training through Ray's distributed machine learning engine.
In this tutorial, we show you how to integrate Ray Tune into your PyTorch training workflow. We extend this tutorial from the PyTorch documentation to train a CIFAR10 image classifier.
As you will see, only a few slight modifications are needed. Specifically, we need to
- wrap data loading and training in functions,
- make some network parameters configurable,
- add checkpointing (optional),
- and define the search space for model tuning.
To run this tutorial, make sure the following packages are installed:
- ray[tune]: distributed hyperparameter tuning library
- torchvision: for the data transforms
Setup / Imports
Let's start with the imports:
from functools import partial
import os
import tempfile
from pathlib import Path
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import random_split
import torchvision
import torchvision.transforms as transforms
from ray import tune
from ray import train
from ray.train import Checkpoint, get_checkpoint
from ray.tune.schedulers import ASHAScheduler
import ray.cloudpickle as pickle
Most of the imports are needed for building the PyTorch model. Only the last few imports are for Ray Tune.
Data loaders
We wrap the data loaders in their own function and pass a global data directory. This way we can share a data directory between different trials.
def load_data(data_dir="./data"):
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    )

    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform
    )

    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform
    )

    return trainset, testset
Configurable neural network
We can only tune parameters that are configurable. In this example, we can specify the layer sizes of the fully connected layers:
class Net(nn.Module):
    def __init__(self, l1=120, l2=84):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
The train function
Now it gets interesting, because we introduce some changes to the example from the PyTorch documentation.
We wrap the training script in a function train_cifar(config, data_dir=None). The config parameter receives the hyperparameters we would like to train with. The data_dir specifies the directory where we load and store the data, so that multiple runs can share the same data source. We also load the model and optimizer state at the start of the run, if a checkpoint is provided. Further down in this tutorial you will find information on how to save the checkpoint and what it is used for.
net = Net(config["l1"], config["l2"])

checkpoint = get_checkpoint()
if checkpoint:
    with checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            checkpoint_state = pickle.load(fp)
        start_epoch = checkpoint_state["epoch"]
        net.load_state_dict(checkpoint_state["net_state_dict"])
        optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
else:
    start_epoch = 0
The learning rate of the optimizer is made configurable, too:
optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)
We also split the training data into a training and a validation subset. We thus train on 80% of the data and calculate the validation loss on the remaining 20%. The batch sizes with which we iterate through the training and test sets are configurable as well.
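As a minimal sketch of the split arithmetic (assuming CIFAR10's 50,000 training images), the subset lengths passed to random_split work out as follows:

```python
# Hypothetical illustration of the 80/20 split sizes passed to random_split,
# assuming CIFAR10's 50,000 training images.
n_train_total = 50000
test_abs = int(n_train_total * 0.8)  # size of the training subset
val_len = n_train_total - test_abs   # size of the validation subset
print(test_abs, val_len)  # 40000 10000
```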
Adding (multi) GPU support with DataParallel
Image classification benefits largely from GPUs. Luckily, we can continue to use PyTorch's abstractions in Ray Tune. Thus, we can wrap our model in nn.DataParallel to support data parallel training on multiple GPUs:
device = "cpu"
if torch.cuda.is_available():
    device = "cuda:0"
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)
net.to(device)
By using a device variable, we make sure that training also works when no GPU is available. PyTorch requires us to send our data to the GPU memory explicitly, like this:
for i, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs, labels = inputs.to(device), labels.to(device)
The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray also supports fractional GPUs, so we can share GPUs among trials, as long as the model still fits in GPU memory. We will come back to that later.
Communicating with Ray Tune
The most interesting part is the communication with Ray Tune:
checkpoint_data = {
    "epoch": epoch,
    "net_state_dict": net.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
with tempfile.TemporaryDirectory() as checkpoint_dir:
    data_path = Path(checkpoint_dir) / "data.pkl"
    with open(data_path, "wb") as fp:
        pickle.dump(checkpoint_data, fp)

    checkpoint = Checkpoint.from_directory(checkpoint_dir)
    train.report(
        {"loss": val_loss / val_steps, "accuracy": correct / total},
        checkpoint=checkpoint,
    )
Here we first save a checkpoint and then report some metrics back to Ray Tune. Specifically, we send the validation loss and accuracy back to Ray Tune. Ray Tune can then use these metrics to decide which hyperparameter configuration leads to the best results. These metrics can also be used to stop badly performing trials early, in order to avoid wasting resources on those trials.
Saving the checkpoint is optional. However, it is necessary if we want to use advanced schedulers like Population Based Training. Also, by saving the checkpoint we can later load the trained models and validate them on a test set. Lastly, saving checkpoints is useful for fault tolerance, and it allows us to interrupt training and continue it later.
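The save/load round trip itself only uses the standard library. A self-contained sketch of the same pattern, with a dummy payload standing in for the real model and optimizer state dicts, looks like this:

```python
import pickle
import tempfile
from pathlib import Path

# Dummy checkpoint payload standing in for real state dicts.
checkpoint_data = {"epoch": 3, "net_state_dict": {"w": [0.1, 0.2]}}

with tempfile.TemporaryDirectory() as checkpoint_dir:
    data_path = Path(checkpoint_dir) / "data.pkl"
    # Save: serialize the training state to a file in the checkpoint directory.
    with open(data_path, "wb") as fp:
        pickle.dump(checkpoint_data, fp)
    # Load: e.g. after a restart, restore the state from the same file.
    with open(data_path, "rb") as fp:
        restored = pickle.load(fp)

print(restored["epoch"])  # 3
```

In the tutorial, the directory produced this way is handed to Checkpoint.from_directory so Ray Tune can track and restore it.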
Full training function
The full code example looks like this:
def train_cifar(config, data_dir=None):
    net = Net(config["l1"], config["l2"])

    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            net = nn.DataParallel(net)
    net.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

    checkpoint = get_checkpoint()
    if checkpoint:
        with checkpoint.as_directory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "rb") as fp:
                checkpoint_state = pickle.load(fp)
            start_epoch = checkpoint_state["epoch"]
            net.load_state_dict(checkpoint_state["net_state_dict"])
            optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
    else:
        start_epoch = 0

    trainset, testset = load_data(data_dir)

    test_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(
        trainset, [test_abs, len(trainset) - test_abs]
    )

    trainloader = torch.utils.data.DataLoader(
        train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )
    valloader = torch.utils.data.DataLoader(
        val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )

    for epoch in range(start_epoch, 10):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_steps = 0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            epoch_steps += 1
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(
                    "[%d, %5d] loss: %.3f"
                    % (epoch + 1, i + 1, running_loss / epoch_steps)
                )
                running_loss = 0.0

        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        for i, data in enumerate(valloader, 0):
            with torch.no_grad():
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)

                outputs = net(inputs)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

                loss = criterion(outputs, labels)
                val_loss += loss.cpu().numpy()
                val_steps += 1

        checkpoint_data = {
            "epoch": epoch,
            "net_state_dict": net.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        }
        with tempfile.TemporaryDirectory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "wb") as fp:
                pickle.dump(checkpoint_data, fp)

            checkpoint = Checkpoint.from_directory(checkpoint_dir)
            train.report(
                {"loss": val_loss / val_steps, "accuracy": correct / total},
                checkpoint=checkpoint,
            )

    print("Finished Training")
As you can see, most of the code is adapted directly from the original example.
Test set accuracy
Commonly, the performance of a machine learning model is tested on a hold-out test set with data that has not been used for training the model. We also wrap this in a function:
def test_accuracy(net, device="cpu"):
    trainset, testset = load_data()

    testloader = torch.utils.data.DataLoader(
        testset, batch_size=4, shuffle=False, num_workers=2
    )

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    return correct / total
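The correct/total bookkeeping above is independent of PyTorch; with made-up predictions and labels it reduces to:

```python
# Made-up predictions and labels illustrating the accuracy bookkeeping.
predicted = [3, 8, 8, 0, 6, 6]
labels    = [3, 8, 1, 0, 4, 6]

correct = sum(int(p == l) for p, l in zip(predicted, labels))  # 4 matches
total = len(labels)                                            # 6 samples
accuracy = correct / total                                     # 4/6
```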
The function also expects a device parameter, so we can do the test set validation on a GPU.
Configuring the search space
Lastly, we need to define Ray Tune's search space. Here is an example:
config = {
    "l1": tune.choice([2 ** i for i in range(9)]),
    "l2": tune.choice([2 ** i for i in range(9)]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16])
}
tune.choice() accepts a list of values that are uniformly sampled from. In this example, the l1 and l2 parameters are powers of 2 between 1 and 256, i.e. 1, 2, 4, 8, 16, 32, 64, 128, or 256. The lr (learning rate) is sampled log-uniformly between 0.0001 and 0.1. Lastly, the batch size is a choice between 2, 4, 8, and 16.
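In plain Python, the two sampling strategies can be sketched like this. Note that loguniform here is a hypothetical stand-in illustrating what tune.loguniform does, not the Ray API itself:

```python
import math
import random

def loguniform(low, high):
    # Sample so that the exponent, not the value, is uniform: small
    # learning rates get as much probability mass as large ones.
    return math.exp(random.uniform(math.log(low), math.log(high)))

# One hypothetical configuration draw, mirroring the search space above.
sample = {
    "l1": random.choice([2 ** i for i in range(9)]),  # one of 1, 2, ..., 256
    "lr": loguniform(1e-4, 1e-1),                     # between 0.0001 and 0.1
    "batch_size": random.choice([2, 4, 8, 16]),
}
print(sample)
```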
At each trial, Ray Tune will now randomly sample a combination of parameters from these search spaces. It will then train a number of models in parallel and find the best performing one among these. We also use the ASHAScheduler, which will terminate badly performing trials early.
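As a rough sketch of the rung arithmetic behind that early stopping (promotion details simplified): with the grace_period=1, reduction_factor=2, and max_t=10 settings used in main() below, trials are compared at geometrically spaced iteration counts, and poorly ranked trials are stopped at each rung:

```python
# Rung levels for asynchronous successive halving, assuming the
# grace_period=1, reduction_factor=2, max_t=10 settings used below.
grace_period, reduction_factor, max_t = 1, 2, 10

rungs = []
t = grace_period
while t < max_t:
    rungs.append(t)
    t *= reduction_factor

print(rungs)  # [1, 2, 4, 8]
```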
We wrap the train_cifar function with functools.partial to set the constant data_dir parameter. We can also tell Ray Tune what resources should be available for each trial:
gpus_per_trial = 2
# ...
result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 8, "gpu": gpus_per_trial},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
    checkpoint_at_end=True)
You can specify the number of CPUs, which are then available e.g. to increase the num_workers of the PyTorch DataLoader instances. The selected number of GPUs is made visible to PyTorch in each trial. Trials do not have access to GPUs that haven't been requested for them, so you don't have to worry about two trials using the same set of resources.
Here we can also specify fractional GPUs, so something like gpus_per_trial=0.5 is completely valid. The trials will then share GPUs among each other. You just have to make sure that the models still fit in GPU memory.
After training the models, we will find the best performing one and load the trained network from the checkpoint file. We then obtain the test set accuracy and report everything by printing.
The full main function looks like this:
def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
    data_dir = os.path.abspath("./data")
    load_data(data_dir)
    config = {
        "l1": tune.choice([2**i for i in range(9)]),
        "l2": tune.choice([2**i for i in range(9)]),
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([2, 4, 8, 16]),
    }
    scheduler = ASHAScheduler(
        metric="loss",
        mode="min",
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2,
    )
    result = tune.run(
        partial(train_cifar, data_dir=data_dir),
        resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
        config=config,
        num_samples=num_samples,
        scheduler=scheduler,
    )

    best_trial = result.get_best_trial("loss", "min", "last")
    print(f"Best trial config: {best_trial.config}")
    print(f"Best trial final validation loss: {best_trial.last_result['loss']}")
    print(f"Best trial final validation accuracy: {best_trial.last_result['accuracy']}")

    best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if gpus_per_trial > 1:
            best_trained_model = nn.DataParallel(best_trained_model)
    best_trained_model.to(device)

    best_checkpoint = result.get_best_checkpoint(trial=best_trial, metric="accuracy", mode="max")
    with best_checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            best_checkpoint_data = pickle.load(fp)

        best_trained_model.load_state_dict(best_checkpoint_data["net_state_dict"])
        test_acc = test_accuracy(best_trained_model, device)
        print("Best trial test set accuracy: {}".format(test_acc))


if __name__ == "__main__":
    # You can change the number of GPUs per trial here:
    main(num_samples=10, max_num_epochs=10, gpus_per_trial=0)
2025-03-21 17:07:17,236 WARNING services.py:1889 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 2147479552 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=10.24gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
2025-03-21 17:07:17,377 INFO worker.py:1642 -- Started a local Ray instance.
2025-03-21 17:07:18,751 INFO tune.py:228 -- Initializing Ray automatically. For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run(...)`.
2025-03-21 17:07:18,753 INFO tune.py:654 -- [output] This will use the new output engine with verbosity 2. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see https://github.com/ray-project/ray/issues/36949
+--------------------------------------------------------------------+
| Configuration for experiment train_cifar_2025-03-21_17-07-18 |
+--------------------------------------------------------------------+
| Search algorithm BasicVariantGenerator |
| Scheduler AsyncHyperBandScheduler |
| Number of trials 10 |
+--------------------------------------------------------------------+
View detailed results here: /var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18
To visualize your results with TensorBoard, run: `tensorboard --logdir /var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18`
Trial status: 10 PENDING
Current time: 2025-03-21 17:07:19. Total running time: 0s
Logical resource usage: 0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_f231b_00000 PENDING 16 1 0.00213327 2 |
| train_cifar_f231b_00001 PENDING 1 2 0.013416 4 |
| train_cifar_f231b_00002 PENDING 256 64 0.0113784 2 |
| train_cifar_f231b_00003 PENDING 64 256 0.0274071 8 |
| train_cifar_f231b_00004 PENDING 16 2 0.056666 4 |
| train_cifar_f231b_00005 PENDING 8 64 0.000353097 4 |
| train_cifar_f231b_00006 PENDING 16 4 0.000147684 8 |
| train_cifar_f231b_00007 PENDING 256 256 0.00477469 8 |
| train_cifar_f231b_00008 PENDING 128 256 0.0306227 8 |
| train_cifar_f231b_00009 PENDING 2 16 0.0286986 2 |
+-------------------------------------------------------------------------------+
Trial train_cifar_f231b_00002 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00002 config |
+--------------------------------------------------+
| batch_size 2 |
| l1 256 |
| l2 64 |
| lr 0.01138 |
+--------------------------------------------------+
Trial train_cifar_f231b_00004 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00004 config |
+--------------------------------------------------+
| batch_size 4 |
| l1 16 |
| l2 2 |
| lr 0.05667 |
+--------------------------------------------------+
Trial train_cifar_f231b_00003 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00003 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 64 |
| l2 256 |
| lr 0.02741 |
+--------------------------------------------------+
Trial train_cifar_f231b_00006 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00006 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 16 |
| l2 4 |
| lr 0.00015 |
+--------------------------------------------------+
Trial train_cifar_f231b_00001 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00001 config |
+--------------------------------------------------+
| batch_size 4 |
| l1 1 |
| l2 2 |
| lr 0.01342 |
+--------------------------------------------------+
Trial train_cifar_f231b_00007 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00007 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 256 |
| l2 256 |
| lr 0.00477 |
+--------------------------------------------------+
Trial train_cifar_f231b_00000 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00000 config |
+--------------------------------------------------+
| batch_size 2 |
| l1 16 |
| l2 1 |
| lr 0.00213 |
+--------------------------------------------------+
Trial train_cifar_f231b_00005 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00005 config |
+--------------------------------------------------+
| batch_size 4 |
| l1 8 |
| l2 64 |
| lr 0.00035 |
+--------------------------------------------------+
(func pid=4473) [1, 2000] loss: 2.324
Trial status: 8 RUNNING | 2 PENDING
Current time: 2025-03-21 17:07:49. Total running time: 30s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_f231b_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_f231b_00001 RUNNING 1 2 0.013416 4 |
| train_cifar_f231b_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_f231b_00003 RUNNING 64 256 0.0274071 8 |
| train_cifar_f231b_00004 RUNNING 16 2 0.056666 4 |
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 |
| train_cifar_f231b_00008 PENDING 128 256 0.0306227 8 |
| train_cifar_f231b_00009 PENDING 2 16 0.0286986 2 |
+-------------------------------------------------------------------------------+
(func pid=4473) [1, 4000] loss: 1.152 [repeated 8x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)
(func pid=4476) [1, 4000] loss: 1.037 [repeated 6x across cluster]
(func pid=4473) [1, 6000] loss: 0.769 [repeated 2x across cluster]
Trial status: 8 RUNNING | 2 PENDING
Current time: 2025-03-21 17:08:19. Total running time: 1min 0s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_f231b_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_f231b_00001 RUNNING 1 2 0.013416 4 |
| train_cifar_f231b_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_f231b_00003 RUNNING 64 256 0.0274071 8 |
| train_cifar_f231b_00004 RUNNING 16 2 0.056666 4 |
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 |
| train_cifar_f231b_00008 PENDING 128 256 0.0306227 8 |
| train_cifar_f231b_00009 PENDING 2 16 0.0286986 2 |
+-------------------------------------------------------------------------------+
Trial train_cifar_f231b_00006 finished iteration 1 at 2025-03-21 17:08:19. Total running time: 1min 0s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 54.59943 |
| time_total_s 54.59943 |
| training_iteration 1 |
| accuracy 0.1021 |
| loss 2.303 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000000
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000000)
Trial train_cifar_f231b_00003 finished iteration 1 at 2025-03-21 17:08:22. Total running time: 1min 3s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00003 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 57.08528 |
| time_total_s 57.08528 |
| training_iteration 1 |
| accuracy 0.148 |
| loss 2.42043 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00003 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00003_3_batch_size=8,l1=64,l2=256,lr=0.0274_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00003 completed after 1 iterations at 2025-03-21 17:08:22. Total running time: 1min 3s
Trial train_cifar_f231b_00008 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_f231b_00008 config |
+--------------------------------------------------+
| batch_size 8 |
| l1 128 |
| l2 256 |
| lr 0.03062 |
+--------------------------------------------------+
Trial train_cifar_f231b_00007 finished iteration 1 at 2025-03-21 17:08:22. Total running time: 1min 3s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 56.81921 |
| time_total_s 56.81921 |
| training_iteration 1 |
| accuracy 0.4478 |
| loss 1.51747 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000000
(func pid=4473) [1, 8000] loss: 0.576 [repeated 5x across cluster]
(func pid=4478) [1, 8000] loss: 0.470 [repeated 3x across cluster]
(func pid=4484) [2, 2000] loss: 1.400 [repeated 3x across cluster]
Trial status: 8 RUNNING | 1 TERMINATED | 1 PENDING
Current time: 2025-03-21 17:08:49. Total running time: 1min 30s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_f231b_00001 RUNNING 1 2 0.013416 4 |
| train_cifar_f231b_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_f231b_00004 RUNNING 16 2 0.056666 4 |
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 1 54.5994 2.303 0.1021 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 1 56.8192 1.51747 0.4478 |
| train_cifar_f231b_00008 RUNNING 128 256 0.0306227 8 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00009 PENDING 2 16 0.0286986 2 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4474) [1, 10000] loss: 0.462 [repeated 3x across cluster]
(func pid=4479) [2, 4000] loss: 1.146 [repeated 3x across cluster]
(func pid=4473) [1, 12000] loss: 0.384 [repeated 3x across cluster]
Trial train_cifar_f231b_00001 finished iteration 1 at 2025-03-21 17:09:03. Total running time: 1min 45s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00001 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 98.71102 |
| time_total_s 98.71102 |
| training_iteration 1 |
| accuracy 0.1028 |
| loss 2.30777 |
+------------------------------------------------------------+
(func pid=4474) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00001_1_batch_size=4,l1=1,l2=2,lr=0.0134_2025-03-21_17-07-18/checkpoint_000000) [repeated 3x across cluster]
Trial train_cifar_f231b_00001 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00001_1_batch_size=4,l1=1,l2=2,lr=0.0134_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00001 completed after 1 iterations at 2025-03-21 17:09:03. Total running time: 1min 45s
Trial train_cifar_f231b_00009 started with configuration:
+-------------------------------------------------+
| Trial train_cifar_f231b_00009 config |
+-------------------------------------------------+
| batch_size 2 |
| l1 2 |
| l2 16 |
| lr 0.0287 |
+-------------------------------------------------+
Trial train_cifar_f231b_00004 finished iteration 1 at 2025-03-21 17:09:05. Total running time: 1min 46s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00004 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 100.22192 |
| time_total_s 100.22192 |
| training_iteration 1 |
| accuracy 0.0973 |
| loss 2.31545 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00004 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00004_4_batch_size=4,l1=16,l2=2,lr=0.0567_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00004 completed after 1 iterations at 2025-03-21 17:09:05. Total running time: 1min 46s
Trial train_cifar_f231b_00005 finished iteration 1 at 2025-03-21 17:09:05. Total running time: 1min 47s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 99.2248 |
| time_total_s 99.2248 |
| training_iteration 1 |
| accuracy 0.3781 |
| loss 1.66134 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00006 finished iteration 2 at 2025-03-21 17:09:11. Total running time: 1min 52s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 51.50313 |
| time_total_s 106.10257 |
| training_iteration 2 |
| accuracy 0.1792 |
| loss 2.27292 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000001
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000001) [repeated 3x across cluster]
(func pid=4475) [1, 12000] loss: 0.386 [repeated 2x across cluster]
Trial train_cifar_f231b_00007 finished iteration 2 at 2025-03-21 17:09:15. Total running time: 1min 56s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 53.38768 |
| time_total_s 110.20689 |
| training_iteration 2 |
| accuracy 0.5198 |
| loss 1.34171 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000001
Trial train_cifar_f231b_00008 finished iteration 1 at 2025-03-21 17:09:18. Total running time: 1min 59s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00008 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 56.68488 |
| time_total_s 56.68488 |
| training_iteration 1 |
| accuracy 0.2221 |
| loss 2.11022 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00008 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2025-03-21_17-07-18/checkpoint_000000
(func pid=4476) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2025-03-21_17-07-18/checkpoint_000000) [repeated 2x across cluster]
Trial status: 7 RUNNING | 3 TERMINATED
Current time: 2025-03-21 17:09:19. Total running time: 2min 0s
Logical resource usage: 14.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_f231b_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 1 99.2248 1.66134 0.3781 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 2 106.103 2.27292 0.1792 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 2 110.207 1.34171 0.5198 |
| train_cifar_f231b_00008 RUNNING 128 256 0.0306227 8 1 56.6849 2.11022 0.2221 |
| train_cifar_f231b_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4474) [1, 2000] loss: 2.340 [repeated 2x across cluster]
(func pid=4479) [3, 2000] loss: 2.259 [repeated 2x across cluster]
(func pid=4484) [3, 2000] loss: 1.232 [repeated 3x across cluster]
(func pid=4473) [1, 18000] loss: 0.256 [repeated 4x across cluster]
(func pid=4474) [1, 6000] loss: 0.778 [repeated 3x across cluster]
Trial status: 7 RUNNING | 3 TERMINATED
Current time: 2025-03-21 17:09:49. Total running time: 2min 30s
Logical resource usage: 14.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00000 RUNNING 16 1 0.00213327 2 |
| train_cifar_f231b_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 1 99.2248 1.66134 0.3781 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 2 106.103 2.27292 0.1792 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 2 110.207 1.34171 0.5198 |
| train_cifar_f231b_00008 RUNNING 128 256 0.0306227 8 1 56.6849 2.11022 0.2221 |
| train_cifar_f231b_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4476) [2, 4000] loss: 1.055 [repeated 3x across cluster]
Trial train_cifar_f231b_00006 finished iteration 3 at 2025-03-21 17:09:55. Total running time: 2min 37s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000002 |
| time_this_iter_s 44.68938 |
| time_total_s 150.79195 |
| training_iteration 3 |
| accuracy 0.2333 |
| loss 2.16889 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000002
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000002)
(func pid=4474) [1, 8000] loss: 0.583 [repeated 2x across cluster]
Trial train_cifar_f231b_00007 finished iteration 3 at 2025-03-21 17:10:02. Total running time: 2min 43s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000002 |
| time_this_iter_s 47.10437 |
| time_total_s 157.31126 |
| training_iteration 3 |
| accuracy 0.5505 |
| loss 1.27905 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000002
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000002)
Trial train_cifar_f231b_00008 finished iteration 2 at 2025-03-21 17:10:09. Total running time: 2min 50s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00008 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 50.41293 |
| time_total_s 107.09781 |
| training_iteration 2 |
| accuracy 0.222 |
| loss 2.08701 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00008 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2025-03-21_17-07-18/checkpoint_000001
Trial train_cifar_f231b_00008 completed after 2 iterations at 2025-03-21 17:10:09. Total running time: 2min 50s
(func pid=4476) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2025-03-21_17-07-18/checkpoint_000001)
(func pid=4479) [4, 2000] loss: 2.127 [repeated 3x across cluster]
Trial train_cifar_f231b_00000 finished iteration 1 at 2025-03-21 17:10:14. Total running time: 2min 55s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00000 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 169.32698 |
| time_total_s 169.32698 |
| training_iteration 1 |
| accuracy 0.0966 |
| loss 2.30395 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00000 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00000 completed after 1 iterations at 2025-03-21 17:10:14. Total running time: 2min 55s
(func pid=4473) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2025-03-21_17-07-18/checkpoint_000000)
(func pid=4475) [1, 20000] loss: 0.232 [repeated 3x across cluster]
Trial status: 5 TERMINATED | 5 RUNNING
Current time: 2025-03-21 17:10:19. Total running time: 3min 0s
Logical resource usage: 10.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00002 RUNNING 256 64 0.0113784 2 |
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 1 99.2248 1.66134 0.3781 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 3 150.792 2.16889 0.2333 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 3 157.311 1.27905 0.5505 |
| train_cifar_f231b_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4479) [4, 4000] loss: 1.011 [repeated 2x across cluster]
Trial train_cifar_f231b_00005 finished iteration 2 at 2025-03-21 17:10:24. Total running time: 3min 5s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000001 |
| time_this_iter_s 78.22336 |
| time_total_s 177.44816 |
| training_iteration 2 |
| accuracy 0.4652 |
| loss 1.45765 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000001
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000001)
(func pid=4484) [4, 4000] loss: 0.581 [repeated 2x across cluster]
Trial train_cifar_f231b_00002 finished iteration 1 at 2025-03-21 17:10:33. Total running time: 3min 14s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00002 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 188.7492 |
| time_total_s 188.7492 |
| training_iteration 1 |
| accuracy 0.1005 |
| loss 2.32115 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00002 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00002_2_batch_size=2,l1=256,l2=64,lr=0.0114_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00002 completed after 1 iterations at 2025-03-21 17:10:33. Total running time: 3min 14s
(func pid=4475) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00002_2_batch_size=2,l1=256,l2=64,lr=0.0114_2025-03-21_17-07-18/checkpoint_000000)
Trial train_cifar_f231b_00006 finished iteration 4 at 2025-03-21 17:10:35. Total running time: 3min 16s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000003 |
| time_this_iter_s 39.3127 |
| time_total_s 190.10464 |
| training_iteration 4 |
| accuracy 0.2828 |
| loss 1.90732 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000003
Trial train_cifar_f231b_00007 finished iteration 4 at 2025-03-21 17:10:43. Total running time: 3min 24s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000003 |
| time_this_iter_s 40.59188 |
| time_total_s 197.90314 |
| training_iteration 4 |
| accuracy 0.5597 |
| loss 1.28452 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000003
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000003) [repeated 2x across cluster]
(func pid=4474) [1, 16000] loss: 0.292 [repeated 3x across cluster]
Trial status: 6 TERMINATED | 4 RUNNING
Current time: 2025-03-21 17:10:49. Total running time: 3min 30s
Logical resource usage: 8.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 2 177.448 1.45765 0.4652 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 4 190.105 1.90732 0.2828 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 4 197.903 1.28452 0.5597 |
| train_cifar_f231b_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4474) [1, 18000] loss: 0.259 [repeated 3x across cluster]
(func pid=4474) [1, 20000] loss: 0.233 [repeated 4x across cluster]
Trial train_cifar_f231b_00006 finished iteration 5 at 2025-03-21 17:11:09. Total running time: 3min 50s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000004 |
| time_this_iter_s 34.28823 |
| time_total_s 224.39288 |
| training_iteration 5 |
| accuracy 0.3211 |
| loss 1.78305 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 5 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000004
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000004)
Trial status: 6 TERMINATED | 4 RUNNING
Current time: 2025-03-21 17:11:19. Total running time: 4min 0s
Logical resource usage: 8.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 2 177.448 1.45765 0.4652 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 5 224.393 1.78305 0.3211 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 4 197.903 1.28452 0.5597 |
| train_cifar_f231b_00009 RUNNING 2 16 0.0286986 2 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [3, 10000] loss: 0.274 [repeated 3x across cluster]
Trial train_cifar_f231b_00007 finished iteration 5 at 2025-03-21 17:11:20. Total running time: 4min 1s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000004 |
| time_this_iter_s 37.09912 |
| time_total_s 235.00226 |
| training_iteration 5 |
| accuracy 0.5809 |
| loss 1.2468 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 5 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000004
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000004)
Trial train_cifar_f231b_00009 finished iteration 1 at 2025-03-21 17:11:22. Total running time: 4min 3s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00009 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000000 |
| time_this_iter_s 138.41704 |
| time_total_s 138.41704 |
| training_iteration 1 |
| accuracy 0.097 |
| loss 2.35534 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00009 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00009_9_batch_size=2,l1=2,l2=16,lr=0.0287_2025-03-21_17-07-18/checkpoint_000000
Trial train_cifar_f231b_00009 completed after 1 iterations at 2025-03-21 17:11:22. Total running time: 4min 3s
Trial train_cifar_f231b_00005 finished iteration 3 at 2025-03-21 17:11:28. Total running time: 4min 9s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000002 |
| time_this_iter_s 63.89062 |
| time_total_s 241.33878 |
| training_iteration 3 |
| accuracy 0.5056 |
| loss 1.35293 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000002
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000002) [repeated 2x across cluster]
(func pid=4484) [6, 2000] loss: 1.006 [repeated 2x across cluster]
(func pid=4478) [4, 2000] loss: 1.340 [repeated 2x across cluster]
Trial train_cifar_f231b_00006 finished iteration 6 at 2025-03-21 17:11:42. Total running time: 4min 23s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000005 |
| time_this_iter_s 32.73869 |
| time_total_s 257.13156 |
| training_iteration 6 |
| accuracy 0.3501 |
| loss 1.73277 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 6 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000005
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000005)
(func pid=4484) [6, 4000] loss: 0.533
(func pid=4478) [4, 4000] loss: 0.670
Trial status: 7 TERMINATED | 3 RUNNING
Current time: 2025-03-21 17:11:49. Total running time: 4min 30s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 3 241.339 1.35293 0.5056 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 6 257.132 1.73277 0.3501 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 5 235.002 1.2468 0.5809 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4479) [7, 2000] loss: 1.740
Trial train_cifar_f231b_00007 finished iteration 6 at 2025-03-21 17:11:54. Total running time: 4min 35s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000005 |
| time_this_iter_s 33.64137 |
| time_total_s 268.64364 |
| training_iteration 6 |
| accuracy 0.566 |
| loss 1.29 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 6 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000005
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000005)
(func pid=4478) [4, 6000] loss: 0.445
(func pid=4479) [7, 4000] loss: 0.862
(func pid=4484) [7, 2000] loss: 0.975
Trial train_cifar_f231b_00006 finished iteration 7 at 2025-03-21 17:12:13. Total running time: 4min 54s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000006 |
| time_this_iter_s 31.24474 |
| time_total_s 288.3763 |
| training_iteration 7 |
| accuracy 0.3668 |
| loss 1.68538 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 7 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000006
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000006)
(func pid=4484) [7, 4000] loss: 0.513 [repeated 2x across cluster]
Trial status: 7 TERMINATED | 3 RUNNING
Current time: 2025-03-21 17:12:19. Total running time: 5min 0s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 3 241.339 1.35293 0.5056 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 7 288.376 1.68538 0.3668 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 6 268.644 1.29 0.566 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4479) [8, 2000] loss: 1.693 [repeated 2x across cluster]
Trial train_cifar_f231b_00005 finished iteration 4 at 2025-03-21 17:12:25. Total running time: 5min 6s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000003 |
| time_this_iter_s 57.59381 |
| time_total_s 298.93259 |
| training_iteration 4 |
| accuracy 0.5282 |
| loss 1.31436 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000003
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000003)
Trial train_cifar_f231b_00007 finished iteration 7 at 2025-03-21 17:12:27. Total running time: 5min 8s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000006 |
| time_this_iter_s 33.39238 |
| time_total_s 302.03601 |
| training_iteration 7 |
| accuracy 0.5472 |
| loss 1.36817 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 7 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000006
(func pid=4479) [8, 4000] loss: 0.832
(func pid=4478) [5, 2000] loss: 1.246
Trial train_cifar_f231b_00006 finished iteration 8 at 2025-03-21 17:12:45. Total running time: 5min 26s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000007 |
| time_this_iter_s 31.61772 |
| time_total_s 319.99402 |
| training_iteration 8 |
| accuracy 0.3779 |
| loss 1.63442 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 8 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000007
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000007) [repeated 2x across cluster]
(func pid=4478) [5, 4000] loss: 0.635 [repeated 2x across cluster]
Trial status: 7 TERMINATED | 3 RUNNING
Current time: 2025-03-21 17:12:49. Total running time: 5min 30s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 4 298.933 1.31436 0.5282 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 8 319.994 1.63442 0.3779 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 7 302.036 1.36817 0.5472 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4484) [8, 4000] loss: 0.503
(func pid=4478) [5, 6000] loss: 0.419
Trial train_cifar_f231b_00007 finished iteration 8 at 2025-03-21 17:13:01. Total running time: 5min 42s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000007 |
| time_this_iter_s 33.5588 |
| time_total_s 335.59482 |
| training_iteration 8 |
| accuracy 0.5762 |
| loss 1.30593 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 8 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000007
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000007)
(func pid=4478) [5, 8000] loss: 0.313 [repeated 2x across cluster]
(func pid=4484) [9, 2000] loss: 0.902 [repeated 2x across cluster]
Trial train_cifar_f231b_00006 finished iteration 9 at 2025-03-21 17:13:16. Total running time: 5min 57s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000008 |
| time_this_iter_s 31.17297 |
| time_total_s 351.16699 |
| training_iteration 9 |
| accuracy 0.3993 |
| loss 1.59557 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 9 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000008
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000008)
Trial status: 7 TERMINATED | 3 RUNNING
Current time: 2025-03-21 17:13:19. Total running time: 6min 0s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 4 298.933 1.31436 0.5282 |
| train_cifar_f231b_00006 RUNNING 16 4 0.000147684 8 9 351.167 1.59557 0.3993 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 8 335.595 1.30593 0.5762 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_f231b_00005 finished iteration 5 at 2025-03-21 17:13:23. Total running time: 6min 4s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000004 |
| time_this_iter_s 57.86861 |
| time_total_s 356.80121 |
| training_iteration 5 |
| accuracy 0.5115 |
| loss 1.35296 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 5 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000004
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000004)
(func pid=4484) [9, 4000] loss: 0.490 [repeated 2x across cluster]
(func pid=4478) [6, 2000] loss: 1.210 [repeated 2x across cluster]
Trial train_cifar_f231b_00007 finished iteration 9 at 2025-03-21 17:13:34. Total running time: 6min 15s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000008 |
| time_this_iter_s 33.64809 |
| time_total_s 369.24291 |
| training_iteration 9 |
| accuracy 0.5229 |
| loss 1.55028 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 9 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000008
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000008)
(func pid=4478) [6, 4000] loss: 0.604 [repeated 2x across cluster]
Trial train_cifar_f231b_00006 finished iteration 10 at 2025-03-21 17:13:48. Total running time: 6min 29s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00006 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000009 |
| time_this_iter_s 31.90389 |
| time_total_s 383.07089 |
| training_iteration 10 |
| accuracy 0.3991 |
| loss 1.57063 |
+------------------------------------------------------------+
(func pid=4479) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000009)
Trial train_cifar_f231b_00006 saved a checkpoint for iteration 10 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2025-03-21_17-07-18/checkpoint_000009
Trial train_cifar_f231b_00006 completed after 10 iterations at 2025-03-21 17:13:48. Total running time: 6min 29s
Trial status: 8 TERMINATED | 2 RUNNING
Current time: 2025-03-21 17:13:49. Total running time: 6min 30s
Logical resource usage: 4.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 5 356.801 1.35296 0.5115 |
| train_cifar_f231b_00007 RUNNING 256 256 0.00477469 8 9 369.243 1.55028 0.5229 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [6, 6000] loss: 0.403 [repeated 2x across cluster]
(func pid=4478) [6, 8000] loss: 0.299 [repeated 2x across cluster]
Trial train_cifar_f231b_00007 finished iteration 10 at 2025-03-21 17:14:05. Total running time: 6min 46s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00007 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000009 |
| time_this_iter_s 30.66806 |
| time_total_s 399.91097 |
| training_iteration 10 |
| accuracy 0.5601 |
| loss 1.47924 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00007 saved a checkpoint for iteration 10 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000009
Trial train_cifar_f231b_00007 completed after 10 iterations at 2025-03-21 17:14:05. Total running time: 6min 46s
(func pid=4484) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2025-03-21_17-07-18/checkpoint_000009)
(func pid=4478) [6, 10000] loss: 0.241
Trial train_cifar_f231b_00005 finished iteration 6 at 2025-03-21 17:14:16. Total running time: 6min 57s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000005 |
| time_this_iter_s 53.00542 |
| time_total_s 409.80662 |
| training_iteration 6 |
| accuracy 0.5515 |
| loss 1.25844 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 6 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000005
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000005)
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:14:19. Total running time: 7min 0s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 6 409.807 1.25844 0.5515 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [7, 2000] loss: 1.172
(func pid=4478) [7, 4000] loss: 0.591
(func pid=4478) [7, 6000] loss: 0.387
(func pid=4478) [7, 8000] loss: 0.293
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:14:49. Total running time: 7min 30s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 6 409.807 1.25844 0.5515 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [7, 10000] loss: 0.232
Trial train_cifar_f231b_00005 finished iteration 7 at 2025-03-21 17:15:03. Total running time: 7min 44s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000006 |
| time_this_iter_s 47.00625 |
| time_total_s 456.81288 |
| training_iteration 7 |
| accuracy 0.5814 |
| loss 1.19347 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 7 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000006
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000006)
(func pid=4478) [8, 2000] loss: 1.121
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:15:19. Total running time: 8min 0s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 7 456.813 1.19347 0.5814 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [8, 4000] loss: 0.567
(func pid=4478) [8, 6000] loss: 0.380
(func pid=4478) [8, 8000] loss: 0.291
(func pid=4478) [8, 10000] loss: 0.232
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:15:49. Total running time: 8min 31s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 7 456.813 1.19347 0.5814 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_f231b_00005 finished iteration 8 at 2025-03-21 17:15:50. Total running time: 8min 31s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000007 |
| time_this_iter_s 46.86667 |
| time_total_s 503.67955 |
| training_iteration 8 |
| accuracy 0.5926 |
| loss 1.15038 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 8 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000007
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000007)
(func pid=4478) [9, 2000] loss: 1.123
(func pid=4478) [9, 4000] loss: 0.561
(func pid=4478) [9, 6000] loss: 0.375
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:16:19. Total running time: 9min 1s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 8 503.68 1.15038 0.5926 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [9, 8000] loss: 0.280
(func pid=4478) [9, 10000] loss: 0.223
Trial train_cifar_f231b_00005 finished iteration 9 at 2025-03-21 17:16:37. Total running time: 9min 18s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000008 |
| time_this_iter_s 47.23348 |
| time_total_s 550.91303 |
| training_iteration 9 |
| accuracy 0.5744 |
| loss 1.21333 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 9 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000008
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000008)
(func pid=4478) [10, 2000] loss: 1.108
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:16:50. Total running time: 9min 31s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 9 550.913 1.21333 0.5744 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4478) [10, 4000] loss: 0.551
(func pid=4478) [10, 6000] loss: 0.365
(func pid=4478) [10, 8000] loss: 0.277
(func pid=4478) [10, 10000] loss: 0.222
Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2025-03-21 17:17:20. Total running time: 10min 1s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00005 RUNNING 8 64 0.000353097 4 9 550.913 1.21333 0.5744 |
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
Trial train_cifar_f231b_00005 finished iteration 10 at 2025-03-21 17:17:25. Total running time: 10min 6s
+------------------------------------------------------------+
| Trial train_cifar_f231b_00005 result |
+------------------------------------------------------------+
| checkpoint_dir_name checkpoint_000009 |
| time_this_iter_s 47.74513 |
| time_total_s 598.65816 |
| training_iteration 10 |
| accuracy 0.5787 |
| loss 1.1842 |
+------------------------------------------------------------+
Trial train_cifar_f231b_00005 saved a checkpoint for iteration 10 at: (local)/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000009
Trial train_cifar_f231b_00005 completed after 10 iterations at 2025-03-21 17:17:25. Total running time: 10min 6s
Trial status: 10 TERMINATED
Current time: 2025-03-21 17:17:25. Total running time: 10min 6s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name status l1 l2 lr batch_size iter total time (s) loss accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_f231b_00000 TERMINATED 16 1 0.00213327 2 1 169.327 2.30395 0.0966 |
| train_cifar_f231b_00001 TERMINATED 1 2 0.013416 4 1 98.711 2.30777 0.1028 |
| train_cifar_f231b_00002 TERMINATED 256 64 0.0113784 2 1 188.749 2.32115 0.1005 |
| train_cifar_f231b_00003 TERMINATED 64 256 0.0274071 8 1 57.0853 2.42043 0.148 |
| train_cifar_f231b_00004 TERMINATED 16 2 0.056666 4 1 100.222 2.31545 0.0973 |
| train_cifar_f231b_00005 TERMINATED 8 64 0.000353097 4 10 598.658 1.1842 0.5787 |
| train_cifar_f231b_00006 TERMINATED 16 4 0.000147684 8 10 383.071 1.57063 0.3991 |
| train_cifar_f231b_00007 TERMINATED 256 256 0.00477469 8 10 399.911 1.47924 0.5601 |
| train_cifar_f231b_00008 TERMINATED 128 256 0.0306227 8 2 107.098 2.08701 0.222 |
| train_cifar_f231b_00009 TERMINATED 2 16 0.0286986 2 1 138.417 2.35534 0.097 |
+------------------------------------------------------------------------------------------------------------------------------------+
Best trial config: {'l1': 8, 'l2': 64, 'lr': 0.0003530972286268149, 'batch_size': 4}
Best trial final validation loss: 1.1841994988113642
Best trial final validation accuracy: 0.5787
(func pid=4478) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2025-03-21_17-07-18/train_cifar_f231b_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2025-03-21_17-07-18/checkpoint_000009)
Best trial test set accuracy: 0.5926
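The `Best trial ...` lines at the end of the output are simply the result of picking the trial with the lowest final validation loss (in the tutorial this is done via Ray Tune's `ResultGrid.get_best_result("loss", "min")`). As a minimal self-contained sketch of that selection logic, using made-up trial dicts in place of the real `ResultGrid`:

```python
# Hypothetical trial results (config + final metrics), standing in for what
# tune.Tuner(...).fit() returns as a ResultGrid. The values are illustrative.
trials = [
    {"config": {"l1": 16, "l2": 1, "lr": 0.00213, "batch_size": 2}, "loss": 2.30395, "accuracy": 0.0966},
    {"config": {"l1": 8, "l2": 64, "lr": 0.000353, "batch_size": 4}, "loss": 1.1842, "accuracy": 0.5787},
    {"config": {"l1": 256, "l2": 256, "lr": 0.00477, "batch_size": 8}, "loss": 1.47924, "accuracy": 0.5601},
]

# Mirrors get_best_result(metric="loss", mode="min"): minimize the loss metric.
best = min(trials, key=lambda t: t["loss"])
print("Best trial config:", best["config"])
print("Best trial final validation loss:", best["loss"])
print("Best trial final validation accuracy:", best["accuracy"])
```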
If you run the code yourself, the output could look like this:
Number of trials: 10/10 (10 TERMINATED)
+-----+--------------+------+------+-------------+--------+---------+------------+
| ... |   batch_size |   l1 |   l2 |          lr |   iter |    loss |   accuracy |
|-----+--------------+------+------+-------------+--------+---------+------------|
| ... |            2 |    1 |  256 | 0.000668163 |      1 | 2.31479 |     0.0977 |
| ... |            4 |   64 |    8 | 0.0331514   |      1 | 2.31605 |     0.0983 |
| ... |            4 |    2 |    1 | 0.000150295 |      1 | 2.30755 |     0.1023 |
| ... |           16 |   32 |   32 | 0.0128248   |     10 | 1.66912 |     0.4391 |
| ... |            4 |    8 |  128 | 0.00464561  |      2 | 1.7316  |     0.3463 |
| ... |            8 |  256 |    8 | 0.00031556  |      1 | 2.19409 |     0.1736 |
| ... |            4 |   16 |  256 | 0.00574329  |      2 | 1.85679 |     0.3368 |
| ... |            8 |    2 |    2 | 0.00325652  |      1 | 2.30272 |     0.0984 |
| ... |            2 |    2 |    2 | 0.000342987 |      2 | 1.76044 |     0.292  |
| ... |            4 |   64 |   32 | 0.003734    |      8 | 1.53101 |     0.4761 |
+-----+--------------+------+------+-------------+--------+---------+------------+
Best trial config: {'l1': 64, 'l2': 32, 'lr': 0.0037339984519545164, 'batch_size': 4}
Best trial final validation loss: 1.5310075663924216
Best trial final validation accuracy: 0.4761
Best trial test set accuracy: 0.4737
Most trials were stopped early in order to avoid wasting resources. The best-performing trial reached a validation accuracy of about 47%, a result that was confirmed on the test set.
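The early stopping you see in the trial tables (many trials terminated after 1 or 2 iterations) comes from the `ASHAScheduler`: it only considers stopping a trial when the trial reaches a "rung", at iterations `grace_period * reduction_factor^k`, capped at `max_t`. A rough sketch of that rung schedule, assuming the hyperparameter values used in this tutorial (`grace_period=1`, `reduction_factor=2`, `max_t=10`):

```python
# Rough sketch of ASHA's rung schedule (assumed values: grace_period=1,
# reduction_factor=2, max_t=10). A trial can only be stopped at a rung,
# which is why the terminated trials above stop at iterations 1 or 2,
# while surviving trials run to max_t.
def asha_rungs(grace_period=1, reduction_factor=2, max_t=10):
    """Return the iteration counts at which ASHA evaluates trials for stopping."""
    rungs = []
    milestone = grace_period
    while milestone < max_t:
        rungs.append(milestone)
        milestone *= reduction_factor
    return rungs

print(asha_rungs())  # rungs at iterations 1, 2, 4, 8
```

With these values, a trial must survive the cut at iterations 1, 2, 4, and 8 to train for the full 10 epochs, which matches the `iter` column in the result tables above.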
And that's it! You can now tune the hyperparameters of your own PyTorch models.