Using base config ['configs/train_colossalai.yaml']
Global seed set to 23
Using ckpt_path = 512-base-ema.ckpt
LatentDiffusion: Running in v-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
building MemoryEfficientAttnBlock with 512 in_channels...
/root/anaconda3/envs/ldm/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/accelerator_connector.py:578: LightningDeprecationWarning: The Trainer argument auto_select_gpus has been deprecated in v1.9.0 and will be removed in v2.0.0. Please use the function lightning.pytorch.accelerators.find_usable_cuda_devices instead.
rank_zero_deprecation(
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Txt2ImgIterableBaseDataset dataset contains -1 examples.
Txt2ImgIterableBaseDataset dataset contains -1 examples.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /mnt/ColossalAI/examples/images/diffusion/main.py:813 in │
│ │
│ 810 │ │ │
│ 811 │ │ # Print some information about the datasets in the data module │
│ 812 │ │ for k in data.datasets: │
│ ❱ 813 │ │ │ rank_zero_info(f"{k}, {data.datasets[k].__class__.__name__}, {len(data.datas │
│ 814 │ │ │
│ 815 │ │ # Configure learning rate based on the batch size, base learning rate and number │
│ 816 │ │ # If scale_lr is true, calculate the learning rate based on additional factors │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: __len__() should return >= 0
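For context on the traceback above: `main.py` calls `len()` on each dataset, and CPython raises this `ValueError` whenever an object's `__len__` returns a negative number. The "contains -1 examples" lines earlier in the log suggest `Txt2ImgIterableBaseDataset` found no data files under `file_path` and ended up with a record count of -1. A minimal sketch of that failure mode (the class below is a hypothetical stand-in, not the actual dataset implementation):

```python
# Hypothetical stand-in for a dataset whose record count was computed
# from an empty file listing; only the __len__ behaviour is mirrored.
class EmptyIterableDataset:
    def __init__(self, num_records: int):
        self.num_records = num_records  # ends up -1 when no files are found

    def __len__(self) -> int:
        return self.num_records


ds = EmptyIterableDataset(num_records=-1)
try:
    len(ds)  # CPython validates the value returned by __len__
except ValueError as e:
    print(e)  # __len__() should return >= 0
```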
Steps to reproduce:

1. Visit https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion and install the dependencies.
2. Download the laion 1B dataset.
3. Modify configs/train_colossalai.yaml (see the config sketch below):
   file_path: "data/datasets--laion--laion1B-nolang-safety/"
4. Run the training script; it fails with the output shown above.
5. Tried modifying the file_path from step 3: appending refs or blobs raises a "not a directory" error, and appending snapshots gives the same result as step 4 (see the snapshot note below).

What is the cause, and how should file_path be configured?
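For reference, a sketch of roughly where `file_path` sits in the config. Only the `file_path` value is copied from step 3; the surrounding keys and the target path are assumptions about the example's config layout, not verbatim from train_colossalai.yaml:

```yaml
# Sketch only: key names other than file_path are assumed.
data:
  params:
    train:
      target: ldm.data.base.Txt2ImgIterableBaseDataset  # assumed module path
      params:
        file_path: "data/datasets--laion--laion1B-nolang-safety/"
```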
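On step 5: a directory named `datasets--laion--laion1B-nolang-safety` with `refs/`, `blobs/` and `snapshots/` subdirectories is the Hugging Face Hub cache layout, where the actual files live under `snapshots/<revision>/` and `blobs/` holds content-addressed copies. Pointing `file_path` at the cache root therefore matches no data files. One way to resolve a concrete snapshot directory (a sketch; the repo id is inferred from the directory name and should be verified):

```python
# Sketch: resolve the local snapshot directory of the cached dataset so
# file_path can point at real data files rather than the cache root.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="laion/laion1B-nolang-safety",  # inferred from the cache dir name
    repo_type="dataset",
)
print(local_path)  # .../snapshots/<revision> containing the dataset files
```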