I ran into the following error while running the solution for lab 1:
```
File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/training_args.py:1340, in TrainingArguments.__post_init__(self)
   1334 if version.parse(version.parse(torch.__version__).base_version) == version.parse("2.0.0") and self.fp16:
   1335     raise ValueError("--optim adamw_torch_fused with --fp16 requires PyTorch>2.0")
   1337 if (
   1338     self.framework == "pt"
   1339     and is_torch_available()
-> 1340     and (self.device.type != "cuda")
   1341     and (get_xla_device_type(self.device) != "GPU")
   1342     and (self.fp16 or self.fp16_full_eval)
   1343 ):
   1344     raise ValueError(
   1345         "FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
   1346         " (`--fp16_full_eval`) can only be used on CUDA devices."
   1347     )
   1349 if (
   1350     self.framework == "pt"
   1351     and is_torch_available()
   (...)
   1356     and (self.bf16 or self.bf16_full_eval)
   1357 ):

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/training_args.py:1764, in TrainingArguments.device(self)
   1760 """
   1761 The device used by this process.
   1762 """
   1763 requires_backends(self, ["torch"])
-> 1764 return self._setup_devices

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)
    52 cached = getattr(obj, attr, None)
    53 if cached is None:
---> 54     cached = self.fget(obj)
    55 setattr(obj, attr, cached)
    56 return cached

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/training_args.py:1672, in TrainingArguments._setup_devices(self)
   1670 if not is_sagemaker_mp_enabled():
   1671     if not is_accelerate_available(min_version="0.20.1"):
-> 1672         raise ImportError(
   1673             "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`"
   1674         )
   1675     AcceleratorState._reset_state(reset_partial_state=True)
   1676     self.distributed_state = None

ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
```
The error message suggests fixing this by installing the missing dependency, i.e. running either `pip install transformers[torch]` or `pip install accelerate -U`.
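For anyone hitting the same thing, here is a minimal sketch (assuming the notebook kernel runs in the same `nlpkernel` environment where you install packages) to confirm that the installed `accelerate` version satisfies the requirement named in the error:

```python
# Minimal verification sketch; run it in the same environment/kernel as the lab notebook.
# Assumes you have already run `pip install accelerate -U` (or `pip install transformers[torch]`).
import importlib.metadata
from packaging import version

installed = version.parse(importlib.metadata.version("accelerate"))
required = version.parse("0.20.1")  # minimum version named in the ImportError above
print(f"accelerate {installed} installed; requirement >= {required} satisfied: {installed >= required}")
```

Note that if the install happens from inside the notebook, the kernel typically needs to be restarted before `transformers` picks up the new `accelerate` version.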