runtime error
Exit code: 1. Reason:

(…)pytorch_model-00003-of-00003.safetensors:  21%|██        | 797M/3.87G [00:01<00:03, 795MB/s]
(…)pytorch_model-00003-of-00003.safetensors:  65%|██████▍   | 2.51G/3.87G [00:02<00:01, 1.33GB/s]
(…)pytorch_model-00003-of-00003.safetensors: 100%|█████████▉| 3.87G/3.87G [00:03<00:00, 1.32GB/s]
(…)pytorch_model-00003-of-00003.safetensors: 100%|█████████▉| 3.87G/3.87G [00:03<00:00, 1.24GB/s]
config.json: 100%|██████████| 548/548 [00:00<00:00, 3.03MB/s]
model.safetensors: 100%|█████████▉| 176M/176M [00:00<00:00, 270MB/s]
transformer.gradient_checkpointing = False
vae.set_attn_processor(AttnProcessor2_0())
torch.backends.cuda.matmul.allow_tf32 = True
Will cache examples in '/home/user/app/space/gradio_cached_examples/41' directory at first use.
ZeroGPU tensors packing:   0%|          | 0.00/33.9G [00:00<?, ?B/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 1, in <module>
    import os; exec(os.getenv('EXEC'))
  File "<string>", line 19, in <module>
  File "/home/user/app/space/app.py", line 283, in <module>
    block.launch(allowed_paths=[hf_space_logger.LOG_DIR])
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/gradio.py", line 162, in launch
    task(*task_args, **task_kwargs)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 367, in pack
    _pack(Config.zerogpu_offload_dir)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 359, in _pack
    pack = pack_tensors(originals, fakes, offload_dir, callback=update)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/packing.py", line 114, in pack_tensors
    os.posix_fallocate(fd, 0, total_asize)
OSError: [Errno 28] No space left on device
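The failure is an ENOSPC (errno 28) raised by os.posix_fallocate while the spaces ZeroGPU runtime preallocates its tensor-offload file, which the progress bar reports as roughly 33.9 GB: the disk holding the offload directory has less free space than the packed tensors require. Below is a minimal sketch of a check that confirms this, assuming a placeholder offload path; the real path comes from Config.zerogpu_offload_dir inside the spaces package and is not shown in the log.

import shutil

# Sketch only: compare free space on the (assumed) offload directory with the
# ~33.9 GB the ZeroGPU packer tries to preallocate via os.posix_fallocate.
# OFFLOAD_DIR is a placeholder, not a path taken from the log above.
OFFLOAD_DIR = "/data"
REQUIRED_BYTES = int(33.9 * 1024**3)  # approximate size from the packing progress bar

usage = shutil.disk_usage(OFFLOAD_DIR)
print(f"free: {usage.free / 1024**3:.1f} GiB, needed: {REQUIRED_BYTES / 1024**3:.1f} GiB")
if usage.free < REQUIRED_BYTES:
    print("Not enough disk for ZeroGPU tensor packing: "
          "free up space on this volume or load fewer/smaller model weights.")

If the check shows a shortfall, the usual remedies are to reduce how many model weights the app keeps loaded at startup or to give the Space more disk, rather than to change the packing code itself.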