Problem with 'flash_attn' and CUDA

#14
by Farquaad56 - opened

Hi, I have a problem with the install.
I get the error "No module named 'flash_attn'", and when I run the software I also get a CUDA error: "User provided device_type of 'cuda', but CUDA is not available. Disabling"

I tried:
pip install packaging
pip install --upgrade setuptools wheel pip
pip install flash-attn
pip install cython numpy

but that didn't solve my problem.
(sorry for my English)

PS L:\stable_audio_tools\stable-audio-tools> python ./run_gradio.py --pretrained-name stabilityai/stable-audio-open-1.0
Loading pretrained model stabilityai/stable-audio-open-1.0
No module named 'flash_attn'
flash_attn not installed, disabling Flash Attention
L:\stable_audio_tools\venv\lib\site-packages\torch\nn\utils\weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
Done loading model
Running on local URL: http://127.0.0.1:7860

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
Prompt: Ambient Techno, meditation, Scandinavian Forest, 808 drum machine, 808 kick, claps, shaker, synthesizer, synth bass, Synth Drones, beautiful, peaceful, Ethereal, Natural, 122 BPM, Instrumental
4055760663
L:\stable_audio_tools\venv\lib\site-packages\torch\amp\autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn(
2%|█▌ | 1/50 [00:26<21:21, 26.15s/it]
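For what it's worth, the "CUDA is not available" warning in that log usually means the torch wheel in the venv is CPU-only, which also explains the ~26 s/it speed. A quick check, as a minimal sketch assuming a standard pip venv (the cu121 tag in the second command is only an example; it must match your installed driver/CUDA version):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121

If the first command prints None and False, torch was installed without CUDA support, and the second reinstalls a CUDA-enabled build.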

Exact same problem here

Same here. Has anyone found a solution?

Everyone gets this warning; even the installation tutorial videos show it. I have Flash Attention installed, but that changes nothing. CUDA works for me.
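If in doubt, a quick way to confirm flash_attn is importable from the same environment that runs the app (assuming the package exposes __version__, as recent flash-attn releases do):

python -c "import flash_attn; print(flash_attn.__version__)"

If this raises ModuleNotFoundError, the package was installed into a different environment than the one running run_gradio.py.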

"FutureWarning: torch.backends.cuda.sdp_kernel() is deprecated. In the future, this context manager will be removed. Please see, torch.nn.attention.sdpa_kernel() for the new context manager, with updated signature.
warnings.warn(
E:\StableAudio\stable-audio-tools\stable_audio_tools\models\transformer.py:379: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)"
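That deprecation warning itself names the replacement API. A minimal sketch of the new context manager (available in recent PyTorch releases; the tensor shapes below are arbitrary, for illustration only):

import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64)  # (batch, heads, seq, head_dim), arbitrary sizes
# Old: with torch.backends.cuda.sdp_kernel(enable_flash=True, ...):
# New: select backends explicitly; MATH always works, FLASH_ATTENTION needs a supported GPU build
with sdpa_kernel(SDPBackend.MATH):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)

Both messages are warnings, not errors; the slow it/s in the first post comes from running on CPU, not from these messages.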

To solve this problem, I installed Stable Audio with a tool called "Pinokio" (https://pinokio.computer).
Flash Attention works with hardware acceleration there.
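For context, the "flash_attn not installed, disabling Flash Attention" line in the log comes from an optional-import guard, so the model still runs without flash-attn. A minimal sketch of that pattern, not the repo's exact code (flash_attn_func is the real entry point of the flash-attn package):

try:
    from flash_attn import flash_attn_func  # fast path, needs a CUDA build of flash-attn
    use_flash_attn = True
except ImportError as e:
    print(e)  # e.g. "No module named 'flash_attn'"
    print("flash_attn not installed, disabling Flash Attention")
    use_flash_attn = False  # fall back to PyTorch's scaled_dot_product_attention

In other words, the missing module only disables the fast attention path; the real blocker in the original post is the CPU-only torch build.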
