Too many NVIDIA drivers are installed on my laptop; how should I uninstall them all? #nvidia #2204 #uninstall #cuda
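A common cleanup approach on Ubuntu 22.04, assuming the drivers were installed through apt (a sketch, not a definitive procedure; review what `dpkg -l` reports before purging anything):

```shell
# List which NVIDIA driver/CUDA packages are actually installed:
dpkg -l | grep -i nvidia

# Purge all NVIDIA driver and CUDA packages, including their config files:
sudo apt-get purge '^nvidia-.*' '^cuda.*'
sudo apt-get autoremove --purge

# Reboot, then (optionally) reinstall the single recommended driver:
sudo ubuntu-drivers autoinstall
```

If a driver was instead installed from NVIDIA's `.run` installer, it must be removed with that installer's own `--uninstall` option rather than apt.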
After one more year of intensive work and numerous test runs, a new major update for https://github.com/chrxh/alien is finally polished and ready. It offers possibilities I had only dreamed of before. 🪐
YouTube: https://youtu.be/dSkxvi9igqQ
#ArtificialLife #generativeart #cuda
howdy all! Just wanted to let y'all know that we have a #oneAPI meetup with David Airlie, titled
"Why compute needs commoditization and currently sucks?"
Come join us this Friday at 4:30pm PT!
https://www.meetup.com/oneapi-community-us/events/295639963/
Unable to install nvidia-cuda-toolkit: getting an unmet dependencies error on Ubuntu 22.04 #nvidia #dependencies #cuda
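A typical way to debug apt's unmet-dependency errors, sketched for Ubuntu 22.04's repositories (an assumption; the exact conflicting packages will differ per machine):

```shell
# Refresh package lists; stale lists are a frequent cause of unmet dependencies:
sudo apt-get update

# Check which version apt would install and from which repository:
apt-cache policy nvidia-cuda-toolkit

# Ask apt to repair any broken or half-installed dependencies:
sudo apt-get install -f

# If a partially installed NVIDIA driver is the conflict, purge it and retry:
sudo apt-get purge '^nvidia-.*'
sudo apt-get install nvidia-cuda-toolkit
```

A common root cause is mixing NVIDIA's CUDA repository with Ubuntu's own packages; pinning one source usually resolves it.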
Is it possible to use Ubuntu 22.04 with GeForce GT 610 (390.157 driver) and CUDA 12.2 with TensorFlow 2.13? #nvidia #2204 #cuda #jupyter #tensorflow
Nvidia's CUDA Monopoly
Link: https://matt-rickard.com/nvidias-cuda-monopoly
Discussion: https://news.ycombinator.com/item?id=37031451
VkFFT: Vulkan/CUDA/Hip/OpenCL/Level Zero/Metal Fast Fourier Transform Library
Link: https://github.com/DTolm/VkFFT
Discussion: https://news.ycombinator.com/item?id=36968273
@asnascimento sounds great! Could you please share some resources for learning #CUDA?
A #LocalFirst perspective on #AI and #LLMs costs to the environment:
My current laptop is portable & capable of everything I need it to do. A bit dated, but System76 makes open hardware; it's maintainable & repairable. Environmentally and practically, it's as #sustainable as current-day #computers get. 2.54 lbs.
To do anything with #tensors at capable speed I need a dedicated GPU, which I don't have now. #NVIDIA has a near-monopoly via the #CUDA instruction set, and vendor lock-in is rife.
My air-cooled DIY Dual #RTX3090 #NVLink #AI workstation. You can never have enough #VRAM with these exciting LLMs! #LLM #LLaMA #Falcon #finetuning #lora #gptq #inference #cuda #pytorch