yabs.io

Yet Another Bookmarks Service

Viewing mzimmerm's Bookmarks


[https://huggingface.co/docs/optimum/index] - - public:mzimmerm
ai, doc, huggingface, llm, model, optimum, repo, small, transformer - 9 | id:1489894 -

Optimum is an extension of Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency. It is also a repository of small, mini, and tiny models.

[https://stackoverflow.com/questions/77708142/how-can-i-fetch-vram-and-gpu-cache-size-in-linux] - - public:mzimmerm
amd, apu, command, linux, memory - 5 | id:1489838 -

If you have an AMD GPU as I do, grab the device's PCI ID with the lspci command run with the -D flag (which shows the PCI domain), then read the file /sys/bus/pci/devices/${pci_slot}/mem_info_vram_total; it contains the GPU VRAM size in bytes.
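
A minimal sketch of that answer's approach, wrapped in a function so the sysfs root can be overridden (the function name and the optional second argument are my own additions for illustration; mem_info_vram_total is the attribute exposed by the amdgpu kernel driver):

```shell
# Read the amdgpu driver's VRAM size counter (in bytes) for one PCI device.
vram_total_bytes() {
    # $1: PCI slot as printed by `lspci -D`, e.g. 0000:04:00.0
    # $2 (optional): alternate sysfs root, handy for testing
    local root="${2:-/sys/bus/pci/devices}"
    cat "$root/$1/mem_info_vram_total"
}

# Typical use: take the first AMD VGA/display device found by lspci -D,
# then read its VRAM size (commented out because it needs real hardware):
# pci_slot=$(lspci -D | awk 'tolower($0) ~ /vga.*amd/ {print $1; exit}')
# vram_total_bytes "$pci_slot"
```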

[https://civitai.com/articles/2296/how-to-install-rocm-on-opensusesuse] - - public:mzimmerm
amd, gpu, rocm - 3 | id:1489831 -

Another ROCm installation guide for openSUSE. Interesting note: I realize this is a bit old, but you don't really need amdgpu from the repository: it comes for free with the kernel. amdgpu-dkms is only needed if you're stuck on an older kernel version and can't upgrade for some reason. Tumbleweed users, for example, will not need it.

[https://medium.com/@rafaelmanzanom/ditching-cuda-for-amd-rocm-for-more-accessible-llm-inference-ryzen-apus-edition-92c3649f8f7d] - - public:mzimmerm
ai, amd, apu, compile, gfx902, install, pytorch, rocm - 8 | id:1489810 -

Train an LLM on an AMD APU. In this scenario we'll use an APU because most laptops with a Ryzen CPU include an iGPU; specifically, this post should work with iGPUs based on the "GCN 5.0" architecture, or "Vega" for friends. We'll use an AMD Ryzen 2200G in this post, an entry-level processor equipped with 4C/4T and an integrated GPU.
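
The post targets Vega (GCN 5.0) iGPUs such as the 2200G's gfx902, which ROCm does not officially support. A commonly used workaround (an assumption here, reflecting the general technique rather than the post's exact commands) is to have the ROCm runtime treat the iGPU as a supported discrete-Vega part via an environment override:

```shell
# Assumption: HSA_OVERRIDE_GFX_VERSION tells the ROCm runtime to report a
# different GFX version; 9.0.0 (gfx900, discrete Vega) is the usual override
# for GCN 5.0 iGPUs like gfx902. Set it before launching PyTorch.
export HSA_OVERRIDE_GFX_VERSION=9.0.0
```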


Viewing 451 - 500, 50 links out of 1766 links, page: 10

