yabs.io

Yet Another Bookmarks Service

Viewing mzimmerm's Bookmarks

[https://www.reddit.com/r/LocalLLaMA/comments/12vxxze/most_cost_effective_gpu_for_local_llms/] - - public:mzimmerm
ai, doc, llm, model, optimize, perform - 6 | id:1489804 -

GGML quantized models. They would let you leverage the CPU and system RAM instead of having to rely on a GPU's VRAM. This could save you a fortune, especially if you go for some used AMD Epyc platforms. This could be more viable for the larger models, especially the 30B/65B-parameter models, which would still press against or exceed the VRAM on the P40.
