The Best GPUs for Deep Learning in 2023 — An In-depth Analysis
CPU Database | TechPowerUp
SHOWS EXACT SPECS OF CPU OR APU INCLUDING GRAPHICS
AMD Ryzen 7 5700G 8-Core/16-Thread 7nm Processor | Canada Computers & Electronics
$249 at Canada Computers; they also have an open-box unit for $224. BUY THIS, NOT THE AMD Ryzen 5 5600GT (VEGA 7), BECAUSE THIS HAS VEGA 8, WHICH IS FASTER THAN VEGA 7 AND THE SAME AS THE VEGA 8 IN THE LAPTOP 2500U.
acai66/Pytorch_ROCm_whl: Pytorch compiled with ROCm.
Install Pytorch on ROCm from AMD git repo. TRY THIS!!!
GPU Database | TechPowerUp
Database of AMD GPUs. Here we can see that the 2500U is Vega 8 mobile, which is GCN 5.0. This is only supported in ROCm up to 4.5.2!!
ROCm 5.xx ever planning to include gfx90c GPUs? · Issue #1743 · ROCm/ROCm
The suggested git build of PyTorch on gfx90c FAILED for me.
torch.Tensor — PyTorch 2.2 documentation
BERT Transformers – How Do They Work? | Exxact Blog
Excellent document about BERT transformers / models and their parameters:
- L = number of layers.
- H = size of the hidden layer = number of vectors for each word in the sentence.
- A = number of self-attention heads.
- Total parameters.
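A quick sanity check of what those letters imply, in a few lines of Python. The vocabulary and position-embedding sizes below are the published BERT-base values, assumed here rather than taken from the note; A (the head count) only splits H across heads and adds no parameters of its own.

```python
# Rough parameter count for a BERT-style encoder from L and H.
# vocab / max_pos / seg defaults are the published BERT-base values (assumption).

def bert_param_count(L=12, H=768, vocab=30522, max_pos=512, seg=2):
    """Approximate total parameters of a BERT encoder."""
    # Embeddings: token + position + segment tables, plus one LayerNorm (gain + bias).
    emb = (vocab + max_pos + seg) * H + 2 * H
    # Self-attention: Q, K, V and output projections (weights + biases).
    attn = 4 * (H * H + H)
    # Feed-forward: H -> 4H -> H (weights + biases).
    ffn = H * 4 * H + 4 * H + 4 * H * H + H
    # Two LayerNorms per layer.
    norms = 2 * 2 * H
    per_layer = attn + ffn + norms
    # Pooler: one H x H dense layer.
    pooler = H * H + H
    return emb + L * per_layer + pooler

print(bert_param_count() / 1e6)        # roughly 109.5, the ~110M quoted for BERT-base
print(bert_param_count(L=24, H=1024) / 1e6)  # BERT-large territory, ~335M
```

Useful for eyeballing whether a model card's "parameters" number is consistent with its stated L and H.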
ROCm installation for Linux — ROCm installation (Linux)
Top of the guide describing ROCm on Linux. There are two core approaches: using the package manager (RPM) or using the AMD installer. I should use the package manager. Also single-version vs. multi-version: I should use single-version, latest.
Doesn't ROCm support AMD's integrated GPU (APU)? · Issue #2216 · ROCm/ROCm
This guy claims a successful installation of ROCm on Ubuntu - this seems workable for Tumbleweed as well. See the comment "nav9 commented on Jul 16, 2023".
6 Ways to Run LLMs Locally (also how to use HuggingFace)
Various methods to run LLM models locally; Hugging Face is only one of them.
Home - Replit
Replit is a site where I can run any REPL online. Can be used for AI.
deepseek-ai (DeepSeek)
They have the 1.3B version!!! This may be the best to start with for Newspeak. Training should work even on Hugging Face.
deepseek-ai/deepseek-coder-6.7b-instruct · Hugging Face
Another possible model. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
StarCoder: A State-of-the-Art LLM for Code
Article has comparison with other code-LLM models
stabilityai (Stability AI) - Stable Diffusion running on Huggingface
Chat and models. Not open source, but instruct-tuned and relatively small (3B). The 3B instruct model may be the best to try on Newspeak.
AI Code Tools: The Ultimate Guide in 2024
AI code tools: a good summary. Does not say which pre-trained model each tool uses. One is Gemini (Bard) -> AlphaCode 2.
Introduction - Hugging Face NLP Course
Natural Language Processing - full course.
BERT 101 - State Of The Art NLP Model Explained
Best summary of Natural Language Processing and its terms:
- model: a language model, e.g. BertModel; defines the encoder and decoder and their properties.
- transformer: a specific neural network based on the attention paper.
- encoder: a series of transformers on the input.
- decoder: a series of transformers on the output. BERT does NOT use a decoder.
TensorFlow and PyTorch are possible backends to Transformers (NN). Summary: BERT is a highly complex and advanced language model that helps people automate language understanding.
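The operation from the attention paper that sits at the heart of each transformer layer can be sketched in pure Python. A toy single-head example with invented numbers, not any library's actual implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -- one head, no batching."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        w = softmax(scores)
        # Output = weighted mix of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# Toy self-attention: 2 "tokens", hidden size 2 (numbers invented for illustration).
x = [[1.0, 0.0], [0.0, 1.0]]
y = attention(x, x, x)  # self-attention in an encoder: Q = K = V
```

With Q = K = V, each token's output is a softmax-weighted mix of all token vectors, which is exactly what "series of transformers on the input" computes layer after layer.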
BERT vs GPT: A Tale of Two Transformers That Revolutionized NLP | by Tavva Prudhvith | Medium
Fine-tune a pretrained model
Use the Bert model to train on Yelp dataset
How to train a new language model from scratch using Transformers and Tokenizers
Describes how to train a new language model (Esperanto) from scratch.
Generative AI in a Nutshell - how to survive and thrive in the age of AI - YouTube
Fireship - YouTube
A Quantum Trick Implied Eternal Stability. Now It’s Falling Apart. | Quanta Magazine
The secret, he found, was to stick the particle in a “disordered” quantum landscape, one dotted with peaks and valleys. Each location would have a random height, representing a random energy. In a real material, this disorder might come from impurities such as missing atoms or atoms of different elements.
First quantum measurement of gravity: What does it mean? - YouTube
Web technology for developers | MDN
Learn PWA | web.dev
Designing For A Browserless Web — Smashing Magazine
There’s a New Theory About Where Dark Matter Is Hiding | WIRED
UAWire
Syrskyi in, Zaluzhnyi out: what to expect from Ukraine's army reshuffle
Online Markdown Editor - Dillinger, the Last Markdown Editor ever.
rabbit — keynote
git - How do I squash my last N commits together? - Stack Overflow
FINVIZ.com - Stock Screener
Poe
Is it necessary to make a GUI version for a simple program? - Quora
Tracking information about the Russian War against Ukraine
bash-style-guide
5 muscles of eternal youth. If you train them, you will never grow old. @DoctorAlekseev - YouTube
QuickRef.ME - Quick Reference Cheat Sheet
javascript - For ServiceWorker cache.addAll(), how do the URLs work? - Stack Overflow
How cache.addAll treats '/', '/index.html', and other assets.
Using Service Workers - Web APIs | MDN
CycleTracker: Service workers - Progressive web apps | MDN
Solution: write a service worker using a VERSION constant and implement the "install" event. These two together force the browser to clear the old cache on a version change.
caching - Service worker: check cached resources are up-to-date or latest - Stack Overflow
You have two options:
1. Switch from your current cache-first strategy in your fetch handler to a strategy like stale-while-revalidate.
2. Use a build-time tool to generate a service worker that changes each time one of your local resources changes, which will trigger the install and activate handlers, giving them a chance to update your users' caches with the latest copies of your assets. These tools work by generating hashes of each local file and inlining those hashes into the generated service worker script.
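The second option (hash each local file and bake the result into the generated service worker) can be sketched as a tiny build step. The directory layout and output filename here are assumptions for illustration; real tools like Workbox do the same thing with more care.

```python
# Build-time sketch: derive a cache version from the hashes of all local assets.
# Any change to any file changes the version, which changes the generated
# service-worker script, which re-triggers its install/activate events.

import hashlib
from pathlib import Path

def cache_version(asset_dir="public"):
    """One short digest over every file (name + bytes) under asset_dir."""
    digest = hashlib.sha256()
    for path in sorted(Path(asset_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()[:8]

def generate_sw(asset_dir="public", out="service-worker.js"):
    """Emit a service-worker stub whose cache name embeds the current version."""
    version = cache_version(asset_dir)
    Path(out).write_text(f'const CACHE_NAME = "app-cache-{version}";\n')
    return version
```

Run as part of the build so the deployed service-worker.js differs by at least one character whenever any asset changed, which is exactly the update trigger described in the PWA-update answer below.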
Strategies for service worker caching | Workbox | Chrome for Developers
Best description of how service worker should handle caching - network first, cache first etc.
android - When and how does a PWA update itself? - Stack Overflow
In essence, there are only two events that will automatically trigger a PWA update:
1. A change in the linked manifest.json, e.g. a changed icon file or a changed scope or start_url.
2. A one-character change in the controlling service-worker.js. E.g., if you store the version number in a const in this file and change it, an update of the PWA will be triggered.
The service worker lifecycle | Articles | web.dev
directory structure - Where do I put my systemd unit file? - Unix & Linux Stack Exchange
Best overview of where systemd unit files should be placed.