Replit is a site where I can run any REPL online. Can be used for AI work
Research community developing various code models, small and big. Models may not be instruction-tuned
They have the 1.3B version!!! This may be the best one to start with for Newspeak. Training should work even on Hugging Face
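A minimal sketch of trying a 1.3B code model, assuming it is published on Hugging Face as a causal LM. The model id here is DeepSeek's 1.3B base, used purely as a stand-in for whichever checkpoint this note refers to:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id -- substitute the actual 1.3B checkpoint this note means.
model_id = "deepseek-ai/deepseek-coder-1.3b-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sample a short code completion to sanity-check the model.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```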
Another possible model. For coding capabilities, DeepSeek Coder achieves state-of-the-art performance among open-source code models across multiple programming languages and various benchmarks.
With the optimizers of bitsandbytes (like 8-bit AdamW), you would need only 2 bytes per parameter for optimizer state, or 14 GB of GPU memory for a 7B model.
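A quick sketch of dropping in the 8-bit optimizer; the tiny Linear layer is just a stand-in for a real model:

```python
import torch
import bitsandbytes as bnb

# Toy stand-in for a real network; the point here is the optimizer, not the net.
model = torch.nn.Linear(768, 768).cuda()

# 8-bit AdamW keeps its two moment buffers in 8-bit, i.e. roughly 2 bytes per
# parameter of optimizer state instead of ~8 bytes with the usual fp32 AdamW.
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=2e-5)

# Then the training loop is unchanged:
x = torch.randn(4, 768, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```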
Another potential model to use for Newspeak, but it is NOT open source. Advantage: 2.5B params, so it should be usable on small GPUs
Training a model like Llama with 2.7 billion parameters outperformed a larger model like Vicuna with 13 billion parameters. Especially when considering resource consumption, a 7B foundation model might be a good alternative to a full-blown ChatGPT. The best price-to-performance base model for our use case turned out to be Mistral 7B. The model is compact enough to fit into an affordable GPU with 24 GB VRAM and outperforms the other 7B-parameter models.
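A sketch of one way to make the 24 GB budget comfortable: load Mistral 7B with 4-bit quantization via transformers + bitsandbytes (the model id and config below are the stock Hugging Face ones):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"

# 4-bit weights take roughly 4 GB, leaving most of a 24 GB card free
# for activations or a LoRA-style fine-tune.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```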
Comparison of LLM models for coding
Open source with lots of information. Uses multiple underlying models. Not sure how I would train for it
The Mixtral model is new, and seems to be good. Click on “Demo” to test it
Article has a comparison with other code-LLM models
Review of LLMs specialized for code generation
Model which generates code for Python, JavaScript, Go, Shell, Perl, Swift, Ruby, and PHP
Advanced coding: “Our first version of Gemini can understand, explain and generate high-quality code in the world’s most popular programming languages, like Python, Java, C++, and Go. Using a specialized version of Gemini, we created a more advanced code generation system, AlphaCode 2.”
AI Code tools: Good summary. Does not say which pre-trained models they use. One is Gemini (Bard) -> AlphaCode 2
Fine-tune the BERT model on the Yelp dataset
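Sketch of the standard Hugging Face fine-tuning recipe for this (bert-base-cased on yelp_review_full, as in the Transformers tutorial); the 1000-example subsets are my own shortcut to keep a first run cheap:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("yelp_review_full")  # 5-class star-rating labels
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)
# Small subsets keep a first run cheap.
train_ds = tokenized["train"].shuffle(seed=42).select(range(1000))
eval_ds = tokenized["test"].shuffle(seed=42).select(range(1000))

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=5)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="yelp_bert", num_train_epochs=1),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```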
Look for models that could be used for Newspeak
wat = WASM text format; Sat = server to run wat; subo = project that oversees Sat
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data.
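A tiny serialize/parse round trip using Struct, a well-known type bundled with the Python protobuf runtime, so no .proto compilation is needed; the field values are made up:

```python
from google.protobuf.struct_pb2 import Struct

msg = Struct()
msg.update({"name": "Newspeak", "stars": 5, "open_source": True})

data = msg.SerializeToString()  # compact binary wire format
clone = Struct()
clone.ParseFromString(data)     # round-trips to an equal message
print(clone["name"], clone["open_source"])  # Newspeak True
```

For anything real you would define a .proto schema and generate typed classes with protoc, but Struct is enough to see the wire round trip.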
Style guide for the Flutter repo. Interestingly, some of the recommended alignment is destroyed by the Dart formatter. I am on the side of good alignment, FWIW