Sarin Suriyakoon

Convert a PyTorch Model to Quantized GGUF to Run on Ollama
PyTorch model (Bonito) -> GGUF -> quantize for local inference using Ollama · 4 min read · Mar 29, 2024
Use an Unsloth LoRA Adapter with Ollama in 3 Steps
Use llama.cpp to convert an Unsloth LoRA adapter to GGML (.bin) and use it in Ollama, with a single GPU · 3 min read · Mar 29, 2024
Sarin Tech March Newsletter: Fine-Tuning
This month I am into fine-tuning · 1 min read · Mar 19, 2024
Use Anaconda to Organize Python Environments
Working on multiple Python projects can quickly become chaotic if you don't have a good system for managing dependencies and isolating… · 2 min read · Mar 17, 2024
Deploy Ollama on Local Kubernetes (MicroK8s)
Let's deploy Ollama (LLM REST API) to your local Kubernetes · 5 min read · Mar 12, 2024
Noob Night: [EP5] Basic LLM Fine-tuning
Video: how long is it, I wonder? · 1 min read · Mar 12, 2024
Creating MacOS-Agent Part 2: Applying Few-Shot Prompts to LLaMA 2
Use few-shot prompts to improve and guarantee how LLaMA 2 7B performs, with the help of Claude · 4 min read · Mar 3, 2024
Generate Your TypeScript Project with This Bash Script
Run this script and start coding · 1 min read · Mar 1, 2024
Host Ollama Using Ngrok
Spoiler: run Ollama and use ngrok to expose your Mac mini to the internet · 2 min read · Feb 27, 2024