Posts by Sarin Suriyakoon

- June Newsletter: Interesting paper and code repository to check out this month (Jun 30)
- Creating MacOS Agent Part 3: ReAct Prompting (Jun 21). Let's learn how to do ReAct prompting, an advanced prompting technique that reduces hallucination and makes your LLM more reliable.
- Convert a PyTorch Model to Quantized GGUF to Run on Ollama (Mar 29). PyTorch model (Bonito) -> GGUF -> quantize, for local inference with Ollama.
- Use an Unsloth LoRA Adapter with Ollama in 3 Steps (Mar 29). Use llama.cpp to convert an Unsloth LoRA adapter to GGML (.bin) and use it in Ollama, with a single GPU.
- Use Anaconda to Organize Python Environments (Mar 17). Working on multiple Python projects can quickly become chaotic if you don't have a good system for managing dependencies and isolating…
- Deploy Ollama on Local Kubernetes (MicroK8s) (Mar 12). Let's deploy Ollama (an LLM REST API) to your local Kubernetes cluster.
- Creating MacOS Agent Part 2: Applying Few-Shot Prompts to Llama 2 (Mar 3). Use few-shot prompts to improve and stabilize how Llama 2 7B performs, with the help of Claude.