Creating MacOS Agent Part 3: ReAct Prompting
Let's learn how to do ReAct prompting. This advanced prompting technique reduces hallucination and makes your LLM more reliable.
Jun 21
Convert a PyTorch Model to Quantized GGUF to Run on Ollama
PyTorch model (Bonito) → GGUF → quantize, for local inference using Ollama
Mar 29
Use an Unsloth LoRA Adapter with Ollama in 3 Steps
Use llama.cpp to convert an Unsloth LoRA adapter to GGML (.bin) and use it in Ollama — with a single GPU
Mar 29
Use Anaconda to Organize Python Environments
Working on multiple Python projects can quickly become chaotic if you don't have a good system for managing dependencies and isolating…
Mar 17
Deploy Ollama on Local Kubernetes (MicroK8s)
Let's deploy Ollama (an LLM REST API) to your local Kubernetes cluster
Mar 12
Creating MacOS-Agent Part 2: Applying Few-Shot Prompts to Llama 2
Use few-shot prompts to improve and guarantee how Llama 2 7B performs, with the help of Claude
Mar 3