Generated with playgroundai/playground-v2.5-1024px-aesthetic

Creating MacOS Agent Part 3: ReAct Prompting

Let’s learn how to do ReAct prompting. This advanced prompting technique reduces hallucination and makes your LLM more reliable.

Sarin Suriyakoon
3 min read · Jun 21, 2024


Introduction

From Part 1 and Part 2, I have learned a few more things and kept improving the “human wish to Bash script” idea.

I have concluded that osascript is less effective than the bash approach, since bash covers more ground and more actions.

I have also explored fine-tuning and distillation, but the cost seems to be more than I anticipated for a side learning project.

Then I discovered ReAct!

What is ReAct

ReAct is a popular technique that suggests an effective way to write a prompt.

In short, you iterate the answer generation in a

Thought, Action, Observation format.

Thought is where the LLM reasons about what action to take next.

Action is the action the LLM takes.

Observation is the result of that action.

I had seen the ReAct paper before but never understood how to put it into practice… how to write prompts to solve my problem.

Then I happened to check out the LangChain code, and I got it.

My observation

Note that the actual implementation from the paper repo is different from the LangChain one.

In the paper’s repo, the loop iterates through Thought, Action, and Observation; in each iteration it calls the OpenAI API for inference and appends the result to the prompt for the next iteration.

But LangChain introduces a kind of one-shot inference over the iteration. You will see what I mean in a second.
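To make the paper-style loop concrete, here is a minimal sketch in Python. It assumes the OpenAI Python client and a stubbed run_action helper of my own; it is not the paper’s actual code, just the shape of the iteration.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def run_action(step_text: str) -> str:
    # Hypothetical executor: parse the "Action:" line and pretend to run it.
    # In the paper's setup this would call a real tool such as a search API.
    action = step_text.split("Action:")[-1].strip()
    return f"(result of running: {action})"

def react_loop(question: str, max_steps: int = 5) -> str:
    # The prompt grows each round: every Thought/Action/Observation triple
    # is appended before the next model call.
    prompt = f"Question: {question}\nThought:"
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat model works here
            messages=[{"role": "user", "content": prompt}],
            stop=["Observation:"],  # stop before the model invents the observation
        )
        step = response.choices[0].message.content
        prompt += step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        prompt += f"\nObservation: {run_action(step)}\nThought:"
    return prompt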

Let’s start with the prompt

OK, let’s get to the point. Here is the original prompt I dug up from LangChain:

Use the following format in your response:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Check out the original: Code

Here is the first draft I adapted for the “human wish to Bash script” bot:

Instruction: the input instruction you must answer
Thought: you should always think about what to do in bash commands
Action: bash commands
Verification: verification command to check if the action was successful
... (this Thought/Action/Verification can repeat N times)
Thought: I now know the final answer
Bash Script: the bash script to run all actions at once


Instruction: {question}
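
For reference, here is a minimal sketch of how I would send that template in a single call, in the one-shot LangChain spirit described above. The BASHER_PROMPT constant and the model name are my own assumptions, not part of any library.

from openai import OpenAI

# The adapted template from above, kept as a single string.
BASHER_PROMPT = """Instruction: the input instruction you must answer
Thought: you should always think about what to do in bash commands
Action: bash commands
Verification: verification command to check if the action was successful
... (this Thought/Action/Verification can repeat N times)
Thought: I now know the final answer
Bash Script: the bash script to run all actions at once

Instruction: {question}
"""

client = OpenAI()

def wish_to_bash(wish: str) -> str:
    # One call: the model writes the whole Thought/Action/Verification trace
    # plus the final "Bash Script:" section in a single response.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whatever model you use
        messages=[{"role": "user", "content": BASHER_PROMPT.format(question=wish)}],
    )
    return response.choices[0].message.content

print(wish_to_bash("Create a folder named backup and copy all .txt files into it"))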

ReAct Full Version Online

If you want to see it in practice, here are the live versions:

PartyRock AWS

Basher PartyRock AWS

Huggingface Chat

Basher Huggingface Chat

I recommend adding a few-shot prompt (a couple of worked examples) before you use it as your day-to-day assistant.
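
For example, a few-shot example can be pasted into the template just above the final Instruction line. The one below is something I made up for illustration, not an example taken from the PartyRock or Hugging Face versions.

# A hypothetical few-shot example, pasted into the template
# just above the final "Instruction: {question}" line.
FEW_SHOT_EXAMPLE = """Instruction: create a folder named test and confirm it exists
Thought: I need to create a directory called test and then check that it is there
Action: mkdir -p test
Verification: test -d test && echo "exists"
Thought: I now know the final answer
Bash Script: mkdir -p test && test -d test && echo "exists"
"""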

If you want to take it further, the final result could be injected into a bash eval; then we would have an agent that turns a human wish into a bash script!
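
Here is a minimal sketch of that last step, assuming the model’s output ends with a Bash Script: section as in the template above. The confirmation prompt is my own addition, since eval-ing generated commands blindly is risky.

import subprocess

def run_generated_script(model_output: str) -> None:
    # Pull out everything after the last "Bash Script:" marker.
    script = model_output.rsplit("Bash Script:", 1)[-1].strip()
    print("About to run:\n" + script)
    # Ask before executing: never run generated commands blindly.
    if input("Run this script? [y/N] ").strip().lower() == "y":
        subprocess.run(["bash", "-c", script], check=False)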

In a nutshell

ReAct helps the LLM by breaking the solving process down into single actions and checking each result. The LLM focuses on one small sub-task and verifies its result before coming up with the next action.

The Result

The result is great, and I can see why it works well and greatly improves the effectiveness and reliability of the LLM.

Going further

Of course, ReAct prompting is perfect for an agentic LLM system, where you can call actual functions (a.k.a. tool calling) and get real results. Maybe next time.
