The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent.
> [!WARNING]
> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!
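One way such a guard can work is to reject generated code whose imports fall outside an allow-list. The sketch below is illustrative only, not the library's actual interpreter, and the allow-list contents are made up:

```python
import ast

ALLOWED_IMPORTS = {"math", "re"}  # hypothetical allow-list

def check_imports(code: str) -> bool:
    """Return True if the code only imports allowed top-level modules."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] not in ALLOWED_IMPORTS for alias in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] not in ALLOWED_IMPORTS:
                return False
    return True

print(check_imports("import math"))  # True
print(check_imports("import os"))    # False
```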
### The system prompt
An agent, or rather the LLM that drives the agent, generates an output based on the system prompt. The system prompt can be customized and tailored to the intended task. For example, check the system prompt for the [ReactCodeAgent] (the version below is slightly simplified).
```text | |
You will be given a task to solve as best you can. | |
You have access to the following tools: | |
<<tool_descriptions>>
To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences. | |
At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use. | |
Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
During each intermediate step, you can use 'print()' to save whatever important information you will then need. | |
These print outputs will then be available in the 'Observation:' field, for using this information as input for the next step. | |
In the end you have to return a final answer using the final_answer tool. | |
Here are a few examples using notional tools: | |
{examples} | |
The above examples were using notional tools that might not exist for you. You only have access to these tools:
<<tool_descriptions>>
You can also perform computations in the Python code you generate.
Always provide a 'Thought:' and a 'Code:\n```py' sequence ending with '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward.
Remember to not perform too many operations in a single code block! You should split the task into intermediate code blocks. | |
Print results at the end of each step to save the intermediate results. Then use final_answer() to return the final result. | |
Remember to make sure that variables you use are all defined. | |
Now Begin!
```
The system prompt includes: | |
- An introduction that explains how the agent should behave and what tools are. | |
- A description of all the tools, defined by a `<<tool_descriptions>>` token that is dynamically replaced at runtime with the tools defined or chosen by the user.
- The tool description comes from the tool attributes (name, description, inputs, and output_type) and a simple jinja2 template that you can refine.
- The expected output format. | |
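To make the substitution concrete, here is a simplified stand-in for that rendering step. The tool metadata and the template string below are made up for illustration; the library's real template is jinja2-based:

```python
# Made-up tool metadata mirroring the attributes described above
# (name, description, inputs, output_type).
tools = [
    {
        "name": "calculator",
        "description": "Evaluates an arithmetic expression.",
        "inputs": "expression (str)",
        "output_type": "str",
    },
]

# Simplified stand-in for the jinja2 tool-description template.
TEMPLATE = "- {name}: {description}\n    Takes inputs: {inputs}\n    Returns: {output_type}"

tool_descriptions = "\n".join(TEMPLATE.format(**tool) for tool in tools)
prompt_template = "You have access to the following tools:\n<<tool_descriptions>>"

# The token is replaced at runtime with the rendered descriptions.
rendered = prompt_template.replace("<<tool_descriptions>>", tool_descriptions)
print(rendered)
```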
You could improve the system prompt, for example, by adding an explanation of the output format. | |
For maximum flexibility, you can overwrite the whole system prompt template by passing your custom prompt to the `system_prompt` parameter.
```python
from transformers import ReactJsonAgent
from transformers.agents import PythonInterpreterTool

agent = ReactJsonAgent(tools=[PythonInterpreterTool()], system_prompt="{your_custom_prompt}")
```
> [!WARNING]
> Please make sure to define the `<<tool_descriptions>>` string somewhere in the template so the agent is aware
> of the available tools.
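Since a missing placeholder silently leaves the agent unaware of its tools, you might guard against it before constructing the agent. The helper below is hypothetical, not part of the library:

```python
def validate_system_prompt(prompt: str) -> None:
    """Raise if the custom template is missing the tool-description placeholder."""
    if "<<tool_descriptions>>" not in prompt:
        raise ValueError("system_prompt must contain the '<<tool_descriptions>>' token")

# Passes silently: the placeholder is present.
validate_system_prompt("You are an agent.\nTools:\n<<tool_descriptions>>\nUse final_answer().")
```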
### Tools
A tool is an atomic function to be used by an agent. | |
You can, for instance, check the [PythonInterpreterTool]: it has a name, a description, input descriptions, an output type, and a `__call__` method to perform the action.
When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why. | |
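To make the attribute list concrete, here is a minimal, framework-free sketch of what such a tool could look like. The class, attribute values, and signature are illustrative, not the library's actual Tool API:

```python
# A toy tool carrying the attributes listed above: name, description,
# input descriptions, an output type, and a __call__ method.
class GreetingTool:
    name = "greeting"
    description = "Returns a greeting for the given name."
    inputs = {"name": {"type": "string", "description": "who to greet"}}
    output_type = "string"

    def __call__(self, name: str) -> str:
        return f"Hello, {name}!"

tool = GreetingTool()
print(tool(name="Ada"))  # Hello, Ada!
```

An agent framework would read the class attributes to render the tool's entry in the system prompt, and invoke the instance to perform the action.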
#### Default toolbox
Transformers comes with a default toolbox for empowering agents, which you can add to your agent upon initialization with the argument `add_base_tools=True`: