Now you can use it just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit`.
```python
image_generation_tool = load_tool('huggingface-tools/text-to-image')
agent = CodeAgent(tools=[prompt_generator_tool, image_generation_tool], llm_engine=llm_engine)

agent.run(
    "Improve this prompt, then generate an image of it.", prompt='A rabbit wearing a space suit'
)
```
The model adequately leverages the tool:
```text
======== New task ========
Improve this prompt, then generate an image of it.
You have been provided with these initial arguments: {'prompt': 'A rabbit wearing a space suit'}.

==== Agent is executing the code below:
improved_prompt = StableDiffusionPromptGenerator(query=prompt)

while improved_prompt == "QUEUE_FULL":
    improved_prompt = StableDiffusionPromptGenerator(query=prompt)

print(f"The improved prompt is {improved_prompt}.")

image = image_generator(prompt=improved_prompt)
====
```
Before finally generating the image.
> [!WARNING]
> gradio-tools require textual inputs and outputs even when working with different modalities such as image and audio objects. Image and audio inputs and outputs are currently incompatible.
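Because everything crossing a gradio-tools boundary must be text, a common workaround is to pass images between tools as file path strings rather than as in-memory objects. The helper below is a hypothetical sketch of that pattern, not part of gradio-tools or transformers:

```python
import os
import tempfile

# Hypothetical adapter: since gradio-tools exchange text only, an image
# result has to travel as a file path string, not as an image object.
def save_image_bytes_as_path(image_bytes: bytes) -> str:
    """Write raw image bytes to a temp file and return its path as text."""
    fd, path = tempfile.mkstemp(suffix=".png")
    with os.fdopen(fd, "wb") as f:
        f.write(image_bytes)
    return path  # a plain string the agent can hand to the next tool

# usage: the returned path is ordinary text, so any text-only tool accepts it
path = save_image_bytes_as_path(b"\x89PNG\r\n\x1a\n")
```

The downstream tool then reopens the file from the path, which keeps every tool-to-tool hand-off textual.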
### Use LangChain tools
We love LangChain and think it offers a very compelling suite of tools.
To import a tool from LangChain, use the `Tool.from_langchain()` method.
Here is how you can use it to recreate the intro's search result with a LangChain web search tool.
```python
from langchain.agents import load_tools
from transformers import Tool, ReactCodeAgent

# Wrap LangChain's SerpAPI search tool so the agent can call it
search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])

agent = ReactCodeAgent(tools=[search_tool])

agent.run("How many more blocks (also denoted as layers) are in the BERT base encoder than in the encoder from the architecture proposed in Attention Is All You Need?")
```
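Conceptually, an adapter like `Tool.from_langchain` does little more than map the LangChain tool's `name`, `description`, and `run` method onto the callable interface the agent expects. The sketch below is a rough, hypothetical illustration of that idea, not the actual transformers implementation:

```python
# Illustrative-only wrapper: exposes a LangChain-style tool (name,
# description, run) behind a simple __call__ interface.
class LangChainToolWrapper:
    def __init__(self, lc_tool):
        self.name = lc_tool.name                # used by the agent to pick the tool
        self.description = lc_tool.description  # shown to the LLM in the system prompt
        self._run = lc_tool.run

    def __call__(self, query: str) -> str:
        return self._run(query)

# usage with a stand-in object in place of a real LangChain tool
class FakeSearchTool:
    name = "search"
    description = "Answers web queries."
    def run(self, q):
        return f"results for {q}"

tool = LangChainToolWrapper(FakeSearchTool())
answer = tool("bert encoder layers")
```

Exposing `name` and `description` matters because the agent's LLM selects tools purely from those text attributes.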