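In case you're jumping in at this point: below is a minimal sketch of the setup the following snippets assume. The full walkthrough appears earlier in the guide, which defines `model`, `tokenizer` and `tokenized_chat`; treat this as a compressed recap rather than the canonical version.

```python
# Sketch of the assumed setup: load Zephyr and apply its chat template
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
# apply_chat_template formats the chat and returns input IDs ready for generate()
tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
```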
Now that our input is formatted correctly for Zephyr, we can use the model to generate a response to the user's question:
```python
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
This will yield:
```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all.
```
Arr, 'twas easy after all! | |
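Note that the decoded string above includes the full formatted prompt, not just the model's reply. If you only want the newly generated text, one common approach (a sketch, assuming `tokenized_chat` is the tensor of input IDs from the earlier step) is to decode only the tokens that come after the prompt:

```python
# tokenized_chat has shape (1, prompt_len); everything in outputs[0] past
# that point is newly generated, so decode just that slice
prompt_len = tokenized_chat.shape[-1]
response = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(response)
```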
## Is there an automated pipeline for chat?
Yes, there is! Our text generation pipelines support chat inputs, which makes it easy to use chat models. We previously used a dedicated "ConversationalPipeline" class, but it has since been deprecated and its functionality merged into the [TextGenerationPipeline]. Let's try the Zephyr example again, but this time using a pipeline:
```python
from transformers import pipeline

pipe = pipeline("text-generation", "HuggingFaceH4/zephyr-7b-beta")
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
print(pipe(messages, max_new_tokens=128)[0]['generated_text'][-1])  # Print the assistant's response
```
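Since the pipeline hands back the whole conversation as a list of message dicts, one way to keep the chat going (a sketch, with a made-up follow-up question) is to append a new user turn and call the pipeline again:

```python
# 'generated_text' is the full message list, assistant reply included, so we
# can extend it with a follow-up turn and re-run the pipeline (the follow-up
# question here is invented for illustration)
chat = pipe(messages, max_new_tokens=128)[0]["generated_text"]
chat.append({"role": "user", "content": "And how many could a parrot eat?"})
print(pipe(chat, max_new_tokens=128)[0]["generated_text"][-1])
```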