Spaces:
Runtime error
Update app.py
app.py
CHANGED
@@ -1,17 +1,15 @@
 import gradio as gr
 
-
-"""**
-✨ This demo is powered by [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), finetuned on the [Baize](https://github.com/project-baize/baize-chatbot) dataset, and running with [Text Generation Inference](https://github.com/huggingface/text-generation-inference). [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is a state-of-the-art large language model built by the [Technology Innovation Institute](https://www.tii.ae) in Abu Dhabi. It is trained on 1 trillion tokens (including [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)) and available under the Apache 2.0 license. It currently holds the 🥇 1st place on the [🤗 Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This demo is made available by the [HuggingFace H4 team](https://huggingface.co/HuggingFaceH4).
-🧪 This is only a **first experimental preview**: the [H4 team](https://huggingface.co/HuggingFaceH4) intends to provide increasingly capable versions of Falcon Chat in the future, based on improved datasets and RLHF/RLAIF.
-👀 **Learn more about Falcon LLM:** [falconllm.tii.ae](https://falconllm.tii.ae/)
-➡️️ **Intended Use**: this demo is intended to showcase an early finetuning of [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), to illustrate the impact (and limitations) of finetuning on a dataset of conversations and instructions. We encourage the community to further build upon the base model, and to create even better instruct/chat versions!
-⚠️ **Limitations**: the model can and will produce factually incorrect information, hallucinating facts and actions. As it has not undergone any advanced tuning/alignment, it can produce problematic outputs, especially if prompted to do so. Finally, this demo is limited to a session length of about 1,000 words.
-
+this_Markdown1=(
+"""**
 Give me something to say!
 """)
+this_Markdown2=(
+"""**
+say something and i will write it!
+""")
+
 
-print(this_Markdown)
 
 tts_examples = [
     "I love learning machine learning",
@@ -22,14 +20,14 @@ tts_demo = gr.load(
     "huggingface/facebook/fastspeech2-en-ljspeech",
     title=None,
     examples=tts_examples,
-    description=
+    description=this_Markdown1,
 )
 
 stt_demo = gr.load(
     "huggingface/facebook/wav2vec2-base-960h",
     title=None,
     inputs="mic",
-    description=
+    description=this_Markdown2,
 )
 gr.api_name="additionss"
 demo = gr.TabbedInterface([tts_demo, stt_demo], ["Text-to-speech", "Speech-to-text"],css=".gradio-container {background-color: black}")
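For context (an inference from the diff, not stated on the page): the old app.py printed an undefined `this_Markdown` and left `description=` with no value, which cannot even parse, a plausible cause of the Space's runtime error. The commit fixes this by defining two module-level strings and passing them explicitly. A minimal sketch of that pattern, using a hypothetical `load_demo` stand-in for `gr.load` so it runs without Gradio installed:

```python
# Define the Markdown descriptions first (strings copied from the diff),
# then pass them as keyword arguments instead of leaving `description=` empty.
this_Markdown1 = """**
Give me something to say!
"""

this_Markdown2 = """**
say something and i will write it!
"""

def load_demo(model: str, *, title=None, description=None, **kwargs):
    # Hypothetical stand-in for gr.load: records the config it would receive.
    return {"model": model, "title": title, "description": description, **kwargs}

tts_demo = load_demo(
    "huggingface/facebook/fastspeech2-en-ljspeech",
    title=None,
    description=this_Markdown1,
)
stt_demo = load_demo(
    "huggingface/facebook/wav2vec2-base-960h",
    title=None,
    description=this_Markdown2,
)
```

Note that each `"""**` opens a Markdown bold marker that is never closed, so the `**` will render literally (or garble the description) in the Gradio UI; a matching closing `**` would be the usual fix.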