zakerytclarke committed
Commit 447577a · verified · 1 Parent(s): 5aa6f84

Update README.md

Files changed (1)
  README.md (+42, -25)
README.md CHANGED
@@ -2,6 +2,7 @@
 license: mit
 datasets:
 - teapotai/synthqa
+- teapotai/teapot-chat
 language:
 - en
 - fr
@@ -14,49 +15,65 @@ tags:
 - transformers.js
 widget:
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
-    Teapot is trained to only answer using context from documents, reducing hallucinations.
-    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
-    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
-    Teapot is a model built by and for the community.
+    Teapot is an open-source small language model (~800 million parameters)
+    fine-tuned on synthetic data and optimized to run locally on
+    resource-constrained devices such as smartphones and CPUs. Teapot is trained
+    to only answer using context from documents, reducing hallucinations. Teapot
+    can perform a variety of tasks, including hallucination-resistant Question
+    Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
+    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
+    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
+    by and for the community.
 
 
     What devices can teapot run on?
   example_title: Question Answering
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
-    Teapot is trained to only answer using context from documents, reducing hallucinations.
-    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
-    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
-    Teapot is a model built by and for the community.
+    Teapot is an open-source small language model (~800 million parameters)
+    fine-tuned on synthetic data and optimized to run locally on
+    resource-constrained devices such as smartphones and CPUs. Teapot is trained
+    to only answer using context from documents, reducing hallucinations. Teapot
+    can perform a variety of tasks, including hallucination-resistant Question
+    Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
+    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
+    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
+    by and for the community.
 
 
     Tell me about teapotllm
   example_title: Summarization Answering
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
-    Teapot is trained to only answer using context from documents, reducing hallucinations.
-    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
-    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
-    Teapot is a model built by and for the community.
+    Teapot is an open-source small language model (~800 million parameters)
+    fine-tuned on synthetic data and optimized to run locally on
+    resource-constrained devices such as smartphones and CPUs. Teapot is trained
+    to only answer using context from documents, reducing hallucinations. Teapot
+    can perform a variety of tasks, including hallucination-resistant Question
+    Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
+    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
+    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
+    by and for the community.
 
 
     Extract the number of parameters
   example_title: Information Extraction
 - text: >-
-    Teapot is an open-source small language model (~800 million parameters) fine-tuned on synthetic data and optimized to run locally on resource-constrained devices such as smartphones and CPUs.
-    Teapot is trained to only answer using context from documents, reducing hallucinations.
-    Teapot can perform a variety of tasks, including hallucination-resistant Question Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
-    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data generated by Deepseek v3
-    TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
-    Teapot is a model built by and for the community.
+    Teapot is an open-source small language model (~800 million parameters)
+    fine-tuned on synthetic data and optimized to run locally on
+    resource-constrained devices such as smartphones and CPUs. Teapot is trained
+    to only answer using context from documents, reducing hallucinations. Teapot
+    can perform a variety of tasks, including hallucination-resistant Question
+    Answering (QnA), Retrieval-Augmented Generation (RAG), and JSON extraction.
+    TeapotLLM is a fine tune of flan-t5-large that was trained on synthetic data
+    generated by Deepseek v3 TeapotLLM can be hosted on low-power devices with
+    as little as 2GB of CPU RAM such as a Raspberry Pi. Teapot is a model built
+    by and for the community.
 
 
     How many parameters is Deepseek?
-  example_title: Hallucination Resistance
+  example_title: Hallucination Resistance
 base_model:
 - google/flan-t5-large
 pipeline_tag: text2text-generation
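
Most of the +42/-25 churn above is the Hub's YAML normalizer rewrapping the `>-` folded block scalars, not a content change: inside a folded scalar, single line breaks fold into spaces, so the old one-sentence-per-line layout and the new fixed-width rewrap load as the same string. A minimal sketch with PyYAML, using only the fragment quoted in the diff:

```python
# Demo that the rewrap in this commit is cosmetic: in a YAML folded block
# scalar (>-), single line breaks fold into spaces, so both layouts parse
# to the same string. Requires PyYAML (pip install pyyaml).
import yaml

old = """\
text: >-
  Teapot is trained to only answer using context from documents, reducing hallucinations.
  TeapotLLM can be hosted on low-power devices with as little as 2GB of CPU RAM such as a Raspberry Pi.
"""

new = """\
text: >-
  Teapot is trained to only answer using context from documents, reducing
  hallucinations. TeapotLLM can be hosted on low-power devices with as little
  as 2GB of CPU RAM such as a Raspberry Pi.
"""

assert yaml.safe_load(old)["text"] == yaml.safe_load(new)["text"]
```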
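The substantive metadata change is the new training-data entry, teapotai/teapot-chat, alongside the existing teapotai/synthqa. A hedged sketch of pulling both with the `datasets` library, assuming the ids are public on the Hub:

```python
# Load the two datasets listed in the frontmatter. Assumes both are public
# Hub datasets under these ids; split names are whatever each dataset defines.
from datasets import load_dataset

synthqa = load_dataset("teapotai/synthqa")          # previously listed
teapot_chat = load_dataset("teapotai/teapot-chat")  # added in this commit

print(synthqa)
print(teapot_chat)
```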
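Each widget entry above packs the grounding context and a question into a single prompt, separated by a blank line (which the folded scalar preserves as a newline). A minimal sketch of the same call outside the widget, assuming the model is published as teapotai/teapotllm (an id not stated in this diff) and served by the standard transformers pipeline named in pipeline_tag:

```python
# Reproduce the "Question Answering" widget example locally, assuming the
# model id "teapotai/teapotllm" (hypothetical here) and the text2text-generation
# pipeline declared in the frontmatter.
from transformers import pipeline

generator = pipeline("text2text-generation", model="teapotai/teapotllm")

context = (
    "Teapot is an open-source small language model (~800 million parameters) "
    "fine-tuned on synthetic data and optimized to run locally on "
    "resource-constrained devices such as smartphones and CPUs."
)
question = "What devices can teapot run on?"

# Context and question are joined by a blank line, mirroring the widget text.
prompt = f"{context}\n\n{question}"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```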