Spaces · Running

snowkylin committed · Commit 6666806 · Parent: 2ba2f1f

readme update
main.py
CHANGED
```diff
@@ -4,7 +4,6 @@ import torch
 pipe = pipeline(
     "image-text-to-text",
     model="google/gemma-3-4b-it",
-    device="cuda",
     torch_dtype=torch.bfloat16,
 )
 
```
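Removing the hard-coded `device="cuda"` lets the pipeline fall back to its default device selection, presumably so the demo also runs on machines without CUDA. For comparison, a minimal sketch of an alternative that still uses a GPU when one is present (the `device_map="auto"` argument is our assumption, not part of this commit, and requires the `accelerate` package):

```python
import torch
from transformers import pipeline

# Hypothetical variant: let accelerate place the model automatically,
# using the GPU if available and falling back to CPU otherwise.
pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-4b-it",
    device_map="auto",  # assumption; not what this commit does
    torch_dtype=torch.bfloat16,
)
```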
readme.md
CHANGED
````diff
@@ -11,9 +11,15 @@ license: mit
 short_description: Chat with a character via reference sheet!
 ---
 
-# Chat via
+# RefSheet Chat -- Chat with a character via reference sheet
 
-
+Upload a reference sheet of a character, and RefSheet Chat will try to understand the character through the reference sheet and talk to you as that character. RefSheet Chat can run locally to ensure privacy.
+
+Website: <https://refsheet.chat>
+
+Tutorial slides (in Chinese) can be found at <https://snowkylin.github.io/talks/>
+
+RefSheet Chat is a demo of [Gemma 3](https://blog.google/technology/developers/gemma-3/), demonstrating its excellent vision and multilingual capabilities.
 
 ## Environment Configuration
 
@@ -54,5 +60,52 @@ huggingface-cli login
 
 Copy-paste your access token and press enter.
 
+## Packing
+
+See <https://github.com/whitphx/gradio-pyinstaller-example> for more details.
+
+Create a hook file `runtime_hook.py` that sets the required environment variables:
+
+```python
+# This is the hook patching the `multiprocessing.freeze_support` function,
+# which we must import before calling `multiprocessing.freeze_support`.
+import PyInstaller.hooks.rthooks.pyi_rth_multiprocessing  # noqa: F401
+import os
+
+if __name__ == "__main__":
+    os.environ['PYINSTALLER'] = "1"
+    os.environ['HF_ENDPOINT'] = "https://hf-mirror.com"  # optional, HF mirror site in China
+    os.environ['HF_TOKEN'] = "hf_XXXX"  # HF token that allows access to Gemma 3
+    # This is necessary to prevent an infinite app launch loop.
+    import multiprocessing
+    multiprocessing.freeze_support()
+```
+
+Then generate the spec file:
+
+```commandline
+pyi-makespec --collect-data=gradio_client --collect-data=gradio --collect-data=safehttpx --collect-data=groovy --runtime-hook=./runtime_hook.py app.py
+```
+
+Open `app.spec` and add:
+```python
+a = Analysis(
+    ...,
+    module_collection_mode={
+        'gradio': 'py',  # Collect the gradio package as source .py files
+    },
+)
+```
+Then pack the application:
+```commandline
+pyinstaller --clean app.spec
+```
+Finally, copy the `win32ctypes` folder from your conda environment
+```commandline
+C:\Users\[Your-User-Name]\miniconda3\envs\[Your-Env-Name]\Lib\site-packages
+```
+to `dist/app/_internal`.
+
+Run `app.exe` in `dist/app` and it should work.
 
 
````
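The final copy step can also be scripted from a terminal. A sketch using the standard Windows `xcopy` command, keeping the placeholders from the instructions above (`/E` copies subdirectories including empty ones, `/I` treats the destination as a directory):

```commandline
xcopy /E /I "C:\Users\[Your-User-Name]\miniconda3\envs\[Your-Env-Name]\Lib\site-packages\win32ctypes" "dist\app\_internal\win32ctypes"
```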
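The commit does not show `app.py` itself. For orientation, here is a minimal sketch of a Gradio entry point that fits the packing setup above; the `greet` handler is a placeholder standing in for the real Gemma 3 chat logic:

```python
import multiprocessing

import gradio as gr

def greet(name: str) -> str:
    # Placeholder handler; the actual Space wires up the Gemma 3 pipeline here.
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    # Mirror the guard in runtime_hook.py: freeze_support() must run
    # before launching, or the frozen app relaunches itself in a loop.
    multiprocessing.freeze_support()
    demo.launch()
```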