```py
from transformers import AutoModelForCausalLM

# torch_dtype="auto" loads the weights in the data type they were saved in
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype="auto")
```
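If you prefer a fixed precision instead of inferring it from the checkpoint, `torch_dtype` also accepts an explicit `torch.dtype` (a minimal sketch; the choice of float16 here is just an illustration):

```py
import torch
from transformers import AutoModelForCausalLM

# Explicitly request half precision rather than letting "auto" pick the dtype
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype=torch.float16)
```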
You can also set the data type to use for models instantiated from scratch.
```py
import torch
from transformers import AutoConfig, AutoModel

# Attach the desired dtype to the config, then build the model from scratch (random weights)
my_config = AutoConfig.from_pretrained("google/gemma-2b", torch_dtype=torch.float16)
model = AutoModel.from_config(my_config)
```
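As a quick sanity check (a sketch added here, not part of the original snippet), you can inspect a parameter of the freshly built model to confirm the dtype from the config was applied:

```py
import torch
from transformers import AutoConfig, AutoModel

my_config = AutoConfig.from_pretrained("google/gemma-2b", torch_dtype=torch.float16)
model = AutoModel.from_config(my_config)

# The randomly initialized parameters should report torch.float16 if the
# config's torch_dtype was honored during instantiation.
print(next(model.parameters()).dtype)
```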