Model Details

This model is an int4 model with group_size 128 and symmetric quantization of google/gemma-3-27b-it, generated by the intel/auto-round algorithm.

Please follow the license of the original model.
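
As a quick sanity check, the quantization settings described above can be read back from the checkpoint's configuration. The snippet below is a minimal sketch; it assumes the saved config exposes a quantization_config entry with bits, group_size, and sym fields.

from transformers import AutoConfig

# Inspect the quantization settings stored with the checkpoint
# (expected to report bits=4, group_size=128, sym=True).
cfg = AutoConfig.from_pretrained("OPEA/gemma-3-27b-it-int4-AutoRound")
print(cfg.quantization_config)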

Inference on XPU/CPU/CUDA

Requirements

pip install 'auto-round>=0.5'
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch
## required for the auto_round format; alternatively, use transformers>4.51.3
from auto_round import AutoRoundConfig

model_id = "OPEA/gemma-3-27b-it-int4-AutoRound"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image",
             "image": "https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
model.to(torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
"""
Here's a detailed description of the image:

**Overall Impression:**

The image is a close-up shot of a vibrant garden scene, focusing on a pink cosmos flower with a bumblebee actively collecting pollen. The composition is natural and slightly wild, with a mix of blooming and fading flowers.

**Detailed Description:**

*   **Main Subject:** A bright pink cosmos flower is the central focus. The petals are a delicate shade of pink with a slightly darker pink vein pattern. The
"""

Generate the model

Here is a sample command to reproduce the model:

auto-round-mllm \
--model google/gemma-3-27b-it \
--device 0 \
--bits 4 \
--format 'auto_round' \
--output_dir "./tmp_autoround"