TESS-QwenRe-Fact-0.5B (ONNX)

This is an ONNX version of prithivMLmods/TESS-QwenRe-Fact-0.5B. It was automatically converted and uploaded using this space.



TESS-QwenRe-Fact-0.5B

TESS-QwenRe-Fact-0.5B is a compact fact-checking and short reasoning model built upon Qwen2.5 0.5B. Designed for rapid response, real-world fact verification, and concise logical reasoning, this lightweight model is ideal for digital assistants, quick-response tools, and misinformation detection systems in English and Chinese.

Key Features

  1. Fact Verification & Correction
    Trained to analyze factual accuracy in statements and offer corrected or clarified responses, making it ideal for real-time verification tasks and misinformation mitigation.

  2. Concise Reasoning
    Specializes in short-form reasoning, capable of analyzing and explaining claims, decisions, or statements in just a few logical steps, making it a good fit for Q&A bots and assistant systems.

  3. Multilingual Support (EN + ZH)
    Supports fact-checking tasks in both English and Simplified Chinese, enhancing accessibility for bilingual or regional use cases.

  4. Built on Qwen2.5 0.5B
    Combines the latest architectural improvements from Qwen2.5 with a small parameter footprint (0.5B), optimized for speed, efficiency, and edge-device compatibility.

  5. Prompt-Friendly Output
    Responds reliably to well-structured queries, returning clean, interpretable answers, especially for true/false classification, source-based fact validation, and yes/no reasoning.
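
As a concrete illustration of the structured, bilingual prompting the model favors, a fact-check query might be assembled like this. The helper name and message wording are illustrative only, not part of the model's API:

```python
# Hypothetical helper: build a structured EN/ZH fact-check conversation.
# The system prompts and claim format below are illustrative assumptions.
def build_fact_check_prompt(claim: str, lang: str = "en") -> list[dict]:
    system = {
        "en": "You are a concise and accurate fact-checking assistant.",
        "zh": "你是一个简明准确的事实核查助手。",
    }[lang]
    user = f"Claim: {claim}\nGive a true/false verdict and a one-sentence explanation."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_fact_check_prompt("The capital of Australia is Sydney.")
```

The resulting `messages` list can be passed directly to `tokenizer.apply_chat_template` as in the quickstart below.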

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/TESS-QwenRe-Fact-0.5B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Is the capital of Australia Sydney? Explain briefly."
messages = [
    {"role": "system", "content": "You are a concise and accurate fact-checking assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Intended Use

  • Fact-Checking Assistants: Quickly verify factual claims in conversation or content.
  • Digital Truth Detectors: Misinformation and rumor detection in social feeds or news summaries.
  • Micro-Reasoning Bots: Smart agents for short-form logic and rationale generation.
  • Multilingual Knowledge Tools: Fact reasoning in EN/ZH, ideal for diverse platforms.

Limitations

  1. Limited Depth
    Focused on short-form reasoning; it may not perform well on multi-step or abstract logic tasks.

  2. Compact Model Scale
    At 0.5B parameters, it prioritizes efficiency over complexity and is best suited to straightforward fact-based tasks.

  3. Language & Topic Bias
    Inherits limitations and biases from its base model Qwen2.5 0.5B. Use carefully in sensitive contexts.

  4. Prompt Clarity Required
    Clear, well-structured prompts yield higher factual accuracy and lower response latency.


Model tree for prithivMLmods/TESS-QwenRe-Fact-0.5B-ONNX

Base model: Qwen/Qwen2.5-0.5B