Victor Nogueira

Felladrin

AI & ML interests

Models to run in the web browser

Organizations

Blog-explorers · MLX Community · Social Post Explorers · M4-ai · ONNX Community · Smol Community

Felladrin's activity

reacted to danielhanchen's post with 🚀 1 day ago
🦥 Introducing Unsloth Dynamic v2.0 GGUFs!
Our v2.0 quants set new benchmarks on 5-shot MMLU and KL Divergence, meaning you can now run & fine-tune quantized LLMs while preserving as much accuracy as possible.

Llama 4: unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF
DeepSeek-R1: unsloth/DeepSeek-R1-GGUF-UD
Gemma 3: unsloth/gemma-3-27b-it-GGUF
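
To try one of these quants locally, here is a minimal sketch using llama-cpp-python (the repo ID comes from the list above, but the filename glob is an assumption; check the repo for the exact quant file you want):

```python
# Minimal sketch: download a quant from the repo above and run a prompt.
# The filename glob is an assumption; pick a real file from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/gemma-3-27b-it-GGUF",
    filename="*Q4_K_M*",  # hypothetical pattern
    n_ctx=4096,
)
out = llm("Explain KL Divergence in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```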

We made selective layer quantization much smarter. Instead of modifying only a subset of layers, we now dynamically quantize all layers, so each layer can use a different bit width. Our dynamic method can now be applied to all LLM architectures, not just MoEs.

Blog with Details: https://docs.unsloth.ai/basics/dynamic-v2.0

All our future GGUF uploads will leverage Dynamic 2.0 and our hand-curated 300K–1.5M token calibration dataset to improve conversational chat performance.

For accurate benchmarking, we built an evaluation framework to match the reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full-precision, Dynamic v2.0, QAT, and standard iMatrix quants.

Dynamic v2.0 aims to minimize the performance gap between full-precision models and their quantized counterparts.
reacted to eaddario's post with 🔥 10 days ago
Tensor-wise quantization (TWQ) and layer-wise quantization (LWQ) are now available in llama.cpp!

As of version b5125, users can perform TWQ, whereby a whole tensor type is quantized at a specific level, or LWQ, whereby specific layers of a given tensor are quantized at different levels.

The new --tensor-type option enables llama-quantize to apply user-defined quant levels to any combination of allowed tensors (i.e. tensors with 2 or more dimensions) and layer numbers, with support for regex patterns.

For example, to TWQ the Attention Value tensor you would use --tensor-type attn_v=q6_k, and to LWQ only layers 0–12, 15, 17 and 31 you would use something like --tensor-type "\.([0-9]|1[01257]|31)\.attn_v=q4_k".
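
Putting it together, a hypothetical end-to-end invocation might look like this (filenames are placeholders; tensors not matched by the option fall back to the base quant type):

```
# Quantize the whole model to Q4_K_M, but keep attn_v tensors at Q6_K
./llama-quantize --tensor-type attn_v=q6_k model-f16.gguf model-q4_k_m-twq.gguf q4_k_m
```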

In the next few days/weeks I'll update the models in my HF repo (and will add some others), but eaddario/DeepSeek-R1-Distill-Llama-8B-GGUF and eaddario/DeepSeek-R1-Distill-Qwen-7B-GGUF have already been LWQed.

For reference, compared to the naive Q4_K_M model, the LWQ Qwen-7B is almost 11% smaller (4.18GB vs 4.68GB) with only a 0.35% penalty on PPL!

I'll update the https://medium.com/@eaddario/squeezing-tensor-bits-the-quest-for-smaller-llms-86b23bd052ca post to explain the process in detail, but in the meantime the following links will provide some background:

- Changes to llama-quantize: https://github.com/ggml-org/llama.cpp/pull/12511
- TWQ & LWQ tests: https://github.com/ggml-org/llama.cpp/discussions/12741
- Modified llama-imatrix (not yet merged) used to generate imatrix statistics to guide the TWQ and LWQ process: https://github.com/ggml-org/llama.cpp/pull/12718
reacted to grimjim's post with 🧠 21 days ago
I have recently been looking at a paper titled "Why Warmup the Learning Rate? Underlying Mechanisms and Improvements", by Dayal Singh Kalra and Maissam Barkeshli, and was struck by "warmup" being analogous to simulated annealing.
https://arxiv.org/abs/2406.09405
Taking the physical analogy further, the "warmup" is a stochastic process that knocks the system out of its current local minimum, allowing an easier transition toward newer minima. It works because it reduces "fit" and therefore "friction".
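
For intuition, here is a minimal sketch of a linear warmup schedule (illustrative only; not taken from the paper):

```python
# Linear warmup: ramp the learning rate from ~0 to its target over
# `warmup_steps`, letting early, larger-variance updates knock the
# system out of sharp minima before the main schedule takes over.
def lr_at_step(step: int, target_lr: float = 3e-4, warmup_steps: int = 1000) -> float:
    if step < warmup_steps:
        return target_lr * (step + 1) / warmup_steps  # linear ramp
    return target_lr  # hand off to the main (e.g. cosine decay) schedule

print([round(lr_at_step(s), 6) for s in (0, 499, 999, 5000)])
```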
reacted to BlinkDL's post with 🔥 about 1 month ago
RWKV-7 "Goose" 0.4B trained w/ ctx4k automatically extrapolates to ctx32k+, and perfectly solves NIAH ctx16k 🤯 100% RNN and attention-free. Only trained on the Pile. No finetuning. Replicable training runs. tested by our community: https://github.com/Jellyfish042/LongMamba
reacted to mlabonne's post with 🔥 about 1 month ago
reacted to sharpenb's post with 🔥 about 1 month ago
We open-sourced the pruna package, which can be easily installed with pip install pruna :) It lets you easily compress and evaluate AI models, including transformers and diffusers.

- Github repo: https://github.com/PrunaAI/pruna
- Documentation: https://docs.pruna.ai/en/stable/index.html

With open-sourcing, people can now inspect and contribute to the code. Beyond the code, we provide a detailed README, tutorials, benchmarks, and documentation to make compression, evaluation, and saving/loading/serving of AI models transparent.
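
As a rough sketch of the workflow (the smash/SmashConfig names and the "cacher" key are assumptions based on the README; consult the documentation for the actual API):

```python
# Hypothetical sketch; names below are assumptions -- see docs.pruna.ai.
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
config = SmashConfig()
config["cacher"] = "deepcache"  # assumed compressor key/value
smashed_pipe = smash(model=pipe, smash_config=config)
image = smashed_pipe("a photo of a cat").images[0]
```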

Happy to share it with you and always interested in collecting your feedback :)
reacted to AdinaY's post with 🚀 about 1 month ago
reacted to eaddario's post with 👀 about 2 months ago
reacted to ngxson's post with 🚀 about 2 months ago
A comprehensive matrix of which format you should use.

Read more on my blog post: https://huggingface.co./blog/ngxson/common-ai-model-formats

| Hardware        | GGUF      | PyTorch                | Safetensors              | ONNX  |
|-----------------|-----------|------------------------|--------------------------|-------|
| CPU             | ✅ (best) | 🟡                      | 🟡                       ||
| GPU             |||||
| Mobile          || 🟡 (via executorch)     |||
| Apple silicon   || 🟡                      | ✅ (via MLX framework)   ||
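
For a rough sense of what consuming each format looks like in Python (file paths and repo IDs below are placeholders):

```python
# Illustrative only; file paths and repo IDs are placeholders.
from transformers import AutoModel  # loads PyTorch/Safetensors checkpoints
from llama_cpp import Llama         # GGUF via llama.cpp bindings
import onnxruntime as ort           # ONNX

hf_model = AutoModel.from_pretrained("user/some-model")  # safetensors by default
gguf_llm = Llama(model_path="some-model-q4_k_m.gguf")
onnx_sess = ort.InferenceSession("some-model.onnx", providers=["CPUExecutionProvider"])
```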
reacted to AdinaY's post with 🔥 2 months ago
Wan2.1 🔥📹 a new OPEN video model by the Alibaba Wan team!

Model: Wan-AI/Wan2.1-T2V-14B
Demo: Wan-AI/Wan2.1

✨Apache 2.0
✨8.19GB VRAM, runs on most GPUs
✨Multi-Tasking: T2V, I2V, Video Editing, T2I, V2A
✨Text Generation: Supports Chinese & English
✨Powerful Video VAE: Encode/decode 1080P w/ temporal precision
reacted to JingzeShi's post with 🚀 2 months ago
reacted to fantos's post with 🚀 3 months ago
😊 Panorama X3 Image

An innovative system that leverages a Stable Diffusion XL-based tiling pipeline to generate unique and vibrant panoramic images by applying different prompts to the left, center, and right sections of a single image.

Key Features & Strengths
Multi-Area Prompt Support
Input distinct descriptions for the left, center, and right regions (e.g., "dense forest" for the left, "calm lake" for the center, and "majestic mountains" for the right). This allows the system to seamlessly blend multiple scenes into one stunning panoramic image. 🌄

Automatic Korean-to-English Translation
If your prompt contains Korean text, it will be automatically translated into English before image generation.
(For example, "안개 낀 산" becomes "Misty mountain") 🔄
This feature ensures that you can effortlessly use both English and Korean prompts.

Advanced Tiling Technology
The project uses a sophisticated tiling approach that manages overlapping regions to produce natural transitions and high-resolution panoramic images.
This isn't just a simple image merge—it's a refined process that delivers exceptional quality and detail. 🖼️
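
To illustrate the idea (not the Space's actual code), here is a minimal sketch of cross-fading two adjacent tiles over their overlap so the seam transitions smoothly:

```python
# Sketch: cross-fade two horizontally adjacent tiles over `overlap` columns.
import numpy as np

def blend_tiles(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """left/right are HxWx3 arrays that share `overlap` columns at the seam."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across the seam
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

a = np.zeros((64, 128, 3))
b = np.ones((64, 128, 3))
print(blend_tiles(a, b, 32).shape)  # (64, 224, 3)
```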

User-Friendly Interface
Enjoy a modern, visually appealing UI featuring a gradient background, semi-transparent containers, and smooth animation effects.
The prompt input fields clearly indicate that both English and Korean entries are allowed with the label (English/Korean allowed), making it accessible for everyone. 🎨

fantos/Panorama

Panorama X3 Image is the perfect tool for anyone looking to visually express creative ideas. Try it out now by experimenting with various prompts and create your very own breathtaking panoramic image! 🚀

Thank you! 🙏
reacted to Xenova's post with 🚀🔥 3 months ago
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚡️

Generate 10 seconds of speech in ~1 second for $0.

What will you build? 🔥
webml-community/kokoro-webgpu

The most difficult part was getting the model running in the first place, but the next steps are simple:
✂️ Implement sentence splitting, allowing for streamed responses
🌍 Multilingual support (only phonemization left)

Who wants to help?
replied to victor's post 3 months ago

This update is massive!! 🙌

I'd love it if we could also filter Spaces so that we can list only the ones in the Running state.

reacted to Tonic's post with 🔥 3 months ago
🙋🏻‍♂️ Hey there folks,

Our team made a game during the @mistral-game-jam and we're trying to win the community award!

Try our game out and drop us a ❤️ like to vote for us!

Mistral-AI-Game-Jam/TextToSurvive

Hope you like it!
reacted to AdinaY's post with 🚀 3 months ago
🔥So many exciting releases coming from the Chinese community this month!
https://huggingface.co./collections/zh-ai-community/2025-january-6786b054f492fb223591269e

LLMs:
✨ Qwen2.5-1M by Alibaba
Qwen/qwen25-1m-679325716327ec07860530ba
✨ InternLM3-8B-Instruct by Shanghai AI Lab
internlm/internlm3-8b-instruct
✨ MiniMax-Text-01 by MiniMax AI
MiniMaxAI/MiniMax-Text-01
✨ RWKV-7 by BlinkDL -- RNN + Transformer 👀
BlinkDL/rwkv-7-world
✨ DeepSeek-R1 by DeepSeek -- THE ONE 🙌
deepseek-ai
✨ Baichuan-M1-14B by Baichuan - Medical 🩺
baichuan-inc/Baichuan-M1-14B-Base
✨ Qwen2.5-Math-PRM by Alibaba - Math 🔢
Qwen/Qwen2.5-Math-PRM-7B

Code:
✨ Trae by ByteDance
https://trae.ai

TTS:
✨ T2A-01-HD by MiniMax AI
https://hailuo.ai/audio
✨ LLaSA by HKUST Audio
HKUSTAudio/Llasa-3B

MLLM:
✨ Kimi k1.5 by Moonshot AI
https://kimi.ai
✨ MiniCPM-o-2_6 by OpenBMB
openbmb/MiniCPM-o-2_6
✨ Sa2VA-4B by ByteDance
ByteDance/Sa2VA-4B
✨ VideoLLaMA 3 by Alibaba DAMO
DAMO-NLP-SG/videollama3-678cdda9281a0e32fe79af15
✨ LLaVA-Mini by Chinese Academy of Sciences
ICTNLP/llava-mini-llama-3.1-8b
✨Hunyuan-7B by Tencent
tencent/Hunyuan-7B-Instruct
✨ Hunyuan 3D 2.0 by Tencent
tencent/Hunyuan3D-2
✨MiniMax-VL-01 by MiniMax AI - A non transformer based VLM 👀
MiniMaxAI/MiniMax-VL-01

Agent:
✨ UI-TARS by ByteDance
https://huggingface.co./bytedance-research/UI-TARS-7B-SFT
✨ GLM-PC by Zhipu AI
https://cogagent.aminer.cn

Dataset:
✨ Fineweb-Edu-Chinese by Opencsg
opencsg/Fineweb-Edu-Chinese-V2.1
✨ Multimodal_textbook by Alibaba
DAMO-NLP-SG/multimodal_textbook
✨ MME-Finance by Hithink AI
reacted to ngxson's post with 🚀 4 months ago
reacted to tomaarsen's post with ❤️ 4 months ago
That didn't take long! Nomic AI has finetuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering and more!

Details:
🤖 Based on ModernBERT-base with 149M parameters.
📊 Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
🏎️ Immediate FA2 and unpacking support for super efficient inference.
🪆 Trained with Matryoshka support, i.e. 2 valid output dimensionalities: 768 and 256.
➡️ Maximum sequence length of 8192 tokens!
2️⃣ Trained in 2 stages: unsupervised contrastive data -> high quality labeled datasets.
➕ Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
🏛️ Apache 2.0 licensed: fully commercially permissible

Try it out here: nomic-ai/modernbert-embed-base
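
As a minimal usage sketch (assuming Sentence Transformers' truncate_dim Matryoshka support and the usual nomic search_query:/search_document: prefixes):

```python
# Sketch: 256-dim Matryoshka embeddings; prefixes follow nomic's convention.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/modernbert-embed-base", truncate_dim=256)
query = model.encode(["search_query: What is TSNE?"])
doc = model.encode(["search_document: t-SNE is a dimensionality reduction technique."])
print(model.similarity(query, doc))
```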

Very nice work by Zach Nussbaum and colleagues at Nomic AI.
reacted to MoritzLaurer's post with 👍 4 months ago
Quite excited by the ModernBERT release! Small at 0.15B/0.4B parameters, 2T tokens of modern pre-training data, a tokenizer that handles code, an 8k context window: a great, efficient model for embeddings & classification!

This will probably be the basis for many future SOTA encoders! And I can finally stop using DeBERTaV3 from 2021 :D

Congrats @answerdotai , @LightOnIO and collaborators like @tomaarsen !

Paper and models here 👇 https://huggingface.co./collections/answerdotai/modernbert-67627ad707a4acbf33c41deb
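
A quick way to poke at the base model (assuming a transformers version with ModernBERT support):

```python
# Fill-mask sketch with the base encoder; ModernBERT uses the [MASK] token.
from transformers import pipeline

fill = pipeline("fill-mask", model="answerdotai/ModernBERT-base")
print(fill("The capital of France is [MASK].")[0]["token_str"])
```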