
Julien Chaumond PRO

julien-c

AI & ML interests

<3 ML/AI for everyone, building products to propel communities fwd

Recent Activity

liked a Space about 6 hours ago
burtenshaw/genything
liked a Space about 6 hours ago
reach-vb/Blazingly-fast-LoRA

Organizations

Hugging Face, Safetensors, Notebooks-explorers, Nbconvert-internal, BigScience Workshop, Spaces-explorers, Templates, The LLM Course, Giskard, ph-snps, Text Generation Inference, Amazon SageMaker Community, Training Transformers Together, Hugging Chat, Atmos Bank, Godot Engine Demos, Pyodide Demos, Huggingface.js, Webhooks Explorers (BETA), Workshop June 13 Classroom, HF Canonical Model Maintainers, TRL, Scanned Tokens, HF Legal, Language Tools, Stable Diffusion concepts library, Teven-projects, Banana-projects, Exbert-project, Blog-explorers, EU org, Hacktoberfest 2023, huggingPartyParis, Enterprise Explorers, ZeroGPU Explorers, OpenAI community, XLNet community, ALBERT community, Transformer-XL community, Facebook AI community, DistilBERT community, BERT community, T5 community, choosealicense.com mirror, Social Post Explorers, Dev Mode Explorers, Test, private beta for deeplinks, Paris AI Running Club, kmhf, Hugging Face Party @ PyTorch Conference, Nerdy Face, Hugging Face Science, open/ acc, DDUF, Self-serve FTW, Inference Explorers, hf-inference

julien-c's activity

replied to their post 2 days ago
reacted to their post with 👍🚀🔥 2 days ago
reacted to danielhanchen's post with ❤️🤗🔥 2 days ago
🦥 Introducing Unsloth Dynamic v2.0 GGUFs!
Our v2.0 quants set new benchmarks on 5-shot MMLU and KL Divergence, meaning you can now run & fine-tune quantized LLMs while preserving as much accuracy as possible.

Llama 4: unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF
DeepSeek-R1: unsloth/DeepSeek-R1-GGUF-UD
Gemma 3: unsloth/gemma-3-27b-it-GGUF

We made selective layer quantization much smarter. Instead of modifying only a subset of layers, we now dynamically quantize all layers so every layer can have a different bit width. Now, our dynamic method can be applied to all LLM architectures, not just MoEs.
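As a rough illustration of the per-layer idea only (a toy sketch, not Unsloth's actual method; the layer names, the sensitivity proxy, and the bit-width table below are all made up):

```python
# Toy sketch of per-layer dynamic bit allocation (illustration only).
# Assumption: more "sensitive" layers get more bits under a limited bit budget.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-layer weight matrices for a small model.
layers = {f"blk.{i}.ffn_down": rng.normal(size=(256, 256)) for i in range(6)}

def sensitivity(w):
    """Cheap proxy: heavier-tailed weights suffer more from coarse quantization."""
    return float(np.mean(np.abs(w) ** 3))

def pick_bits(rank, n_layers):
    """Map a sensitivity rank to a bit width: most sensitive -> 8 bits, least -> 2."""
    choices = [8, 6, 5, 4, 3, 2]
    idx = int(rank / max(n_layers - 1, 1) * (len(choices) - 1))
    return choices[idx]

ranked = sorted(layers, key=lambda name: sensitivity(layers[name]), reverse=True)
bit_plan = {name: pick_bits(rank, len(ranked)) for rank, name in enumerate(ranked)}

for name, bits in bit_plan.items():
    print(f"{name}: {bits}-bit")
```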

Blog with Details: https://docs.unsloth.ai/basics/dynamic-v2.0

All our future GGUF uploads will leverage Dynamic 2.0 and our hand curated 300K–1.5M token calibration dataset to improve conversational chat performance.

For accurate benchmarking, we built an evaluation framework to match the reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full-precision vs. Dynamic v2.0, QAT and standard iMatrix quants.
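For readers unfamiliar with the benchmark, here is a minimal sketch of how a 5-shot MMLU prompt is typically assembled (standard MMLU convention, not the evaluation framework mentioned above; the "question"/"options"/"answer" field names are assumptions):

```python
# Minimal sketch of 5-shot MMLU prompt construction (illustration only).
CHOICES = ["A", "B", "C", "D"]

def format_example(question, options, answer=None):
    """Render one MMLU item; the gold answer is appended only for few-shot examples."""
    lines = [question] + [f"{letter}. {opt}" for letter, opt in zip(CHOICES, options)]
    lines.append(f"Answer: {answer}" if answer else "Answer:")
    return "\n".join(lines)

def build_5shot_prompt(subject, dev_examples, test_item):
    """5-shot prompt: subject header, five solved dev examples, then the unsolved test item."""
    header = f"The following are multiple choice questions (with answers) about {subject}.\n\n"
    shots = "\n\n".join(
        format_example(ex["question"], ex["options"], ex["answer"]) for ex in dev_examples[:5]
    )
    return header + shots + "\n\n" + format_example(test_item["question"], test_item["options"])

# The model's predicted letter (e.g. argmax over the logits of "A"/"B"/"C"/"D")
# is then compared against the gold answer to compute accuracy.
```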

Dynamic v2.0 aims to minimize the performance gap between full-precision models and their quantized counterparts.
reacted to their post with 😎🤗🔥 2 days ago
posted an update 3 days ago
BOOOOM: Today I'm dropping TINY AGENTS

the 50-lines-of-code Agent in JavaScript 🔥

I spent the last few weeks working on this, so I hope you will like it.

I've been diving into MCP (Model Context Protocol) to understand what the hype was all about.

It is fairly simple, but still quite powerful: MCP is a standard API to expose sets of Tools that can be hooked to LLMs.

But while doing that, I came to a second realization:

Once you have an MCP Client, an Agent is literally just a while loop on top of it. 🤯

➡️ read it exclusively on the official HF blog: https://huggingface.co./blog/tiny-agents
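The real Tiny Agents code is JavaScript and lives in the blog post above; purely to illustrate the "Agent = while loop over an MCP client" idea, here is a hedged Python sketch where llm_chat and mcp_client are hypothetical stand-ins, not a real API:

```python
# Hedged sketch of "an Agent is just a while loop over an MCP client".
# `llm_chat` and `mcp_client` are hypothetical stand-ins, not a real library API.

def run_agent(user_prompt, llm_chat, mcp_client, max_turns=10):
    # Advertise the MCP server's tools to the LLM, then loop until it stops calling them.
    tools = mcp_client.list_tools()
    messages = [{"role": "user", "content": user_prompt}]

    for _ in range(max_turns):
        reply = llm_chat(messages=messages, tools=tools)
        messages.append(reply)

        if not reply.get("tool_calls"):
            # No more tool calls: the model has produced its final answer.
            return reply["content"]

        for call in reply["tool_calls"]:
            # Execute each requested tool through the MCP client and feed the result back.
            result = mcp_client.call_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": str(result)})

    return messages[-1].get("content", "")
```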
  • 1 reply
replied to gtvracer's post 12 days ago

which provider do you use?

We'll ship a provider="auto" in the coming days BTW, cc @sbrandeis @Wauplin @celinah

In the meantime, the model is served by those providers, so you can use any of them. For instance, add provider="novita" to your code:

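For example, a minimal sketch with the huggingface_hub Python client (assuming a recent release that supports the provider argument; the model id and token below are placeholders):

```python
# Minimal sketch; requires a recent huggingface_hub release with provider routing.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="novita",            # or any other provider currently serving the model
    api_key="hf_xxx",             # your Hugging Face token (placeholder)
)

completion = client.chat_completion(
    model="some-org/some-model",  # placeholder: use the model from the original thread
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```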

replied to OFT's post 25 days ago
replied to jsulz's post about 2 months ago
reacted to jsulz's post with ❤️🚀 about 2 months ago
It's finally here ❤️

Build faster than ever with lightning fast upload and download speeds starting today on the Hub ⚡

Xet storage is rolling out access across the Hub - join the waitlist here https://huggingface.co./join/xet

You can apply for yourself, or your entire organization. Head over to your account settings for more information or join anywhere you see the Xet logo on a repository you know.

Have questions? Join the conversation below 👇 or open a discussion on the Xet team page xet-team/README
posted an update about 2 months ago
Important notice 🚨

For Inference Providers who have built support for our Billing API (currently: Fal, Novita, HF-Inference – with more coming soon), we've started enabling Pay as you go (=PAYG).

What this means is that you can use those Inference Providers beyond the free included credits, and they're charged to your HF account.

You can see it on this view: any provider that does not have a "Billing disabled" badge is PAYG-compatible.
reacted to jsulz's post with πŸ”₯β€οΈπŸš€ 2 months ago
Time flies!

Six months after joining Hugging Face, the Xet team is kicking off the first migrations from LFS to our storage for a number of repositories on the Hub.

More on the nitty gritty details behind the migration soon, but here are the big takeaways:

🤖 We've successfully completed the first migrations from LFS -> Xet to test the infrastructure and prepare for a wider release

✅ No action on your part needed - you can work with a Xet-backed repo like any other repo on the Hub (for now - major improvements on their way!)

👀 Keep an eye out for the Xet logo to see if a repo you know is on our infra! See the screenshots below to spot the difference 👇

⏩ ⏩ ⏩ Blazing uploads and downloads coming soon. We're gearing up for a full integration with the Hub's Python library that will make building on the Hub faster than ever - special thanks to @celinah and @Wauplin for their assistance.

🎉 Want Early Access? If you're curious and want to test out the bleeding edge that will power the development experience on the Hub, we'd love to partner with you. Let me know!

This is the culmination of a lot of effort from the entire team. Big round of applause to @sirahd @brianronan @jgodlewski @hoytak @seanses @assafvayner @znation @saba9 @rajatarya @port8080 @yuchenglow
  • 1 reply