Eval numbers for Llama 3.2 1B in Table 1 don't match Meta's results (#28, opened about 6 hours ago by AlexA5432)
When will vLLM with CPU-only inference be supported? (#27, opened about 14 hours ago by bigmao2012)
Context limit of 4096 tokens (#25, opened 4 days ago by huggingface-meta)
Run in 2-bit (#23, opened 5 days ago by LLMToaster)
Upload WHL binaries for BitNet on Windows and Mac (#21, opened 6 days ago by wa999)
Explainer video on BitNet (#20, opened 6 days ago by ritheshSree)
Misleading information (#19, opened 8 days ago by nmtan2001)
Run without anything (#18, opened 8 days ago by rakmik)
It runs well (#17, opened 9 days ago by rakmik)
More training details? (#15, opened 10 days ago by elepedus)
Create configuration_bitnet.py (#14, opened 10 days ago by kema221)
Add Hugging Face paper link and clarify repos (#13, opened 11 days ago by nielsr)
Important confirmation: deviation from the originally uploaded model (#12, opened 11 days ago by AXCXEPT)
Upload IMG_7727.jpeg (#11, opened 11 days ago by Qwekuchicago)
Please work with llama.cpp before releasing new models (#10, opened 11 days ago by bradhutchings)
Missing configuration_bitnet.py (#9, opened 11 days ago by here4code)
ONNX version (#8, opened 11 days ago by eek)
Ollama support (#7, opened 12 days ago by BB8-dev)
Will the fine-tuning code be provided? (#6, opened 12 days ago by AXCXEPT)
Base / pretrained model (#5, opened 12 days ago by mesh-ops)
configuration_bitnet.py missing (#4, opened 12 days ago by lefromage)
Local Installation Video and Testing - Step by Step (#2, opened 13 days ago by fahdmirzac)
Technical Report Link Missing (#1, opened 13 days ago by G-reen)