
🖲️ PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing

👉 Project PLM Website

The PLM (Peripheral Language Model) series introduces a novel model architecture for peripheral computing, delivering powerful language capabilities within the constraints of resource-limited devices. Through a model-system co-design strategy, PLM optimizes model performance while meeting edge system requirements. PLM employs Multi-head Latent Attention and squared ReLU activation to achieve sparsity, significantly reducing memory footprint and computational demands. Coupled with a meticulously crafted training regimen using curated datasets and a Warmup-Stable-Decay-Constant learning rate scheduler, PLM outperforms existing small language models while keeping the lowest number of activated parameters, making it ideally suited for deployment on diverse peripheral platforms such as mobile phones and Raspberry Pis.

Here we present the static quants of https://huggingface.co./PLM-Team/PLM-1.8B-Instruct

Provided Quants

| Link | Type | Size | Notes |
|------|------|------|-------|
| [PLM-1.8B-Instruct-F16.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-F16.gguf) | F16 | 3.66 GB | Recommended |
| [PLM-1.8B-Instruct-Q2_K.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q2_K.gguf) | Q2_K | 827 MB | |
| [PLM-1.8B-Instruct-Q3_K_L.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 1.09 GB | |
| [PLM-1.8B-Instruct-Q3_K_M.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.01 GB | |
| [PLM-1.8B-Instruct-Q3_K_S.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 912 MB | |
| [PLM-1.8B-Instruct-Q4_0.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_0.gguf) | Q4_0 | 1.11 GB | |
| [PLM-1.8B-Instruct-Q4_1.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_1.gguf) | Q4_1 | 1.21 GB | |
| [PLM-1.8B-Instruct-Q4_K_M.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 1.18 GB | Recommended |
| [PLM-1.8B-Instruct-Q4_K_S.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 1.12 GB | |
| [PLM-1.8B-Instruct-Q5_0.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_0.gguf) | Q5_0 | 1.3 GB | |
| [PLM-1.8B-Instruct-Q5_1.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_1.gguf) | Q5_1 | 1.4 GB | |
| [PLM-1.8B-Instruct-Q5_K_M.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 1.34 GB | |
| [PLM-1.8B-Instruct-Q5_K_S.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 1.3 GB | |
| [PLM-1.8B-Instruct-Q6_K.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q6_K.gguf) | Q6_K | 1.5 GB | |
| [PLM-1.8B-Instruct-Q8_0.gguf](https://huggingface.co./PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q8_0.gguf) | Q8_0 | 1.95 GB | Recommended |

Usage (llama.cpp)

llama.cpp now supports our model. Here is how to use it:

git clone https://github.com/Si1w/llama.cpp.git
cd llama.cpp

If you want to convert the original model into GGUF format yourself, you can run:

pip install -r requirements.txt
python convert_hf_to_gguf.py [model] --outtype {f32,f16,bf16,q8_0,tq1_0,tq2_0,auto}
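
For example, assuming the original PLM-Team/PLM-1.8B-Instruct checkpoint has already been downloaded to a local directory, a conversion to F16 might look like the sketch below (the local path and output file name are illustrative):

# Hypothetical local checkpoint path; adjust to wherever the HF model was downloaded.
python convert_hf_to_gguf.py ./PLM-1.8B-Instruct --outtype f16 --outfile PLM-1.8B-Instruct-F16.gguf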

Then, we can build for CPU or GPU (e.g. NVIDIA Orin). The build is based on CMake; a quick check of the resulting binaries is shown after the build commands.

  • For CPU
cmake -B build
cmake --build build --config Release
  • For GPU
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
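
As a minimal sanity check (assuming the default llama.cpp output directory and binary names), the tools used below should now exist under build/bin:

# Both binaries should be present after a successful build.
ls build/bin/llama-cli build/bin/llama-quantize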

Don't forget to download the GGUF files of PLM. We use the quantization methods in llama.cpp to generate the quantized PLM models.

huggingface-cli download --resume-download PLM-Team/PLM-1.8B-Instruct-gguf --local-dir PLM-Team/PLM-1.8B-Instruct-gguf
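
If only a single quantized file is needed, huggingface-cli can filter the download with --include; the sketch below grabs just the recommended Q8_0 file (the pattern is an example):

huggingface-cli download PLM-Team/PLM-1.8B-Instruct-gguf --include "PLM-1.8B-Instruct-Q8_0.gguf" --local-dir PLM-Team/PLM-1.8B-Instruct-gguf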

After building llama.cpp, we can use the llama-cli binary to launch PLM.

./build/bin/llama-cli -m ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-Q8_0.gguf -cnv -p "hello!" -n 128
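
If you prefer serving the model over HTTP instead of an interactive CLI session, llama.cpp also ships llama-server, which exposes an OpenAI-compatible endpoint. A minimal sketch (the port and request body are arbitrary examples):

./build/bin/llama-server -m ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-Q8_0.gguf --port 8080

# Query the server with an OpenAI-style chat completion request.
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "hello!"}], "max_tokens": 128}'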

Citation

If you find Project PLM helpful for your research or applications, please cite as follows:

@misc{deng2025plmefficientperipherallanguage,
      title={PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing}, 
      author={Cheng Deng and Luoyang Sun and Jiwen Jiang and Yongcheng Zeng and Xinjian Wu and Wenxin Zhao and Qingfa Xiao and Jiachuan Wang and Lei Chen and Lionel M. Ni and Haifeng Zhang and Jun Wang},
      year={2025},
      eprint={2503.12167},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.12167}, 
}