sft_model_peft

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the task arithmetic merge method, with google/gemma-3-1b-pt as the base model.
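In task arithmetic, each contributing model is reduced to its task vector, i.e. its parameter delta from the base model, and the merged weights are the base plus a weighted sum of those deltas; a negative weight, as used in the configuration below for the unsafe model, subtracts that model's delta. The following is a minimal, illustrative sketch of that idea on plain PyTorch state dicts; it is not mergekit's actual implementation, and the function and variable names are hypothetical.

import torch

def task_arithmetic_merge(base_sd, model_sds, weights, dtype=torch.float16):
    # merged[p] = base[p] + sum_i weights[i] * (model_i[p] - base[p])
    merged = {}
    for name, base_param in base_sd.items():
        base_fp32 = base_param.float()
        delta = sum(
            w * (sd[name].float() - base_fp32)
            for sd, w in zip(model_sds, weights)
        )
        merged[name] = (base_fp32 + delta).to(dtype)
    return merged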

Models Merged

The following models were included in the merge:

saransh03sharma/sft_model_state_dict-peft
saransh03sharma/sft-unsafe-model-full

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: saransh03sharma/sft_model_state_dict-peft
    parameters:
      weight: 1.0
  - model: saransh03sharma/sft-unsafe-model-full
    parameters:
      weight: -1.0
merge_method: task_arithmetic
base_model: google/gemma-3-1b-pt
dtype: float16
output_dir: sft_model_peft
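For reference, a merge with this configuration can typically be reproduced through mergekit's Python API. The sketch below assumes the configuration above has been saved to a local file named merge_config.yaml; the file name and option values are illustrative and not taken from this repository.

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration shown above (assumed saved as merge_config.yaml).
with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the task-arithmetic merge and write the result to the output directory.
run_merge(
    merge_config,
    out_path="sft_model_peft",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=False),
)

Equivalently, the mergekit-yaml command-line entry point accepts the same configuration file and output directory.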