🏜️MIRAGE-Bench [NAACL'25]
A dataset collection from the MIRAGE-Bench paper (13 items).
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the nthakur/mirage-gpt-4o-sft-instruct-mistral dataset. Validation results logged during training are reported in the table below.
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
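Since this checkpoint is a standard Mistral-Instruct fine-tune, it can presumably be loaded like any other causal LM on the Hub. The snippet below is a minimal usage sketch, not an official example from this card; `model_id` is a placeholder for this repository's id, which is not stated above.

```python
# Minimal usage sketch (assumed): load the fine-tuned checkpoint with
# Hugging Face transformers and run one chat-style generation.
# `model_id` is a placeholder; substitute the actual repository id of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder, not given in this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a recent GPU
    device_map="auto",
)

# Mistral-Instruct checkpoints ship a chat template, so build the prompt with it.
messages = [{"role": "user", "content": "Answer the question using the retrieved passages."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```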
The following hyperparameters were used during training:
Training results:

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.3189        | 0.2535 | 200  | 0.2828          |
| 0.2906        | 0.5070 | 400  | 0.2465          |
| 0.2525        | 0.7605 | 600  | 0.2326          |
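For reference, a supervised fine-tuning run on the named dataset could be set up with TRL's `SFTTrainer` roughly as sketched below. This is an illustrative assumption, not the recipe used for this checkpoint: every hyperparameter shown is a placeholder rather than a value reported in this card, and the dataset's column layout may need a formatting step depending on the TRL version.

```python
# Illustrative sketch only: launching SFT of Mistral-7B-Instruct-v0.2 on the
# MIRAGE SFT dataset with TRL. Hyperparameters are placeholders, NOT the
# values used to produce this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("nthakur/mirage-gpt-4o-sft-instruct-mistral", split="train")

config = SFTConfig(
    output_dir="mistral-7b-mirage-sft",
    num_train_epochs=1,                 # placeholder
    per_device_train_batch_size=4,      # placeholder
    gradient_accumulation_steps=4,      # placeholder
    learning_rate=2e-5,                 # placeholder
    logging_steps=200,                  # placeholder (the table above logs every 200 steps)
    bf16=True,                          # placeholder
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # base model named in this card
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```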
Base model: mistralai/Mistral-7B-Instruct-v0.2