granite-3.3-2b-instruct-critical-thinking

This model is based on ibm-granite/granite-3.3-2b-instruct and is designed to analyze arguments, identify logical fallacies, and suggest improvements. It was pre-trained on the agentlans/reddit-logic dataset and then fine-tuned on the same dataset using supervised learning.

Input Format

The model expects input in the following format:

Critically analyze:
{{YOUR_TEXT_HERE}}

For example:

Critically analyze:
So I've noticed a trend when it comes to the discourse around tipping and I want to be clear from the get go what my views are. I believe tipping as a system in the US exists to allow business owners to not pay a fair wage. I disagree with it being the primary way that servers in full service restaurants make their money. That being said, I also believe that if you go to a full service restaurant where the waiter isn't giving horrible service then you should be expected to tip. So back to the discourse, it seems like many people are being disingenuous when it comes to caring about the employees by arguing: "I shouldn't be expected to pay them a fair wage". To me this seems like a cop out, because if they truly cared they would not be supporting businesses that use that model with any money. It seems to me that a lot of people are cheapskates masquerading as rebels to make themselves feel better about what they're doing. To clarify, I do not agree with tipping fast food or other businesses being an expectation where there are guaranteed hourly wages. I only agree with tipping being expected at sit down full service restaurants where tipped minimum wage is in effect.
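
A minimal sketch of running the model with the Hugging Face Transformers chat template is shown below. The repository id matches this page; the dtype and max_new_tokens settings are illustrative assumptions:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentlans/granite-3.3-2b-instruct-critical-thinking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Wrap the text to analyze in the expected prompt format.
text = "So I've noticed a trend when it comes to the discourse around tipping..."
messages = [{"role": "user", "content": f"Critically analyze:\n{text}"}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=1024)  # illustrative length
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))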

Output Format

The model outputs a JSON object containing an analysis of the input argument. Here's an example of the expected output format:

{
  "claims": [
    "Tipping is a cop-out for avoiding fair wages.",
    "Tipping is acceptable at full-service restaurants with tipped minimum wage."
  ],
  "ambiguous_terms": [
    "Cop out",
    "fair wage"
  ],
  "assumptions": [
    "Fair wages are a fundamental human right.",
    "Supporting businesses with tipping is hypocritical."
  ],
  "premises": [
    "Tipping is a means to avoid paying fair wages.",
    "Full-service restaurants with tipped minimum wage justify tipping."
  ],
  "evidence": {
    "credibility": "Moderate",
    "relevance": "High",
    "sufficiency": "Adequate for argument's scope"
  },
  "additional_data": "Economic studies on tipping systems, employment statistics on full-service restaurants.",
  "issues": [
    "Overgeneralizes about tipping and fair wages."
  ],
  "competing_explanations": [
    "Tipping can be a way for customers to support quality service.",
    "Tipping may not be feasible in all economic contexts."
  ],
  "validity": "Partially valid",
  "soundness": "Moderate",
  "recommendations": [
    "Clarify the distinction between tipping and fair wages.",
    "Consider the complexities of tipping systems and their impact on workers."
  ]
}
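
Because the model may occasionally produce malformed output (see Limitations below), parsing the response defensively is sensible. The helper below is an illustrative sketch using only the Python standard library, not part of any released tooling:

import json

def parse_analysis(raw: str):
    """Parse the model's JSON analysis; return None if it is malformed."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Fall back to the first {...} span in case extra text surrounds the JSON.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass
    return None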

Limitations

  • The model inherits the limitations of the agentlans/reddit-logic dataset.
  • May not perform as well on data outside the training distribution, including other types of communication and fields of discourse.
  • Lacks specialized domain knowledge, but can offer pointers for further research when critically evaluating arguments.
  • May misinterpret the input or produce malformed output, although this has not been observed in testing so far.
  • May miss some logical fallacies.
  • Does not fact-check references.

Training procedure

The following setup was used for both pretraining and supervised fine-tuning.

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • num_epochs: 1.0
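
For reference, these settings map roughly onto Hugging Face TrainingArguments as sketched below. The actual training script is not published, so output_dir is a hypothetical placeholder and the PEFT/LoRA configuration is omitted:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="granite-critical-thinking",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 2 per device x 8 steps = total batch size 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=1.0,
)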

Framework versions

  • PEFT 0.15.0
  • Transformers 4.49.0
  • Pytorch 2.6.0+cu124
  • Datasets 3.4.1
  • Tokenizers 0.21.0

Licence

Apache 2.0

Model details

  • Model size: 2.53B parameters
  • Tensor type: BF16 (Safetensors)