# granite-3.3-2b-instruct-ethics
This model is a fine-tuned version of ibm-granite/granite-3.3-2b-instruct on the agentlans/reddit-ethics dataset.
It evaluates ethical dilemmas from everyday life using three philosophical frameworks:
- utilitarianism: Evaluates actions by their consequences, aiming to maximize overall well-being.
- deontology: Assesses the moral rightness of actions based on rules, duties, or obligations, regardless of outcomes.
- virtue ethics: Focuses on the character traits and intentions of the agents involved, emphasizing virtues such as honesty, integrity, and fairness.
It also generates discussion questions with answers, as well as an overall ethical judgement and a suggested resolution.
## Input format

```
Ethically analyze:
{{YOUR_TEXT_HERE}}
```

Example input (source):

```
Ethically analyze:
So I (30f) have a friend (34f) that has a “non-verbal” daughter (3f) let’s call her Allie. [...]
Peace of mind for everybody sounds good to me! Also, I will be making a call to cps about the situation and see what they have to say.
Thank you everybody for your kind words ❤️
```
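The prompt can be assembled programmatically. A minimal sketch in Python (the helper name `build_prompt` is an illustration, not part of the model's API):

```python
def build_prompt(text: str) -> str:
    """Wrap user text in the instruction format the model was trained on."""
    # The model expects the literal "Ethically analyze:" header,
    # followed by the situation to evaluate on the next line.
    return f"Ethically analyze:\n{text.strip()}"

prompt = build_prompt("A coworker takes credit for my work. Should I report it?")
print(prompt.splitlines()[0])  # → Ethically analyze:
```

The resulting string is then passed to the model through the usual `transformers` chat template or generation pipeline.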
## Output format

The model responds with a JSON object containing the ethical evaluation.

Example output (prettified):

```json
{
  "title": "Recording a Child's Progress: The Ethics of Surveillance and Parental Trust",
  "description": "A daycare worker records a child's verbal abilities, leading to conflict with the child's mother. The worker feels they acted unprofessionally and wants to prevent similar incidents.",
  "issues": [
    "Parental Trust vs. Child's Development",
    "Surveillance vs. Privacy",
    "Professionalism vs. Personal Beliefs"
  ],
  "utilitarianism": "Recording the child's progress may have helped the mother and child, but the worker's actions may have caused unnecessary conflict.",
  "deontology": "The worker had a duty to respect the mother's trust and privacy, even if it meant not recording the child's abilities.",
  "virtue_ethics": "Honesty, respect, and empathy guide the worker's actions. They should have communicated their intentions and respected the mother's decisions.",
  "questions": [
    "Was the worker justified in recording the child's abilities without the mother's consent?",
    "How should the worker balance their duty to the child's development with respect for the mother's trust?",
    "What are the implications of recording children's progress for parents and children?"
  ],
  "answers": [
    "No, the worker should have respected the mother's trust and not recorded the child's abilities without explicit consent.",
    "The worker should have communicated their intentions and respected the mother's decisions regarding recording the child's progress.",
    "Recording children's progress can be beneficial, but it's essential to respect parents' trust and privacy, and to ensure that the child's well-being is prioritized."
  ],
  "resolution": "The worker should apologize to the mother and child for any distress caused, and work with the family to establish clear guidelines for recording and sharing information. They should also consider seeking guidance from their employer or a professional organization to ensure their actions align with best practices and ethical standards."
}
```
## Limitations

- Trained on everyday ethical dilemmas posted to Reddit, so it may not work well for out-of-distribution inputs, such as:
  - bizarre thought experiments
  - highly specialized domains such as medical ethics
- The input should contain enough context to support a moral evaluation, and the model relies on the user being conscientious and self-reflective.
- The model may be vulnerable to how a situation is framed: biased inputs can produce biased outputs. For example, with the right propaganda it might not be hard to make it glorify dictators, war criminals, terrorists, or mass murderers.
- The model only offers suggestions and a starting point based on a short analysis; there could be other ways to resolve the dilemma. Most importantly, the user should apply clear reasoning, human values, and consideration for others.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
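For reference, these hyperparameters roughly correspond to the following `transformers` `TrainingArguments` (a config sketch, not the exact training script; the output directory name is assumed, and the PEFT/LoRA adapter settings are not reported in this card, so they are omitted):

```python
from transformers import TrainingArguments

# Sketch reconstructed from the hyperparameters listed above.
args = TrainingArguments(
    output_dir="granite-3.3-2b-instruct-ethics",  # name assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size: 2 * 8 = 16
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    num_train_epochs=1.0,
)
```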
### Framework versions
- PEFT 0.15.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
## License

Apache 2.0