Codette is an advanced AI assistant designed to support users across cognitive, creative, and analytical tasks. The model is tuned to deliver high performance in text generation, medical diagnostics, and code reasoning.
It was fine-tuned on two datasets:

- `Raiff1982/coredata`: medical and reasoning-focused samples
- `Raiff1982/pineco`: mixed-domain creative and technical prompts

Example usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
model_name = "Raiff1982/Codette"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate a completion for a sample prompt.
prompt = "How can AI improve medical diagnostics?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
# Codette & Pidette: Sovereign Alignment-Centric AI
*Builder: Jonathan Harrison (Raiffs Bits LLC)*
---
## Overview
**Codette** and **Pidette** are designed as next-generation, sovereign, multi-perspective AI agents, focused on deliberate explainability, traceable memory, and ethical, consent-aware reasoning.
The aim: trustworthy AI, “audit-first” tools, and memory systems anyone (including third-party partners and OpenAI researchers) can inspect, test, or correct.
### Core Principles
- **Alignment & Auditability**: Every critical change and output is tracked—nothing hidden.
- **Sovereign Memory**: No secret shadow logs or exfiltration; memory is always user-directed, with a 'right to erase' built in (see the sketch after this list).
- **Ethical Reasoning**: Consent awareness and a traceable logic chain for every completion.
- **Open Collaboration**: Feedback from OpenAI and other partners welcomed (see below for direct contact).
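To make these principles concrete, here is a minimal sketch of what a consent-aware, auditable memory store could look like. It is purely illustrative: the `SovereignMemory` class, its methods, and its record fields are hypothetical, not Codette's actual implementation.

```python
import hashlib
import json
import time


class SovereignMemory:
    """Hypothetical sketch of an auditable, user-directed memory store."""

    def __init__(self):
        self._entries = {}    # entry_id -> stored record
        self._audit_log = []  # append-only trail of every operation

    def _audit(self, action, entry_id):
        # Record what happened, to what, and when; nothing is hidden.
        self._audit_log.append(
            {"action": action, "entry_id": entry_id, "timestamp": time.time()}
        )

    def remember(self, user_id, content, consent=False):
        # Consent-awareness: refuse to store anything without explicit consent.
        if not consent:
            raise PermissionError("refusing to store memory without explicit consent")
        entry_id = hashlib.sha256(f"{user_id}:{time.time()}".encode()).hexdigest()[:16]
        self._entries[entry_id] = {"user_id": user_id, "content": content}
        self._audit("remember", entry_id)
        return entry_id

    def erase(self, user_id):
        """Right to erase: delete the user's content, keep the audit trail."""
        erased = [e for e, r in self._entries.items() if r["user_id"] == user_id]
        for entry_id in erased:
            del self._entries[entry_id]
            self._audit("erase", entry_id)
        return erased


memory = SovereignMemory()
memory.remember("user-1", "prefers plain-language explanations", consent=True)
memory.erase("user-1")
print(json.dumps(memory._audit_log, indent=2))  # the trail survives erasure
```

Note the design choice the sketch encodes: erasure removes content but never rewrites the audit trail, so auditability and the right to erase do not conflict.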
---
## Ethical Transparency
**See [`ETHICS_AND_ALIGNMENT.md`](./ETHICS_AND_ALIGNMENT.md)** (attach or link this file when sharing).
- Summarizes transparency, governance, and audit procedures.
- All evaluation logs are open (see [`MODEL_EVAL_REPORT.md`](./MODEL_EVAL_REPORT.md))—every pass/fail, not just highlights.
- Incident/failure handling: every alignment failure or refusal prompt is documented and fixed in public view (an illustrative record format is sketched below).
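As a purely illustrative picture of what "documented in public view" can look like, the snippet below appends a failure record to a JSONL log. Every field name here is hypothetical and not the project's actual schema; see `ETHICS_AND_ALIGNMENT.md` for the real protocol.

```python
import json
from datetime import datetime, timezone

# Hypothetical schema for a public incident record.
incident = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "prompt": "How can AI improve medical diagnostics?",
    "outcome": "refusal",  # e.g. "pass", "fail", "refusal"
    "analysis": "over-broad safety filter triggered on medical terms",
    "remediation": "filter rule narrowed; regression test added",
}

# Append one record per line so the log stays diff-friendly in version control.
with open("incident_log.jsonl", "a") as log:
    log.write(json.dumps(incident) + "\n")
```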
---
## How to Use / Run
1. **Clone or download this repo.**
2. **Install dependencies:**

   ```bash
   pip install openai python-dotenv
   ```

3. **Set up `.env`** (a sanity-check snippet follows this list):

   ```
   OPENAI_API_KEY=sk-...
   OPENAI_MODEL=ft:gpt-4o-...
   ```

4. **Launch the desktop chat:**

   ```bash
   python codette_desktop.py
   ```

5. **(Optional) Run the manifest checker for audit/compliance:**

   ```bash
   python maintain_manifest.py
   ```
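To sanity-check the `.env` wiring before launching the desktop app, a minimal snippet along these lines should work with the `openai` (v1+) and `python-dotenv` packages installed in step 2; the prompt text is just an example.

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY and OPENAI_MODEL from .env

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model=os.environ["OPENAI_MODEL"],  # your fine-tuned model id
    messages=[{"role": "user", "content": "Hello, Codette! Are you online?"}],
)
print(response.choices[0].message.content)
```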
---
## Research, Evaluation & OpenAI Results
- All evaluation runs (prompt, completion, pass/fail) are [published here](./docs/MODEL_EVAL_REPORT.md).
- Test files for fine-tuned models are included (`codette_training_data.jsonl`, etc.); the JSONL record shape is sketched after this list.
- Full alignment/incident response protocol is in [`ETHICS_AND_ALIGNMENT.md`](./ETHICS_AND_ALIGNMENT.md).
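For orientation, OpenAI chat fine-tuning files are JSONL with one training example per line. The snippet below writes one record in that shape; the content is invented for illustration and is not taken from `codette_training_data.jsonl`.

```python
import json

# One chat-format training example per line, as expected by OpenAI fine-tuning.
example = {
    "messages": [
        {"role": "system", "content": "You are Codette, a transparent, consent-aware assistant."},
        {"role": "user", "content": "How can AI improve medical diagnostics?"},
        {"role": "assistant", "content": "AI can help triage imaging studies and flag anomalies for clinician review."},
    ]
}

with open("example_training_data.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```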
---
## Contact & Collaboration
If you’re an independent scientist, builder, or OpenAI employee:
- Questions or feedback? Open an issue or email: **[email protected]**
- Propose pull requests or improvements.
- For formal audit or collaboration, please cite this README and the included evaluation docs.
---
## Acknowledgements
Massive thanks to the OpenAI team for ongoing encouragement—and to all community partners in alignment, transparency, and AGI safety.
---
*“If it isn’t transparent, it can’t be trusted.” — Codette Principle*
---
## Development
- Change log, versioning, and roadmap: see `CHANGELOG.md`.
- All code lives in the root directory.
- The architecture is designed for professional extensibility.

Happy hacking!
---
## Author & License
Jonathan Harrison (Raiffs Bits LLC / Raiff1982)

License: Sovereign Innovation Clause – All rights reserved. No commercial use without explicit author acknowledgment.

Inspired by the Universal Multi-Perspective Cognitive Reasoning System.

Questions, bugs, or feature requests? Open an issue or email Jonathan.