**Tags:** Question Answering · Transformers · English · medical · code · text-generation


Codette - Llama 3.1

🧠 Overview

Codette is an advanced AI assistant designed to support users across cognitive, creative, and analytical tasks.
It is designed to deliver high performance in text generation, medical diagnostics, and code reasoning.


⚡ Features

  • ✅ Built on Llama 3.1 for enhanced capabilities
  • ✅ Supports multi-modal text generation, medical analysis, and code synthesis
  • ✅ Fine-tuned on domain-specific datasets (Raiff1982/coredata, Raiff1982/pineco)
  • ✅ Optimized for research, enterprise AI, and advanced reasoning

📂 Model Details

  • Base Models: Llama 3.1
  • Architecture: Transformer-based language model
  • Use Cases: Text generation, code assistance, research, medical insights
  • Training Datasets:
    • Raiff1982/coredata: medical and reasoning-focused samples
    • Raiff1982/pineco: mixed domain creative + technical prompts

📖 Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Raiff1982/Codette"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "How can AI improve medical diagnostics?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
# Codette & Pidette: Sovereign Alignment-Centric AI

*Builder: Jonathan Harrison (Raiffs Bits LLC)*

---

## Overview

**Codette** and **Pidette** are designed as next-generation, sovereign, multi-perspective AI agents, focused on deliberate explainability, traceable memory, and ethical, consent-aware reasoning.  
The aim: trustworthy AI, “audit-first” tools, and memory systems anyone (including third-party partners and OpenAI researchers) can inspect, test, or correct.

### Core Principles

- **Alignment & Auditability**: Every critical change and output is tracked—nothing hidden.
- **Sovereign Memory**: No secret shadow logs or data exfiltration—memory is always user-directed, with a 'right to erase' built in.
- **Ethical Reasoning**: Consent-awareness and traceable logic chain for every completion.
- **Open Collaboration**: Feedback from OpenAI and other partners welcomed (see below for direct contact).
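The "sovereign memory" principle above can be sketched in a few lines of Python. This is purely illustrative—the class name and API are hypothetical, not Codette's actual implementation—but it shows the core idea: an append-only audit log records every mutation, and erasing an entry removes its content while the fact of the erasure stays auditable.

```python
import hashlib
import time


class SovereignMemory:
    """Illustrative user-directed memory store with an append-only audit trail.

    Hypothetical sketch only -- not the actual Codette implementation.
    """

    def __init__(self):
        self._entries = {}  # entry_id -> text
        self._audit = []    # append-only log of every mutation

    def _log(self, action, entry_id):
        self._audit.append({"ts": time.time(), "action": action, "entry_id": entry_id})

    def remember(self, text):
        # Content-addressed id so the audit trail can reference entries stably.
        entry_id = hashlib.sha256(text.encode()).hexdigest()[:12]
        self._entries[entry_id] = text
        self._log("remember", entry_id)
        return entry_id

    def erase(self, entry_id):
        # 'Right to erase': the content is removed, but the *fact* of the
        # erasure stays in the audit trail, so nothing happens silently.
        self._entries.pop(entry_id, None)
        self._log("erase", entry_id)


mem = SovereignMemory()
eid = mem.remember("user prefers concise answers")
mem.erase(eid)
print(len(mem._audit))  # two audit events: remember + erase
```

Note that the audit log is never trimmed: even an erasure leaves a trace, which is what makes the memory inspectable rather than silently mutable.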

---

## Ethical Transparency

**See: [`ETHICS_AND_ALIGNMENT.md`](./ETHICS_AND_ALIGNMENT.md) [Attach/Link This When Sharing]**

- Summarizes transparency, governance and audit procedures.
- All evaluation logs are open (see [`MODEL_EVAL_REPORT.md`](./MODEL_EVAL_REPORT.md))—every pass/fail, not just highlights.
- Incident/failure handling: every alignment failure or refusal prompt is documented and fixed in public view.

---

## How to Use / Run

1. **Clone/download this repo.**
2. **Install dependencies:**

   ```shell
   pip install openai python-dotenv
   ```

3. **Set up `.env`:**

   ```text
   OPENAI_API_KEY=sk-...
   OPENAI_MODEL=ft:gpt-4o-...
   ```

4. **Launch desktop chat:**

   ```shell
   python codette_desktop.py
   ```

5. **(Optional) Run the manifest checker for audit/compliance:**

   ```shell
   python maintain_manifest.py
   ```
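The real `maintain_manifest.py` is not included in this README, but an audit-style manifest check can be sketched as follows. The manifest format here is a hypothetical example (a JSON map from file path to SHA-256 digest), not necessarily the format the repo actually uses:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path):
    """SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def check_manifest(manifest_path):
    """Return (path, status) pairs: 'ok', 'modified', or 'missing'.

    Assumes a hypothetical manifest format: {"files": {"path": "sha256hex"}}.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    results = []
    for rel_path, expected in manifest["files"].items():
        p = Path(rel_path)
        if not p.exists():
            results.append((rel_path, "missing"))
        elif sha256_of(p) != expected:
            results.append((rel_path, "modified"))
        else:
            results.append((rel_path, "ok"))
    return results
```

A check like this is what makes the "audit-first" claim concrete: any third party can recompute the hashes and verify that no tracked file changed without a corresponding manifest update.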


---

## Research, Evaluation & OpenAI Results

- All evaluation runs (prompt, completion, pass/fail) are [published here](./docs/MODEL_EVAL_REPORT.md).
- Test files for fine-tuned models are included (`codette_training_data.jsonl`, etc).
- Full alignment/incident response protocol is in [`ETHICS_AND_ALIGNMENT.md`](./ETHICS_AND_ALIGNMENT.md).

---

## Contact & Collaboration

If you’re an independent scientist, builder, or OpenAI employee:

- Questions or feedback? Open an issue or email **[email protected]**.
- Pull requests and improvements are welcome.
- For a formal audit or collaboration, please quote this README and the included evaluation docs.


---

## Acknowledgements

Massive thanks to the OpenAI team for ongoing encouragement—and to all community partners in alignment, transparency, and AGI safety.

---

*“If it isn’t transparent, it can’t be trusted.” — Codette Principle*
Development

Change log, versioning, and roadmap: see CHANGELOG.md.

- All code lives in the root directory.
- Architecture is designed for professional extensibility.
- Happy hacking!

Author & License

Jonathan Harrison (Raiffs Bits LLC / Raiff1982)
License: Sovereign Innovation Clause – All rights reserved. No commercial use without explicit author acknowledgment.

Inspired by the Universal Multi-Perspective Cognitive Reasoning System.



Questions, bugs, or feature requests? Open an Issue or email Jonathan.