Feature request: semantic audit during training loop #653

@elly99-AI

Description

I propose adding a semantic audit module to the training loop of nanoGPT.
This would let the training loop evaluate the semantic consistency of intermediate outputs, with the aim of improving coherence and conceptual alignment of the trained model.

Motivation:

nanoGPT is a minimal and efficient training framework.
Introducing a semantic checkpoint, for example using SPECTER embeddings indexed with FAISS, could help detect conceptual drift in generated samples and reinforce consistency over the course of training.

Proposed Implementation:

  • Embed intermediate outputs during training
  • Compare against a conceptual memory bank
  • Trigger revision or flagging if semantic misalignment is detected
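The three steps above can be sketched as a small audit function. This is a minimal, hypothetical prototype, not an existing nanoGPT API: the real proposal would use SPECTER embeddings and a FAISS index, but here a toy character-frequency embedder and a brute-force cosine search stand in so the control flow is runnable end to end. All names (`embed`, `semantic_audit`, `memory_bank`, the `threshold` value) are assumptions for illustration.

```python
import math

def embed(text: str) -> list[float]:
    # Toy embedding: normalized character-frequency vector.
    # Stand-in for a real SPECTER (or similar) embedding model call.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product
    # is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def semantic_audit(sample: str, memory_bank: list[list[float]],
                   threshold: float = 0.5) -> bool:
    """Flag a sample as semantically misaligned if its best match in
    the conceptual memory bank falls below the similarity threshold.
    (A FAISS index would replace the brute-force max() in practice.)"""
    emb = embed(sample)
    best = max((cosine(emb, m) for m in memory_bank), default=0.0)
    return best < threshold

# Conceptual memory bank built from reference texts (assumed inputs).
bank = [embed(t) for t in ["gradient descent", "language model training"]]

print(semantic_audit("training language models", bank))  # in-distribution: not flagged
print(semantic_audit("zzzzqqqq", bank))                  # drifted: flagged
```

In the actual training loop, the flagging branch would then either log the sample for review or trigger a revision step, as described above.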

Inspired by https://github.com/elly99-AI/MarCognity-AI.git, a framework for cognitive orchestration.
Happy to contribute a prototype if aligned with the roadmap.
