
Nomination: WFGY (MIT, 1.4k+ stars) for LLM / RAG debugging and robustness #429

@onestardao

Description


Project details:

  • Project Name: WFGY – Self-healing LLM / RAG Debugging Framework
  • GitHub URL: https://github.com/onestardao/WFGY
  • Category: Model Interpretability
  • License: MIT
  • Package Managers: (GitHub only for now, not yet on PyPI/Conda)

Additional context:

WFGY is an MIT-licensed semantic reasoning engine for LLMs, focused on robustness and debugging of real-world RAG / agent systems. The repo currently has 1.4k+ stars and is used by practitioners as a framework-agnostic “debugging layer” on top of their existing Python ML stack.

WFGY 1.0 is the original self-healing LLM systems framework (PDF + experiments). WFGY 2.0 introduces a 16-problem RAG / LLM failure map, with a dedicated page for each problem that explains:

  • the failure mode (e.g. hallucination & chunk drift, long-chain drift, entropy collapse, bootstrap ordering, deployment deadlock),
  • diagnostic prompts / procedures,
  • and proposed fixes that can be implemented inside existing ML/RAG pipelines.
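To make the shape of such a fix concrete: the sketch below is NOT WFGY's actual API (the repo documents its own procedures), but an illustrative, framework-agnostic check of the kind a failure-map page might propose for "hallucination & chunk drift". All names are hypothetical, and a toy token-overlap heuristic stands in for real semantic scoring.

```python
# Illustrative only: not WFGY's API. Shows a pipeline-side "debugging layer"
# check that flags retrieved chunks drifting away from the user's query.

def chunk_drift_score(query: str, chunk: str) -> float:
    """Crude lexical-overlap proxy for query/chunk relevance (0.0 to 1.0)."""
    q_tokens = set(query.lower().split())
    c_tokens = set(chunk.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & c_tokens) / len(q_tokens)

def diagnose_retrieval(query: str, chunks: list[str],
                       threshold: float = 0.2) -> dict:
    """Score each retrieved chunk and flag those below the drift threshold."""
    scores = [chunk_drift_score(query, c) for c in chunks]
    drifted = [i for i, s in enumerate(scores) if s < threshold]
    return {"scores": scores,
            "drifted_chunk_indices": drifted,
            "healthy": not drifted}

report = diagnose_retrieval(
    "how do I reset my API key",
    ["To reset your API key, open the dashboard and click Reset.",
     "Our office is closed on public holidays."],
)
print(report["drifted_chunk_indices"])  # → [1]  (the off-topic chunk)
```

In a real pipeline this check would sit between the retriever and the generator, so drifted chunks can be dropped or logged before they contaminate the prompt; WFGY's docs describe its own diagnostics in place of the toy heuristic used here.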

A Problem Map index in the repo gives a quick overview of the 16 failures and links each one to its docs.

The goal is not to provide another model, but to give ML engineers a reusable, well-documented framework to see why their RAG / LLM stack is failing and to systematically patch those failures while staying within the normal Python tooling ecosystem.

Metadata

Labels: add-project (Add new project to best-of list)
